
Text generation web UI


A Gradio web UI for Large Language Models. Supports transformers, GPTQ, llama.cpp (GGUF), Llama models.

Text generation web UI screenshot 1

License model

  • Free • Open Source

Platforms

  • Linux
  • Windows
  • Mac
  • 5/5 avg. rating (1 rating)
  • 10 likes
  • 1 comment
  • 0 news articles

Features


Properties

  1.  Privacy focused

Features

  1.  No registration required
  2.  Dark Mode
  3.  No Tracking
  4.  Extensible by Plugins/Extensions
  5.  Ad-free
  6.  AI Chatbot
  7.  AI Writing


Text generation web UI information

  • Developed by

    oobabooga
  • Licensing

Free and open source (AGPL-3.0).
  • Alternatives

    52 alternatives listed
  • Supported Languages

    • English

AlternativeTo Category

AI Tools & Services

GitHub repository

  • 43,967 Stars
  • 5,670 Forks
  • 2,548 Open Issues
  • Updated Jun 17, 2025
View on GitHub

Our users have written 1 comment and review about Text generation web UI, and it has received 10 likes.

Text generation web UI was added to AlternativeTo by Alx84 on Sep 19, 2023, and this page was last updated on Sep 19, 2023.

Comments and Reviews

Top Positive Comment
Sam Lander
Apr 19, 2024

Probably the best privacy-focused, offline GUI there is currently. It lets you not only chat and embed, but also do LoRA training.

It's a little more difficult to set up than GPT4All, but anyone could do it by following the directions.

What is Text generation web UI?

A Gradio web UI for Large Language Models. Its goal is to become the AUTOMATIC1111/stable-diffusion-webui of text generation.

Features

  • 3 interface modes: default (two columns), notebook, and chat
  • Multiple model backends: transformers, llama.cpp, ExLlama, ExLlamaV2, AutoGPTQ, GPTQ-for-LLaMa, CTransformers
  • Dropdown menu for quickly switching between different models
  • LoRA: load and unload LoRAs on the fly, train a new LoRA using QLoRA
  • Precise instruction templates for chat mode, including Llama-2-chat, Alpaca, Vicuna, WizardLM, StableLM, and many others
  • 4-bit, 8-bit, and CPU inference through the transformers library
  • Use llama.cpp models with transformers samplers (llamacpp_HF loader)
  • Multimodal pipelines, including LLaVA and MiniGPT-4
  • Extensions framework
  • Custom chat characters
  • Very efficient text streaming
  • Markdown output with LaTeX rendering, useful for instance with GALACTICA
  • API, including endpoints for websocket streaming (see the examples)
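The websocket streaming endpoint mentioned above can be consumed with a small async client. The sketch below is a hedged illustration, not taken from this page: the endpoint URL, port, and payload parameter names (`prompt`, `max_new_tokens`, `temperature`) are assumptions based on common usage of the project's legacy API, so verify them against the repository's own examples before relying on them.

```python
import asyncio
import json

# Hypothetical endpoint: the legacy text-generation-webui API commonly exposed
# websocket streaming on port 5005. Adjust host/port/path to your own setup.
STREAM_URI = "ws://localhost:5005/api/v1/stream"


def build_request(prompt: str, max_new_tokens: int = 200) -> dict:
    """Assemble a generation request payload.

    Parameter names here are assumptions modeled on common generation
    settings; consult the repo's API examples for the real schema.
    """
    return {
        "prompt": prompt,
        "max_new_tokens": max_new_tokens,
        "temperature": 0.7,
        "stream": True,
    }


async def stream_completion(prompt: str) -> str:
    """Connect to the server, stream generated tokens, and return the full text."""
    import websockets  # third-party: pip install websockets

    chunks = []
    async with websockets.connect(STREAM_URI) as ws:
        await ws.send(json.dumps(build_request(prompt)))
        async for raw in ws:
            event = json.loads(raw)
            # Event names are assumptions; the server may use a different schema.
            if event.get("event") == "text_stream":
                chunks.append(event.get("text", ""))
            elif event.get("event") == "stream_end":
                break
    return "".join(chunks)
```

With a server running, a call like `asyncio.run(stream_completion("Hello"))` would then collect the streamed tokens into a single string.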