LocalAI
Self-hosted, community-driven, local OpenAI-compatible API: a drop-in replacement for OpenAI that runs LLMs on consumer-grade hardware. A free, open-source OpenAI alternative. No GPU required. LocalAI is an API for running ggml-compatible models.
License model
- Free • Open Source
Platforms
- Linux
LocalAI information
AlternativeTo Category
- AI Tools & Services
GitHub repository
- 23,737 Stars
- 1,813 Forks
- 375 Open Issues
- Updated Oct 6, 2024
What is LocalAI?
LocalAI is a drop-in replacement REST API that is compatible with the OpenAI API specifications for local inferencing. It allows you to run LLMs (and more) locally or on-prem on consumer-grade hardware, supporting multiple model families compatible with the ggml format. It does not require a GPU.
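As a rough illustration of that drop-in compatibility, the sketch below points the official OpenAI Python client at a locally running LocalAI instance. The port (8080, LocalAI's default) and the model name "ggml-gpt4all-j" are assumptions; substitute whatever model you have installed.

```python
# A minimal sketch: talking to a local LocalAI server through the
# OpenAI Python client. Assumes LocalAI is listening on localhost:8080
# and that a model named "ggml-gpt4all-j" is installed (both assumptions).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # point the client at LocalAI instead of api.openai.com
    api_key="not-needed",                 # LocalAI does not check the key by default
)

response = client.chat.completions.create(
    model="ggml-gpt4all-j",
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(response.choices[0].message.content)
```

Because the endpoint shape matches OpenAI's, existing applications can usually be switched over by changing only the base URL.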
In a nutshell:
- Local, OpenAI drop-in alternative REST API. You own your data.
- NO GPU required. NO Internet access is required either.
- Optional GPU acceleration is available for llama.cpp-compatible LLMs. See also the build section.
- Supports multiple models:
  - Text generation with GPTs (llama.cpp, gpt4all.cpp, ... and more)
  - Text to Audio
  - Audio to Text (audio transcription with whisper.cpp; see the sketch after this list)
  - Image generation with stable diffusion
- Once loaded the first time, it keeps models loaded in memory for faster inference.
- Doesn't shell out, but uses C++ bindings for faster inference and better performance.
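For the audio-to-text feature, LocalAI mirrors OpenAI's transcription endpoint. The sketch below is a minimal example; the model name "whisper-1" and the file name "recording.wav" are assumptions and depend on which whisper.cpp model you have installed.

```python
# A minimal sketch of the whisper.cpp-backed transcription endpoint,
# assuming LocalAI runs on localhost:8080 with a whisper model
# installed under the (assumed) name "whisper-1".
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

with open("recording.wav", "rb") as audio_file:  # any local audio file
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )
print(transcript.text)
```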
Comments and Reviews
Looks promising and has a nice web page, but it uses 1000% of your CPU just launching or downloading GPT models. I was unable to download large models because, for some reason, downloading a model also puts the machine under full load even though no inference is running.
All I am pointing out is that this feels really beta and is not working well; gpt4all was much better.