
LocalAI

Drop-in OpenAI replacement. On-device, local-first generation of text, images, speech, music, and more. Backend agnostic (llama.cpp, diffusers, bark.cpp, etc.), with optional distributed inference (P2P/federated).


Cost / License

  • Free
  • Open Source

Platforms

  • Online
  • Self-Hosted

Features


Properties

  1.  Privacy focused

Features

  1.  Ad-free
  2.  No Tracking
  3.  Text to Image Generation
  4.  Text to Speech
  5.  Works Offline
  6.  Dark Mode
  7.  Image to Image Generation
  8.  AI Writing
  9.  AI Chatbot
  10.  Speech to text
  11.  Kubernetes



LocalAI information

  • Developed by

    go-skynet
  • Licensing

    Open Source (MIT) and Free product.
  • Written in

    Go
  • Alternatives

    33 alternatives listed
  • Supported Languages

    • English

AlternativeTo Categories

AI Tools & Services, Photos & Graphics

GitHub repository

  •  40,680 Stars
  •  3,283 Forks
  •  197 Open Issues
View on GitHub

Our users have written 1 comment and review about LocalAI, and it has gotten 9 likes.

LocalAI was added to AlternativeTo by 78jmb6dk.

Comments and Reviews

   
thiscouldbeyourusername

Looks promising and has a nice web page, but it uses 1000% of your CPU just launching or downloading GPT models. Unable to download large models because, for some reason, downloading a model requires full load as well, even though no AI is running.

All I am pointing out: this feels really beta and is not working well. GPT4All was much better.

What is LocalAI?

Drop-in replacement for the OpenAI API, with local/on-prem inference on consumer-grade hardware, supporting multiple model families and backends compatible with standard formats like GGUF.
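Because the API is a drop-in for OpenAI's, existing OpenAI client code can usually be pointed at a LocalAI instance just by changing the base URL. A minimal sketch with the official openai Python client, assuming a LocalAI server on localhost:8080 and an installed model named "llama-3.2-1b-instruct" (both are placeholders for your own setup):

```python
# Minimal sketch: call a LocalAI server through its
# OpenAI-compatible /v1 REST API with the openai client.
from openai import OpenAI

# Assumptions: LocalAI listens on localhost:8080 and a model named
# "llama-3.2-1b-instruct" is installed; adjust both to your setup.
client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="not-needed",  # LocalAI does not require a real API key by default
)

response = client.chat.completions.create(
    model="llama-3.2-1b-instruct",
    messages=[{"role": "user", "content": "Say hello from LocalAI."}],
)
print(response.choices[0].message.content)
```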

In a nutshell:

  • Local, drop-in alternative REST API for OpenAI. You own your data.

  • No GPU required; local/on-device inference (offline).

  • Optional GPU/NPU acceleration is available for llama.cpp-compatible LLMs. See also the build section.

  • Model inference pipeline/backend agnostic! (Install inference backends through the Gallery WebUI or via the CLI.)

  • Supported task types (examples in the sketch after this list):

    • Text generation (with llama.cpp, transformers, vllm, exllama2, gpt4all.cpp, and more)

    • Text to audio:

      • Sound/music generation (transformers-musicgen)

      • Speech generation (bark, piper, bark.cpp)

    • Speech to text (i.e. transcription, with whisper.cpp, etc.)

    • Image generation with diffusers/stable-diffusion.cpp (text-to-image, image-to-image, etc.)

    • Text embedding (with sentencetransformers, transformers)

    • Text re-ranking (rerankers, sentencetransformers)

  • Once loaded the first time, models are kept in memory for faster inference.

  • Distributed inference (federated and P2P modes).
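The same OpenAI-compatible endpoints cover the other task types. A hedged sketch of transcription and embeddings, assuming matching backends have been installed and using placeholder model names:

```python
# Sketch: speech-to-text and text-embedding requests against the same
# OpenAI-compatible API; model names are placeholders for whatever
# backends (e.g. whisper.cpp, sentencetransformers) you installed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# Speech to text: transcribe a local audio file.
with open("meeting.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
print(transcript.text)

# Text embedding: get a vector for a piece of text.
emb = client.embeddings.create(model="text-embedding-ada-002", input="You own your data.")
print(len(emb.data[0].embedding))
```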

Additional Notes:

  • Performance/throughput varies with the inference pipeline chosen; C/C++-based pipelines such as llama.cpp generally give faster inference and better performance. Read the LocalAI docs for the most up-to-date information.

Official Links