VoidLLM icon

VoidLLM

Privacy-first LLM proxy and AI gateway — load balancing, multi-provider routing, API key management, usage tracking, rate limiting. Self-hosted. Zero knowledge of your prompts.

VoidLLM screenshot 1

Cost / License

  • Free
  • Open Source

Platforms

  • Windows
  • Mac
  • Linux
  • Self-Hosted
  • Docker

VoidLLM information

  • Developed by

    voidmind-io (DE)
  • Licensing

    Open source and free.
  • Written in

  • Alternatives

    3 alternatives listed
  • Supported Languages

    • English

AlternativeTo Categories

  • AI Tools & Services
  • Development

GitHub repository

  • 26 Stars
  • 3 Forks
  • 5 Open Issues
  • Updated
View on GitHub
VoidLLM was added to AlternativeTo by christianromeni, and this page has been updated since.

What is VoidLLM?

VoidLLM is an LLM gateway and proxy designed to give developers full control over how large language models are accessed, routed, and secured.

It acts as a unified entry point for multiple LLM providers (local or cloud), exposing a single, consistent API while handling routing, load balancing, and access control behind the scenes.

VoidLLM is built with a strong focus on privacy and transparency: it does not log prompts by default and gives you complete control over data flow, making it suitable for sensitive or enterprise environments.
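A gateway can still track usage without knowing prompt contents by logging only request metadata (model, prompt size, status) and never the prompt body. A minimal sketch of that pattern, not VoidLLM's actual logging code:

```python
import json
import time

def log_request(model: str, prompt: str, status: int) -> str:
    """Build a usage-tracking log line that never contains the prompt text,
    only its length. (Illustrative sketch, not VoidLLM's logging code.)"""
    entry = {
        "ts": time.time(),
        "model": model,
        "prompt_chars": len(prompt),  # size for billing/usage, no content
        "status": status,
    }
    return json.dumps(entry)
```

The log line is useful for usage dashboards and rate accounting while the prompt itself stays zero-knowledge to anyone reading the logs.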

Key features include:

  • Unified API layer for multiple LLM providers (OpenAI-compatible, local models, etc.)
  • MCP (Model Context Protocol) support for structured tool access
  • Proxy & gateway architecture for routing, filtering, and managing LLM traffic
  • RBAC and access control for teams and organizations
  • Pluggable backends (local models, self-hosted, or cloud APIs)
  • No vendor lock-in — bring your own models and providers
  • Designed for self-hosting and full infrastructure control
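The rate limiting named in the tagline is commonly implemented as a per-key token bucket: each key accrues tokens at a steady rate up to a burst cap, and a request is admitted only if a token is available. The class below is a generic sketch of that technique, not VoidLLM's implementation:

```python
import time

class TokenBucket:
    """Minimal per-key rate limiter of the kind an LLM gateway might apply.
    (Hypothetical sketch; VoidLLM's real limiter may differ.)"""

    def __init__(self, rate: float, burst: int):
        self.rate = rate            # tokens refilled per second
        self.burst = burst          # maximum bucket size
        self.tokens = float(burst)  # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Admit one request if a token is available, else reject."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A gateway would keep one bucket per API key (or per team), which is what lets usage tracking and rate limiting share the same per-key accounting.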

VoidLLM is ideal for developers and teams who want to:

  • centralize LLM usage
  • enforce security and compliance policies
  • avoid exposing API keys directly
  • maintain full ownership of their data and AI workflows
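"Avoid exposing API keys directly" usually works via gateway-issued virtual keys: clients authenticate with a key the gateway minted, and the real provider credential never leaves the self-hosted server. A hypothetical sketch (all key values and table names invented for illustration):

```python
# Real provider credentials live only on the self-hosted gateway.
PROVIDER_KEYS = {"openai": "sk-real-example"}

# Gateway-issued virtual keys handed out to clients; each maps to a
# provider plus ownership metadata for usage tracking and revocation.
VIRTUAL_KEYS = {"vk-team-a": {"provider": "openai", "team": "a"}}

def resolve_key(virtual_key: str) -> str:
    """Swap an incoming virtual key for the real provider credential,
    rejecting unknown or revoked keys."""
    meta = VIRTUAL_KEYS.get(virtual_key)
    if meta is None:
        raise PermissionError("unknown or revoked key")
    return PROVIDER_KEYS[meta["provider"]]
```

Revoking a team's access then means deleting one row in the virtual-key table; the provider credential itself never needs rotating because clients never saw it.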

Official Links