llama.cpp
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud.
Cost / License
- Free
- Open Source (MIT)
Platforms
- Windows
- Mac
- Linux
- Docker
- Homebrew
- Nix Package Manager
- MacPorts
- Self-Hosted
What is llama.cpp?
- Plain C/C++ implementation without any dependencies
- Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
- AVX, AVX2, AVX512 and AMX support for x86 architectures
- RVV, ZVFH, ZFH, ZICBOP and ZIHINTPAUSE support for RISC-V architectures
- 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
- Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
- Vulkan and SYCL backend support
- CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity
The llama.cpp project is the main playground for developing new features for the ggml library.