

node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
Cost / License
- Free
- Open Source (MIT)
Platforms
- Windows
- Mac
- Linux
- Android
What is node-llama-cpp?
Features
- Run LLMs locally on your machine
- Metal, CUDA and Vulkan support
- Pre-built binaries are provided, with a fallback to building from source without node-gyp or Python
- Adapts to your hardware automatically; no need to configure anything
- A complete suite of everything you need to use LLMs in your projects
- Use the CLI to chat with a model without writing any code
- Up-to-date with the latest llama.cpp. Download and compile the latest release with a single CLI command
- Enforce a model to generate output in a parseable format, like JSON, or even force it to follow a specific JSON schema
- Provide a model with functions it can call on demand to retrieve information or perform actions
- Embedding support
- Great developer experience with full TypeScript support, and complete documentation
- Much more
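
As a rough sketch of the developer experience described above (assuming node-llama-cpp's v3 API and a GGUF model file already downloaded locally; the model path below is a placeholder), a minimal chat session might look like:

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

// getLlama() selects the best available compute backend
// (Metal, CUDA, Vulkan, or CPU) automatically.
const llama = await getLlama();

// Placeholder path: point this at a GGUF model you have downloaded.
const model = await llama.loadModel({
    modelPath: "path/to/model.gguf"
});

const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

const answer = await session.prompt("What is a llama?");
console.log(answer);
```

Since the example depends on a locally available model file, treat it as a starting point rather than a copy-paste recipe; the project's documentation covers downloading models via the CLI and the JSON-schema and function-calling options.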

