

node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
Cost / License
- Free
- Open Source
Platforms
- Windows
- Mac
- Linux
- Android
Tags
- cmake-js
- grammar
- cuda
- metal
- prebuilt-binaries
- gpu
- cmake
- vulkan
- catai
- JSON Schema
- llama-cpp
- bindings
- function-calling
- AI
- Embedding
- llama
- gguf
What is node-llama-cpp?
Features
- Run LLMs locally on your machine
- Metal, CUDA and Vulkan support
- Pre-built binaries are provided, with a fallback to building from source without node-gyp or Python
- Adapts to your hardware automatically; no need to configure anything
- A complete suite of everything you need to use LLMs in your projects
- Use the CLI to chat with a model without writing any code
- Up-to-date with the latest llama.cpp. Download and compile the latest release with a single CLI command
- Enforce a model to generate output in a parseable format, like JSON, or even force it to follow a specific JSON schema
- Provide a model with functions it can call on demand to retrieve information or perform actions
- Embedding support
- Great developer experience with full TypeScript support, and complete documentation
- Much more


