AI Dev Gallery is an open-source app designed to help Windows developers add AI to their apps with local models and APIs. It showcases AI scenarios and models such as Phi, Mistral, Stable Diffusion, and Whisper, and is listed as a large language model (LLM) tool in the AI Tools & Services category. There are more than 10 alternatives to AI Dev Gallery across a variety of platforms, including Windows, Linux, Mac, Android, and iPhone. The best AI Dev Gallery alternative is Ollama, which is both free and open source. Other great apps like AI Dev Gallery are AnythingLLM, A1111 Stable Diffusion WEB UI, Comfy, and InvokeAI.

Experience the power of RWKV models directly on your device. Completely offline, privacy-first, and efficient. No internet required.

NodeTool is a playground for AI that uses a visual canvas to connect different AI tools - like GPT, image creators, and video generators - into one seamless workflow. Instead of jumping between five different apps to write a script, generate an image, and turn it into a video...

A fast AI video generator for the GPU poor. Supports Wan 2.1/2.2, Qwen Image, Hunyuan Video, LTX Video, and Flux.

Draw Things provides a comprehensive yet easy-to-use mobile and desktop solution for AI-based art generation. It packs all the power of Stable Diffusion into a sleek iOS and Mac app that lets you create, upscale, and edit AI art, entirely offline, free, and privacy-safe.

SwarmUI (formerly StableSwarmUI) is a modular Stable Diffusion web user interface with an emphasis on making power tools easily accessible, high performance, and extensibility.

📱 The first fully functional, standalone AI assistant for mobile devices with powerful tool-calling capabilities 📱

LLM Hub is an open-source Android app for on-device LLM chat and image generation. It is optimized for mobile use (CPU/GPU/NPU acceleration) and supports multiple model formats, so you can run powerful models locally and privately.

Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs.

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware, locally and in the cloud.