Run large language models locally on consumer hardware. An open-source ecosystem of models tuned for assistant-style chat, with no internet connection needed.
High-performance C/C++ port of OpenAI's Whisper speech-recognition model. Runs locally on CPU, GPU, or Apple Silicon with no internet required.
Free, open-source alternative to the OpenAI API. Run LLMs and generate images, audio, and more locally or on-prem, with no GPU required.
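Because the tool above advertises an OpenAI-compatible API, a client only needs to point a standard chat-completions request at a local endpoint. A minimal sketch of the request shape; the host, port, and model name below are illustrative assumptions, not details from this list:

```python
import json

# Hypothetical local endpoint: the /v1/chat/completions path follows the
# OpenAI API convention, but the host and port depend on how the local
# server was started.
BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat-completions payload."""
    return {
        "model": model,  # name of a locally loaded model (illustrative)
        "messages": [
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request("local-model", "Say hello in one sentence.")
body = json.dumps(payload)

# Any HTTP client can send it, e.g.:
#   curl $BASE_URL -H "Content-Type: application/json" -d "$body"
print(body)
```

Because the request format matches OpenAI's, existing OpenAI client libraries usually work unchanged by overriding their base URL to the local server.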
LLM inference in pure C/C++. Run LLaMA and other models on consumer hardware with CPU and GPU support. The engine behind many local AI apps.
Desktop app to discover, download, and run local LLMs. User-friendly GUI for running open-source models on your computer.
All-in-one AI desktop app for chatting with documents via retrieval-augmented generation (RAG) and running agents. Supports any LLM and runs fully offline.
Fast, local neural text-to-speech system. Supports dozens of languages and voices, runs entirely offline with low resource usage.
Image-generation software inspired by Stable Diffusion and Midjourney. Minimal setup, offline capable, no GPU tweaking needed.
Run large language models locally. Get up and running with LLaMA, Mistral, Gemma, and other open models using a single command.
Open-source Cursor alternative: an AI-powered code editor built on VS Code with chat, autocomplete, and inline editing.
Open-source AI code assistant for VS Code and JetBrains. Connects to any LLM for autocomplete, chat, and inline edits directly in your IDE.
An open-source ChatGPT alternative that runs 100% offline on your computer. Supports multiple LLMs, including LLaMA and Mistral.