TypeScript toolkit for building AI-powered applications with React, Next.js, Vue, Svelte, and Node.js. Provides a unified API for multiple AI providers, with streaming support.
LLM inference in pure C/C++. Run LLaMA and other models on consumer hardware with CPU and GPU support. The engine behind many local AI apps.
Hugging Face's state-of-the-art ML library. Access thousands of pretrained models for NLP, computer vision, audio, and multimodal tasks.
Drag-and-drop UI for building customized LLM flows. Build chatbots, agents, and RAG apps without writing code, using a visual node editor.
Microsoft's framework for building multi-agent conversational AI systems. Enables agents to chat with each other to solve tasks.
Framework for orchestrating role-playing autonomous AI agents. Build teams of AI agents that collaborate to solve complex tasks.
Open-source framework for building production-ready LLM applications, RAG pipelines, and search systems with composable components.
Run large language models locally. Get up and running with LLaMA, Mistral, Gemma, and other open models using a single command.
Data framework for connecting custom data sources to LLMs. Build RAG applications, agents, and workflows over your data.
Framework for developing applications powered by LLMs. Build context-aware reasoning applications with chains, agents, and retrieval.