llama.cpp
LLM inference in pure C/C++. Run LLaMA and other models on consumer hardware with CPU and GPU support. The engine behind many local AI apps.
Introduction
No description yet; to be completed after vendor submission.
Information
- Website: github.com
- Published date: 2026/03/05