Ollama
Run LLMs locally
Get up and running with Llama, Mistral, and other open-source large language models through a simple CLI and API.
GitHub Stars: 110.0k
Pricing: Free
Self-Hostable: Yes
Features
- ✓ Run LLMs locally
- ✓ OpenAI-compatible API (drop-in client sketch after this list)
- ✓ Model library (Llama, Mistral, Gemma, etc.)
- ✓ Custom model creation via Modelfiles (sketch below)
- ✓ GPU acceleration
- ✓ REST API (example below)
- ✓ Cross-platform (Mac, Linux, Windows)
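
The REST API listens on http://localhost:11434 by default. A minimal sketch of a non-streaming call to the generate endpoint, assuming the server is running and llama3.2 (an example model name) has already been pulled:

```python
import requests

# Ollama's default local endpoint; assumes `ollama serve` is running
# and the model was fetched beforehand with `ollama pull llama3.2`.
OLLAMA_URL = "http://localhost:11434/api/generate"

response = requests.post(
    OLLAMA_URL,
    json={
        "model": "llama3.2",   # any model shown by `ollama list`
        "prompt": "Why is the sky blue?",
        "stream": False,       # one JSON object instead of a token stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the generated text
```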
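
Because the server also exposes OpenAI-compatible endpoints under /v1, existing OpenAI client code can be pointed at the local instance just by swapping the base URL. A sketch using the official openai Python package; the api_key value is a placeholder that the client library requires and Ollama ignores:

```python
from openai import OpenAI

# Reuse the standard OpenAI client against the local Ollama server.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

chat = client.chat.completions.create(
    model="llama3.2",  # switch models without touching the rest of the code
    messages=[{"role": "user", "content": "Explain Ollama in one sentence."}],
)
print(chat.choices[0].message.content)
```

This is also what makes easy model switching practical: the calling code stays identical while only the model name changes.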
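
Custom models are defined with a Modelfile, Ollama's own build format. A minimal sketch that layers a sampling parameter and a system prompt onto an existing base model (the names here are illustrative):

```
# Modelfile: derive a custom model from a locally available base model.
FROM llama3.2

# Override a default sampling parameter.
PARAMETER temperature 0.7

# Bake a persona into every conversation with this model.
SYSTEM """
You are a concise technical assistant.
"""
```

It would then be built and run with `ollama create tech-assistant -f Modelfile` followed by `ollama run tech-assistant` (tech-assistant being a made-up name).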
Pros
- + Completely free
- + No API costs
- + Data stays local
- + Easy model switching
- + Active development
Cons
- − Requires capable hardware (RAM/VRAM needs scale with model size)
- − Limited to open-weight models
- − No cloud fallback