Ollama
Description
Ollama is a local model server that runs open LLMs on your hardware and exposes a simple HTTP API. Because prompts, responses, and data never leave your machines, it is a natural fit for privacy-first AI deployments.
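To give a sense of how simple the API is, here is a minimal sketch of a one-shot completion request against the default endpoint (`http://localhost:11434/api/generate`). The model name `llama3` is only an example and must already be pulled:

```python
# Minimal sketch: one-shot completion against a local Ollama server.
# Assumes Ollama is listening on its default port (11434) and that a
# model named "llama3" has been pulled; swap in any model you have.
import json
import urllib.request

def generate(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(generate("Why is the sky blue?"))
```

With `"stream": False` the server returns a single JSON object whose `response` field holds the full completion; omit it to receive the default token-by-token stream instead.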
Overview
Once a model has been pulled, Ollama serves it to clients such as Open WebUI (for chat) and Flowise (for workflows). Pulled models are cached locally for quick reuse, so the server can run fully offline when required.
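To see what is already in the local cache and available for immediate reuse, the `/api/tags` endpoint lists every downloaded model. A minimal sketch, again assuming the default port:

```python
# Minimal sketch: list models already cached by the local Ollama server.
# Assumes the default endpoint http://localhost:11434.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.loads(resp.read())

for model in tags.get("models", []):
    # Each entry includes the model name and its on-disk size in bytes.
    print(model["name"], model.get("size"))
```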
Features
Run popular open models (chat, code, embeddings) locally (see the embeddings sketch after this list)
Simple, predictable HTTP API for developers
Local caching to avoid repeated downloads
Works seamlessly with Open WebUI and Flowise
Offline-capable for air-gapped deployments
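As referenced in the first item above, embeddings use the same local HTTP API. A minimal sketch via the long-standing `/api/embeddings` endpoint; the model name `nomic-embed-text` is an assumption and must already be pulled:

```python
# Minimal sketch: request an embedding from a locally served model.
# Assumes the default endpoint and that an embedding model such as
# "nomic-embed-text" (an example, not a requirement) has been pulled.
import json
import urllib.request

payload = json.dumps({
    "model": "nomic-embed-text",
    "prompt": "Ollama keeps prompts and data on your own hardware.",
}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:11434/api/embeddings",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    embedding = json.loads(resp.read())["embedding"]

print(len(embedding))  # dimensionality depends on the model
```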
Further Resources
Ollama — https://ollama.com
Ollama Model Library — https://ollama.com/library