Svc-ai-ollama Role¶
Description: Installs Ollama, a local model server for running open LLMs with a simple HTTP API.
Variables¶
author: Kevin Veen-Birkenbach
description: Installs Ollama — a local model server for running open LLMs with a simple HTTP API.
license: Infinito.Nexus NonCommercial License
license_url: https://s.infinito.nexus/license
company: Kevin Veen-Birkenbach Consulting & Coaching Solutions (https://www.veen.world)
galaxy_tags: ['ai', 'llm', 'inference', 'offline', 'privacy', 'self-hosted', 'ollama']
repository: https://s.infinito.nexus/code
issue_tracker_url: https://s.infinito.nexus/issues
documentation: https://s.infinito.nexus/code/
logo: {'class': 'fa-solid fa-microchip'}
run_after: []
README¶
Ollama¶
Description¶
Ollama is a local model server that runs open LLMs on your hardware and exposes a simple HTTP API. It’s the backbone for privacy-first AI: prompts and data stay on your machines.
Overview¶
After the first model pull, Ollama serves models to clients like Open WebUI (for chat) and Flowise (for workflows). Models are cached locally for quick reuse and can run fully offline when required.
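A client talking to the server might look like the following sketch. It assumes Ollama's default listen address (`http://localhost:11434`) and its `/api/generate` route; the model name `llama3` is illustrative and must already be pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address


def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for the /api/generate route."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str, base_url: str = OLLAMA_URL) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running server and a pulled model, e.g. `ollama pull llama3`):
#   print(generate("llama3", "Why does local inference help privacy?"))
```

Because everything goes over plain HTTP on localhost, any language with an HTTP client can integrate the same way; no SDK is required.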
Features¶
Run popular open models (chat, code, embeddings) locally
Simple, predictable HTTP API for developers
Local caching to avoid repeated downloads
Works seamlessly with Open WebUI and Flowise
Offline-capable for air-gapped deployments
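The local cache can also be inspected over the API. A minimal sketch, assuming the default endpoint and Ollama's `/api/tags` route for listing downloaded models (the helper `is_cached` is hypothetical, shown here to illustrate skipping redundant pulls):

```python
import json
import urllib.request


def list_cached_models(base_url: str = "http://localhost:11434") -> list:
    """Return the names of models already present in the local Ollama cache."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        data = json.loads(resp.read())
    return [m["name"] for m in data.get("models", [])]


def is_cached(name: str, cached: list) -> bool:
    """Check whether a model (e.g. 'llama3' or 'llama3:latest') is already
    downloaded, matching either the full name or the name without a tag."""
    return any(c == name or c.split(":")[0] == name for c in cached)


# Example (requires a running server):
#   models = list_cached_models()
#   if not is_cached("llama3", models):
#       print("run `ollama pull llama3` first")
```

In an air-gapped deployment, models are pulled once on a connected machine and copied over; after that, the cache check above succeeds without any network access.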
Further Resources¶
Ollama — https://ollama.com
Ollama Model Library — https://ollama.com/library