
Ollama Configuration#

What Was Established#

Ollama serves as the primary model backend across the fleet. The configuration focuses on making the API reachable from other nodes (such as the T480 orchestrator) and on running the Gemma 3 model family.

Key Decisions#

To allow remote access from other machines in the homelab (e.g., from a Docker container or another PC), the Ollama service must be configured to listen on all network interfaces; by default it binds only to localhost (127.0.0.1:11434).

Current Configuration#

Network Exposure#

Create a systemd override to set OLLAMA_HOST to 0.0.0.0:

sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo nano /etc/systemd/system/ollama.service.d/override.conf

Add the following content:

[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"

Apply changes:

sudo systemctl daemon-reload
sudo systemctl restart ollama
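
If the override does not appear to take effect, the same binding can be exercised in the foreground for debugging. A sketch, assuming the ollama binary is on PATH and the managed unit is stopped first so port 11434 is free:

```shell
# Stop the managed service so the port is free, then run the
# server in the foreground with the same environment variable.
sudo systemctl stop ollama
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```

Restart the systemd unit afterwards with sudo systemctl start ollama.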

Verify listening status:

ss -tlnp | grep 11434
# Should show 0.0.0.0:11434
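
Once the service is listening on all interfaces, reachability should also be confirmed from another node. A minimal sketch; the host address 192.168.1.50 is an assumption, substitute the Ollama machine's actual address:

```shell
# Probe the TCP port from a remote machine using bash's /dev/tcp,
# then hit the API's tags endpoint to confirm Ollama is answering.
HOST=192.168.1.50   # hypothetical address of the Ollama node
PORT=11434

if timeout 2 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
  echo "port open"
  curl -s "http://$HOST:$PORT/api/tags"   # lists installed models as JSON
else
  echo "port closed or filtered"
fi
```

If the port probe fails while ss shows 0.0.0.0:11434 locally, check for a host firewall (ufw, firewalld) between the two nodes.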

Model Usage#

  • Gemma 3 (27B): Primary model for heavy reasoning and batch processing. Supports vision input and a 128K-token context window.
  • Gemma 3n (E4B/E2B): Lightweight models used for routing, classification, and simple summarization on the orchestrator (T480) node.
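
Models are fetched and run through the Ollama CLI. A sketch of the typical workflow; the tag names below are assumptions, so check ollama list and the model library for the tags actually in use on each node:

```shell
# Pull the heavy model on the GPU node and a lightweight variant
# on the orchestrator, then run an ad-hoc prompt.
ollama pull gemma3:27b        # assumed tag for the 27B model
ollama pull gemma3n:e2b       # assumed tag for the E2B variant
ollama run gemma3:27b "Summarize the homelab network layout."
ollama list                   # show what is installed locally
```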

Related#

  • Open WebUI Deployment
  • AI Infrastructure Overview
  • Pavilion (AI PC) Configuration

Sources#

Homelab AI - 2026-04-13 · ingested/chats/Homelab AI - 2026-04-13