//nbkelley /homelab

hugoboss#

What Was Established#

hugoboss (192.168.1.237) is a lightweight Ubuntu server dedicated to Hugo static site development. It serves as the authoring and scaffolding machine for all Hugo-based sites in the homelab. A full machine audit was conducted 2026-05-01; repos were synced and organised 2026-05-02.

Identity#

  • Hostname: hugoboss
  • IP: 192.168.1.237 (VLAN Gandalf)
  • OS: Ubuntu 24.04.3 LTS (Noble Numbat), kernel 6.8.0-88-generic
  • User: iluvatar
  • Disk: 63G total, ~12G used, 49G free
  • Memory: 3.8G total, ~2.7G available

Hugo Installation#

Hugo is installed via Homebrew (linuxbrew), not via apt or direct binary download.

Wiki System - Architecture#

What Was Established#

The wiki system is designed around the LLM wiki pattern (Karpathy): raw sources (chat transcripts, notes, docs) are crystallized into structured markdown pages, embedded into pgvector, and retrieved semantically by agents in future sessions. A dedicated LXC (nk-wiki) will host the wiki VM, separating wiki infrastructure from other services.
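The retrieval side of this pattern can be sketched in SQL. This is a hedged illustration, not the deployed schema — the table and column names (`wiki_pages`, `embedding`) are assumptions; only the per-wiki namespaces and the 768-dim nomic-embed-text vectors come from this wiki.

```sql
-- Hedged sketch only: table and column names are illustrative assumptions.
-- Assumes pgvector is installed and embeddings are 768-dim (nomic-embed-text).
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE IF NOT EXISTS wiki_pages (
    id        bigserial PRIMARY KEY,
    namespace text NOT NULL,   -- one namespace per wiki (homelab, work, ...)
    path      text NOT NULL,
    content   text NOT NULL,
    embedding vector(768)
);

-- Top five semantically similar pages in one namespace
-- (<=> is pgvector's cosine-distance operator)
SELECT path, 1 - (embedding <=> $1) AS similarity
FROM wiki_pages
WHERE namespace = 'homelab'
ORDER BY embedding <=> $1
LIMIT 5;
```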

Multi-Wiki Namespace Design#

Three wikis are planned, each with its own namespace in pgvector:

Servarr - Media Automation Stack#

Overview#

Servarr is a full VM at 192.168.1.112 (hostname: servarr) running a Docker Compose media automation stack. All services depend on a NAS mount at /data for media storage. Download clients (qbittorrent, nzbget) and the indexer manager (prowlarr) route through a Gluetun VPN container via network_mode: service:gluetun.

Note: This VM is distinct from Varda (192.168.1.131), which is a separate web server hosting ilmare.nbkelley.com.

VM Specs#

| Detail | Value |
| --- | --- |
| Hostname | servarr |
| IP | 192.168.1.112 |
| OS | Ubuntu 24.04.4 LTS (Noble) |
| Kernel | 6.8.0-107-generic |
| CPU | QEMU Virtual CPU, 4 vCPUs |
| RAM | 7.8 GB |
| Disk | 63 GB root (/dev/sda2 ext4, 38% used) |
| Hypervisor | Proxmox (Minas Tirith) |

Container Inventory#

Servarr Stack (/docker/servarr/compose.yaml)#

Network: servarrnetwork (172.39.0.0/24)
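The VPN-routing pattern can be sketched in Compose form. This is an illustration of the `network_mode: service:gluetun` wiring, not the actual compose.yaml — image tags and service details are assumptions; only the network name, the subnet, and the routing pattern come from this page.

```yaml
networks:
  servarrnetwork:
    ipam:
      config:
        - subnet: 172.39.0.0/24

services:
  gluetun:
    image: qmcgaw/gluetun        # VPN gateway container
    cap_add:
      - NET_ADMIN
    networks:
      - servarrnetwork

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    # Shares gluetun's network namespace: all traffic exits through the VPN,
    # and qbittorrent's ports must be published on the gluetun service.
    network_mode: service:gluetun
    depends_on:
      - gluetun
```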

Varda Server (Ubuntu)#

Overview#

Varda is an Ubuntu Server VM at 192.168.1.131 used for web hosting and development. It serves as the primary host for the ilmare.nbkelley.com website.

Note: Varda is a different VM from servarr (192.168.1.112), which runs the media automation stack. These were previously conflated in some wiki pages.

Key Details#

  • Hypervisor: Proxmox (Minas Tirith)
  • Management: Cockpit GUI (Port 9090)
  • Web Server: Nginx for static content
  • Firewall: UFW enabled (ports 22, 80, 443)
  • Development: VS Code Remote-SSH

Nginx Site Structure#

/var/www/ilmare.nbkelley.com/
├── Assets/
│   └── Images/  (Case-sensitive: capital I)
├── html/
│   └── index.html
├── Scripts/
│   └── scripts.js
└── Styles/
    └── styles.css

Nginx Server Block#

/etc/nginx/sites-available/ilmare.nbkelley.com:
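The server block contents were not captured here. A representative block is sketched below — every directive is an assumption based on the directory layout shown above, not the actual config:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name ilmare.nbkelley.com;

    # Assumption: site root at the top-level directory; since index.html
    # lives in html/, the real config may instead point root at .../html
    root /var/www/ilmare.nbkelley.com;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

If HTTPS is served (UFW allows 443), a second listen directive with TLS certificates would also apply.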

AI Infrastructure Overview#

What Was Established#

The homelab is transitioning into a multi-node agentic architecture, utilizing a mix of existing laptops, desktops, and a future Mac Studio to handle different tiers of LLM workloads (Batch vs. Interactive).

Key Decisions#

Nodes are specialized by their hardware capabilities (VRAM and CPU/RAM) to optimize for cost and performance:

  • Inference Node (Batch/Heavy + Embeddings): HP Pavilion 15t-e300 — hostname nk-celebrimbor, IP 192.168.2.192. Intel i7, 32GB RAM, NVIDIA MX550 (2GB VRAM, CUDA disabled). Runs gemma4:e4b for monitoring pipeline synthesis (~15-18 t/s, CPU-only) and nomic-embed-text for wiki semantic embeddings (768-dim, via Ollama on port 11434).
  • Orchestrator Node: Thinkpad T480. Intel i5/i7 8th Gen, 32GB RAM. Running headless Ubuntu. Hosts n8n and lightweight models (Gemma 4 E4B) for routing and decision-making.
  • Interactive Node (Potential): ROG Zephyrus (GU501). Intel i7, NVIDIA GTX 1080 Max-Q (8GB VRAM). Ideal for 7B/8B models requiring high tokens-per-second for real-time chat.
  • Primary Reasoning Node (Deployed 2026-04-24): Mac Studio M1 Max, 64GB Unified Memory — hostname Legolas, IP 192.168.1.45. Handles all wiki pipeline LLM calls: gemma4:e2b (text cleaning), qwen3.6:35b-a3b-coding-nvfp4 (JSON crystallization), minicpm-v:8b (PDF OCR/vision). Fast interactive inference — 31B models at ~25+ t/s vs Pavilion’s ~15 t/s CPU-only. See Mac Studio.
  • Parallelism Nodes: Various i5 8th Gen desktops. 32GB RAM, no GPU. Used for distributed pipeline stages or additional lightweight model instances.

Current Configuration#

  • Legolas (Mac Studio): Ollama at 192.168.1.45:11434. Running gemma4:e2b, qwen3.6:35b-a3b-coding-nvfp4, minicpm-v:8b for wiki pipeline. Deployed 2026-04-24.
  • nk-celebrimbor (Pavilion): headless Ubuntu, Ollama CPU-only (CUDA disabled — MX550 2GB VRAM too small). Running gemma4:e4b at ~15-18 t/s for hourly monitoring pipeline; nomic-embed-text for wiki embeddings.
  • T480: planned orchestrator role not yet active.

See also: Ollama Configuration, Open WebUI Deployment, Mac Studio, Pavilion (AI PC) Configuration

PostgreSQL#

What Was Established#

PostgreSQL 16 runs as an LXC on Proxmox, serving as the central database for n8n, the monitoring pipeline, and pgvector embeddings for the wiki.

Deployment#

| Detail | Value |
| --- | --- |
| LXC host | postgresql |
| Container ID | 108 |
| IP | 192.168.1.57 |
| Port | 5432 |
| Version | 16.13 (Debian 16.13-1.pgdg13+1) |
| OS | Debian 13 (unprivileged LXC) |
| Disk | 4 GB (App-Storage ZFS pool) |
| RAM | 1024 MiB |
| CPU | 1 core |
| Installed via | tteck Proxmox helper scripts |
| Web UI | Adminer — not yet installed |
| SSH user | iluvatar (sudo, PermitRootLogin no) |

Databases and Users#

| Database | User | Purpose |
| --- | --- | --- |
| homelab | homelab | n8n workflows, monitoring pipeline, pgvector wiki embeddings |

Setup commands (run as postgres user)#

CREATE DATABASE homelab;
CREATE USER homelab WITH PASSWORD '<password>';
GRANT ALL PRIVILEGES ON DATABASE homelab TO homelab;
GRANT ALL ON SCHEMA public TO homelab;  -- required for n8n
\q

The GRANT ALL ON SCHEMA public step is required — without it n8n fails to start with a permissions error even though the database and user exist.

wiki-llm#

What Was Established#

Dedicated Ubuntu 24.04 VM for hosting the homelab wiki system and Claude Code sessions. Chosen as a VM rather than an LXC for stronger isolation (wiki infrastructure handles credentials and embeddings).

Deployment#

| Detail | Value |
| --- | --- |
| Hostname | wiki-llm |
| IP | 192.168.1.206 |
| VLAN | Gandalf (192.168.1.x) |
| OS | Ubuntu 24.04 |
| CPU | 2 cores |
| RAM | 4 GB |
| Type | VM (Proxmox) |
| SSH user | iluvatar (sudo, PermitRootLogin no) |

Access#

  • VS Code Remote SSH: Primary method for Claude Code sessions — VS Code connects to wiki-llm via remote SSH, giving Claude Code native filesystem access to /opt/wiki/homelab/
  • Direct SSH: ssh iluvatar@192.168.1.206
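For convenience, a `~/.ssh/config` entry can wrap the direct-SSH details. The `wiki-llm` alias here is a suggestion, not an existing config:

```
Host wiki-llm
    HostName 192.168.1.206
    User iluvatar
```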

Wiki File Structure#

/opt/wiki/
  homelab/        ← git repo, all homelab wiki pages and skills
  work/           ← git repo, work/princelobel wiki + pipeline scripts
  projects/       ← git repo, projects wiki (planned)
  personal/       ← git repo, personal wiki (Gemma-only)
  raw-sources/    ← symlink to /mnt/wiki-nas/LLMWiki
  skills-reference/  ← clone of vanillaflava/llm-wiki-claude-skills (reference only)

Each wiki directory is an independent git repository (git init’d) for clean version history per namespace.
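The per-namespace layout can be reproduced with a short loop — sketched here against a temporary directory rather than the real /opt/wiki root:

```shell
# Initialize one independent git repo per wiki namespace.
# Uses a temp dir for illustration; the real root is /opt/wiki.
WIKI_ROOT="$(mktemp -d)"
for ns in homelab work projects personal; do
  mkdir -p "$WIKI_ROOT/$ns"
  git init --quiet "$WIKI_ROOT/$ns"
done
ls -d "$WIKI_ROOT"/*/.git   # one .git per namespace
```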

Windows VM Installation Troubleshooting#

What Was Established#

Troubleshooting guide for Windows installation when the local disk does not appear in the partitioning screen during setup.

Key Troubleshooting Steps#

  1. Check disk detection in BIOS/UEFI — If the disk doesn’t appear in BIOS, it’s a hardware issue (loose cable, faulty drive, wrong SATA port).

  2. Load storage drivers — Modern NVMe/RAID controllers may need a storage driver loaded during setup; use the “Load driver” option on the disk-selection screen (Shift + F10 opens a command prompt if diskpart is needed).

Node Exporter Deployment#

What Was Established#

node_exporter is used to expose system-level metrics (CPU, RAM, Disk, Network) from Linux hosts to the central Prometheus instance at 192.168.1.167.

Monitored Hosts#

| Host | IP | Port | Prometheus job | VLAN |
| --- | --- | --- | --- | --- |
| nk-celebrimbor (Pavilion) | 192.168.2.192 | 9100 | pavilion | mithrandir |
| wiki-llm | 192.168.1.206 | 9100 | wiki | gandalf |
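On the Prometheus side (192.168.1.167), the matching scrape config would look roughly like this — the job names and targets come from the table above; everything else is standard defaults and an assumption about the actual prometheus.yml:

```yaml
scrape_configs:
  - job_name: pavilion
    static_configs:
      - targets: ['192.168.2.192:9100']
  - job_name: wiki
    static_configs:
      - targets: ['192.168.1.206:9100']
```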

Installation#

Method A: apt (Ubuntu) — preferred for Ubuntu hosts#

sudo apt install prometheus-node-exporter
# Service auto-enabled; runs on port 9100

Method B: Manual binary (other distros)#

wget https://github.com/prometheus/node_exporter/releases/download/v1.11.1/node_exporter-1.11.1.linux-amd64.tar.gz
tar xvf node_exporter-1.11.1.linux-amd64.tar.gz
sudo cp node_exporter-1.11.1.linux-amd64/node_exporter /usr/local/bin/

Create /etc/systemd/system/node_exporter.service:
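A typical unit file for a manually installed binary is sketched below. The dedicated `node_exporter` service user is an assumption — create it first (e.g. `sudo useradd -rs /bin/false node_exporter`) or substitute an existing account:

```ini
[Unit]
Description=Prometheus Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl daemon-reload && sudo systemctl enable --now node_exporter`, and confirm metrics are exposed on port 9100 at /metrics.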