

Homelab Dashboard#

What Was Established#

A Node.js + Express dashboard is deployed at https://status.nbkelley.com, serving homelab monitoring data. All API calls are server-side — no internal IPs, credentials, or raw API responses reach the browser.

Deployment#

Detail                 Value
Public URL             https://status.nbkelley.com
Host                   proxy VM (192.168.1.222)
Port                   3002
Runtime                Node.js + Express, Docker container
compose.yaml location  /home/iluvatar/compose.yaml on proxy VM
App directory          /opt/homelab-dashboard/ on proxy VM
Routing                Cloudflare Tunnel → 127.0.0.1:3002

File Structure#

/opt/homelab-dashboard/
  server.js         ← Express app, all API logic
  package.json
  package-lock.json
  dockerfile
  start.sh
  node_modules/
  public/
    index.html
    styles.css
    app.js          ← frontend render engine

compose.yaml Entry#

homelab-dashboard:
  build: /opt/homelab-dashboard
  container_name: homelab-dashboard
  restart: unless-stopped
  network_mode: host
  environment:
    - PORT=3002

With network_mode: host, the app binds directly to host port 3002; any ports: mapping would be ignored under host networking, which is why the entry defines none.
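
A quick sanity check of that binding from the proxy VM (a verification sketch, not part of the deployment; assumes ss and curl are installed):

# on the proxy VM: confirm the dashboard is listening on host port 3002
ss -tlnp | grep 3002
# and that it answers locally, which is all the Cloudflare Tunnel needs
curl -I http://127.0.0.1:3002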


Jellyfin Docker Compose Reference#

Canonical copy of /docker/jellyfin/compose.yaml on the servarr VM (192.168.1.112).

Jellyfin and Jellyseerr run in a separate compose file from the main Servarr stack, on their own Docker network (jellyfin_default, 172.18.0.0/16).
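
To see which containers Docker attached to that auto-created network and what addresses they got, something like the following works (plain docker commands; a sketch for verification only):

docker network inspect jellyfin_default --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'
# or just confirm the subnet Docker assigned
docker network inspect jellyfin_default | grep -i subnet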

compose.yaml#

services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - JELLYFIN_PublishedServerUrl=http://192.168.1.112 #optional
    volumes:
      - ./config:/config
      - /data:/data
      - ./jellyfin-web/config.json:/usr/share/jellyfin/web/config.json
    devices:
      - /dev/dri:/dev/dri #Use for Intel QuickSync
    ports:
      - 8096:8096
      - 7359:7359/udp #Service Discovery
      - 1900:1900/udp #Client Discovery
    restart: unless-stopped
    labels:
      - deunhealth.restart.on.unhealthy=true
    healthcheck:
      test: curl -f http://localhost:8096/health || exit 1
      interval: 60s
      retries: 3
      start_period: 30s
      timeout: 10s
# Remove the Jellyfin service if installed directly on system.

  jellyseerr:
    container_name: jellyseerr
    image: fallenbagel/jellyseerr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
    volumes:
      - ./jellyseerr:/app/config
    ports:
      - 5055:5055
    restart: unless-stopped

Key Details#

  • Network: Uses the auto-created jellyfin_default bridge network (172.18.0.0/16). Jellyseerr reaches the *arr services via the host IP (192.168.1.112) and their exposed ports.
  • GPU Passthrough: /dev/dri:/dev/dri enables Intel QuickSync hardware-accelerated transcoding.
  • PUID/PGID: Hardcoded to 1000 (unlike the servarr stack which uses .env variables).
  • TZ: America/Los_Angeles (note: servarr stack uses America/New_York).
  • Healthcheck: already configured, and the deunhealth.restart.on.unhealthy=true label means the container is restarted automatically when it goes unhealthy (see the checks after this list).
  • Custom web config: ./jellyfin-web/config.json is bind-mounted into the container for UI customization (e.g., the Hinterflix help link).
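
Two quick checks that tie these details together (standard docker commands; container name as in the compose file above):

# health status as reported by Docker; this is what deunhealth reacts to
docker inspect --format '{{.State.Health.Status}}' jellyfin
# confirm the render device is visible inside the container (needed for QuickSync)
docker exec jellyfin ls -l /dev/dri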

Directory Layout#

/docker/jellyfin/
├── compose.yaml
├── config/          # Jellyfin server config
├── jellyfin-web/
│   └── config.json  # Custom web UI config
└── jellyseerr/      # Jellyseerr app config


MBTA Dashboard - Setup#

What Was Established#

Office transit dashboard deployed on a self-hosted Debian VM (PLT-MBTADisplay, 192.168.168.42). Nginx serves static files from /var/www/MBTADisplay/public and proxies /api/ requests to a Node/Express caching proxy on port 3000. API keys are stored server-side and never exposed to the browser. Process managed via pm2 with a systemd service.

Architecture#

Browser (Anthias/Desktop)
    → Nginx (:80) → / → static files (/var/www/MBTADisplay/public)
                   → /api/ → Node/Express proxy (:3000)
                                → MBTA v3 API
                                → OpenWeatherMap API
                                → RSS feeds
                                → Caches responses

Nginx Configuration#

server {
    listen 80;
    server_name transit.intra.plgt.com 192.168.168.42;

    root /var/www/MBTADisplay/public;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /api/ {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
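
After editing the site file, a typical test-and-reload plus a smoke test of both paths (the /api/ endpoint shown is illustrative; the real routes depend on server.js):

sudo nginx -t && sudo systemctl reload nginx
# static content
curl -I http://192.168.168.42/
# proxied API path (exact endpoint depends on server.js routes)
curl -s http://192.168.168.42/api/health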

Node/Express Proxy#

Setup#

mkdir -p /opt/mbta-proxy
cd /opt/mbta-proxy
npm init -y
npm install express node-fetch

API Key Management#

  • API keys stored in /opt/mbta-proxy/.env (see the sketch after this list)
  • Loaded via process.env.MBTA_API_KEY in server.js
  • pm2 started with --env flag to load .env file
  • Critical: API key must survive server.js overwrites from GitHub syncs
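
A minimal sketch of creating and locking down that file. MBTA_API_KEY is the name server.js reads; OWM_API_KEY is only a guess at the OpenWeatherMap variable name. Keeping the file in /opt/mbta-proxy, outside the git checkout, is what lets the key survive GitHub syncs.

cat > /opt/mbta-proxy/.env <<'EOF'
MBTA_API_KEY=replace-with-real-key
OWM_API_KEY=replace-with-real-key
EOF
# readable only by the account running the proxy
chmod 600 /opt/mbta-proxy/.env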

pm2 Process Manager#

pm2 start server.js --name mbta-proxy
pm2 save
pm2 startup systemd

systemd Service (/etc/systemd/system/pm2-administrator.service)#

[Unit]
Description=PM2 process manager
After=network.target

[Service]
Type=forking
User=administrator
ExecStart=/usr/local/bin/pm2 resurrect
ExecReload=/usr/local/bin/pm2 reload all
ExecStop=/usr/local/bin/pm2 kill
Restart=on-failure

[Install]
WantedBy=multi-user.target
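
Enabling the unit after creating it (standard systemd steps; pm2 status is run as administrator, since that user owns the pm2 daemon):

sudo systemctl daemon-reload
sudo systemctl enable --now pm2-administrator
# as administrator, confirm the saved process list was resurrected
pm2 status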

GitHub Deployment#

Repository#

  • Repo: https://github.com/bich-nguyen/MBTADisplay.git
  • Cloned to /var/www/MBTADisplay
  • Static files in public/ subdirectory
  • Server files in /opt/mbta-proxy/ (separate from web root)

Ownership#

sudo chown -R administrator:administrator /var/www/MBTADisplay

Note: www-data ownership breaks git operations run as the administrator user.
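
A routine content update then looks roughly like this, run as the administrator user (a sketch; it assumes the default branch is the one deployed):

cd /var/www/MBTADisplay
git pull --ff-only
# the pull only touches the web root; /opt/mbta-proxy (server.js and .env) is
# outside the checkout, so the API key is not overwritten by the sync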


Servarr - Media Automation Stack#

Overview#

Servarr is a full VM at 192.168.1.112 (hostname: servarr) running a Docker Compose media automation stack. All services depend on a NAS mount at /data for media storage. Download clients (qbittorrent, nzbget) and indexer (prowlarr) route through a Gluetun VPN container via network_mode: service:gluetun.

Note: This VM is distinct from Varda (192.168.1.131), which is a separate web server hosting ilmare.nbkelley.com.
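
Because every media container bind-mounts /data, a useful pre-flight check before (re)starting the stack is to confirm the NAS share is actually mounted (a sketch, not part of the documented setup; findmnt ships with Ubuntu):

findmnt /data || echo "/data is not mounted - containers would see an empty directory"
df -h /data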

VM Specs#

Detail      Value
Hostname    servarr
IP          192.168.1.112
OS          Ubuntu 24.04.4 LTS (Noble)
Kernel      6.8.0-107-generic
CPU         QEMU Virtual CPU, 4 vCPUs
RAM         7.8 GB
Disk        63 GB root (/dev/sda2 ext4, 38% used)
Hypervisor  Proxmox (Minas Tirith)

Container Inventory#

Servarr Stack (/docker/servarr/compose.yaml)#

Network: servarrnetwork (172.39.0.0/24)


Servarr Docker Compose Reference#

Canonical copy of /docker/servarr/compose.yaml on the servarr VM (192.168.1.112).

compose.yaml#

# Compose file for the *arr stack. Configuration files are stored in the
# directory you launch the compose file on. Change to bind mounts if needed.
# All containers are ran with user and group ids of the main user and
# group to aviod permissions issues of downloaded files, please refer
# the read me file for more information.

#############################################################################
# NOTICE: We recently switched to using a .env file. PLEASE refer to the docs.
# https://github.com/TechHutTV/homelab/tree/main/media#docker-compose-and-env
#############################################################################

networks:
  servarrnetwork:
    name: servarrnetwork
    ipam:
      config:
        - subnet: 172.39.0.0/24

services:
  # airvpn recommended (referral url: https://airvpn.org/?referred_by=673908)
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun # If running on an LXC see readme for more info.
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.2
    ports:
      - ${FIREWALL_VPN_INPUT_PORTS}:${FIREWALL_VPN_INPUT_PORTS} # airvpn forwarded port, pulled from .env
      - 8080:8080 # qbittorrent web interface
      - 6881:6881 # qbittorrent torrent port
      - 6789:6789 # nzbget
      - 9696:9696 # prowlarr
    volumes:
      - ./gluetun:/gluetun
    # Make a '.env' file in the same directory.
    env_file:
      - .env
    healthcheck:
      test: ping -c 1 www.google.com || exit 1
      interval: 20s
      timeout: 10s
      retries: 5
    restart: unless-stopped

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    restart: unless-stopped
    labels:
      - deunhealth.restart.on.unhealthy=true
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
      - WEBUI_PORT=8080 # must match "qbittorrent web interface" port number in gluetun's service above
      - TORRENTING_PORT=${FIREWALL_VPN_INPUT_PORTS} # airvpn forwarded port, pulled from .env
    volumes:
      - ./qbittorrent:/config
      - /data:/data
    depends_on:
      gluetun:
        condition: service_healthy
        restart: true
    network_mode: service:gluetun
    healthcheck:
      test: ping -c 1 www.google.com || exit 1
      interval: 60s
      retries: 3
      start_period: 20s
      timeout: 10s

  # See the 'qBittorrent Stalls with VPN Timeout' section for more information.
  deunhealth:
    image: qmcgaw/deunhealth
    container_name: deunhealth
    network_mode: "none"
    environment:
      - LOG_LEVEL=info
      - HEALTH_SERVER_ADDRESS=127.0.0.1:9999
      - TZ=${TZ}
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  nzbget:
    image: lscr.io/linuxserver/nzbget:latest
    container_name: nzbget
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./nzbget:/config
      - /data:/data
    depends_on:
      gluetun:
        condition: service_healthy
        restart: true
    restart: unless-stopped
    network_mode: service:gluetun

  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    container_name: prowlarr
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./prowlarr:/config
    restart: unless-stopped
    depends_on:
      gluetun:
        condition: service_healthy
        restart: true
    network_mode: service:gluetun

  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    restart: unless-stopped
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./sonarr:/config
      - /data:/data
    ports:
      - 8989:8989
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.3

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    restart: unless-stopped
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./radarr:/config
      - /data:/data
    ports:
      - 7878:7878
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.4

  lidarr:
    container_name: lidarr
    image: lscr.io/linuxserver/lidarr:latest
    restart: unless-stopped
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./lidarr:/config
      - /data:/data
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    ports:
      - 8686:8686
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.5

  bazarr:
    image: lscr.io/linuxserver/bazarr:latest
    container_name: bazarr
    restart: unless-stopped
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./bazarr:/config
      - /data:/data
    ports:
      - 6767:6767
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.6

# Newer additions to this stack feel. Remove the '#' to add the service.

  ytdl-sub:
    image: ghcr.io/jmbannon/ytdl-sub-gui:latest
    container_name: ytdl-sub
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
      - DOCKER_MODS=linuxserver/mods:universal-cron
    volumes:
      - ./ytdl-sub:/config
      - /data/youtube:/youtube
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.8
    restart: unless-stopped

#  jellyseerr:
#    container_name: jellyseerr
#    image: fallenbagel/jellyseerr:latest
#    environment:
#      - PUID=${PUID}
#      - PGID=${PGID}
#      - TZ=${TZ}
#    volumes:
#      - ./jellyseerr:/app/config
#    ports:
#      - 5055:5055
#    networks:
#      servarrnetwork:
#        ipv4_address: 172.39.0.9
#    restart: unless-stopped

Network Architecture#

172.39.0.0/24 (servarrnetwork)
├── 172.39.0.2  gluetun (VPN gateway)
│   ├── qbittorrent  (network_mode: service:gluetun)
│   ├── nzbget       (network_mode: service:gluetun)
│   └── prowlarr     (network_mode: service:gluetun)
├── 172.39.0.3  sonarr
├── 172.39.0.4  radarr
├── 172.39.0.5  lidarr
├── 172.39.0.6  bazarr
└── 172.39.0.8  ytdl-sub

Key Patterns#

  • VPN routing: qbittorrent, nzbget, and prowlarr use network_mode: service:gluetun to route all traffic through the VPN. They are NOT directly on servarrnetwork (see the egress check after this list).
  • Static IPs: Media managers (sonarr, radarr, lidarr, bazarr) use static IPs on servarrnetwork for predictable inter-container communication.
  • .env file: Secrets (PUID, PGID, TZ, FIREWALL_VPN_INPUT_PORTS, VPN credentials) are stored in .env in the same directory.
  • deunhealth: Monitors container health via Docker socket and restarts unhealthy containers with the deunhealth.restart.on.unhealthy=true label.
  • Config persistence: Each service’s config is stored in a subdirectory of /docker/servarr/ (e.g., ./sonarr, ./radarr).
  • Media storage: All containers bind-mount /data (SMB share from NAS at 192.168.1.137) for media access.
  • jellyseerr: Commented out here — deployed in /docker/jellyfin/compose.yaml instead.
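
A hedged way to confirm the VPN routing holds in practice: compare the egress IP seen from inside gluetun's network namespace with the VM's own egress IP (ipinfo.io is just one of several such services; the curl image is used only because it is a tiny one-off container):

# egress IP for anything sharing gluetun's network namespace (should be the VPN exit IP)
docker run --rm --network container:gluetun curlimages/curl -s https://ipinfo.io/ip
# egress IP of the servarr VM itself, for comparison (should differ)
curl -s https://ipinfo.io/ip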


Servarr Stack - Gluetun VPN Troubleshooting#

Historical note: This session was conducted on a machine at 192.168.1.30 (hostname possibly “Varda” at the time, directory ~/home/nbkelley/docker/servarr). The current production Servarr stack lives on the servarr VM at 192.168.1.112, directory /docker/servarr/. See Servarr - Media Automation Stack for current configuration. The troubleshooting patterns documented here remain applicable.

What Was Established#

This session documents the deployment and troubleshooting of the Servarr media automation stack (Sonarr, Prowlarr, qBittorrent) behind a Gluetun VPN container. The stack relies on network_mode: service:gluetun to route all container traffic through AirVPN.
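
When connectivity drops, gluetun's own health and recent logs are the first things worth checking (standard docker commands; container names as in the compose reference above):

docker inspect --format '{{.State.Health.Status}}' gluetun
docker logs --tail 50 gluetun
# anything on network_mode: service:gluetun loses connectivity whenever gluetun
# restarts, which is exactly what the deunhealth label is there to recover from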


PostgreSQL#

What Was Established#

PostgreSQL 16 runs as an LXC on Proxmox, serving as the central database for n8n, the monitoring pipeline, and pgvector embeddings for the wiki.

Deployment#

Detail         Value
LXC host       postgresql
Container ID   108
IP             192.168.1.57
Port           5432
Version        16.13 (Debian 16.13-1.pgdg13+1)
OS             Debian 13 (unprivileged LXC)
Disk           4 GB (App-Storage ZFS pool)
RAM            1024 MiB
CPU            1 core
Installed via  tteck Proxmox helper scripts
Web UI         Adminer (not yet installed)
SSH user       iluvatar (sudo, PermitRootLogin no)

Databases and Users#

Database  User     Purpose
homelab   homelab  n8n workflows, monitoring pipeline, pgvector wiki embeddings

Setup commands (run as postgres user)#

CREATE DATABASE homelab;
CREATE USER homelab WITH PASSWORD '<password>';
GRANT ALL PRIVILEGES ON DATABASE homelab TO homelab;
\c homelab
GRANT ALL ON SCHEMA public TO homelab;  -- required for n8n; run while connected to the homelab database
\q

The GRANT ALL ON SCHEMA public step is required and must be issued while connected to the homelab database (schema privileges are per-database); without it, n8n fails to start with a permissions error even though the database and user exist.
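
A quick connectivity check from another host (e.g., the n8n LXC), assuming psql is installed there and PostgreSQL is set up to accept remote connections (listen_addresses and pg_hba.conf):

psql "postgresql://homelab:<password>@192.168.1.57:5432/homelab" -c "SELECT current_database(), current_user;"
# confirm the schema grant took effect: create and drop a throwaway table
psql "postgresql://homelab:<password>@192.168.1.57:5432/homelab" -c "CREATE TABLE grant_check(id int); DROP TABLE grant_check;"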


n8n#

What Was Established#

n8n is the automation and orchestration hub for the homelab. It runs as an LXC on Proxmox, connected to PostgreSQL for persistent workflow state and execution history. Community edition is sufficient for current use.

Deployment#

Detail         Value
LXC host       n8n
IP             192.168.1.169
Port           5678
URL            http://192.168.1.169:5678
Version        2.15.1 (Self Hosted)
Installed via  tteck Proxmox helper scripts
OS             Debian 13 (unprivileged LXC)
Config file    /opt/n8n.env

Configuration (/opt/n8n.env)#

N8N_SECURE_COOKIE=false
N8N_PORT=5678
N8N_PROTOCOL=http
N8N_HOST=192.168.1.169
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=192.168.1.57
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=homelab
DB_POSTGRESDB_USER=homelab
DB_POSTGRESDB_PASSWORD=<password>

After editing: systemctl restart n8n
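
A quick post-restart check that the service is up and reachable (standard commands; the curl only confirms the UI port answers):

systemctl status n8n --no-pager
journalctl -u n8n -n 30 --no-pager   # PostgreSQL connection errors surface here
curl -I http://192.168.1.169:5678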


Ollama Configuration#

What Was Established#

Ollama is used as the primary model backend across the fleet. The configuration focuses on making the API accessible to other nodes (like the T480 orchestrator) and running models like Gemma 4.

Key Decisions#

To allow remote access from other machines in the homelab (e.g., from a Docker container or another PC), the Ollama service must be configured to listen on all network interfaces, not just localhost.
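
A minimal sketch of that change, assuming Ollama was installed with the standard installer and runs as the ollama systemd service: set OLLAMA_HOST via a drop-in, restart, then verify from another node. Binding to 0.0.0.0 exposes the API to the whole LAN, so this assumes the network is trusted; the host IP placeholder below is not specified in these notes.

# systemd drop-in so the setting survives upgrades
sudo mkdir -p /etc/systemd/system/ollama.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
EOF
sudo systemctl daemon-reload && sudo systemctl restart ollama
# from the T480 (or any other node); 11434 is Ollama's default port
curl http://<ollama-host-ip>:11434/api/tags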