
Gluetun VPN Service#

What Was Established#

  • Gluetun is a lightweight Docker container acting as a dedicated VPN gateway for other containers.
  • Implements the sidecar pattern: dependent containers (e.g., qBittorrent, nzbget, prowlarr) share Gluetun’s network namespace via network_mode: "service:gluetun".
  • AirVPN selected as the provider over ProtonVPN/Mullvad due to superior port forwarding support required for P2P services.
  • Container-level VPN on the servarr VM is architecturally separate from the network-level UniFi VPN on Helms Deep (VLAN 2).
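The sidecar pattern above can be sketched as a minimal compose fragment (illustrative only — service selection and ports are trimmed from the production file; see the full Servarr Docker Compose Reference for the canonical version):

```yaml
# Minimal sidecar sketch: gluetun owns the network namespace;
# qbittorrent joins it and publishes no ports of its own.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add: [NET_ADMIN]
    ports:
      - 8080:8080   # qbittorrent WebUI is reachable via gluetun's namespace
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:gluetun"   # share gluetun's network stack
    depends_on:
      gluetun:
        condition: service_healthy    # don't start before the tunnel is up
```

If gluetun goes down, dependents lose all connectivity rather than leaking traffic — which is the point of the pattern.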

Deployment Context#

Gluetun runs on the servarr VM (192.168.1.112) as part of the Servarr Docker Compose stack at /docker/servarr/. It is configured via .env file in that directory.
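The `.env` file itself is not reproduced in these notes. A hedged sketch of its likely shape, using Gluetun's documented variable names for AirVPN over WireGuard — every value below is a placeholder, not the real configuration:

```shell
# /docker/servarr/.env — hypothetical sketch; variable names are Gluetun's
# documented settings, all values are placeholders.
VPN_SERVICE_PROVIDER=airvpn
VPN_TYPE=wireguard
WIREGUARD_PRIVATE_KEY=<private-key>
WIREGUARD_PRESHARED_KEY=<preshared-key>
WIREGUARD_ADDRESSES=<assigned-address>/32
FIREWALL_VPN_INPUT_PORTS=<forwarded-port>   # AirVPN port forward, reused by qBittorrent
PUID=1000
PGID=1000
TZ=America/New_York
```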

Jellyfin Docker Compose Reference#

Canonical copy of /docker/jellyfin/compose.yaml on the servarr VM (192.168.1.112).

Jellyfin and Jellyseerr run in a separate compose file from the main Servarr stack, on their own Docker network (jellyfin_default, 172.18.0.0/16).

compose.yaml#

services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - JELLYFIN_PublishedServerUrl=http://192.168.1.112 #optional
    volumes:
      - ./config:/config
      - /data:/data
      - ./jellyfin-web/config.json:/usr/share/jellyfin/web/config.json
    devices:
      - /dev/dri:/dev/dri #Use for Intel QuickSync
    ports:
      - 8096:8096
      - 7359:7359/udp #Service Discovery
      - 1900:1900/udp #Client Discovery
    restart: unless-stopped
    labels:
      - deunhealth.restart.on.unhealthy=true
    healthcheck:
      test: curl -f http://localhost:8096/health || exit 1
      interval: 60s
      retries: 3
      start_period: 30s
      timeout: 10s
# Remove the Jellyfin service if installed directly on system.

  jellyseerr:
    container_name: jellyseerr
    image: fallenbagel/jellyseerr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
    volumes:
      - ./jellyseerr:/app/config
    ports:
      - 5055:5055
    restart: unless-stopped

Key Details#

  • Network: Uses the auto-created jellyfin_default bridge network (172.18.0.0/16). Jellyseerr reaches the *arr services via the host IP (192.168.1.112) and their exposed ports.
  • GPU Passthrough: /dev/dri:/dev/dri enables Intel QuickSync hardware-accelerated transcoding.
  • PUID/PGID: Hardcoded to 1000 (unlike the servarr stack which uses .env variables).
  • TZ: America/Los_Angeles (note: servarr stack uses America/New_York).
  • Healthcheck: Already configured with deunhealth label — auto-restarts on unhealthy.
  • Custom web config: ./jellyfin-web/config.json is bind-mounted into the container for UI customization (e.g., the Hinterflix help link).
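Because Jellyseerr sits on a different compose project (and thus a different Docker DNS scope) from the *arr stack, its service URLs must use the host IP rather than container names. A sketch of the resulting addresses, using the ports from the servarr compose file:

```shell
# Jellyseerr -> *arr URLs go through the host IP, not container DNS names,
# since cross-project name resolution is unavailable.
host="192.168.1.112"
echo "Sonarr:  http://${host}:8989"
echo "Radarr:  http://${host}:7878"
```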

Directory Layout#

/docker/jellyfin/
├── compose.yaml
├── config/          # Jellyfin server config
├── jellyfin-web/
│   └── config.json  # Custom web UI config
└── jellyseerr/      # Jellyseerr app config

Proxy Management & Cloudflare Tunnels#

What Was Established#

There are multiple layers of proxying available in the homelab, ranging from edge protection (Cloudflare) to local routing (OPNsense/Nginx Proxy Manager).

Nginx Proxy Manager (NPM) Troubleshooting#

  • Redirect Loops & Timeouts: Often caused by misconfigured upstream servers or aggressive timeout settings in NPM’s web UI. Resolving a redirect loop may expose underlying connectivity issues that manifest as timeouts.
  • Docker Compose Pattern: NPM is deployed with network_mode: host to bind directly to host ports (80, 443, 81), bypassing Docker’s NAT for direct host network access.
  • Verification Steps:
    1. Check container health: docker ps | grep nginx-proxy-manager (ensure healthy status).
    2. Verify port bindings: sudo netstat -tulpn | grep :80 / :443 (requires net-tools package).
    3. Inspect NPM Web UI: Access at http://<host-ip>:81 to review Proxy Host settings, specifically timeout values and upstream server addresses.
  • Port Conflicts: Use netstat to identify which container owns a specific port (e.g., docker-proxy vs nginx: master). In this setup, port 8000 was observed bound to docker-proxy, indicating another service in the compose stack.
  • Co-located Services: The same Docker Compose stack hosts cloudflare-ddns (for dynamic IP updates) and netbird (for mesh networking), requiring careful port management to avoid conflicts.
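The port-conflict check in the bullets can be scripted. The netstat output below is a fabricated sample for illustration; real output comes from `sudo netstat -tulpn` on the host:

```shell
# Fabricated sample of 'sudo netstat -tulpn' output. Field 4 is the local
# address; the last field is PID/program.
sample='tcp  0  0  0.0.0.0:80    0.0.0.0:*  LISTEN  1234/nginx: master
tcp  0  0  0.0.0.0:8000  0.0.0.0:*  LISTEN  5678/docker-proxy'

# Print the process owning port 8000:
echo "$sample" | awk '$4 ~ /:8000$/ {print $NF}'   # → 5678/docker-proxy
```

A `docker-proxy` owner means Docker's NAT holds the port for a published container, so the next step is `docker ps` to find which one.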

Key Decisions#

  • Use network_mode: host for NPM to simplify port mapping and ensure direct access to host network interfaces.
  • Rely on net-tools (netstat) for quick port binding verification in host-networked Docker containers.

Current Configuration#

  • Docker Host: iluvatar@proxy (192.168.1.208)
  • NPM Web UI: http://192.168.1.208:81
  • Ports: 80 (HTTP), 443 (HTTPS), 81 (NPM Admin UI)

Historical Notes#

  • Troubleshooting session from 2025-11-17 resolved a redirect loop that subsequently turned into a timeout issue.
  • net-tools installation was required to diagnose port bindings on the host.

Open Questions#

  • Specific timeout values configured in NPM for upstream services.
  • Whether netbird or cloudflare-ddns requires dedicated port exposure or can share the host network.

Servarr - Media Automation Stack#

Overview#

Servarr is a full VM at 192.168.1.112 (hostname: servarr) running a Docker Compose media automation stack. All services depend on a NAS mount at /data for media storage. Download clients (qbittorrent, nzbget) and indexer (prowlarr) route through a Gluetun VPN container via network_mode: service:gluetun.

Note: This VM is distinct from Varda (192.168.1.131), which is a separate web server hosting ilmare.nbkelley.com.

VM Specs#

Detail      Value
----------  -----
Hostname    servarr
IP          192.168.1.112
OS          Ubuntu 24.04.4 LTS (Noble)
Kernel      6.8.0-107-generic
CPU         QEMU Virtual CPU, 4 vCPUs
RAM         7.8 GB
Disk        63 GB root (/dev/sda2 ext4, 38% used)
Hypervisor  Proxmox (Minas Tirith)

Container Inventory#

Servarr Stack (/docker/servarr/compose.yaml)#

Network: servarrnetwork (172.39.0.0/24)

Servarr Docker Compose Reference#

Canonical copy of /docker/servarr/compose.yaml on the servarr VM (192.168.1.112).

compose.yaml#

# Compose file for the *arr stack. Configuration files are stored in the
# directory you launch the compose file from. Change to bind mounts if needed.
# All containers run with the user and group IDs of the main user and
# group to avoid permission issues with downloaded files; please refer
# to the README for more information.

#############################################################################
# NOTICE: We recently switched to using a .env file. PLEASE refer to the docs.
# https://github.com/TechHutTV/homelab/tree/main/media#docker-compose-and-env
#############################################################################

networks:
  servarrnetwork:
    name: servarrnetwork
    ipam:
      config:
        - subnet: 172.39.0.0/24

services:
  # airvpn recommended (referral url: https://airvpn.org/?referred_by=673908)
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun # If running on an LXC see readme for more info.
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.2
    ports:
      - ${FIREWALL_VPN_INPUT_PORTS}:${FIREWALL_VPN_INPUT_PORTS} # airvpn forwarded port, pulled from .env
      - 8080:8080 # qbittorrent web interface
      - 6881:6881 # qbittorrent torrent port
      - 6789:6789 # nzbget
      - 9696:9696 # prowlarr
    volumes:
      - ./gluetun:/gluetun
    # Make a '.env' file in the same directory.
    env_file:
      - .env
    healthcheck:
      test: ping -c 1 www.google.com || exit 1
      interval: 20s
      timeout: 10s
      retries: 5
    restart: unless-stopped

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    restart: unless-stopped
    labels:
      - deunhealth.restart.on.unhealthy=true
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
      - WEBUI_PORT=8080 # must match "qbittorrent web interface" port number in gluetun's service above
      - TORRENTING_PORT=${FIREWALL_VPN_INPUT_PORTS} # airvpn forwarded port, pulled from .env
    volumes:
      - ./qbittorrent:/config
      - /data:/data
    depends_on:
      gluetun:
        condition: service_healthy
        restart: true
    network_mode: service:gluetun
    healthcheck:
      test: ping -c 1 www.google.com || exit 1
      interval: 60s
      retries: 3
      start_period: 20s
      timeout: 10s

  # See the 'qBittorrent Stalls with VPN Timeout' section for more information.
  deunhealth:
    image: qmcgaw/deunhealth
    container_name: deunhealth
    network_mode: "none"
    environment:
      - LOG_LEVEL=info
      - HEALTH_SERVER_ADDRESS=127.0.0.1:9999
      - TZ=${TZ}
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  nzbget:
    image: lscr.io/linuxserver/nzbget:latest
    container_name: nzbget
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./nzbget:/config
      - /data:/data
    depends_on:
      gluetun:
        condition: service_healthy
        restart: true
    restart: unless-stopped
    network_mode: service:gluetun

  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    container_name: prowlarr
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./prowlarr:/config
    restart: unless-stopped
    depends_on:
      gluetun:
        condition: service_healthy
        restart: true
    network_mode: service:gluetun

  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    restart: unless-stopped
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./sonarr:/config
      - /data:/data
    ports:
      - 8989:8989
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.3

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    restart: unless-stopped
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./radarr:/config
      - /data:/data
    ports:
      - 7878:7878
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.4

  lidarr:
    container_name: lidarr
    image: lscr.io/linuxserver/lidarr:latest
    restart: unless-stopped
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./lidarr:/config
      - /data:/data
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    ports:
      - 8686:8686
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.5

  bazarr:
    image: lscr.io/linuxserver/bazarr:latest
    container_name: bazarr
    restart: unless-stopped
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./bazarr:/config
      - /data:/data
    ports:
      - 6767:6767
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.6

# Newer additions to this stack. Remove the '#' to enable a service.

  ytdl-sub:
    image: ghcr.io/jmbannon/ytdl-sub-gui:latest
    container_name: ytdl-sub
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
      - DOCKER_MODS=linuxserver/mods:universal-cron
    volumes:
      - ./ytdl-sub:/config
      - /data/youtube:/youtube
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.8
    restart: unless-stopped

#  jellyseerr:
#    container_name: jellyseerr
#    image: fallenbagel/jellyseerr:latest
#    environment:
#      - PUID=${PUID}
#      - PGID=${PGID}
#      - TZ=${TZ}
#    volumes:
#      - ./jellyseerr:/app/config
#    ports:
#      - 5055:5055
#    networks:
#      servarrnetwork:
#        ipv4_address: 172.39.0.9
#    restart: unless-stopped

Network Architecture#

172.39.0.0/24 (servarrnetwork)
├── 172.39.0.2  gluetun (VPN gateway)
│   ├── qbittorrent  (network_mode: service:gluetun)
│   ├── nzbget       (network_mode: service:gluetun)
│   └── prowlarr     (network_mode: service:gluetun)
├── 172.39.0.3  sonarr
├── 172.39.0.4  radarr
├── 172.39.0.5  lidarr
├── 172.39.0.6  bazarr
└── 172.39.0.8  ytdl-sub

Key Patterns#

  • VPN routing: qbittorrent, nzbget, and prowlarr use network_mode: service:gluetun to route all traffic through the VPN. They are NOT directly on servarrnetwork.
  • Static IPs: Media managers (sonarr, radarr, lidarr, bazarr) use static IPs on servarrnetwork for predictable inter-container communication.
  • .env file: Secrets (PUID, PGID, TZ, FIREWALL_VPN_INPUT_PORTS, VPN credentials) are stored in .env in the same directory.
  • deunhealth: Monitors container health via Docker socket and restarts unhealthy containers with the deunhealth.restart.on.unhealthy=true label.
  • Config persistence: Each service’s config is stored in a subdirectory of /docker/servarr/ (e.g., ./sonarr, ./radarr).
  • Media storage: All containers bind-mount /data (SMB share from NAS at 192.168.1.137) for media access.
  • jellyseerr: Commented out here — deployed in /docker/jellyfin/compose.yaml instead.
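One way to sanity-check the VPN routing pattern is to compare the public IP seen from the host with the one seen from inside a gluetun-attached container. The commands in the comments are the real check (assuming curl is available inside the container); the values below are TEST-NET placeholders so the comparison logic can be shown:

```shell
# Real check (run on the servarr VM):
#   host_ip=$(curl -s https://ipinfo.io/ip)
#   vpn_ip=$(docker exec qbittorrent curl -s https://ipinfo.io/ip)
# Placeholder values (RFC 5737 TEST-NET addresses) stand in for live lookups:
host_ip="203.0.113.10"
vpn_ip="198.51.100.77"

if [ "$host_ip" != "$vpn_ip" ]; then
  echo "VPN routing OK: container egress differs from host"
else
  echo "WARNING: container egress matches host IP (possible VPN leak)"
fi
```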

Servarr Stack - Gluetun VPN Troubleshooting#

Historical note: This session was conducted on a machine at 192.168.1.30 (hostname possibly “Varda” at the time, directory ~/home/nbkelley/docker/servarr). The current production Servarr stack lives on the servarr VM at 192.168.1.112, directory /docker/servarr/. See Servarr - Media Automation Stack for current configuration. The troubleshooting patterns documented here remain applicable.

What Was Established#

This session documents the deployment and troubleshooting of the Servarr media automation stack (Sonarr, Prowlarr, qBittorrent) behind a Gluetun VPN container. The stack relies on network_mode: service:gluetun to route all container traffic through AirVPN.
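A minimal triage sketch for this setup, assuming the container names from the compose file. The `docker` commands in the comments are the real probes; the `case` below only illustrates how the reported health status drives the response:

```shell
# Real probes:
#   docker inspect --format '{{.State.Health.Status}}' gluetun
#   docker logs --tail 50 gluetun
# Simulated status value for illustration:
status="unhealthy"
case "$status" in
  healthy)   echo "tunnel up; dependents should have connectivity" ;;
  unhealthy) echo "fix gluetun first, then restart its dependents" ;;
  *)         echo "health check still starting; wait and re-check" ;;
esac
```

Because the dependents share gluetun's namespace, restarting them before the tunnel is healthy accomplishes nothing — hence the ordering above.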

Open WebUI Deployment#

What Was Established#

Open WebUI is deployed via Docker to provide a ChatGPT-like interface for interacting with local Ollama instances. It is configured to connect to the host’s Ollama API.

Key Decisions#

Because the WebUI runs inside a Docker container, it cannot reach the host machine's localhost:11434 directly. OLLAMA_BASE_URL must point to the host's actual LAN IP or use the host.docker.internal gateway.

Current Configuration#

Docker Deployment#

docker run -d \
  --name open-webui \
  --restart always \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://<YOUR_HOST_IP>:11434 \
  ghcr.io/open-webui/open-webui:main

Note: Replace <YOUR_HOST_IP> with the actual IP of the machine (e.g., 192.168.172.168) to ensure the container can route to the Ollama service.