
GPU Passthrough for Proxmox LXCs#

What Was Established#

  • Intel GPU passthrough to Proxmox LXCs requires both host-side module loading and specific LXC device mounts.
  • vainfo frequently fails with X11/X server errors in headless containers; this is expected and does not indicate a broken passthrough.
  • intel_gpu_top requires i915 module loaded on the host and accessible debugfs/sysfs paths inside the container.

Key Decisions#

  • Headless VA-API Testing: When vainfo reports error: can't connect to X server!, set environment variables to bypass X11 (the runtime directory must exist, so create it first):
    export XDG_RUNTIME_DIR=/tmp/runtime-root
    mkdir -p "$XDG_RUNTIME_DIR"
    export LIBVA_DRIVER_NAME=iHD
    vainfo
  • Module Verification: lsmod | grep i915 confirms the driver is loaded inside the container. Presence of i915, drm_buddy, ttm, and drm_display_helper indicates successful module injection.
  • Device Access: intel_gpu_top failing with No device filter specified... typically points to missing debugfs mounts or host-side i915 parameters, not necessarily a broken /dev/dri/ passthrough.

Current Configuration#

Proxmox LXC Config (/etc/pve/lxc/<container-id>.conf):
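The config block itself did not survive in this copy. A representative sketch for GPU passthrough to an unprivileged container is shown below — the gid values and the choice between the two styles are assumptions (gid 44 = video and gid 104 = render on Debian-based guests; verify against your container's /etc/group):

# /etc/pve/lxc/<container-id>.conf — GPU passthrough lines (sketch)
# Proxmox 8.2+ device-entry style:
dev0: /dev/dri/card0,gid=44
dev1: /dev/dri/renderD128,gid=104

# Older cgroup + bind-mount style (pre-8.2 or hand-edited configs):
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

After editing the config, restart the container (pct stop/start, not reboot from inside) so the device entries are applied.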

Jellyfin Docker Compose Reference#

Canonical copy of /docker/jellyfin/compose.yaml on the servarr VM (192.168.1.112).

Jellyfin and Jellyseerr run in a separate compose file from the main Servarr stack, on their own Docker network (jellyfin_default, 172.18.0.0/16).

compose.yaml#

services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - JELLYFIN_PublishedServerUrl=http://192.168.1.112 #optional
    volumes:
      - ./config:/config
      - /data:/data
      - ./jellyfin-web/config.json:/usr/share/jellyfin/web/config.json
    devices:
      - /dev/dri:/dev/dri #Use for Intel QuickSync
    ports:
      - 8096:8096
      - 7359:7359/udp #Service Discovery
      - 1900:1900/udp #Client Discovery
    restart: unless-stopped
    labels:
      - deunhealth.restart.on.unhealthy=true
    healthcheck:
      test: curl -f http://localhost:8096/health || exit 1
      interval: 60s
      retries: 3
      start_period: 30s
      timeout: 10s
# Remove the Jellyfin service if installed directly on system.

  jellyseerr:
    container_name: jellyseerr
    image: fallenbagel/jellyseerr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
    volumes:
      - ./jellyseerr:/app/config
    ports:
      - 5055:5055
    restart: unless-stopped

Key Details#

  • Network: Uses the auto-created jellyfin_default bridge network (172.18.0.0/16). Jellyseerr reaches the *arr services via the host IP (192.168.1.112) and their exposed ports.
  • GPU Passthrough: /dev/dri:/dev/dri enables Intel QuickSync hardware-accelerated transcoding.
  • PUID/PGID: Hardcoded to 1000 (unlike the servarr stack which uses .env variables).
  • TZ: America/Los_Angeles (note: servarr stack uses America/New_York).
  • Healthcheck: Already configured with deunhealth label — auto-restarts on unhealthy.
  • Custom web config: ./jellyfin-web/config.json is bind-mounted into the container for UI customization (e.g., the Hinterflix help link).

Directory Layout#

/docker/jellyfin/
├── compose.yaml
├── config/          # Jellyfin server config
├── jellyfin-web/
│   └── config.json  # Custom web UI config
└── jellyseerr/      # Jellyseerr app config

Jellyfin Help Link Integration#

What Was Established#

  • Goal: Integrate a persistent, user-facing help link directly into the Jellyfin UI for logged-in users, directing them to the Hinterflix documentation.
  • Methods Evaluated: Custom Login Message (pre-login only), Custom CSS (Dashboard > General), Reverse Proxy HTML injection (Nginx), Custom JavaScript, and Plugin-based solutions.
  • Selected Approach: Custom CSS injected via the Jellyfin Dashboard. It provides immediate visibility without requiring external dependencies or server restarts.

Key Decisions#

  • Placement: Prioritized persistent in-interface links over pre-login messages. Options included header navigation, floating action button, and sidebar integration.
  • Implementation Path: Chose Dashboard > General > Custom CSS for its simplicity and zero-dependency nature compared to reverse proxy modifications or plugin development.
  • Target URL: The injected help link should point to the deployed Hinterflix help site (e.g., https://status.nbkelley.com or the specific docs subdomain). Note that CSS alone cannot set an href; the link text/URL is rendered via the content property of the pseudo-elements.

Current Configuration#

  • Location: Jellyfin Web UI → Dashboard → General → Custom CSS
  • Active Selectors:
    • .headerTabs::after for top navigation bar integration
    • body::after for a fixed floating action button (FAB)
    • .sidebarBox:last-child::after for sidebar menu integration
  • Dependencies: None. Relies entirely on Jellyfin’s built-in CSS injection capability.
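As a concrete illustration, the floating-action-button variant might look like the sketch below. The colors, sizing, and URL text are assumptions, and since a ::after pseudo-element cannot carry an href, this renders a visible badge rather than a natively clickable anchor:

/* Floating help badge via Dashboard > General > Custom CSS (sketch) */
body::after {
    content: "Hinterflix Help ▸ status.nbkelley.com";
    position: fixed;
    bottom: 1.5em;
    right: 1.5em;
    padding: 0.6em 1em;
    border-radius: 2em;
    background: #00a4dc;  /* Jellyfin accent blue */
    color: #fff;
    z-index: 9999;
}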

Historical Notes#

  • Conversation dated 2025-11-22.
  • Jellyfin’s frontend framework (Ember.js/React hybrid) frequently updates CSS class names. Selectors like headerTabs and sidebarBox may break after major Jellyfin version upgrades, requiring CSS updates.
  • The sub_filter Nginx approach was considered but discarded in favor of native dashboard configuration to avoid proxy overhead and cache invalidation issues.

Open Questions#

  • Has the chosen CSS snippet been applied and verified against the current Jellyfin version running on the homelab?
  • Are there any specific UX guidelines or family-user constraints that dictate the preferred placement (header vs. floating vs. sidebar)?

Sources#

  • ingested/chats/128-Add User to Sudoers Group Guide.md
  • ingested/chats/113-Adding Help Link to Jellyfin Server.md
  • ingested/chats/028-Basic Header with Hamburger Menu Design.md
  • DeepSeek conversation: “Adding Help Link to Jellyfin Server” (2025-11-22)

Jellyfin LXC GPU Passthrough & Hardware Acceleration#

What Was Established#

Successfully configured Intel UHD Graphics 630 GPU passthrough to a Jellyfin LXC container on Proxmox for hardware-accelerated transcoding via Intel QuickSync (QSV).

Key Decisions#

  • GPU Passthrough Method: LXC container-level GPU device mapping (not full VM passthrough)
  • Hardware Acceleration: Intel QuickSync (QSV) selected over VAAPI for Jellyfin’s native support
  • Monitoring Constraints: Accepted that LXC container restrictions prevent full GPU monitoring tools (dmesg, intel_gpu_top) from functioning; validated functionality through actual transcoding tests instead

Current Configuration#

Host GPU Details#

  • GPU: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630]
  • PCI Address: 00:02.0
  • Driver: i915 (loaded)
  • Related Modules: drm_buddy, ttm, drm_display_helper, cec, i2c_algo_bit, video

LXC Container GPU Devices#

  • /dev/dri/card0 — character special (major 226, minor 0)
  • /dev/dri/renderD128 — character special (major 226, minor 128)
  • Permissions: crw-rw---- root:video (226/0 and 226/128)

Jellyfin Configuration#

  1. User Group Assignment: jellyfin user added to video group (usermod -a -G video jellyfin)
  2. Dashboard → Playback Settings:
    • Hardware Acceleration: Intel QuickSync (QSV)
    • Enable hardware encoding: Yes
    • Enable hardware decoding: Yes
    • Enable tone mapping: Yes
    • Allow encoding in HEVC: Yes
    • Allow encoding in AV1: Yes (if supported)

Validation Commands#

# Verify GPU device accessibility
ls -la /dev/dri/

# Check if processes are using GPU during playback
lsof /dev/dri/renderD128

# Monitor Jellyfin logs for hardware acceleration
sudo journalctl -u jellyfin -f | grep -i "hardware\|qsv\|quicksync\|vaapi"

# Check active transcoding sessions in Jellyfin UI
# Dashboard → Active Devices → look for (HW) indicator

Historical Notes#

  • dmesg restriction: dmesg: read kernel buffer failed: Operation not permitted — expected in LXC containers without full device access
  • intel_gpu_top limitation: GPU monitoring tools that require kernel debugfs access will not work inside the LXC; validated via actual transcoding performance and log inspection instead
  • i915 driver loaded: Confirmed via lsmod | grep i915 showing 3,928,064 bytes loaded
  • No GPU debug info: /sys/kernel/debug/dri not available in container — accepted limitation

Open Questions#

  • Does AV1 hardware encoding actually work on Coffee Lake-S (Gen 9.5)? QuickSync encode on that generation is typically limited to H.264/H.265, so AV1 likely falls back to software.
  • Performance baseline: what CPU load reduction is observed with QSV vs. software transcoding?
  • Can GPU passthrough be extended to other LXC containers (e.g., Plex, if migrated)?

Related#

  • Plex Transcoding LXC — similar GPU passthrough patterns for Plex
  • Proxmox LXC Configuration — LXC container setup patterns
  • Servarr - Media Automation Stack — media server ecosystem


Proxmox LXC Storage Mounts

Proxmox LXC Storage Mounts#

What Was Established#

  • SMB/CIFS shares can be integrated into Proxmox via CLI or the Web GUI.
  • GUI integration is recommended for Proxmox-native features (backups, ISOs, templates) and automatic retention management.
  • LXC containers can directly mount Proxmox storage paths using mp0/mp1 directives in the container config file.
  • Separate CIFS storages should be used for media vs. backups to maintain clean separation of concerns.

Key Decisions#

  • Storage Content Types: For media-only CIFS shares, select Container templates, ISO images, or Disk image. Explicitly avoid Containers and VZDump backup file to prevent Proxmox from treating the media share as a system storage.
  • Subdirectory Configuration: Leave the Proxmox storage subdirectory field blank to mount the root of the SMB share, enabling flexible navigation to specific subfolders (e.g., Documents/Movies).
  • Jellyfin Path Targeting: Point Jellyfin directly to the specific media subfolder (e.g., /media/synology/Documents/Movies) rather than the mount root.

Current Configuration#

  • LXC Config Path: /etc/pve/lxc/<CT-ID>.conf (e.g., /etc/pve/lxc/100.conf).
  • Mount Syntax: mp0: /mnt/pve/<Storage-ID>,mp=/media/<mount-name>
  • Permission Mapping (for unprivileged containers):
    lxc.idmap: u 0 100000 1000
    lxc.idmap: g 0 100000 1000
    lxc.idmap: u 1000 1000 1
    lxc.idmap: g 1000 1000 1
    lxc.idmap: u 1001 101001 64535
    lxc.idmap: g 1001 101001 64535
  • Jellyfin Setup: Dashboard → Libraries → Add Media Library → Folders: /media/synology/Documents/Movies.
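Putting the pieces together, the container config might look like the sketch below. The CT ID, storage ID, and mount name are examples, and the host-side subuid/subgid lines are a prerequisite for the uid/gid 1000 mapping that is easy to miss:

# /etc/pve/lxc/100.conf (sketch; storage ID and mount name are examples)
mp0: /mnt/pve/synology-media,mp=/media/synology

# Host-side prerequisite: the kernel only honors the 1000→1000 idmap if root
# is delegated that ID. Add this line to BOTH /etc/subuid and /etc/subgid:
root:1000:1

Without the subuid/subgid delegation, the container fails to start with an "idmap" error rather than mounting with wrong permissions, which makes the misconfiguration easy to spot.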

Historical Notes#

  • Conversation date: 2025-11-07.
  • Focuses on resolving LXC mount visibility and permission issues for Jellyfin on a Synology NAS.
  • No major infrastructure changes flagged; patterns remain valid for Proxmox 8.x.

Open Questions#

  • None.

Sources#

  • ingested/chats/092-Mount NAS Storage to LXC Containers.md
  • ingested/chats/091-Mount Synology SMB to Proxmox Guide.md
  • ingested/chats/090-Mount SMB Share on Proxmox Guide.md
  • DeepSeek conversation: “Mount NAS Storage to LXC Containers” (2025-11-07)
  • DeepSeek conversation: “Mount SMB Share on Proxmox Guide” (2025-11-07)
