
GPU Passthrough for Proxmox LXCs#

What Was Established#

  • Intel GPU passthrough to Proxmox LXCs requires both host-side module loading and specific LXC device mounts.
  • vainfo frequently fails with X11/X server errors in headless containers; this is expected and does not indicate a broken passthrough.
  • intel_gpu_top requires i915 module loaded on the host and accessible debugfs/sysfs paths inside the container.

Key Decisions#

  • Headless VA-API Testing: When vainfo reports error: can't connect to X server!, create a runtime directory and set environment variables to bypass X11:
    mkdir -p /tmp/runtime-root
    export XDG_RUNTIME_DIR=/tmp/runtime-root
    export LIBVA_DRIVER_NAME=iHD
    vainfo
  • Module Verification: lsmod | grep i915 confirms the driver is loaded inside the container. Presence of i915, drm_buddy, ttm, and drm_display_helper indicates successful module injection.
  • Device Access: intel_gpu_top failing with No device filter specified... typically points to missing debugfs mounts or host-side i915 parameters, not necessarily a broken /dev/dri/ passthrough.
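The failure modes above can be narrowed down with a quick visibility check from inside the container. A minimal sketch; the paths listed are the usual ones intel_gpu_top and VA-API touch, assumed rather than taken from this note:

```shell
# Report which GPU-related paths are visible inside the container
check_path() {
  if [ -e "$1" ]; then echo "OK   $1"; else echo "MISS $1"; fi
}

for p in /dev/dri/card0 /dev/dri/renderD128 /sys/kernel/debug/dri; do
  check_path "$p"
done
```

A MISS on /sys/kernel/debug/dri with both /dev/dri nodes present matches the "passthrough works, monitoring doesn't" pattern described above.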

Current Configuration#

Proxmox LXC Config (/etc/pve/lxc/<container-id>.conf):
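The config body itself is not captured in this note. A typical Intel GPU passthrough stanza for an LXC looks like the following; the 226:0/226:128 majors and minors match the device nodes listed later, but treat this as a sketch rather than the container's actual file:

```
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```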

Jellyfin LXC GPU Passthrough & Hardware Acceleration#

What Was Established#

Successfully configured Intel UHD Graphics 630 GPU passthrough to a Jellyfin LXC container on Proxmox for hardware-accelerated transcoding via Intel QuickSync (QSV).

Key Decisions#

  • GPU Passthrough Method: LXC container-level GPU device mapping (not full VM passthrough)
  • Hardware Acceleration: Intel QuickSync (QSV) selected over VAAPI for Jellyfin’s native support
  • Monitoring Constraints: Accepted that LXC restrictions prevent kernel-level diagnostic tools (dmesg, intel_gpu_top) from functioning; validated functionality through actual transcoding tests instead

Current Configuration#

Host GPU Details#

  • GPU: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630]
  • PCI Address: 00:02.0
  • Driver: i915 (loaded)
  • Related Modules: drm_buddy, ttm, drm_display_helper, cec, i2c_algo_bit, video

LXC Container GPU Devices#

  • /dev/dri/card0 — character special (major 226, minor 0)
  • /dev/dri/renderD128 — character special (major 226, minor 128)
  • Permissions: crw-rw---- root:video (226/0 and 226/128)
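The 226 major and 0/128 minors can be confirmed from inside the container with stat; the `%t`/`%T` format specifiers print major and minor in hex, so card0 should show e2:0 and renderD128 should show e2:80 (a sketch, GNU coreutils stat assumed):

```shell
# Print device major:minor (hex) for the DRM nodes, if present
for node in /dev/dri/card0 /dev/dri/renderD128; do
  if [ -c "$node" ]; then
    stat -c '%n -> major:minor %t:%T (hex)' "$node"
  fi
done
```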

Jellyfin Configuration#

  1. User Group Assignment: jellyfin user added to video group (usermod -a -G video jellyfin)
  2. Dashboard → Playback Settings:
    • Hardware Acceleration: Intel QuickSync (QSV)
    • Enable hardware encoding: Yes
    • Enable hardware decoding: Yes
    • Enable tone mapping: Yes
    • Allow encoding in HEVC: Yes
    • Allow encoding in AV1: Yes (if supported)
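Step 1 above can be verified before touching the dashboard settings. A small sketch; the jellyfin user and video group names come from this note, while the in_group helper is hypothetical:

```shell
# Check whether a user's supplementary groups include a given group
in_group() {
  id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

if in_group jellyfin video; then
  echo "jellyfin is in video"
else
  echo "jellyfin is NOT in video (run: usermod -a -G video jellyfin)"
fi
```

Note that a service already running when the group was added must be restarted before the new membership takes effect.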

Validation Commands#

# Verify GPU device accessibility
ls -la /dev/dri/

# Check if processes are using GPU during playback
lsof /dev/dri/renderD128

# Monitor Jellyfin logs for hardware acceleration
sudo journalctl -u jellyfin -f | grep -i "hardware\|qsv\|quicksync\|vaapi"

# Check active transcoding sessions in Jellyfin UI
# Dashboard → Active Devices → look for (HW) indicator

Historical Notes#

  • dmesg restriction: dmesg: read kernel buffer failed: Operation not permitted — expected in LXC containers without full device access
  • intel_gpu_top limitation: GPU monitoring tools that require kernel debugfs access will not work inside the LXC; validated via actual transcoding performance and log inspection instead
  • i915 driver loaded: Confirmed via lsmod | grep i915 reporting a module size of 3,928,064 bytes
  • No GPU debug info: /sys/kernel/debug/dri not available in container — accepted limitation

Open Questions#

  • Does AV1 hardware encoding actually work on Coffee Lake-S (Gen 9.5)? QSV encoding on that generation is typically limited to H.264/H.265
  • Performance baseline: what CPU load reduction is observed with QSV vs software transcoding?
  • Can GPU passthrough be extended to other LXC containers (e.g., Plex, if migrated)?

Related#

  • Plex Transcoding LXC — similar GPU passthrough patterns for Plex
  • Proxmox LXC Configuration — LXC container setup patterns
  • Servarr - Media Automation Stack — media server ecosystem
