GPU Passthrough for Proxmox LXCs#

What Was Established#

  • Intel GPU passthrough to Proxmox LXCs requires both host-side module loading and specific LXC device mounts.
  • vainfo frequently fails with X11/X server errors in headless containers; this is expected and does not indicate a broken passthrough.
  • intel_gpu_top requires i915 module loaded on the host and accessible debugfs/sysfs paths inside the container.

Key Decisions#

  • Headless VA-API Testing: When vainfo reports error: can't connect to X server!, set environment variables to bypass X11 (see also the DRM-backend invocation after this list):
    export XDG_RUNTIME_DIR=/tmp/runtime-root
    export LIBVA_DRIVER_NAME=iHD
    vainfo
  • Module Verification: lsmod | grep i915 confirms the driver is loaded inside the container. Presence of i915, drm_buddy, ttm, and drm_display_helper indicates successful module injection.
  • Device Access: intel_gpu_top failing with No device filter specified... typically points to missing debugfs mounts or host-side i915 parameters, not necessarily a broken /dev/dri/ passthrough.
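
Both tools can also be pointed at the device node directly, which sidesteps X11 and device discovery (a sketch; intel_gpu_top may still fail on perf or debugfs access inside an unprivileged container, as noted in the Jellyfin section below):

# Query VA-API over the DRM backend, no X server needed (libva-utils)
vainfo --display drm --device /dev/dri/renderD128

# Name the device explicitly instead of relying on discovery
intel_gpu_top -d drm:/dev/dri/card0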

Current Configuration#

Proxmox LXC Config (/etc/pve/lxc/<container-id>.conf):
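
A minimal sketch of the required entries, assuming the common cgroup2 device-allow plus bind-mount pattern and the 226:0 / 226:128 device numbers documented in the Jellyfin section below:

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file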

Jellyfin LXC GPU Passthrough & Hardware Acceleration#

What Was Established#

Successfully configured Intel UHD Graphics 630 GPU passthrough to a Jellyfin LXC container on Proxmox for hardware-accelerated transcoding via Intel QuickSync (QSV).

Key Decisions#

  • GPU Passthrough Method: LXC container-level GPU device mapping (not full VM passthrough)
  • Hardware Acceleration: Intel QuickSync (QSV) selected over VAAPI for Jellyfin’s native support
  • Monitoring Constraints: Accepted that LXC container restrictions prevent full GPU monitoring tools (dmesg, intel_gpu_top) from functioning; validated functionality through actual transcoding tests instead

Current Configuration#

Host GPU Details#

  • GPU: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630]
  • PCI Address: 00:02.0
  • Driver: i915 (loaded)
  • Related Modules: drm_buddy, ttm, drm_display_helper, cec, i2c_algo_bit, video
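
These details can be re-checked on the Proxmox host (illustrative; output varies by kernel version):

# PCI device, kernel driver in use, and loaded modules
lspci -nnk -s 00:02.0
lsmod | grep i915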

LXC Container GPU Devices#

  • /dev/dri/card0 — character special (major 226, minor 0)
  • /dev/dri/renderD128 — character special (major 226, minor 128)
  • Permissions: crw-rw---- root:video (226/0 and 226/128)
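
Inside the container, ls -la /dev/dri/ should show something like the following (illustrative; dates and link counts will differ):

crw-rw---- 1 root video 226,   0 Jan  1 00:00 card0
crw-rw---- 1 root video 226, 128 Jan  1 00:00 renderD128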

Jellyfin Configuration#

  1. User Group Assignment: jellyfin user added to video group (usermod -a -G video jellyfin)
  2. Dashboard → Playback Settings:
    • Hardware Acceleration: Intel QuickSync (QSV)
    • Enable hardware encoding: Yes
    • Enable hardware decoding: Yes
    • Enable tone mapping: Yes
    • Allow encoding in HEVC: Yes
    • Allow encoding in AV1: Yes (if supported)
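
A quick check that the group change is visible to the jellyfin user (a sketch; the jellyfin service must be restarted after usermod for the new membership to apply):

id jellyfin
# expect "video" in the groups list
sudo -u jellyfin test -r /dev/dri/renderD128 && echo "renderD128 readable"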

Validation Commands#

# Verify GPU device accessibility
ls -la /dev/dri/

# Check if processes are using GPU during playback
lsof /dev/dri/renderD128

# Monitor Jellyfin logs for hardware acceleration
sudo journalctl -u jellyfin -f | grep -i "hardware\|qsv\|quicksync\|vaapi"

# Check active transcoding sessions in Jellyfin UI
# Dashboard → Active Devices → look for (HW) indicator
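
# Optional QSV smoke test using Jellyfin's bundled ffmpeg (a sketch; the path and
# available encoders depend on the installed jellyfin-ffmpeg version)
/usr/lib/jellyfin-ffmpeg/ffmpeg -f lavfi -i testsrc2=duration=5:size=1280x720:rate=30 \
  -c:v h264_qsv -f null -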

Historical Notes#

  • dmesg restriction: dmesg: read kernel buffer failed: Operation not permitted — expected in unprivileged LXC containers, which cannot read the host kernel ring buffer
  • intel_gpu_top limitation: GPU monitoring tools that require kernel debugfs access will not work inside the LXC; validated via actual transcoding performance and log inspection instead
  • i915 driver loaded: Confirmed via lsmod | grep i915 showing 3,928,064 bytes loaded
  • No GPU debug info: /sys/kernel/debug/dri not available in container — accepted limitation

Open Questions#

  • Does AV1 hardware encoding actually work on Coffee Lake-S (Gen 9.5)? QSV on that generation is typically limited to H.264/H.265.
  • Performance baseline: what CPU load reduction is observed with QSV vs. software transcoding?
  • Can GPU passthrough be extended to other LXC containers (e.g., Plex, if migrated)?
  • Plex Transcoding LXC — similar GPU passthrough patterns for Plex
  • Proxmox LXC Configuration — LXC container setup patterns
  • Servarr - Media Automation Stack — media server ecosystem

Proxmox LXC Storage Mounts#

What Was Established#

  • SMB/CIFS shares can be integrated into Proxmox via CLI or the Web GUI.
  • GUI integration is recommended for Proxmox-native features (backups, ISOs, templates) and automatic retention management.
  • LXC containers can directly mount Proxmox storage paths using mp0/mp1 directives in the container config file.
  • Separate CIFS storages should be used for media vs. backups to maintain clean separation of concerns.
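
For the CLI route, a representative storage definition (a sketch; server address, share name, credentials, and storage ID are placeholders):

pvesm add cifs synology --server 192.168.1.10 --share Documents --username svc_proxmox --password 'REDACTED' --content iso,vztmpl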

Key Decisions#

  • Storage Content Types: For media-only CIFS shares, select Container templates, ISO images, or Disk image. Explicitly avoid Containers and VZDump backup file to prevent Proxmox from treating the media share as a system storage.
  • Subdirectory Configuration: Leave the Proxmox storage subdirectory field blank to mount the root of the SMB share, enabling flexible navigation to specific subfolders (e.g., Documents/Movies).
  • Jellyfin Path Targeting: Point Jellyfin directly to the specific media subfolder (e.g., /media/synology/Documents/Movies) rather than the mount root.

Current Configuration#

  • LXC Config Path: /etc/pve/lxc/<CT-ID>.conf (e.g., /etc/pve/lxc/100.conf).
  • Mount Syntax: mp0: /mnt/pve/<Storage-ID>,mp=/media/<mount-name>
  • Permission Mapping (for unprivileged containers):
    lxc.idmap: u 0 100000 1000
    lxc.idmap: g 0 100000 1000
    lxc.idmap: u 1000 1000 1
    lxc.idmap: g 1000 1000 1
    lxc.idmap: u 1001 101001 64535
    lxc.idmap: g 1001 101001 64535
  • Jellyfin Setup: Dashboard → Libraries → Add Media Library → Folders: /media/synology/Documents/Movies.
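
Putting these together for CT 100 (a sketch; the synology storage ID and paths follow the examples above). Mapping UID/GID 1000 through to the host additionally requires root:1000:1 entries in /etc/subuid and /etc/subgid on the Proxmox host:

# /etc/pve/lxc/100.conf (excerpt)
mp0: /mnt/pve/synology,mp=/media/synology

# equivalent one-liner from the host shell
pct set 100 -mp0 /mnt/pve/synology,mp=/media/synology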

Historical Notes#

  • Conversation date: 2025-11-07.
  • Focuses on resolving LXC mount visibility and permission issues for Jellyfin on a Synology NAS.
  • No major infrastructure changes flagged; patterns remain valid for Proxmox 8.x.

Open Questions#

  • None.

Sources#

  • ingested/chats/092-Mount NAS Storage to LXC Containers.md
  • ingested/chats/091-Mount Synology SMB to Proxmox Guide.md
  • ingested/chats/090-Mount SMB Share on Proxmox Guide.md
  • DeepSeek conversation: “Mount NAS Storage to LXC Containers” (2025-11-07)
  • DeepSeek conversation: “Mount SMB Share on Proxmox Guide” (2025-11-07)

Web Server Architecture on Proxmox#

What Was Established#

High-level architectural strategies for deploying web development environments on Proxmox, focusing on balancing isolation with resource efficiency.

Key Decisions#

  • LXC for Services: Use LXC containers for lightweight, single-purpose services (e.g., Nginx, Databases) to minimize overhead.
  • VM for Complex Workloads: Use full VMs when running Docker, Kubernetes, or when custom kernel modules are required.
  • Reverse Proxy Pattern: Always use a reverse proxy (Nginx Proxy Manager, Traefik, or Caddy) to handle SSL termination and route traffic to multiple internal services; a minimal Nginx sketch follows this list.
  • Database Isolation: Separate databases into their own containers/VMs to improve security and facilitate independent backups.
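
A minimal Nginx sketch of this pattern (hostname, upstream address, and certificate paths are placeholders):

server {
    listen 443 ssl;
    server_name app.example.lan;

    # TLS terminates at the proxy; traffic to the backend LXC stays plain HTTP
    ssl_certificate     /etc/ssl/certs/app.example.lan.pem;
    ssl_certificate_key /etc/ssl/private/app.example.lan.key;

    location / {
        proxy_pass http://10.10.10.20:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}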

Current Configuration#

Networking Patterns#

  • Bridge Mode: Default vmbr0 for services requiring LAN access.
  • Internal Network: Use secondary bridges (e.g., vmbr1) for isolated communication between web servers and databases.
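
A representative /etc/network/interfaces stanza for such an internal bridge (a sketch; the subnet is a placeholder):

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0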

Storage Patterns#

  • Local-LVM: Preferred for high-performance VM/container disks.
  • Directory Storage: Suitable for container volumes and simpler storage needs.

Historical Notes#

This architecture plan was established in March 2025. The preference for LXCs over VMs for simple web services was a primary driver.