Gluetun VPN Service#

What Was Established#

  • Gluetun is a lightweight Docker container acting as a dedicated VPN gateway for other containers.
  • Implements the sidecar pattern: dependent containers (e.g., qBittorrent, nzbget, prowlarr) share Gluetun’s network namespace via network_mode: "service:gluetun".
  • AirVPN selected as the provider over ProtonVPN/Mullvad due to superior port forwarding support required for P2P services.
  • Container-level VPN on the servarr VM is architecturally separate from the network-level UniFi VPN on Helms Deep (VLAN 2).

Deployment Context#

Gluetun runs on the servarr VM (192.168.1.112) as part of the Servarr Docker Compose stack at /docker/servarr/. It is configured via a .env file in that directory.
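A minimal docker-compose sketch of the sidecar wiring described above. Service names, image tags, and ports are illustrative, not the actual /docker/servarr/ stack; AirVPN credentials and the forwarded port come from the .env file.

    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add:
          - NET_ADMIN
        devices:
          - /dev/net/tun:/dev/net/tun
        environment:
          - VPN_SERVICE_PROVIDER=airvpn
          - VPN_TYPE=wireguard
          # WireGuard keys/addresses and the forwarded port are supplied via .env
        ports:
          - "8080:8080"   # qBittorrent WebUI is published on gluetun, not on qbittorrent
      qbittorrent:
        image: lscr.io/linuxserver/qbittorrent
        network_mode: "service:gluetun"   # shares gluetun's network namespace
        depends_on:
          - gluetun

Because qBittorrent has no network stack of its own, every port it needs (WebUI, forwarded P2P port) must be published on the gluetun service, and dependent containers must be restarted if gluetun is recreated.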

GPU Passthrough for Proxmox LXCs#

What Was Established#

  • Intel GPU passthrough to Proxmox LXCs requires both host-side module loading and specific LXC device mounts.
  • vainfo frequently fails with X11/X server errors in headless containers; this is expected and does not indicate a broken passthrough.
  • intel_gpu_top requires i915 module loaded on the host and accessible debugfs/sysfs paths inside the container.

Key Decisions#

  • Headless VA-API Testing: When vainfo reports error: can't connect to X server!, set environment variables to bypass X11:
    export XDG_RUNTIME_DIR=/tmp/runtime-root
    export LIBVA_DRIVER_NAME=iHD
    vainfo
  • Module Verification: lsmod | grep i915 confirms the driver is loaded inside the container. Presence of i915, drm_buddy, ttm, and drm_display_helper indicates successful module injection.
  • Device Access: intel_gpu_top failing with No device filter specified... typically points to missing debugfs mounts or host-side i915 parameters, not necessarily a broken /dev/dri/ passthrough.

Current Configuration#

Proxmox LXC Config (/etc/pve/lxc/<container-id>.conf):
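A representative example of the entries involved, assuming the standard /dev/dri character devices (major 226); exact minor numbers and any additional mounts depend on the host:

    # /etc/pve/lxc/<container-id>.conf (illustrative)
    lxc.cgroup2.devices.allow: c 226:0 rwm            # /dev/dri/card0
    lxc.cgroup2.devices.allow: c 226:128 rwm          # /dev/dri/renderD128
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
    # One common approach for intel_gpu_top: expose the host debugfs
    lxc.mount.entry: /sys/kernel/debug sys/kernel/debug none bind,optional,create=dir

intel_gpu_top may additionally require a relaxed kernel.perf_event_paranoid setting on the host.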

Proxmox LXC Storage Mounts#

What Was Established#

  • SMB/CIFS shares can be integrated into Proxmox via CLI or the Web GUI.
  • GUI integration is recommended for Proxmox-native features (backups, ISOs, templates) and automatic retention management.
  • LXC containers can directly mount Proxmox storage paths using mp0/mp1 directives in the container config file.
  • Separate CIFS storages should be used for media vs. backups to maintain clean separation of concerns.

Key Decisions#

  • Storage Content Types: For media-only CIFS shares, select Container templates, ISO images, or Disk image. Explicitly avoid Containers and VZDump backup file to prevent Proxmox from treating the media share as a system storage.
  • Subdirectory Configuration: Leave the Proxmox storage subdirectory field blank to mount the root of the SMB share, enabling flexible navigation to specific subfolders (e.g., Documents/Movies).
  • Jellyfin Path Targeting: Point Jellyfin directly to the specific media subfolder (e.g., /media/synology/Documents/Movies) rather than the mount root.

Current Configuration#

  • LXC Config Path: /etc/pve/lxc/<CT-ID>.conf (e.g., /etc/pve/lxc/100.conf).
  • Mount Syntax: mp0: /mnt/pve/<Storage-ID>,mp=/media/<mount-name>
  • Permission Mapping (for unprivileged containers; see the host-side subuid/subgid note after this list):
    lxc.idmap: u 0 100000 65536
    lxc.idmap: g 0 100000 65536
    lxc.idmap: u 1000 1000 1
    lxc.idmap: g 1000 1000 1
    lxc.idmap: u 1001 101001 64535
    lxc.idmap: g 1001 101001 64535
  • Jellyfin Setup: Dashboard → Libraries → Add Media Library → Folders: /media/synology/Documents/Movies.
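The lxc.idmap entries above only take effect if the Proxmox host also allows root to delegate UID/GID 1000. A minimal host-side sketch, assuming the stock Proxmox subuid/subgid layout:

    # Append to /etc/subuid and /etc/subgid on the Proxmox host
    root:1000:1

Without these entries the container typically refuses to start because the custom mapping is not permitted.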

Historical Notes#

  • Conversation date: 2025-11-07.
  • Focuses on resolving LXC mount visibility and permission issues for Jellyfin on a Synology NAS.
  • No major infrastructure changes flagged; patterns remain valid for Proxmox 8.x.

Open Questions#

  • None.

Sources#

  • ingested/chats/092-Mount NAS Storage to LXC Containers.md
  • ingested/chats/091-Mount Synology SMB to Proxmox Guide.md
  • ingested/chats/090-Mount SMB Share on Proxmox Guide.md
  • DeepSeek conversation: Mount NAS Storage to LXC Containers (2025-11-07)
  • Historical DeepSeek conversation: “Mount SMB Share on Proxmox Guide” (2025-11-07)

Proxmox NVMe Partition Management#

What Was Established#

  • Advanced manual partitioning strategy for separating OS and VM/Container data on a single NVMe drive.
  • Recommended for test clusters where OS reinstalls are expected without risking data loss on the storage partition.
  • Uses sgdisk to create distinct partitions that Proxmox treats as independent storage targets.

Key Decisions#

  • Partition Layout: EFI (1GB), OS (~100GB), Data (remaining ~399GB).
  • Tooling: sgdisk for GPT partitioning, mkfs.ext4/mkfs.fat for formatting, manual mounting for data.
  • Storage Integration: Data partition mounted at /mnt/data and added as a Directory storage in Proxmox.

Current Configuration#

  • Target Hardware: 3x Laptops with 500GB NVMe drives.
  • Partition Scheme:
    • /dev/nvme0n1p1: 1GB EFI System Partition (FAT32)
    • /dev/nvme0n1p2: ~100GB Proxmox OS (ext4)
    • /dev/nvme0n1p3: ~399GB VM/Container Data (ext4)
  • Post-Install Mount: Data partition mounted at /mnt/data and auto-mounted via /etc/fstab.
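A sketch of the commands behind this scheme, assuming the drive enumerates as /dev/nvme0n1 and that partitioning happens from the installer shell before launching proxinstall; the storage ID local-data is a placeholder, and device names should be double-checked before running anything destructive:

    # Pre-install (installer shell): carve the GPT layout
    sgdisk --zap-all /dev/nvme0n1
    sgdisk -n 1:0:+1G   -t 1:ef00 /dev/nvme0n1    # EFI System Partition
    sgdisk -n 2:0:+100G -t 2:8300 /dev/nvme0n1    # Proxmox OS
    sgdisk -n 3:0:0     -t 3:8300 /dev/nvme0n1    # VM/Container data (remainder)

    # Post-install (Proxmox host): format, mount, and register the data partition
    mkfs.ext4 /dev/nvme0n1p3
    mkdir -p /mnt/data
    echo '/dev/nvme0n1p3 /mnt/data ext4 defaults 0 2' >> /etc/fstab
    mount -a
    pvesm add dir local-data --path /mnt/data --content images,rootdir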

Historical Notes#

  • As of 2026-02-28, the user opted for the “complex” manual partitioning approach over the installer’s built-in LVM/ZFS options to ensure OS and data appear as totally separate drives.
  • This method requires dropping to a shell during installation and manually launching proxinstall.

Open Questions#

  • How does this partitioning scheme interact with future ZFS pool expansions if the laptops are clustered?
  • Are there specific sgdisk flags needed for NVMe drives in UEFI mode?

Sources#

  • ingested/chats/013-Set Up Proxmox with LVM and ZFS.md
  • DeepSeek conversation: Proxmox OS and Storage Separation Guide (2026-02-28)

Proxmox Storage Management#

What Was Established#

  • SMB/CIFS shares can be integrated into Proxmox via CLI or the Web GUI.
  • GUI integration is recommended for Proxmox-native features (backups, ISOs, templates) and automatic retention management.
  • Local disk inspection (e.g., sda2) requires verifying mount status and filesystem type to avoid conflicts with active root partitions.

CLI Mounting (Persistent)#

  1. Install utilities: apt update && apt install cifs-utils
  2. Create credentials file: /etc/smb-credentials with chmod 600.
  3. Add to /etc/fstab: //server/share /mnt/smb-share cifs credentials=/etc/smb-credentials,uid=0,gid=0,file_mode=0660,dir_mode=0770,iocharset=utf8 0 0
  4. Test mount: mount -a
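The credentials file in steps 2-3 is a plain key=value file; a sketch with placeholder values:

    # /etc/smb-credentials (chmod 600)
    username=your_username
    password=your_password
    domain=WORKGROUP    # optional; only needed if the share uses a domain/workgroup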

GUI Mounting (Proxmox Native)#

  • Navigate to Datacenter → Storage → Add → SMB/CIFS.
  • Required Fields: ID, Server, Share, Username, Password.
  • Content Types (select based on use case):
    • ISO images
    • Container templates
    • VZDump backup files
  • Advanced Options: Max Backups, Prune Backups, SMB Version (e.g., 3.0), Domain.
  • Verification: Green checkmark in Storage list indicates active status. Use “Test Connection” if available.

Local Disk Inspection#

  • Check status: lsblk, blkid /dev/sda2, mount | grep sda2.
  • Mount temporarily: mount /dev/sda2 /mnt/sda2.
  • Inspect: ls -la /mnt/sda2, df -h, du -sh /mnt/sda2/*.
  • Note: sda2 is typically the root partition on Proxmox hosts; avoid modifying if already mounted at /.

Open Questions#

  • Specific purpose of sda2 on Isengard (192.168.1.69)? (Likely root, but verify with lsblk/blkid).
  • SMB share usage for LonelyMountain (192.168.1.137) backups vs ISOs? (GUI method preferred for VZDump retention).

Proxmox Systemd Mounts#

What Was Established#

  • SMB/CIFS shares can be integrated into Proxmox via CLI or the Web GUI.
  • GUI integration is recommended for Proxmox-native features (backups, ISOs, templates) and automatic retention management.
  • System-level mounts (e.g., at /media/synology) are distinct from Proxmox Storage and are used for general file operations accessible by the host OS.
  • Proxmox automatically mounts SMB shares under /mnt/pve/<storage_id> when configured via the GUI.

Key Decisions#

  • Maintain both a Proxmox Storage entry (for VM disks/backups) and a system-level mount (for general file access).
  • Extract credentials from /etc/pve/storage.cfg to avoid hardcoding passwords in multiple places.
  • Use pvesm status to verify existing storage mounts and avoid conflicts.

Current Configuration#

  • Synology NAS (LonelyMountain): 192.168.1.137
  • Proxmox Storage: Configured via Datacenter > Storage (SMB/CIFS) for VM-related content.
  • System Mount: Targeted at /media/synology for general file operations.

System-Level Mount Methods#

1. Extract Credentials & Create System Mount#

  1. Check existing Proxmox storage: pvesm status
  2. Extract server/share info from /etc/pve/storage.cfg
  3. Create credentials file:
    nano /root/.smbcredentials
    # Add (one per line): username=your_username and password=your_password
    chmod 600 /root/.smbcredentials
  4. Add to /etc/fstab:
    //192.168.1.137/share_name /media/synology cifs credentials=/root/.smbcredentials,uid=0,gid=0,file_mode=0660,dir_mode=0770,iocharset=utf8 0 0
  5. Test: mount -a

2. Systemd Mount Unit (Alternative to fstab)#

Create /etc/systemd/system/media-synology.mount:
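A sketch of the unit, mirroring the fstab options above; the share name is a placeholder, and the unit file name must match the Where= path (media-synology.mount for /media/synology):

    [Unit]
    Description=Synology SMB share for general file access
    Wants=network-online.target
    After=network-online.target

    [Mount]
    What=//192.168.1.137/share_name
    Where=/media/synology
    Type=cifs
    Options=credentials=/root/.smbcredentials,uid=0,gid=0,file_mode=0660,dir_mode=0770,iocharset=utf8

    [Install]
    WantedBy=multi-user.target

Enable it with systemctl daemon-reload && systemctl enable --now media-synology.mount.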

Proxmox VM Boot Troubleshooting#

What Was Established#

  • Ubuntu VMs can hang during the boot process at apparmor.service (displayed as “Starting apparmor.service - Load AppArmor profiles…”).
  • In Proxmox, this specific hang was caused by the iGPU passthrough configuration; see the Coffee Lake recovery and re-configuration steps below.

Coffee Lake iGPU Passthrough Freeze & Recovery#

  • Issue: Coffee Lake iGPU passthrough to an Ubuntu VM causes the Proxmox VM to freeze on boot.
  • Immediate Recovery Steps (run from Proxmox host console):
    1. Stop the frozen VM: qm stop <VMID> --force or kill -9 <PID> via ps aux | grep qemu.
    2. Remove iGPU from VM config: Edit /etc/pve/qemu-server/<VMID>.conf and remove hostpci lines and GPU-related args: lines.
    3. Reset host iGPU state: Edit /etc/modprobe.d/pve-blacklist.conf and comment out blacklist i915.
    4. Reboot host: update-initramfs -u -k all && reboot.
    5. Verify host recovery: lspci | grep -i vga and lsmod | grep i915.
  • Proper Re-configuration Steps:
    1. Enable IOMMU: Update /etc/default/grub with GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt", then update-grub && reboot.
    2. Verify IOMMU Groups: find /sys/kernel/iommu_groups/ -type l.
    3. Add iGPU via UI: VM → Hardware → Add PCI Device → Select iGPU → Check “All Functions” & “PCI-Express” → DO NOT check “Primary GPU”.
  • Advanced/Alternative Configuration:
    • GRUB Parameters: quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=efifb:off
    • Host Blacklist: echo "blacklist i915" >> /etc/modprobe.d/pve-blacklist.conf
    • VM Config (args):
      args: -device vfio-pci,host=00:02.0,addr=0x18,x-igd-gms=1,driver=vfio-pci
      args: -device vfio-pci,host=00:02.1,addr=0x18.1,driver=vfio-pci
      args: -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off
      args: -set device.vga.ramfb=off
      args: -set device.vga.driver=vfio-pci
    • VM Hardware Requirements: Machine q35, BIOS OVMF (UEFI), Display Default (not SPICE).
    • Ubuntu VM Guest: Install Intel graphics drivers, add i915.enable_guc=2 to kernel parameters, ensure early KMS start.
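A sketch of those guest-side steps, assuming a stock Ubuntu layout (initramfs-tools and GRUB):

    # Inside the Ubuntu VM
    echo i915 >> /etc/initramfs-tools/modules     # load i915 early (early KMS)
    # In /etc/default/grub, append i915.enable_guc=2 to GRUB_CMDLINE_LINUX_DEFAULT, then:
    update-initramfs -u
    update-grub
    reboot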

Sources#

  • ingested/chats/103-Troubleshooting Coffee Lake iGPU Passthrough on Proxmox.md
  • ingested/chats/101-Troubleshooting Slow Ubuntu VM Boot.md

TrueNAS (Vairë) - Storage & Backup Server#

What Was Established#

TrueNAS (Vairë) is deployed as a Proxmox VM to serve as the primary storage and backup server for the homelab. It handles ZFS storage pools, NFS/SMB shares for Proxmox and other VMs, and hosts Collabora Office in an iocage jail.

Key Decisions#

  • VM Type: Q35 with UEFI firmware (recommended for ZFS stability).
  • Resources: 16 GiB RAM (fixed, no ballooning) and 2 vCPUs. 32 GB boot disk separate from storage.
  • Storage: 4TB HDD passed through directly to the VM for ZFS data integrity and performance (see the passthrough sketch after this list).
  • Backup Integration: NFS share (/mnt/tank/backups) configured for Proxmox backups. SMB share available for manual access.
  • Collabora Office: Deployed in a dedicated iocage jail on port 9980.
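A sketch of the disk-passthrough step on the Proxmox host; the VM ID and disk serial are placeholders (the VM ID is noted below as pending verification):

    # Identify the 4TB disk by its stable ID, then attach it raw to the TrueNAS VM
    ls -l /dev/disk/by-id/
    qm set <VMID> -scsi1 /dev/disk/by-id/ata-<DISK_SERIAL>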

Current Configuration#

  • Hostname: Vairë
  • IP Address: 192.168.1.100 (NFS) / 192.168.1.133 (SMB)
  • Proxmox VM ID: [Pending verification]
  • ZFS Pool: tank (4TB HDD passthrough)
  • NFS Share: /mnt/tank/backups (Network: 192.168.1.0/24, Maproot: root)
  • SMB Share: /mnt/tank/backups (Version 3.0)
  • Collabora Jail: iocage jail, port 9980

Historical Notes#

  • The conversation notes two different IPs for TrueNAS (192.168.1.100 and 192.168.1.133). Verify which IP is currently assigned to the TrueNAS VM.
  • TrueNAS CORE uses iocage jails. Ensure jail templates are up to date.
  • Proxmox backups to NFS require hard,intr,noatime mount options in /etc/fstab.
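A sketch of the corresponding fstab entry on the Proxmox host; the mount point /mnt/truenas-backups is a placeholder, and the active IP should be confirmed first (see Open Questions):

    192.168.1.100:/mnt/tank/backups  /mnt/truenas-backups  nfs  hard,intr,noatime  0  0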

Open Questions#

  • Which IP address is currently active for Vairë (192.168.1.100 or 192.168.1.133)?
  • Is the 4TB HDD currently formatted as a ZFS pool on Vairë?
  • Has Collabora Office been integrated with a Nextcloud instance?

Sources#

  • ingested/chats/032-TrueNas - Vairë.md
  • Historical DeepSeek conversation on TrueNAS VM setup and Proxmox integration.

Windows VM Installation Troubleshooting#

What Was Established#

Troubleshooting guide for Windows installation when the local disk does not appear in the partitioning screen during setup.

Key Troubleshooting Steps#

  1. Check disk detection in BIOS/UEFI — If the disk doesn’t appear in BIOS, it’s a hardware issue (loose cable, faulty drive, wrong SATA port).

  2. Load storage drivers — Modern NVMe/RAID controllers may need drivers loaded during setup via the “Load driver” option on the disk-selection screen (or drvload from a Shift + F10 command prompt).

Proxmox ZFS Storage Setup#

What Was Established#

Procedures for initializing ZFS pools using multiple drives via the Proxmox Web GUI and configuring them as usable storage for Virtual Machines (VMs) and Linux Containers (LXC).

Key Decisions#

  • RAID Levels: Selection depends on the number of disks and redundancy requirements:
    • Stripe (RAID 0): Maximum capacity, no redundancy.
    • Mirror (RAID 1): Redundancy, 50% capacity loss.
    • RAID-Z1/Z2: Requires 3+ disks for parity-based redundancy.
  • Compression: lz4 is the recommended compression algorithm for performance.
  • Ashift: Set to 12 for modern SSDs/NVMe to ensure proper block alignment.
  • Thin Provisioning: Enabled for storage pools to allow for flexible disk allocation.
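For reference, a shell equivalent of those choices; the pool name tank and the member devices are placeholders, and the Web GUI performs the same steps:

    zpool create -o ashift=12 -O compression=lz4 tank mirror /dev/sdX /dev/sdY
    pvesm add zfspool tank --pool tank --content images,rootdir --sparse 1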

Current Configuration#

1. Initialize ZFS Pools (Web GUI)#

Navigate to: Datacenter → Node → Disks → ZFS