Filament Drying & Storage Guide#

What Was Established#

Drying temperatures, times, and storage practices for common 3D printing filaments (TPU, PLA). An airtight container with desiccant is the standard storage method, and fresh-from-package filament typically needs minimal drying.

Drying Reference#

| Filament | Temp | Time (stored) | Time (fresh) |
| --- | --- | --- | --- |
| TPU (Inland) | 60-65°C | 4-6 hours | 2-3 hours (or skip) |
| PLA | 50-55°C | 4-6 hours | 1-2 hours (or skip) |

Lower-temp drying: Some prefer 50°C for TPU to reduce deformation risk, extending time to 6-8 hours. The “lower and slower” approach is safer for sensitive filaments.

Proxmox LXC Storage Mounts#

What Was Established#

  • SMB/CIFS shares can be integrated into Proxmox via CLI or the Web GUI.
  • GUI integration is recommended for Proxmox-native features (backups, ISOs, templates) and automatic retention management.
  • LXC containers can directly mount Proxmox storage paths using mp0/mp1 directives in the container config file.
  • Separate CIFS storages should be used for media vs. backups to maintain clean separation of concerns.

Key Decisions#

  • Storage Content Types: For media-only CIFS shares, select Container templates, ISO images, or Disk image. Explicitly avoid Containers and VZDump backup file to prevent Proxmox from treating the media share as a system storage.
  • Subdirectory Configuration: Leave the Proxmox storage subdirectory field blank to mount the root of the SMB share, enabling flexible navigation to specific subfolders (e.g., Documents/Movies).
  • Jellyfin Path Targeting: Point Jellyfin directly to the specific media subfolder (e.g., /media/synology/Documents/Movies) rather than the mount root.

Current Configuration#

  • LXC Config Path: /etc/pve/lxc/<CT-ID>.conf (e.g., /etc/pve/lxc/100.conf).
  • Mount Syntax: mp0: /mnt/pve/<Storage-ID>,mp=/media/<mount-name>
  • Permission Mapping (for unprivileged containers):
    lxc.idmap: u 0 100000 1000
    lxc.idmap: g 0 100000 1000
    lxc.idmap: u 1000 1000 1
    lxc.idmap: g 1000 1000 1
    lxc.idmap: u 1001 101001 64535
    lxc.idmap: g 1001 101001 64535
  • Jellyfin Setup: Dashboard → Libraries → Add Media Library → Folders: /media/synology/Documents/Movies.
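
Putting these pieces together, a container config might look like the following. This is a hypothetical sketch: the storage ID synology-media and CT ID 100 are assumptions, and host UID/GID 1000 is assumed to own the files on the share.

```ini
# /etc/pve/lxc/100.conf (fragment)
# Bind-mount the Proxmox-managed SMB mount into the container:
mp0: /mnt/pve/synology-media,mp=/media/synology
# Map container UID/GID 1000 straight through to host 1000 so the
# container's service user can read the share; everything else stays
# shifted into the unprivileged 100000+ range (note the first range
# covers only IDs 0-999 so it does not overlap the 1000 mapping):
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

The host must also permit the pass-through: root needs matching `1000:1` entries in /etc/subuid and /etc/subgid for the container to start with this mapping.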

Historical Notes#

  • Conversation date: 2025-11-07.
  • Focuses on resolving LXC mount visibility and permission issues for Jellyfin on a Synology NAS.
  • No major infrastructure changes flagged; patterns remain valid for Proxmox 8.x.

Open Questions#

  • None.

Sources#

  • ingested/chats/092-Mount NAS Storage to LXC Containers.md
  • ingested/chats/091-Mount Synology SMB to Proxmox Guide.md
  • ingested/chats/090-Mount SMB Share on Proxmox Guide.md
  • DeepSeek conversation: Mount NAS Storage to LXC Containers (2025-11-07)
  • Historical DeepSeek conversation: “Mount SMB Share on Proxmox Guide” (2025-11-07)


Proxmox NVMe Partition Management#

What Was Established#

  • Advanced manual partitioning strategy for separating OS and VM/Container data on a single NVMe drive.
  • Recommended for test clusters where OS reinstalls are expected without risking data loss on the storage partition.
  • Uses sgdisk to create distinct partitions that Proxmox treats as independent storage targets.

Key Decisions#

  • Partition Layout: EFI (1GB), OS (~100GB), Data (remaining ~399GB).
  • Tooling: sgdisk for GPT partitioning, mkfs.ext4/mkfs.fat for formatting, manual mounting for data.
  • Storage Integration: Data partition mounted at /mnt/data and added as a Directory storage in Proxmox.

Current Configuration#

  • Target Hardware: 3x Laptops with 500GB NVMe drives.
  • Partition Scheme:
    • /dev/nvme0n1p1: 1GB EFI System Partition (FAT32)
    • /dev/nvme0n1p2: ~100GB Proxmox OS (ext4)
    • /dev/nvme0n1p3: ~399GB VM/Container Data (ext4)
  • Post-Install Mount: Data partition mounted at /mnt/data and auto-mounted via /etc/fstab.
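
The layout above can be sketched as an sgdisk command sequence. This is a hypothetical reconstruction (the device name, partition labels, and the plan-printing wrapper are ours, not from the source); it prints the commands for review instead of executing them, since these steps are run by hand from the installer's debug shell:

```shell
#!/bin/sh
# Print (not execute) a partitioning plan for a 500 GB NVMe drive.
DISK=/dev/nvme0n1

PLAN=""
plan() { PLAN="$PLAN$*
"; echo "$@"; }

plan sgdisk --zap-all "$DISK"                           # wipe any existing GPT
plan sgdisk -n 1:0:+1G   -t 1:ef00 -c 1:EFI  "$DISK"    # 1 GB EFI System Partition
plan sgdisk -n 2:0:+100G -t 2:8300 -c 2:OS   "$DISK"    # ~100 GB Proxmox OS
plan sgdisk -n 3:0:0     -t 3:8300 -c 3:DATA "$DISK"    # remaining ~399 GB data
plan mkfs.fat -F32 "${DISK}p1"
plan mkfs.ext4 "${DISK}p2"
plan mkfs.ext4 "${DISK}p3"
```

After running the real commands, the data partition is mounted at /mnt/data and added to /etc/fstab as described above.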

Historical Notes#

  • As of 2026-02-28, the user opted for the “complex” manual partitioning approach over the installer’s built-in LVM/ZFS options to ensure OS and data appear as totally separate drives.
  • This method requires dropping to a shell during installation and manually launching proxinstall.

Open Questions#

  • How does this partitioning scheme interact with future ZFS pool expansions if the laptops are clustered?
  • Are there specific sgdisk flags needed for NVMe drives in UEFI mode?

Sources#

  • ingested/chats/013-Set Up Proxmox with LVM and ZFS.md
  • DeepSeek conversation: Proxmox OS and Storage Separation Guide (2026-02-28)

Proxmox Storage Management#


title: Proxmox Storage Management
version: 1.2
date: 2026-04-30
namespace: general
wiki: homelab
tags: [proxmox, storage, smb, cifs, zfs, backups]
changes: Crystallized from historical DeepSeek conversation
historical: true

What Was Established#

  • SMB/CIFS shares can be integrated into Proxmox via CLI or the Web GUI.
  • GUI integration is recommended for Proxmox-native features (backups, ISOs, templates) and automatic retention management.
  • Local disk inspection (e.g., sda2) requires verifying mount status and filesystem type to avoid conflicts with active root partitions.

CLI Mounting (Persistent)#

  1. Install utilities: apt update && apt install cifs-utils
  2. Create credentials file: /etc/smb-credentials with chmod 600.
  3. Add to /etc/fstab: //server/share /mnt/smb-share cifs credentials=/etc/smb-credentials,uid=0,gid=0,file_mode=0660,dir_mode=0770,iocharset=utf8 0 0
  4. Test mount: mount -a
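
The credentials file from step 2 looks like this (the values are placeholders):

```
# /etc/smb-credentials — must be chmod 600 so only root can read it
username=your_username
password=your_password
```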

GUI Mounting (Proxmox Native)#

  • Navigate to Datacenter → Storage → Add → SMB/CIFS.
  • Required Fields: ID, Server, Share, Username, Password.
  • Content Types (select based on use case):
    • ISO images
    • Container templates
    • VZDump backup files
  • Advanced Options: Max Backups, Prune Backups, SMB Version (e.g., 3.0), Domain.
  • Verification: Green checkmark in Storage list indicates active status. Use “Test Connection” if available.
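
For reference, the GUI writes an entry like the following to /etc/pve/storage.cfg. This is a hypothetical example: the storage ID, server, share, username, and retention values are assumptions.

```
cifs: nas-backups
        path /mnt/pve/nas-backups
        server 192.168.1.137
        share backup
        username proxmox
        content backup,iso,vztmpl
        smbversion 3.0
        prune-backups keep-last=5
```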

Local Disk Inspection#

  • Check status: lsblk, blkid /dev/sda2, mount | grep sda2.
  • Mount temporarily: mount /dev/sda2 /mnt/sda2.
  • Inspect: ls -la /mnt/sda2, df -h, du -sh /mnt/sda2/*.
  • Note: sda2 is typically the root partition on Proxmox hosts; avoid modifying if already mounted at /.
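
The "avoid modifying if already mounted" check can be scripted as a guard. A small sketch (the helper name is ours, not from the source) that scans /proc/mounts rather than parsing `mount` output:

```shell
#!/bin/sh
# is_mounted PATH — succeed if PATH is currently a mount point.
is_mounted() {
  awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

if is_mounted /mnt/sda2; then
  echo "/mnt/sda2 is already mounted; leaving it alone"
else
  echo "safe to mount /dev/sda2 at /mnt/sda2"
fi
```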

Open Questions#

  • Specific purpose of sda2 on Isengard (192.168.1.69)? (Likely root, but verify with lsblk/blkid).
  • SMB share usage for LonelyMountain (192.168.1.137) backups vs ISOs? (GUI method preferred for VZDump retention).


Proxmox Systemd Mounts#

What Was Established#

  • SMB/CIFS shares can be integrated into Proxmox via CLI or the Web GUI.
  • GUI integration is recommended for Proxmox-native features (backups, ISOs, templates) and automatic retention management.
  • System-level mounts (e.g., at /media/synology) are distinct from Proxmox Storage and are used for general file operations accessible by the host OS.
  • Proxmox automatically mounts SMB shares under /mnt/pve/<storage_id> when configured via the GUI.

Key Decisions#

  • Maintain both a Proxmox Storage entry (for VM disks/backups) and a system-level mount (for general file access).
  • Extract credentials from /etc/pve/storage.cfg to avoid hardcoding passwords in multiple places.
  • Use pvesm status to verify existing storage mounts and avoid conflicts.

Current Configuration#

  • Synology NAS (LonelyMountain): 192.168.1.137
  • Proxmox Storage: Configured via Datacenter > Storage (SMB/CIFS) for VM-related content.
  • System Mount: Targeted at /media/synology for general file operations.

System-Level Mount Methods#

1. Extract Credentials & Create System Mount#

  1. Check existing Proxmox storage: pvesm status
  2. Extract server/share info from /etc/pve/storage.cfg
  3. Create credentials file:
    nano /root/.smbcredentials
    # Contents (one key=value per line):
    #   username=your_username
    #   password=your_password
    chmod 600 /root/.smbcredentials
  4. Add to /etc/fstab:
    //192.168.1.137/share_name /media/synology cifs credentials=/root/.smbcredentials,uid=0,gid=0,file_mode=0660,dir_mode=0770,iocharset=utf8 0 0
  5. Test: mount -a

Create /etc/systemd/system/media-synology.mount:
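
The unit body itself was not captured in the source. A plausible reconstruction, reusing the share and mount options from the fstab method above (share_name remains the placeholder from step 4):

```ini
# /etc/systemd/system/media-synology.mount
# Note: the unit filename must match Where= (media-synology ↔ /media/synology).
[Unit]
Description=Synology SMB share at /media/synology
After=network-online.target
Wants=network-online.target

[Mount]
What=//192.168.1.137/share_name
Where=/media/synology
Type=cifs
Options=credentials=/root/.smbcredentials,uid=0,gid=0,file_mode=0660,dir_mode=0770,iocharset=utf8

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now media-synology.mount`, and remove the /etc/fstab line first so the two methods do not fight over the same mount point.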

Servarr - Media Automation Stack#

Overview#

Servarr is a full VM at 192.168.1.112 (hostname: servarr) running a Docker Compose media automation stack. All services depend on a NAS mount at /data for media storage. Download clients (qbittorrent, nzbget) and indexer (prowlarr) route through a Gluetun VPN container via network_mode: service:gluetun.

Note: This VM is distinct from Varda (192.168.1.131), which is a separate web server hosting ilmare.nbkelley.com.

VM Specs#

| Detail | Value |
| --- | --- |
| Hostname | servarr |
| IP | 192.168.1.112 |
| OS | Ubuntu 24.04.4 LTS (Noble) |
| Kernel | 6.8.0-107-generic |
| CPU | QEMU Virtual CPU, 4 vCPUs |
| RAM | 7.8 GB |
| Disk | 63 GB root (/dev/sda2 ext4, 38% used) |
| Hypervisor | Proxmox (Minas Tirith) |

Container Inventory#

Servarr Stack (/docker/servarr/compose.yaml)#

Network: servarrnetwork (172.39.0.0/24)
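
The VPN-routing pattern described above can be sketched as a compose fragment. This is hypothetical: the image names, the gluetun address, and the volume paths are assumptions, and only two of the services are shown.

```yaml
# /docker/servarr/compose.yaml (sketch)
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    networks:
      servarrnetwork:
        ipv4_address: 172.39.0.2

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: service:gluetun   # all traffic exits via the VPN container
    volumes:
      - /data:/data                 # NAS mount for media

networks:
  servarrnetwork:
    ipam:
      config:
        - subnet: 172.39.0.0/24
```

Services attached via `network_mode: service:gluetun` cannot publish ports themselves; any port mappings they need go on the gluetun container.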

TrueNAS (Vairë) - Storage & Backup Server#

What Was Established#

TrueNAS (Vairë) is deployed as a Proxmox VM to serve as the primary storage and backup server for the homelab. It handles ZFS storage pools, NFS/SMB shares for Proxmox and other VMs, and hosts Collabora Office in an iocage jail.

Key Decisions#

  • VM Type: Q35 with UEFI firmware (recommended for ZFS stability).
  • Resources: 16 GiB RAM (fixed, no ballooning) and 2 vCPUs. 32 GB boot disk separate from storage.
  • Storage: 4TB HDD passed through directly to the VM for ZFS data integrity and performance.
  • Backup Integration: NFS share (/mnt/tank/backups) configured for Proxmox backups. SMB share available for manual access.
  • Collabora Office: Deployed in a dedicated iocage jail on port 9980.
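
Disk passthrough is typically wired up with `qm set`; a hypothetical sketch (the VM ID is pending verification above, and the by-id path is a placeholder left as-is):

```
# Attach the 4TB HDD directly to the TrueNAS VM (placeholders in <>):
qm set <vmid> -scsi1 /dev/disk/by-id/<4tb-hdd-serial>
```

Using the stable /dev/disk/by-id path rather than /dev/sdX keeps the passthrough intact across reboots.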

Current Configuration#

  • Hostname: Vairë
  • IP Address: 192.168.1.100 (NFS) / 192.168.1.133 (SMB)
  • Proxmox VM ID: [Pending verification]
  • ZFS Pool: tank (4TB HDD passthrough)
  • NFS Share: /mnt/tank/backups (Network: 192.168.1.0/24, Maproot: root)
  • SMB Share: /mnt/tank/backups (Version 3.0)
  • Collabora Jail: iocage jail, port 9980

Historical Notes#

  • The conversation notes two different IPs for TrueNAS (192.168.1.100 and 192.168.1.133). Verify which IP is currently assigned to the TrueNAS VM.
  • TrueNAS CORE uses iocage jails. Ensure jail templates are up to date.
  • Proxmox backups to NFS require hard,intr,noatime mount options in /etc/fstab.
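
The mount-option note above translates to an /etc/fstab entry like this on the Proxmox host. The mount point is an assumption, which IP is correct remains an open question below, and `intr` is a no-op on modern kernels but is kept as stated:

```
192.168.1.100:/mnt/tank/backups  /mnt/truenas-backups  nfs  hard,intr,noatime  0  0
```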

Open Questions#

  • Which IP address is currently active for Vairë (192.168.1.100 or 192.168.1.133)?
  • Is the 4TB HDD currently formatted as a ZFS pool on Vairë?
  • Has Collabora Office been integrated with a Nextcloud instance?

Sources#

  • ingested/chats/032-TrueNas - Vairë.md
  • Historical DeepSeek conversation on TrueNAS VM setup and Proxmox integration.

Proxmox ZFS Storage Setup#

What Was Established#

Procedures for initializing ZFS pools using multiple drives via the Proxmox Web GUI and configuring them as usable storage for Virtual Machines (VMs) and Linux Containers (LXC).

Key Decisions#

  • RAID Levels: Selection depends on the number of disks and redundancy requirements:
    • Stripe (RAID 0): Maximum capacity, no redundancy.
    • Mirror (RAID 1): Redundancy, 50% capacity loss.
    • RAID-Z1/Z2: Requires 3+ disks for parity-based redundancy.
  • Compression: lz4 is the recommended compression algorithm for performance.
  • Ashift: Set to 12 for modern SSDs/NVMe to ensure proper block alignment.
  • Thin Provisioning: Enabled for storage pools to allow for flexible disk allocation.

Current Configuration#

1. Initialize ZFS Pools (Web GUI)#

Navigate to: Datacenter → Node → Disks → ZFS
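
Once the pool exists, it still has to be registered as VM/LXC storage (Datacenter → Storage → Add → ZFS). The resulting /etc/pve/storage.cfg entry looks roughly like this (the storage ID and pool name are assumptions; `sparse 1` reflects the thin-provisioning decision above):

```
zfspool: zfs-tank
        pool tank
        content images,rootdir
        sparse 1
```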

Proxmox ZFS Storage & Installation Patterns#

What Was Established#

Procedures for managing ZFS rpool on single-disk Proxmox installations, including methods for limiting pool size and troubleshooting import failures.

Key Decisions#

  • Single Disk Size Limitation: When installing Proxmox on a large disk but wanting to limit the ZFS pool to a specific size (e.g., 64GB) to leave room for other partitions, use the hdsize parameter in the Proxmox installer’s Advanced Options.
  • Custom Partitioning Method: For complex layouts, it is possible to manually partition a drive in Debian and then upgrade the system to Proxmox VE.

Current Configuration#

ZFS Pool Creation (Manual)#

To create a ZFS pool with specific optimizations (ashift=12, compression=lz4) and a size limit:
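
The command itself was not captured in the source. A plausible sketch follows (the pool and partition names are assumptions): the size limit comes from creating the pool on a fixed-size partition rather than a whole disk, consistent with the hdsize approach above. The command is printed for review rather than executed:

```shell
#!/bin/sh
# Assumed names: pool "tank" on a pre-sized (e.g., 64 GB) data partition.
POOL=tank
VDEV=/dev/nvme0n1p4   # hypothetical fixed-size partition; this caps the pool size

CMD="zpool create -f -o ashift=12 -O compression=lz4 $POOL $VDEV"
echo "$CMD"   # review, then run manually
```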