

title: Proxmox Storage Management
version: 1.2
date: 2026-04-30
namespace: general
wiki: homelab
tags: [proxmox, storage, smb, cifs, zfs, backups]
changes: Crystallized from historical DeepSeek conversation
historical: true

Proxmox Storage Management#

What Was Established#

  • SMB/CIFS shares can be integrated into Proxmox via CLI or the Web GUI.
  • GUI integration is recommended for Proxmox-native features (backups, ISOs, templates) and automatic retention management.
  • Local disk inspection (e.g., sda2) requires verifying mount status and filesystem type to avoid conflicts with active root partitions.

CLI Mounting (Persistent)#

  1. Install utilities: apt update && apt install cifs-utils
  2. Create credentials file: /etc/smb-credentials with chmod 600.
  3. Add to /etc/fstab: //server/share /mnt/smb-share cifs credentials=/etc/smb-credentials,uid=0,gid=0,file_mode=0660,dir_mode=0770,iocharset=utf8 0 0
  4. Test mount: mount -a
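
A minimal sketch of the supporting pieces (username, password, and domain below are placeholders): the credentials file referenced in /etc/fstab and the mount point must both exist before mount -a will succeed.

    # /etc/smb-credentials (chmod 600); placeholder values
    username=smbuser
    password=changeme
    domain=WORKGROUP

    # create the mount point, then apply all fstab entries
    mkdir -p /mnt/smb-share
    mount -a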

GUI Mounting (Proxmox Native)#

  • Navigate to Datacenter → Storage → Add → SMB/CIFS.
  • Required Fields: ID, Server, Share, Username, Password.
  • Content Types (select based on use case):
    • ISO images
    • Container templates
    • VZDump backup files
  • Advanced Options: Max Backups, Prune Backups, SMB Version (e.g., 3.0), Domain.
  • Verification: Green checkmark in Storage list indicates active status. Use “Test Connection” if available.
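
The same storage definition can also be created from the shell with pvesm (it writes the entry to /etc/pve/storage.cfg); a sketch with placeholder ID, server, share, and credentials:

    # placeholder storage ID and credentials; adjust content types to the use case
    pvesm add cifs smb-share \
        --server 192.168.1.137 \
        --share backups \
        --username smbuser \
        --password changeme \
        --content backup,iso,vztmpl \
        --smbversion 3.0

    # confirm the storage shows up as active
    pvesm status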

Local Disk Inspection#

  • Check status: lsblk, blkid /dev/sda2, mount | grep sda2.
  • Mount temporarily: mount /dev/sda2 /mnt/sda2.
  • Inspect: ls -la /mnt/sda2, df -h, du -sh /mnt/sda2/*.
  • Note: sda2 is typically the root partition on Proxmox hosts; avoid modifying if already mounted at /.
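
Putting those checks together (sda2 taken from the notes above; skip the mount entirely if the partition is already mounted at /), a read-only inspection looks like:

    # identify the partition and confirm it is not already in use
    lsblk -f /dev/sda2
    mount | grep sda2 || echo "sda2 not mounted"

    # mount read-only for inspection, then unmount when finished
    mkdir -p /mnt/sda2
    mount -o ro /dev/sda2 /mnt/sda2
    du -sh /mnt/sda2/*
    umount /mnt/sda2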

Open Questions#

  • What is the specific purpose of sda2 on Isengard (192.168.1.69)? (Likely the root partition; verify with lsblk/blkid.)
  • Should the SMB share on LonelyMountain (192.168.1.137) be used for backups or ISOs? (The GUI method is preferred for VZDump retention.)

Related pages:

  • Proxmox Systemd Mounts — systemd mount units for NAS shares
  • Proxmox LXC Storage Mounts — LXC bind mounts and permission mapping
  • Proxmox ZFS Storage & Installation Patterns
  • Proxmox NVMe Partition Management
  • TrueNAS (Vairë) - Storage & Backup Server


TrueNAS (Vairë) - Storage & Backup Server#

What Was Established#

TrueNAS (Vairë) is deployed as a Proxmox VM to serve as the primary storage and backup server for the homelab. It handles ZFS storage pools, NFS/SMB shares for Proxmox and other VMs, and hosts Collabora Office in an iocage jail.

Key Decisions#

  • VM Type: Q35 with UEFI firmware (recommended for ZFS stability).
  • Resources: 16 GiB RAM (fixed, no ballooning) and 2 vCPUs. 32 GB boot disk separate from storage.
  • Storage: 4TB HDD passed through directly to the VM for ZFS data integrity and performance.
  • Backup Integration: NFS share (/mnt/tank/backups) configured for Proxmox backups. SMB share available for manual access.
  • Collabora Office: Deployed in a dedicated iocage jail on port 9980.
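
One way to realize the disk passthrough decision (a sketch; the VM ID and disk serial are placeholders, and /dev/disk/by-id paths are preferable to /dev/sdX so the mapping survives reboots):

    # attach the 4TB HDD to the TrueNAS VM as a raw SCSI disk (placeholder IDs)
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL

    # verify the disk appears in the VM configuration
    qm config 100 | grep scsi1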

Current Configuration#

  • Hostname: Vairë
  • IP Address: 192.168.1.100 (NFS) / 192.168.1.133 (SMB)
  • Proxmox VM ID: [Pending verification]
  • ZFS Pool: tank (4TB HDD passthrough)
  • NFS Share: /mnt/tank/backups (Network: 192.168.1.0/24, Maproot: root)
  • SMB Share: /mnt/tank/backups (Version 3.0)
  • Collabora Jail: iocage jail, port 9980
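
A hedged sketch of registering the NFS export as Proxmox backup storage (the storage ID is a placeholder, and the server IP must match whichever address is confirmed below):

    # register the TrueNAS backup export; Proxmox mounts it under /mnt/pve/<ID>
    pvesm add nfs vaire-backups \
        --server 192.168.1.100 \
        --export /mnt/tank/backups \
        --content backup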

Historical Notes#

  • The conversation notes two different IPs for TrueNAS (192.168.1.100 and 192.168.1.133). Verify which IP is currently assigned to the TrueNAS VM.
  • TrueNAS CORE uses iocage jails. Ensure jail templates are up to date.
  • Proxmox backups to NFS require hard,intr,noatime mount options in /etc/fstab.
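
A sketch of the fstab entry those options imply (server IP and local mount point are placeholders; intr is ignored by modern NFS clients but is kept as written in the notes):

    # /etc/fstab entry for the TrueNAS backup export
    192.168.1.100:/mnt/tank/backups  /mnt/truenas-backups  nfs  hard,intr,noatime  0  0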

Open Questions#

  • Which IP address is currently active for Vairë (192.168.1.100 or 192.168.1.133)?
  • Is the 4TB HDD currently formatted as a ZFS pool on Vairë?
  • Has Collabora Office been integrated with a Nextcloud instance?

Sources#

  • ingested/chats/032-TrueNas - Vairë.md
  • Historical DeepSeek conversation on TrueNAS VM setup and Proxmox integration.


Proxmox ZFS Storage Setup#

What Was Established#

Procedures for initializing ZFS pools using multiple drives via the Proxmox Web GUI and configuring them as usable storage for Virtual Machines (VMs) and Linux Containers (LXC).

Key Decisions#

  • RAID Levels: Selection depends on the number of disks and redundancy requirements:
    • Stripe (RAID 0): Maximum capacity, no redundancy.
    • Mirror (RAID 1): Redundancy, 50% capacity loss.
    • RAID-Z1/Z2: Parity-based redundancy; RAID-Z1 requires 3+ disks, RAID-Z2 requires 4+.
  • Compression: lz4 is the recommended compression algorithm for performance.
  • Ashift: Set to 12 for modern SSDs/NVMe to ensure proper block alignment.
  • Thin Provisioning: Enabled for storage pools to allow for flexible disk allocation.
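
Once a pool exists, those choices can be double-checked from the shell (the pool name tank is a placeholder):

    # confirm compression and ashift on the pool
    zfs get compression tank
    zpool get ashift tank

    # thin provisioning appears as "sparse 1" in the storage definition
    grep -A4 '^zfspool:' /etc/pve/storage.cfg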

Current Configuration#

1. Initialize ZFS Pools (Web GUI)#

Navigate to: Datacenter → Node → Disks → ZFS
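
After the pool is created in the GUI, it can also be registered as VM/container storage from the shell (a sketch; the pool and storage names are placeholders):

    # expose the pool for VM disks and container volumes, thin-provisioned
    pvesm add zfspool local-tank --pool tank --content images,rootdir --sparse 1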


Proxmox ZFS Storage & Installation Patterns#

What Was Established#

Procedures for managing ZFS rpool on single-disk Proxmox installations, including methods for limiting pool size and troubleshooting import failures.

Key Decisions#

  • Single Disk Size Limitation: When installing Proxmox on a large disk but wanting to limit the ZFS pool to a specific size (e.g., 64GB) to leave room for other partitions, use the hdsize parameter in the Proxmox installer’s Advanced Options.
  • Custom Partitioning Method: For complex layouts, it is possible to manually partition a drive in Debian and then upgrade the system to Proxmox VE.

Current Configuration#

ZFS Pool Creation (Manual)#

To create a ZFS pool with specific optimizations (ashift=12, compression=lz4) and a size limit:
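
A hedged sketch of such a command (disk path, partition size, and pool name are assumptions): the pool can be limited by creating it on a fixed-size partition rather than the whole disk.

    # carve a 64G partition for the pool, leaving the rest of the disk free (placeholder disk)
    sgdisk -n1:0:+64G /dev/sdX

    # create the pool with 4K alignment (ashift=12) and lz4 compression
    zpool create -o ashift=12 -O compression=lz4 tank /dev/sdX1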