I recently decided to rebuild my homelab after a nasty double hard drive failure (no important files were lost, thanks to ddrescue). The new setup uses one SSD as the PVE root drive and two IronWolf HDDs in a RAID 1 MD array (which I’ll probably expand to RAID 5 in the near future).

Previously the storage array had a simple ext4 filesystem mounted to /mnt/storage, which was then bind-mounted to LXC containers running my services. It worked well enough, but figuring out permissions between the host, the container, and potentially nested containers was a bit of a challenge. Now I’m using brand new hard drives and I want to do the first steps right.

The host is an old PC living a new life: i3-4160 with 8 GB DDR3 non-ECC memory.

  • Option 1 would be to do what I did before: format the array as an ext4 volume, mount it on the host, and bind-mount it to the containers. I don’t use VMs much because the system is memory-constrained, but if I did, I’d probably have to use NFS or something similar to give the VMs access to the disk.

  • Option 2 is to create an LVM volume group on the RAID array, then use Proxmox to manage the LVs. This would be my preferred option from an administration perspective, since privileges would become a non-issue and I could attach the LVs directly to VMs as disks, but I have some concerns:

    • If the host were to break irrecoverably, is it possible to open LVs created by Proxmox on a different system? If I need to back up some LVM config files to make that happen, which files are they? I’ve tried following several guides to mount the LVs, but I’ve never been successful.
    • I’m planning to put things on the server that will grow over time, like game installers, media files, and Git LFS storage. Is it better to use thinpools or should I just allocate some appropriately huge LVs to those services?
  • Option 3 is to forget mdadm and use Proxmox’s ZFS support to set up redundancy. My main concern here, on top of everything in option 2, is that ZFS needs a lot of memory for caching. Right now I can dedicate 4 GB to it, which is less than the recommendation – is it responsible to run a ZFS pool with that little?
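
On the memory question in option 3: the ARC size is tunable, and capping it is a common way to run ZFS on small hosts. A sketch of how that could look (the 2 GiB figure is just an example, not a recommendation for this workload):

```shell
# Cap the ZFS ARC at 2 GiB (value is in bytes); persists across reboots
# once the zfs module reads the option at load time.
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf

# On Proxmox, refresh the initramfs so the setting applies at early boot.
update-initramfs -u

# Or change it at runtime without rebooting:
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
```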

My primary objective is data resilience. Obviously nothing replaces a good backup solution, but that’s not something I can afford at the moment. I want to be able to reassemble and mount the array on a different system if the server falls to pieces. Option 1 seems the most conducive to that (I’ve had to do it once already), but if LVM on RAID or ZFS can offer the same resilience without major drawbacks (like difficulty mounting LVs or other issues I might run into)… I’d like to know what others use or recommend.
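
For what it’s worth, both the mdadm and the LVM metadata live on the disks themselves, so a rescue system with mdadm and lvm2 installed can usually bring the whole stack up without any config files from the dead host. A rough sketch (the VG and LV names below are Proxmox defaults and may differ on a given install – check with `vgs` and `lvs`):

```shell
# On the rescue system, with the old drives attached:

# 1. Reassemble the MD array; the superblocks on the drives identify the members.
mdadm --assemble --scan

# 2. Scan for LVM volume groups on the newly assembled array.
vgscan

# 3. Activate all logical volumes in the volume group
#    ("pve" is Proxmox's default VG name; yours may differ).
vgchange -ay pve

# 4. Mount a recovered LV read-only to copy data off it.
mount -o ro /dev/pve/vm-100-disk-0 /mnt/recovery
```

One caveat: an LV that backed a VM disk usually contains the guest’s own partition table rather than a bare filesystem, so step 4 may fail with “unknown filesystem type”; `kpartx -av` on the LV exposes the partitions inside it for mounting.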

  • Shimitar@downonthestreet.eu · 6 hours ago

    ZFS, for some reason, is always loved by the self-hosting experts.

    Personally I don’t like it, because it’s overcomplicated and not officially part of the Linux kernel.

    20 years of self-hosting for me means Linux software RAID (RAID 1, later moved to RAID 5) and mostly ext4; recently (in the last 2 years) I upgraded to btrfs on my data RAID. Btrfs at least is integrated in the Linux kernel, and while it has some drawbacks (don’t do RAID with btrfs – put btrfs on an MD RAID instead), it’s been super rock solid.

    I would have kept ext4, but I thought, why not try btrfs – and it’s been so smooth that I’ve had no reason to go back.
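
The layout the commenter describes (btrfs on top of an MD mirror) could be sketched like this – device names, mount point, and options here are illustrative, not taken from either setup:

```shell
# Assuming /dev/md0 is the already-assembled RAID 1 array, put a
# single-device btrfs filesystem on top of it: MD handles redundancy,
# btrfs adds checksums, snapshots, and transparent compression.
mkfs.btrfs /dev/md0
mount -o compress=zstd /dev/md0 /mnt/storage

# Periodic scrub to verify checksums and catch silent corruption:
btrfs scrub start /mnt/storage
```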