Hi all. I was curious about some of the pros and cons of using Proxmox in a home lab setup. It seems like overkill for most home labs, but I feel like there may be something I'm missing. Let's say I run my home lab on two or three different machines: the main server is an x86 i5 box with 16 gigs of memory, and the others are ARM SBCs with 8 gigs each. Ample storage on all of them. Wouldn't Proxmox be overkill here and eat up more system resources than just running base Ubuntu, Debian, or another server distro on them all and running the needed services from binaries or Docker? It seems like the extra memory needed to run Proxmox itself, plus the containers on top, would just eat into available memory and CPU. Am I wrong in thinking that Proxmox is better suited to a machine with 32 gigs or more of memory and a reasonably powerful baseline CPU?
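For concreteness, the "base distro + Docker" route I mean is something like the sketch below. The image and service are only examples, not what I actually run:

```sh
# Plain Debian/Ubuntu host, services via Docker, no hypervisor layer.
# The jellyfin image here is just an example workload.
sudo apt install docker.io
docker run -d --name jellyfin --restart unless-stopped \
  -p 8096:8096 -v jellyfin_config:/config \
  jellyfin/jellyfin
```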

  • TCB13@lemmy.world

    > LXC is worse than virtualization as it pins to a single core instead of getting scheduled by the kernel scheduler. It's also quite slow and dated. Either run Podman, Docker or full VMs.

    First, what you're saying about the scheduler isn't even what happens by default; that was some crap Proxmox pulled when they migrated from OpenVZ to LXC. To be fair, they had a bunch of more or less valid reasons to force that configuration, but again, it was due to kernel-related issues that affected Proxmox more than regular Ubuntu, and those issues were solved around the end of 2021.
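    To make that concrete, here is how you can check for pinning yourself; the container name and VMID below are just examples:

    ```sh
    # Stock LXC on Debian/Ubuntu: no pinning by default, the kernel
    # scheduler spreads the container's tasks across all cores.
    # This prints a cpuset only if one was explicitly configured
    # (the container must be running):
    lxc-cgroup -n mycontainer cpuset.cpus

    # On recent Proxmox you can inspect and adjust per-container cores:
    pct cpusets            # show core assignments of running containers
    pct set 101 --cores 4  # '101' is an example VMID
    ```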

    Now, Docker and LXC serve different purposes and aren't replacements for each other. Docker is a stateless application container solution, while LXC is a persistent system container solution aimed at running full operating systems…
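    Roughly the difference in practice (the names below are just examples):

    ```sh
    # Docker: one stateless app; the container filesystem is disposable
    # and state lives in a named volume.
    docker run -d --name web -v web_data:/usr/share/nginx/html nginx

    # LXC: a full, persistent Debian system with its own init, users and
    # services; closer to a lightweight VM than to an app container.
    lxc-create -n debsys -t download -- -d debian -r bookworm -a amd64
    lxc-start -n debsys
    lxc-attach -n debsys   # drops you into a shell in a complete OS
    ```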

    Docker and LXC share a bunch of underlying technologies, and in the beginning Docker even used LXC as its backend; they later moved to their own execution environment, libcontainer, because they weren't using all the features that LXC provided and wanted more control over the implementation.

    For those who really need full systems, LXC is definitely faster than a VM. Your argument assumes everything can and should be done inside Docker/Podman, when that's very far from reality. The Docker guys have written a very good article showcasing the differences and optimal use cases for both.
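    An unscientific way to see it for yourself (container and guest names are just examples): run the same benchmark inside an LXC container and inside a KVM guest on the same host. The gap shows up most on I/O, not raw CPU:

    ```sh
    # Inside the LXC container:
    lxc-attach -n debsys -- sysbench fileio --file-total-size=1G prepare
    lxc-attach -n debsys -- sysbench fileio --file-total-size=1G \
      --file-test-mode=rndrw run
    # Then run the same two sysbench commands inside the VM and compare.
    ```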

    Here are two quotes for you:

    > LXC is especially beneficial for users who need granular control over their environments and applications that require near-native performance. As an open source project, LXC continues to evolve, shaped by a community of developers committed to enhancing its capabilities and integration with the Linux kernel. LXC remains a powerful tool for developers looking for efficient, scalable, and secure containerization solutions. Efficient access to hardware resources (…) Virtual Desktop Infrastructure (VDI) (…) Close to native performance, suitable for intensive computational tasks.

    > Docker excels in environments where deployment speed and configuration simplicity are paramount, making it an ideal choice for modern software development. Streamlined deployment (…) Microservices architecture (…) CI/CD pipelines.

    Anyways…

    > It also ships with a newer kernel than Debian, although it shouldn't matter as you are using it for virtualization.

    It matters, trust me. Once you start requiring modules it will suddenly matter. Either way, even if they ship a kernel that is newer than Debian's, it's so fucked at that point that you'll be better off with whatever Debian provides out of the box.
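    A quick example of the module problem; wireguard is just an illustrative module, substitute whatever you depend on:

    ```sh
    # Check whether the running kernel actually ships a module you need:
    uname -r
    modinfo wireguard >/dev/null 2>&1 && echo "wireguard: available" \
                                      || echo "wireguard: missing"
    ```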