Anyone else just sick of trying to follow guides that cover 95% of the process, or that slightly miss a step, and then spending hours troubleshooting the setup just to get it to work?

I think I just have too much going on in my “lab”, to the point that when something breaks (and my wife and/or kids complain) it’s more of a hassle to try and remember how to fix or troubleshoot stuff. I only lightly document, cuz I feel like I can remember well enough. But then it’s a struggle to find the time to fix things, or stuff gets tested and 80% completed but never fully used, because life is busy and I don’t have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers, VMs, and other services. Some stuff is fine/easy or requires little effort, but other stuff just doesn’t seem worth it.

I miss GUIs, where I could fumble through settings to fix things; it’s easier for me to look through all that vs reading a bunch of commands.

Idk, do you get lab burnout? Maybe cuz I do IT for work too, it just feels like it’s never-ending…

  • Encrypt-Keeper@lemmy.world · 2 points · edited 1 day ago

    are you saying running docker in a container setup (which at this point would be 2 layers deep) uses less resources than 10 single-layer-deep containers?

    If those 10 single-layer containers are Proxmox’s LXC containers then yes, absolutely. OCI containers are isolated processes that run single services, usually just a single binary. There’s no OS, no init system. They’re very lightweight with very little overhead. They’re “containerized services”. LXC containers on the other hand are very heavy “system containers” that have a full OS and user space, init system, file systems, etc. They are one step removed from being full-size VMs, short of the fact that they can share the host’s kernel and don’t need to virtualize. In short, your single LXC running docker and a bunch of containers inside of it is far more resource efficient than running a bunch of separate LXC containers.

    One of the biggest advantages of using the hypervisor as a whole is the ability to isolate and run services as their own containers, without needing to actually enter the machine

    I mean that’s exactly what docker containers do but more efficiently.

    I can just snapshot the current setup and then rollback if it isn’t good

    I mean, that’s sort of the entire idea behind docker containers as well. It can even be automated for zero-downtime updates and deployments, as well as rollbacks.
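    As a sketch of what that can look like (the service name and image tags here are made up), pinning image tags in a Compose file turns a rollback into a one-line edit plus a redeploy:

    ```yaml
    # docker-compose.yml -- hypothetical service; image name and tags are placeholders
    services:
      myapp:
        image: ghcr.io/example/myapp:1.4.2   # pinned tag; rollback = change back to 1.4.1
        restart: unless-stopped
        healthcheck:                          # lets the engine flag the container as unhealthy
          test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
          interval: 30s
          retries: 3
    ```

    After editing the tag, `docker compose up -d` recreates only the changed service.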

    When compared to 10 CTs that are fine-tuned to their specific app, you will have better performance running the CTs than a VM running everything

    That is incorrect. Let’s break away from containers and VMs for a second and look deeper into what is happening under the hood here.

    Option A (Docker + containers): one OS, one init system, one full set of Linux libraries.

    Option B (10 LXC containers): ten operating systems, ten separate init systems, ten separate full sets of Linux libraries.

    Option A is far more lightweight, and becomes a more attractive option the more services you add.

    And not only that, but as you found out, you don’t need to run a full VM for your docker host. You could just use an LXC. Though in that case I’d still prefer the one VM, so that your containers aren’t sharing your Proxmox host’s kernel.

    Like, LXCs do have a use case, but it sounds like you’re using them as an alternative to regular service containers, and that’s not really what they’re for.
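    For reference, the docker-host LXC just needs nesting enabled on the Proxmox side. A rough sketch, where the VMID, template, and storage names are placeholders for whatever your node actually has:

    ```shell
    # Unprivileged LXC with nesting enabled so docker can run inside it.
    # "200", the Debian template, and "local-lvm" are placeholders -- adjust for your node.
    pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
      --hostname docker-host \
      --unprivileged 1 \
      --features nesting=1,keyctl=1 \
      --cores 2 --memory 2048 \
      --rootfs local-lvm:16 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp
    pct start 200
    ```

    (`keyctl=1` is commonly needed for docker inside unprivileged containers.)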

    • Pika@sh.itjust.works · 1 point · 24 hours ago

      Your statements are surprising to me, because when I initially set this system up I tested exactly that, since I had figured similarly.

      My original layout was a full docker environment under a single VM which was only running Debian 12 with docker.

      I remember seeing a good 10GB difference in RAM usage between offloading the services from the docker instance onto their own CTs and keeping them all as one unit. I guess this could be chalked up to the docker container implementation being bad, or something being wrong with the VM. It was my primary reason for keeping them isolated; it was a win/win because the services had better performance and were easier to manage.

      • Encrypt-Keeper@lemmy.world · 1 point · 23 hours ago

        There are a number of reasons your docker setup could have been using that much RAM, including just poorly built containers. You could also swap out docker for podman, which is daemonless and rootless, and registers container workloads with systemd. So if you’re married to the LXCs, you can use that for running OCI containers. Also, a new version of Proxmox added the ability to run OCI containers as LXCs, so you can run them directly without docker or podman.
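        To sketch the podman/systemd angle (Quadlet, podman 4.4+; the image here is a placeholder): you drop a unit fragment into /etc/containers/systemd/ and systemd generates and manages the service for you.

        ```ini
        # /etc/containers/systemd/myapp.container -- Quadlet unit; image is a placeholder
        [Unit]
        Description=Example OCI service managed by systemd

        [Container]
        Image=ghcr.io/example/myapp:latest
        PublishPort=8080:8080

        [Install]
        WantedBy=multi-user.target
        ```

        After `systemctl daemon-reload`, this shows up as a normal `myapp.service` you can start, stop, and enable.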

        • Pika@sh.itjust.works · 1 point · 23 hours ago

          Yea, I plan to try out the new Proxmox version at some point to check that out, thank you again.