Anyone else just sick of trying to follow guides that cover 95% of the process, or that quietly skip a step, and then spending hours troubleshooting the setup just to get it to work?

I think I just have too much going on in my “lab”, to the point that when something breaks (and my wife and/or kids complain) it’s more of a hassle to try and remember how to fix or troubleshoot stuff. I only lightly document things because I feel like I can remember well enough. But then it’s a struggle to find the time to fix anything, or stuff gets tested and 80% completed but never fully used, because life is busy and I don’t have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers or VMs, or other services. Some stuff is fine/easy or requires little effort, but other stuff just doesn’t seem worth it.

I miss GUIs, where I could fumble through settings to fix things; it’s easier for me to look through all that than to read a bunch of commands.

Idk, do you get lab burnout? Maybe because I do IT for work too, it just feels like it’s never-ending…

  • WhyJiffie@sh.itjust.works · 1 day ago

    unless you have a zillion gigabytes of RAM, you really don’t want to spin up a VM for each thing you host. the separate OSes have a huge memory overhead, with all the running services, cache memory, etc. the memory usage of most services varies widely, so if you could just assign 200 MB of RAM to each VM that would be moderate, but you can’t, because the moment a service needs more RAM than that, it will crash, possibly leaving operations half-finished and leading to corruption. and assigning 2 GB of RAM to every VM is a waste.
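
    (for reference, this is the kind of fixed assignment I mean; a sketch with a made-up VMID. ballooning can hand some memory back to the host, but the ceiling is still per-VM:)

      # hypothetical VMID 101: the VM may use up to 2 GB, ballooning can reclaim down to 512 MB
      qm set 101 --memory 2048 --balloon 512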

    I use proxmox too, but I only have a few VMs, mostly based on how critical a service is.

    • Pika@sh.itjust.works · 1 day ago

      For VMs, I fully agree with you, but the best part about Proxmox is the ability to use containers, or CTs, which share system resources. If you specify that a container has two gigs of RAM, that just means it has two gigs of RAM it can use, unlike a VM, which claims that amount up front (and will crash if it can’t get it).

      These CTs do the equivalent of what Docker does, which is share the system space with other services, with isolation, while giving you a system that’s easy to administer and back up, and keeping everything separate by service.

      For example, with a Proxmox CT, I can snapshot the container itself before I do any kind of work, whereas if I was using Docker on a primary machine, I would need to back up the Docker container completely. Additionally, having them as CTs means I can work directly on the container itself instead of having to edit a Docker file, which by design is meant to be ephemeral. If I had to choose between troubleshooting bare bones and troubleshooting a Docker container, I’m going to choose bare bones every step of the way. (You can even run an Alpine CT if you’d rather keep to the average Docker container setup.)
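
      (The snapshot workflow is just a couple of commands; a sketch with a made-up CT ID:)

        # hypothetical CT 105: snapshot before maintenance, roll back if it goes sideways
        pct snapshot 105 pre-upgrade
        pct rollback 105 pre-upgrade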

      Also, about the overcommitting thing: be aware that the issue you’ve described will happen with a Docker setup as well. Docker doesn’t care about the amount of RAM the system has. And when you over-allocate the system, RAM-wise, the kernel will start killing containers, potentially leaving them in the same state.
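
      (To be fair, Docker can at least cap a single container so a runaway service dies alone; a sketch, where the name, image, and sizes are placeholders:)

        # hard cap of 2 GB: exceed it and only this container gets OOM-killed, not the host
        docker run -d --name someservice -m 2g --memory-reservation 512m some/image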

      Anyway, long story short, Docker containers do basically the same thing a Proxmox CT does; they’re just ephemeral instead of persistent, and designed to be plug-and-go, which I’ve found isn’t super handy in a Proxmox-style setup, because a lot of the time I want to share resources, such as a dedicated database or caching system, which is generally a pain in the butt to implement across Docker setups.

      • WhyJiffie@sh.itjust.works · 16 hours ago

        oh, LXC containers! I see. I never used them because I find the LXC setup more complicated. I once tried to use a turnkey samba container but couldn’t even figure out where to add the container image in LXC, or how to start it any other way.

        but also, I like that this way my random containerized services use a different kernel than the main proxmox kernel, for isolation.

        Additionally, having them as CTs means I can work directly on the container itself instead of having to edit a Docker file, which by design is meant to be ephemeral.

        I don’t understand this point. on docker, it’s rare that you need to touch the Dockerfile (which contains the container image build instructions). did you mean the docker compose file? or a script file that contains a docker run command?

        also, you can run commands or open a shell in any container with docker, except if the container image does not contain a shell binary (but even then, copying a busybox binary or something into a volume of the container would help), and that’s rare too.
        you do it like this: docker exec -it containername command. a bit lengthy, but bash aliases help
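
        (a sketch; the container name is a placeholder:)

          docker exec -it containername sh     # or bash, if the image ships it
          alias dsh='docker exec -it'          # after that: dsh containername sh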

        Also, about the overcommitting thing: be aware that the issue you’ve described will happen with a Docker setup as well. Docker doesn’t care about the amount of RAM the system has. And when you over-allocate the system, RAM-wise, the kernel will start killing containers, potentially leaving them in the same state.

        in docker I don’t allocate memory, and it’s not common to do so. containers share the system memory. docker has a rudimentary resource limit thingy, but what’s better is that you can assign containers to a cgroup and define resource limits or reservations that way. I manage cgroups with systemd “.slice” units, and it’s easier than it sounds
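
        (roughly like this; the slice name is made up, and this assumes docker runs with the systemd cgroup driver:)

          # /etc/systemd/system/webapps.slice
          [Unit]
          Description=shared resource pool for my containers

          [Slice]
          MemoryMax=4G
          CPUQuota=200%

          # attach a container to the slice at run time:
          docker run -d --cgroup-parent=webapps.slice some/image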

        • Pika@sh.itjust.works · 15 hours ago

          They are very nice. They share kernel space, so I can understand wanting the isolation, but the ability to just throw a base Debian container on, assign it a resource pool and allocation, and install a service directly to it, all isolated from everything else, without having to use Docker’s ephemeral-by-design system (which does have its perks, but I hate troubleshooting containers on it) or a full VM, is nice.
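
          (The whole setup is only a few commands; a sketch where the template version, CT ID, and sizes are examples:)

            # fetch a Debian template, create a CT from it, start it
            pveam update
            pveam download local debian-12-standard_12.7-1_amd64.tar.zst
            pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
                --hostname myservice --memory 2048 --cores 2 --rootfs local-lvm:8
            pct start 200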

          And yes, by Docker file I mean either the Dockerfile or the compose file (usually compose). By working directly on the container I mean on the CT itself: my CTs don’t run Docker, period, aside from the one that hosts the primary Docker stack. So I don’t have that layer to worry about on most CTs.

          As for the memory thing, I was just mentioning that Docker does the same thing CTs do if you don’t have enough RAM for what’s been provisioned. The way I had read the original post was that specifying 2 gigs of RAM to the point that the system exhausts its RAM would cause corruption and crashes, which is true, but Docker falls victim to the same issue if the system exhausts its RAM. That’s all I meant by it. Also, cgroups sound cool; I gotta say I haven’t messed with them a whole lot. I wish Proxmox had a better resource-sharing system to designate a specific group as having X amount of max resources, and then have the CTs or VMs draw from those pools.

      • Encrypt-Keeper@lemmy.world · 1 day ago

        I’m really confused here. You don’t like how everything is containerized, yet your preferred method is to run Proxmox and containerize everything, just in an ecosystem with less portability and tooling?

        • Pika@sh.itjust.works · 1 day ago

          I don’t like how everything is docker containerized.

          I already run Proxmox, which containerizes things by design with its CTs and VMs.

          Running a Docker image on top of that is just wasting system resources (while also complicating the troubleshooting process). It doesn’t make sense to run a CT or VM just to put Docker on it and run another container via that. It also completely bypasses everything Proxmox gives you for snapshotting and backup, because Proxmox’s system works on the entire container, and if all services are running in the same container, all services get snapshotted together.

          My current system allows me to have per-service snapshots (and backups), all within the Proxmox web UI, all containerized, and all restricted to their own resources. Docker is just not needed at this point.

          A Docker system just adds extra overhead that isn’t needed. So yes, just give me a standard installer.

          • Encrypt-Keeper@lemmy.world · 1 day ago

            Nothing is “docker containerized”. Docker is just a daemon and a set of tools for managing OCI-compliant containers.

            Running a Docker image on top of that is just wasting system resources.

            No? If you spun up one VM in Proxmox, installed Docker, and used it to run 10 containers, that would use fewer system resources than running 10 LXC containers directly on Proxmox.

            Like… you don’t like that the industry has adopted this efficient, portable, interchangeable, flexible, lightweight, mature technology, because you prefer the heavier, less flexible, less portable, non-OCI-compliant alternative?

            • Pika@sh.itjust.works · 1 day ago

              Are you saying that running Docker inside a container setup (which at this point would be two layers deep) uses fewer resources than 10 single-layer-deep containers?

              I can agree with the statement that a single VM running Docker with 10 containers uses less than 10 CTs, each with Docker installed, running their own containers (but that’s not what I do, or what I am asking for).

              I currently do use one CT that has Docker installed, with all my Docker images on it (which I wouldn’t do if I had the ability not to, but some apps require Docker), but this removes most of the benefits of using Proxmox in the first place.

              One of the biggest advantages of using the hypervisor as a whole is the ability to isolate and run services as their own containers, without the need to actually enter the machine. (For example, if I’m screwing with a server, I can just snapshot the current setup and then roll back if it isn’t good.) Throwing everything into a VM with Docker bypasses that while adding overhead to the system. I would need to back up the compose file (or however you are composing it) and the container, and then make my changes. My current system is one click to make my changes and, if they’re bad, one click to revert.

              For the resource explanation: installing Docker into a VM on Proxmox and then running every container in that does waste resources. You have the resources Docker itself requires to function (currently 4 gigs of RAM per their website, though when testing I’ve seen as low as 1 gig work fine, plus CPU, plus whatever storage it takes up, which is about half a gig or so), all inside a VM (which also uses more processing and RAM than CTs do, since it no longer shares resources). Compared to 10 CTs that are fine-tuned for their specific apps, you will get better performance from the CTs than from a VM running everything, while keeping your ability to snapshot and dropping the extra layer and the ephemeral design that Docker has (which can be a good and a bad thing, but when troubleshooting I lean towards good).
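
              (If anyone wants to sanity-check this on their own hardware, a rough way to compare; the CT ID is an example:)

                docker stats --no-stream    # per-container memory/CPU inside the Docker VM
                pct exec 200 -- free -m     # actual memory use inside a given CT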

              edit: clarification and general visibility so it wasn’t bunched together.

              • Encrypt-Keeper@lemmy.world · 1 day ago

                Are you saying that running Docker inside a container setup (which at this point would be two layers deep) uses fewer resources than 10 single-layer-deep containers?

                If those 10 single-layer-deep containers are Proxmox’s LXC containers, then yes, absolutely. OCI containers are isolated processes that run single services, usually just a single binary. There’s no OS, no init system. They’re very lightweight with very little overhead; they’re “containerized services”. LXC containers, on the other hand, are very heavy “system containers” that have a full OS and user space, init system, file systems, etc. They are one step removed from being full-size VMs, short of the fact that they share the host’s kernel and don’t need to virtualize. In short, your single LXC running Docker and a bunch of containers inside it is far more resource-efficient than running a bunch of separate LXC containers.

                One of the biggest advantages of using the hypervisor as a whole is the ability to isolate and run services as their own containers, without the need to actually enter the machine

                I mean that’s exactly what docker containers do but more efficiently.

                I can just snapshot the current setup and then roll back if it isn’t good

                I mean, that’s sort of the entire idea behind Docker containers as well. It can even be automated for zero-downtime updates and deployments, as well as rollbacks.
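
                (The everyday version of that, as a sketch; the service tags are made up:)

                  # compose.yaml pins an exact tag, e.g. image: myapp:1.2.3
                  docker compose pull && docker compose up -d   # update in place
                  # bad release? point the tag back at myapp:1.2.2, then:
                  docker compose up -d                          # containers are recreated from the old image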

                Compared to 10 CTs that are fine-tuned for their specific apps, you will get better performance from the CTs than from a VM running everything

                That is incorrect. Let’s break away from containers and VMs for a second and look deeper into what is happening under the hood here.

                Option A (Docker + containers): one OS, one init system, one full set of Linux libraries.

                Option B (10 LXC containers): ten operating systems, ten separate init systems, ten separate sets of full Linux libraries.

                Option A is far more lightweight, and becomes a more attractive option the more services you add.

                And not only that, but as you found out, you don’t need to run a full VM for your Docker host. You could just use an LXC. Though in that case I’d still prefer the one VM, so that your containers aren’t sharing your Proxmox host’s kernel.

                Like, LXCs do have a use case, but it sounds like you’re using them as an alternative to regular service containers, and that’s not really what they’re for.

                • Pika@sh.itjust.works · 24 hours ago

                  Your statements are surprising to me, because when I initially set this system up I tested exactly that, since I had figured the same.

                  My original layout was a full Docker environment under a single VM, running nothing but Debian 12 with Docker.

                  I remember seeing a good 10 GB difference in RAM usage between offloading the machines off the Docker instance onto their own CTs versus keeping them all as one unit. I guess this could be chalked up to a bad container implementation, or something being wrong with the VM. It was my primary reason for keeping them isolated; it was a win/win, because services had better performance and were easier to manage.

                  • Encrypt-Keeper@lemmy.world · 23 hours ago

                    There are a number of reasons why your docker setup was using too much RAM, including just poorly built containers. You could also swap out docker for podman, which is daemonless and rootless, and registers container workloads with systemd. So if you’re married to the LXCs you can use that for running OCI containers. Also a new version of Proxmox enabled the ability to run OCI containers using LXCs so you can run them directly without docker or podman.