Ruaidhrigh

       I AM THE LAW.
 Ruaidhrigh featherstonehaugh
  • 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: August 26th, 2022

  • I can see that, although TBH I almost never have to “admin” EndeavourOS. I just upgrade every once in a while.

    Most important to me is being able to find and install whatever software I want, and I have a strong preference that it either be installed in my ~, or be managed by the package manager. I really dislike sideloading software globally. And Arch does this better than most: the AUR is massive, and packages are trivial to write and install in the rare event something isn’t in the AUR.


  • Base Arch can be fussy, but that’s because there’s a lot to set up, with many opportunities to forget things and only discover them later.

    I ran Artix on a laptop for about a year; that was a constant PITA, although I still value their goals.

    But EndeavourOS has been an entirely different matter. It’s a “just works” Arch derivative.

    I had so many fewer problems with Arch that I went to the effort of converting my 3 personal cloud servers from Debian to it. I went through a lot of work to replace the default Mint on an ODroid with Arch, and it’s been so much better. I put EndeavourOS on the last two non-servers I installed. So, yes, I personally find it far more reliable and easier to work with than Ubuntu, Debian, or Mint.

    That said, I had my dad install Mint on a new computer he bought, because I had to do it over the phone and he never, ever upgrades his packages, and almost never installs anything. If all I’m going to do is install it once and then never change anything, Mint is easier. But for a normal use case where I’m regularly updating and installing software, Arch is far easier and more reliable.


  • Mine is 3-pronged:

    1. btrfs + snapper takes care of most level-1 situations, and I take a snapshot of every /root change, plus one nightly /home snapshot. But it’s pretty demanding on disk space, and doesn’t handle drive failure; so I also do
    2. restic + USB drive, which I can cram way more snapshots onto, so I keep a couple of weeks of daily snapshots, one monthly snapshot for a year, and one snapshot per year, going back several years. I currently have snapshots from my past 3 computers on one giant drive. However, these drives can also fail, and won’t protect me from burglary or house fire, so I also do
    3. restic + Backblaze B2. I just take a nightly snapshot for every computer and VM I manage (roughly the job sketched at the end of this comment). My monthly B2 bill is around $10. The VMs don’t change much, and I only snapshot data and config directories (only stuff I can’t spin up fairly quickly in a container, or via a simple install command), so most of the charge comes from a couple of decades of amateur digital photography, and an archive of all our digital music (because I’ll be damned if I’m going to spend weeks re-digitizing all those CDs).

    The only prong that handles “restore the entire system because I screwed up the OS” is #1. I could, and probably should, make a whole-disk snapshot to a backup drive via #2, but I’m waiting until bcachefs is more mature; then I’ll migrate to it for the interesting replication options it allows, which would make real-time disk replication to slow USB drives practical. I’d only need to snapshot /efi after kernel upgrades, and if I had that set up and a spare NVMe on hand, I could probably be back up and running within half an hour.
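
    For prong #3, the nightly job is roughly the sketch below. The repository name, backup paths, and retention numbers are placeholders, and it assumes restic is installed and the B2 credentials (B2_ACCOUNT_ID, B2_ACCOUNT_KEY, RESTIC_PASSWORD) are already exported in the environment:

        # Nightly restic-to-B2 sketch; repo name and paths are hypothetical.
        import subprocess

        REPO = "b2:my-backup-bucket:this-host"           # hypothetical bucket/path
        PATHS = ["/home/me/photos", "/home/me/music",    # the data that actually matters
                 "/etc", "/home/me/.config"]             # plus config directories

        def restic(*args):
            # B2_ACCOUNT_ID, B2_ACCOUNT_KEY and RESTIC_PASSWORD are assumed
            # to already be in the environment.
            subprocess.run(["restic", "-r", REPO, *args], check=True)

        # Take tonight's snapshot of the data and config directories only.
        restic("backup", *PATHS)

        # Thin out old snapshots: a couple of weeks of dailies, a year of
        # monthlies, a few yearlies; then prune unreferenced data.
        restic("forget", "--prune",
               "--keep-daily", "14", "--keep-monthly", "12", "--keep-yearly", "5")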






  • As I said, we live in a post-Meltdown world. Microkernels are MUCH slower.

    I’ve heard this from several people, but you’re the lucky one: I’d heard it enough times by now that I finally bothered to gather some references to refute it.

    First, this argument derives from first-generation microkernels, and in particular MINIX, which, as a teaching-aid OS, never tried to play the benchmark game. It’s been repeated like dogma through several iterations of microkernels which have, in the interim, largely erased the performance lead of monolithic kernels. One paper notes that, once the working code exceeds the L2 cache size, there is only marginal advantage to the monolithic structure. A second paper, benchmarking L4Linux against Linux, concluded that the microkernel penalty for applications was only about 5%-10% relative to the monolithic Linux kernel.

    This is not MUCH slower, and - indeed - unless you’re doing HPC applications, is close enough to be unnoticeable.

    Edit: I was originally going to omit this, as it’s propaganda from a vested interest and includes no concrete numbers, but this blog entry from a product manager at QNX specifically mentions using microkernels in HPC problem spaces, which I thought was interesting, so I’m including it after the fact.


  • That’s my point. If you’re l33t gaming, what matters is your GPU anyway. If HPC, sure, use whatever architecture gets you the most bang for your buck, which is probably going to be a monolithic kernel (but, maybe not - nanokernels allow processes basically direct access to hardware, with minimal abstraction, like X11 DRI, and might allow even faster solutions to be programmed). For most people, the slight improvement in performance of a monolithic kernel over a modern, optimized microkernel design will probably not be noticeable.

    I keep getting people telling me monolithic kernels are way faster, dude, but most are just parroting the state of things from decades ago and ignoring many of the advancements microkernels like L4 have made in the intervening years. But I need to go find links and put together references before I counter-claim, and right now I have other things I’d rather be doing.


  • I thought the point of LTS kernels is that they still get patches despite being old.

    Well, yeah, you’re right. My shameful admission is that I’m not using LTS because I wanted to play with bcachefs and it’s not in LTS. Maybe there’s a package for LTS now that’d let me at it, but, still. It’s a bad excuse, but there you go.

    I think a lot of people also don’t realize that most of the performance issues have been worked around, and if Redox OS is paying attention to advances in the microkernel field and not trying to solve every problem in isolation, they could end up with close to monolithic-kernel performance. Certainly close to Windows performance, and that seems good enough for industry.

    I don’t think microkernels will ever compete in the HPC field, but I highly doubt anyone complaining about the performance penalty of microkernel architecture would actually notice a difference.



  • This particular issue could be solved in most cases in a monolithic kernel. That it isn’t is by design. But it’s a terrible design decision, because it can lead to situations where (for example) a zombie process locks a mount point and prevents unmounting, because the kernel insists the mount is still in use by the zombie process - which the kernel provides no mechanism for terminating.

    You can demonstrate this experimentally in Linux using FUSE filesystems. Create a program that is guaranteed to become a zombie. Run it within a filesystem mounted by an in-kernel module, like a remote NFS mount. You now have an NFS mount point you can’t unmount. Now mount something using FUSE, say a remote WebDAV share. Run the same zombie process there. Again, the mount point is unmountable. Now kill the FUSE process itself. The mount point will be unmounted and disappear.
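
    For the “program that is guaranteed to become a zombie” step, a minimal sketch: the child exits immediately and the parent deliberately never reaps it, so the child shows up as <defunct> for as long as the parent hangs around.

        # Minimal guaranteed-zombie sketch: the child exits, the parent never wait()s.
        import os
        import time

        pid = os.fork()
        if pid == 0:
            os._exit(0)       # child: terminate immediately
        else:
            print(f"child {pid} is now a zombie; see `ps -o pid,stat,comm -p {pid}`")
            time.sleep(3600)  # parent: hang around without ever reaping the child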

    This is exactly how microkernels work. Every module is killable, crashable, upgradable - all without forcing a reboot or affecting any processes not using the module. And in a well-designed microkernel, even processes using the module can in many cases continue functioning as if the restarted kernel module never changed.

    FUSE is really close to the capabilities of microkernels, except it’s only filesystems. In a microkernel, nearly everything is like FUSE. A Linux kernel compiled such that everything is a loadable module, and not hard-linked into the kernel, is close to a microkernel, except without the benefits of actually being a microkernel.

    Microkernels are better. Popularity does not prove superiority, except in the metric of popularity.


  • ORLY.

    Do explain how you can have microkernel features on Linux. Explain, please, how I can kill the filesystem module and restart it when it bugs out, and how I can prevent hard kernel crashes when a bug in a kernel module causes a lock-up. I’m really interested in hearing how I can upgrade a kernel module with a patch without forcing a reboot; that’d really help on Arch, where minor, patch-level kernel updates force reboots multiple times a week (without locking me into an -lts kernel that isn’t getting security patches).

    I’d love to hear how monolithic kernels have solved these.



  • Also fake because zombie processes.

    I once spent several angry hours researching zombie processes in a quest to kill them by any means necessary. I ended up rebooting, which was a sort of baby-with-the-bathwater solution.

    Zombie processes still infuriate me. While I’m not a Rust developer, nor do I particularly care about the language, I’m eagerly watching Redox OS, as it looks like the microkernel OS with the best chance of making it to useful desktop status. A good microkernel would address so many of the worst aspects of Linux.
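
    (For what it’s worth, the closest thing to a zombie hunter I know of is to walk /proc and nudge each zombie’s parent, something like the sketch below. It’s only a sketch: a zombie can only really be reaped by its parent, or by init/systemd once the parent dies, which is why a reboot ends up being the blunt fix.)

        # Sketch: find zombie ("Z" state) processes via /proc and nudge their parents.
        import os
        import signal

        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/stat") as f:
                    stat = f.read()
            except OSError:
                continue  # the process vanished while we were scanning
            # The comm field can contain spaces, so split after its closing ")".
            state, ppid = stat.rsplit(")", 1)[1].split()[:2]
            if state == "Z":
                print(f"zombie {pid}, parent {ppid}")
                # Ask the parent to reap; this only helps if the parent is merely
                # lazy, not broken. Otherwise the parent itself has to die.
                os.kill(int(ppid), signal.SIGCHLD)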