I’ve been using Linux for almost 20 years and I don’t even fully know what that means. How are data partitions bulk-mounted? Why would rootfs be ro or rw? Why would anyone care?
You use a reproducible, read-only rootfs (essentially what a “live” distro is, to some extent) because it’s (1) reproducible, so you’re not stuck when the system dies for some weird reason, and (2) read-only, meaning nothing will modify it. This gives you a base that is stable, easily reproduced on any host, and isn’t at the whims of whatever packages you happened to install.
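If you’re curious which mode your own root is mounted in right now, a quick sketch (assumes a Linux box with /proc mounted; the first option in the fourth field of /proc/mounts is ro or rw):

```shell
# Print whether / is currently mounted read-only (ro) or read-write (rw).
# /proc/mounts format: device mountpoint fstype options dump pass
awk '$2 == "/" { split($4, o, ","); print o[1]; exit }' /proc/mounts
```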
NixOS is a good candidate for this approach, but you can turn literally any distro into such a base, though it’s preferable to use one dedicated to such endeavours.
Nobody said anything about bulk-mounting anything, though? I said mount your data partitions, because presumably you’d have multiple disks holding your data and thus multiple partitions…
And the fact that you’ve worked with Linux for 20+ years and have no fucking clue why someone would want their rootfs RO or RW makes me seriously doubt that claim, because this is a pretty basic OS function.
I mean, it sounds both tempting and terrifying at the same time, but even assuming it’s a good idea, how do I reproduce my data, then? Not just the data per se, but all the volatile stuff in /var and $XDG_{CONFIG,DATA,STATE}_HOME as well. I like the idea of being able to (re)produce the entire system from a tiny “seed”, but my system is still not the same without those parts, as it will behave slightly differently. So I still have to back up all my data, don’t I?
Also, using separate partitions might be less fragile, but it’s also much less convenient in terms of free-space management, especially when you only have a single medium-sized SSD. So I just use a single rootfs (with subvolumes instead of partitions) for the system and all kinds of data.
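For what it’s worth, the subvolume layout still gives you distinct mount points while all of them share one pool of free space. A hypothetical btrfs /etc/fstab sketch (the UUID placeholder and subvolume names are invented, not from any real system):

```
# one btrfs partition, several subvolumes, shared free space
UUID=<your-uuid>  /      btrfs  subvol=@,compress=zstd      0 0
UUID=<your-uuid>  /home  btrfs  subvol=@home,compress=zstd  0 0
UUID=<your-uuid>  /data  btrfs  subvol=@data,compress=zstd  0 0
```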
Just because the root filesystem is RO doesn’t mean you’re left with an entirely read-only system - writable partitions (either mounted directly at volatile paths such as /home, /var, et cetera, or via some kind of overlay FS approach) do exist.
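As a sketch of what that can look like in /etc/fstab (device names and overlay paths are purely illustrative, not from any particular distro):

```
# read-only root, writable data mounts, and an overlay keeping /etc editable
/dev/sda2  /     ext4     ro,defaults  0 1
/dev/sda3  /home ext4     rw,defaults  0 2
/dev/sda4  /var  ext4     rw,defaults  0 2
overlay    /etc  overlay  lowerdir=/usr/etc,upperdir=/var/lib/etc-upper,workdir=/var/lib/etc-work  0 0
```

Writes to /etc land in the upperdir, so wiping that directory resets /etc to whatever the read-only image ships.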
The key distinction is that the core OS - not including user-installed packages in most cases, though e.g. NixOS takes the atomic OS idea to a different level - is immutable aside from OS updates. So should any kind of shit hit the proverbial fan, restoring to default OS settings is as quick as a reboot without the write-enabled partitions mounted (or simply wiping those on boot).
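The “wiping those on boot” variant can be as simple as a oneshot unit that clears the writable upper layer before the rest of the filesystems come up. A hypothetical sketch (the unit name and /persist paths are invented):

```
# /etc/systemd/system/reset-state.service (hypothetical)
[Unit]
Description=Wipe writable OS state so each boot starts from the pristine image
DefaultDependencies=no
RequiresMountsFor=/persist
Before=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/bin/rm -rf /persist/overlay-upper /persist/overlay-work
```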
Your data, however, is your responsibility. You mount it separately from the OS because it is truly separate. You’re modularising your workflow here - the OS simply provides the base software interface to your hardware, and does so in its own layer, while your own software and data are another segment you don’t want to mix with the OS.
Protecting that data is up to you - proper backups, 3-2-1 approach, etc. - the idea here is to separate concerns of the OS root fs and your data.
But by separating the two, and making the OS atomic, you’ve put yourself in a position where, should anything go wrong, you can restore your data and your OS separately, and you’re not exposed to the very thing OP meme’d about - the rootfs being corrupted within days of restoration, taking all your data with it.
I haven’t tried any yet, but my understanding is that there’s recently been a trend of immutable Linux distros, where the root filesystem is immutable (read-only). Instead of directly changing stuff in /etc, installing apps, etc., you update some sort of config that says exactly how to set up the system and rerun a script that rebuilds the root.
System updates are atomic - either the whole update completes, or the whole update is rolled back. If the system breaks, you can revert to an old config file and restore it to exactly the same state as before.
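Using NixOS (one of the distros in this space) as an example: the whole system is described in one config, rebuilt atomically, and every rebuild keeps the previous generation around to roll back to. A minimal, illustrative configuration.nix fragment (package choices are just examples):

```nix
# /etc/nixos/configuration.nix (illustrative fragment)
{ pkgs, ... }: {
  # Packages and services are declared here, not installed imperatively.
  environment.systemPackages = [ pkgs.git pkgs.htop ];
  services.openssh.enable = true;
}
```

`sudo nixos-rebuild switch` applies it atomically; `sudo nixos-rebuild switch --rollback` (or picking an older generation in the boot menu) reverts to the previous state.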
It’s still not very common - the majority of Linux systems aren’t doing this.
Small note: at least in the case of the rpm-ostree distros (Silverblue, Bazzite, etc), /etc is one of the few directories outside of /var that’s mounted RW.
Oh daddy, no need to be worried.