

Firefox is able to do this for basic PDF annotations. It’s not very extensive, but it’s very simple to use (and you probably already have it installed).
It is only a partial upgrade if you update your databases without upgrading the rest of your system. If you try pacman -S firefox and it gives you a 404, you have to both update your pacman databases and upgrade your packages. It will only give you a 404 if you cleaned your package cache and your package is out of date. Usually, -S on an already installed package will reinstall it from cache, which does not cause a partial upgrade.
If you run pacman -Sy, everything you install afterwards is considered a partial upgrade, and will break things if you don't know exactly what you're doing. To avoid a partial upgrade, never update the databases (-Sy) without also upgrading packages (-Su). These are usually combined as pacman -Syu.
“…and had to delete, update, and then rebuild half my system just to update the OS because the libraries were out of sync.”
This does not just happen with proper use of pacman. The most common situation where it does happen is called a “partial upgrade”, which is avoidable by simply not running pacman -Sy. (The one exception is archlinux-keyring, though that requires running pacman -Syu afterwards.)
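To make that concrete, a quick sketch of safe versus risky invocations (firefox is just an example package):

```
# Safe: always pair a database refresh with a full upgrade
pacman -Syu

# Safe: installing without refreshing the databases
pacman -S firefox

# Risky: refreshes the databases without upgrading -> partial upgrade territory
pacman -Sy firefox

# The keyring exception: refresh the keyring first, then do a full upgrade
pacman -Sy archlinux-keyring && pacman -Syu
```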
Arch is definitely intended for a certain audience. If you don’t intend to configure your system at the level Arch allows, then a different distro might be a better option. That doesn’t mean it’s a requirement: you can install KDE, update once a month, and almost never have to worry about system maintenance (besides stuff posted on the Arch Linux news page, once or twice a year, usually a single command).
If you want to learn, go for it! Although if you’re running anything important, be sure you’ve got backups, and can restore your system if needed. I wouldn’t personally worry about the future of NixOS. If the project “goes the wrong way”, it’s FOSS, someone will fork it.
I’ve considered Proxmox, but immediately dismissed it (after light testing) due to the lack of control over the host OS. It’s just Debian with a bunch of convenience scripts and config for an easy libvirt experience. That’s amazing for a “click install and have it work” solution, but can be annoying when doing something not supported by the project, as you have to work around Proxmox tooling.
After that, I checked my options again, keeping in mind the only thing the host OS needs is KVM/libvirt, and a relatively modern kernel. Since it’s not intended to run any actual software besides libvirt, stability over quick releases is way more important. I ended up going with Alpine Linux for this, as it’s extremely light-weight (no systemd, intended for IoT), and has both stable and rolling release channels.
Using libvirt directly does take significantly more setup. Proxmox lets you get going immediately after installation; setting up libvirt yourself requires effort. I personally use “Virtual Machine Manager” as a GUI to manage my VMs, though I frequently use the included virsh too.
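For a rough idea of what day-to-day management looks like once libvirt is running, a few virsh commands (the VM name is just a placeholder):

```
# Assuming the default qemu:///system connection
virsh list --all        # show defined VMs and their state
virsh start myvm        # boot a VM
virsh shutdown myvm     # ask the guest to shut down cleanly
virsh edit myvm         # edit the VM's XML definition
```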
Is there anything stopping viruses from doing virus things?
Usually that’s called sandboxing. AUR packages do not have any; if you install random AUR packages without reading them, you run the risk of installing malware. Using Flatpaks from Flathub, while keeping their permissions in check with a tool like Flatseal, can help guard against this.
The main difference is that even though the AUR is completely user-submitted content, it is a centralized repository, unlike random websites. Malware on the AUR is significantly less common, though not impossible. Using packages with a better reputation will avoid some malware, simply because other people have looked at the same package.
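If you do use the AUR, the “reading them” part can be as simple as looking at the PKGBUILD before building; a rough sketch (“some-package” is a placeholder, not a real package):

```
git clone https://aur.archlinux.org/some-package.git
cd some-package
less PKGBUILD      # read what the build script will actually do
makepkg -si        # build and install only once you're happy with it
```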
There is no good FOSS antivirus that runs on Linux and also targets Linux malware. ClamAV is the closest, though it won’t help much.
After GRUB unlocks /boot and boots into Linux proper, is there any way to access /boot without unlocking again?
No. The “unlocking” of an encrypted partition is nothing more than setting up decryption. GRUB performs this for itself, loads the files it needs, and then runs the kernel. Since GRUB is not Linux, the decryption process is implemented differently, and there is no way to “hand over” the “unlocked” partition.
Are the keys discarded when initramfs hands off to the main Linux system?
As the fs in initramfs suggests, it is a separate filesystem, loaded into RAM while initializing the system. It might contain key files, which the kernel can use to decrypt partitions during boot. After booting (pivoting root), the key files are unloaded along with the rest of the initramfs (afaik, though I can’t find a direct source on this right now). (Simplified explanation:) the actual keys are actively used by the kernel for decryption and are not unloaded or “discarded”; they are kept in memory.
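If you want to see that the active volume keys live with the kernel rather than in any file, device-mapper can show them (a quick check, assuming a LUKS volume is currently unlocked):

```
# Run as root; prints device-mapper tables including key material for
# crypt targets, so be careful where this output ends up
dmsetup table --showkeys
```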
If GRUB supports encrypted /boot, was there a ‘correct’ way to set it up?
Besides where you source your rootfs key from (in your case, a file in /boot), the process you described is effectively how encrypted /boot setups work with GRUB.
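For reference, the piece that usually ties this together on the GRUB side is a single switch in /etc/default/grub (a sketch; the exact keyfile wiring depends on your distro and initramfs tooling):

```
# /etc/default/grub — makes grub-install/grub-mkconfig include the crypto
# modules so GRUB can prompt for and unlock the encrypted /boot itself
GRUB_ENABLE_CRYPTODISK=y
```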
Encryption is only as strong as the weakest link in the chain. If you want to encrypt your drive solely so a stolen laptop doesn’t leak any data, the setup you have is perfectly acceptable (though for that, encrypted /boot is not necessary). For other threat models, having your rootfs key (presumably LUKS2) inside your encrypted /boot could significantly decrease security, as GRUB (afaik) only supports LUKS1.
Or am I left with mounting /boot manually for kernel updates if I want to avoid steps 3 and 4?
Yes, although you could create a hook for your package manager to mount /boot on kernel or initramfs regeneration. Generally, that is less reliable than automounting on startup, which ensures any change to /boot always ends up on the boot partition and not accidentally in a directory on your rootfs, even outside the package manager.
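As a sketch of what such a hook could look like on Arch (untested here; adjust the Target lines to the kernel packages you actually have installed):

```
# /etc/pacman.d/hooks/mount-boot.hook
[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = linux
Target = linux-lts

[Action]
Description = Mounting /boot before kernel/initramfs files are written
When = PreTransaction
Exec = /usr/bin/bash -c 'mountpoint -q /boot || mount /boot'
AbortOnFail
```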
If you require it, there are “more secure” ways of booting than GRUB with encrypted /boot, like UKIs with secure boot (custom keys). If you only want to ensure a stolen laptop doesn’t leak data, encrypted /boot is a hassle not worth setting up (besides the learning process itself).
The main oversimplification is that, where browsers “just visit websites”, SSH can be really powerful. You can send/receive files with scp, or even port forward with the right flags on ssh. If you stick to ssh user@host without extra flags, the only thing you’re telling SSH to do is set up a text connection where your keyboard input gets sent and some text is received (usually command output, like from a shell).
As long as you understand what you’re asking SSH to do, there’s little risk in connecting to a random server. If you scp a private document from your computer to another server, you’ve willingly sent it. If you ssh -R to port forward, you’ve initiated that. The server cannot simply tell your client to do anything it wants; you have to do this yourself.
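A minimal illustration, with user@host standing in for whatever server you connect to:

```
# Plain interactive session: your keystrokes go out, text comes back
ssh user@host

# Anything beyond that only happens because you asked for it:
scp ./private-notes.txt user@host:/tmp/   # you chose to send this file
ssh -R 8080:localhost:80 user@host        # you chose to open a reverse tunnel
```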
Note that my answer to 2 is heavily oversimplified, but applies in this scenario of SSH to “OverTheWire”.
Personally I have seen the opposite with many services. Take Jitsi Meet for example. Without containers, it’s something like 4 different services, with logs and configuration scattered all over the system. It is a pain to get running, as none of the services work without everything else being up. In containers, Jitsi Meet is managed in one place, and one place only. (When using docker compose,) all logs are available with docker compose logs, and all config is contained in one directory.
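As a sketch of what that looks like in practice (the directory path is just an example):

```
cd ~/containers/jitsi-meet    # the whole stack lives next to its compose file
docker compose up -d          # start every service in the stack
docker compose logs -f        # follow logs for all of them in one place
docker compose ps             # see what's running
```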
It is more a case-by-case thing whether an application is easier to set up and maintain with or without docker.
Saving on some overhead, because the hypervisor is skipped. Things like disk IO to physical disks can be more efficient with multikernel (direct access to the hardware) than with VMs (which have to virtualize at least some parts of hardware access).
With the proposed “Kernel Hand Over”, it might be possible to send processes to another kernel entirely. This would allow booting a completely new kernel, moving your existing processes and resources over, then shutting down the old kernel, effectively updating with zero downtime.
It will definitely take some time for any enterprises to transition over (if they have a use for this), and consumers will likely not see much use in this technology.
SSH in from another machine, and run sudo dmesg -w. If the graphics die, the machine can’t display new logs on the screen; if the rest of the system is fine, an open SSH session should give you more info (and allow you to troubleshoot further).
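Something like this, with the hostname being a placeholder:

```
# From another machine, before reproducing the freeze:
ssh user@problem-box
sudo dmesg -w          # follow kernel messages live
# or, via the journal:
sudo journalctl -kf    # kernel messages only, follow mode
```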
You can also check whether the kernel is still functional by using a keyboard with a caps-lock LED. If the LED starts flashing on its own after the “freeze”, it’s actually a kernel panic. You’ll have to figure out a way to obtain the kernel panic information (like using tty1).
After the “freeze”, try pressing the caps-lock key. If the LED toggles when pressing caps-lock, the Linux kernel is still functional. If the caps-lock key/LED does not respond, the entire computer is frozen, and you are most likely looking at a hardware fault.
From there, you basically need to make educated guesses of what to attempt in order to narrow down the issue and obtain more information. For example, try something like glxgears or vkgears to see if it happens with only one of those, or both (or neither).
Security is an insanely broad topic. As an average desktop user, keep your system up to date, and don’t run random programs from untrusted sources (most of the internet). That will cover almost everyone’s needs. For laptops, I’d recommend enabling drive encryption during installation, though note that data recovery is harder with it enabled.
“jellyfin isn’t immune to security incidents”
Well, no software is. The difference is that Plex just leaked data for all of their users, whereas Jellyfin can’t, because they don’t have that data.
IRC does not have any federation, and XMPP does federation in a completely different way from Matrix, with its own pros and cons.
IRC is designed for you to connect to a specific server, with an account on that server, to talk to other people on that server. There is no federation; you cannot talk to OFTC from libera.chat. On top of that, with mobile devices being so common, you’d need to get people to host their own bouncer, or host one for nearly everyone on your network.
XMPP federation conceptually has one major difference compared to Matrix: XMPP rooms are owned by the server that created them, whereas Matrix rooms are equally “owned” by everyone participating in it, with the only deciding factor being which users have administrator permissions.
This makes for better (and easier) scaling on XMPP, so rooms with 50k people aren’t that big of an issue for the users in that room. However, if the server owning the room goes down, the whole room is down, and nobody can chat. See Google Talk dropping XMPP federation after making a mess of most client and server implementations.
On Matrix, scaling is a much bigger issue, as everyone connects with everyone else. Your single-person homeserver has to talk with every other homeserver you interact with. If you join a lot of big rooms, this adds up, and takes a lot of resources. However, when a homeserver goes down, only the people on that homeserver are affected, not the rooms. Just recently, matrix.org had some trouble with their database going down. Although it was a bit quieter than usual, I only properly noticed when it was explicitly mentioned in chat by someone else. My service was not interrupted, as I host my own homeserver.
The Matrix method of federation definitely comes with issues, some conceptual, and some from the implementation. However, a single entity cannot take down the federated Matrix network, even by taking down the most used homeservers. XMPP was effectively killed off by exactly that.
GNOME devs simply can’t “tolerate” SSD (server-side decorations), and force CSD (client-side decorations) in every scenario for GTK4. On my machines running Wayland, the only apps with CSD are fully custom ones (like Steam) and every GTK4 app.
To answer the question in the title: No, because these systems inherently have different architectures. Something like OpenBSD (the OS) is relatively self-contained. Linux distributions have system components that are externally developed, but that a user might still rely upon.
What exactly is the “problem” you have with Linux package managers? Separating “system” and “packages” is specifically extra complexity. It works well for the *BSDs, which often develop the entire OS themselves, but it would pose extra challenges for Linux distributions, where the line between “OS” and “user-installed package” is much more blurred.
This type of shit happens if you intentionally mess up your own system (or use Manjaro). pacman requires extra confirmation (instructions only found in its man page) before even allowing you to delete bash (base requires it). bash has also never been replaced, and even if you deleted it, it would still be loaded in RAM. Even then, if you deleted it and immediately rebooted, it would be a quick fix for anyone familiar with the distribution they’re using, and would not require reinstalling the whole thing.
Since lowercase y as an option to uppercase S already exists to update the databases, --noconfirm is what exists to continue without user confirmation.
Yes. -Syyu is for “Sync (repository action), database update (forced), upgrade packages”, in that order (though the flags don’t have to be written in that order). Doubling a lowercase character like yy or uu forces the operation. yy in particular shouldn’t be needed, as it only overrides the “is your database recent” check. Unless you’re updating more than once every 5 minutes, a single y is perfectly fine.
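Put differently, a quick summary of the flags discussed above:

```
pacman -Syu              # refresh stale databases and upgrade everything
pacman -Syyu             # double y: force a database refresh even if it looks current
pacman -Syu --noconfirm  # skip the confirmation prompts (what people sometimes think yy does)
```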
While Mint is an Ubuntu-based distro, it tries to un-fuck the worst of Canonical’s decisions. Other Ubuntu spins with a different desktop environment, like Xubuntu and Kubuntu, don’t do this; they end up as just Ubuntu on a different DE, with all the decisions made by Canonical.
Base Debian might work, but afaik it is “not as beginner friendly” compared to Mint.