I’m a little teapot 🫖

  • Write a couple of your own toy services as practice. Write a one-shot that fires at a particular point during boot, a normal service that runs a daemon, and a mount unit that fires after its dependencies are available (say, a bind mount that sets up a directory under /run/foo after the backing filesystem is mounted - I do this to make fast ext4 storage available in some parts of the VFS tree while using a btrfs filesystem for everything else). You can also write file watcher services that fire after changes to a file or directory; I use one of those to mirror /boot/ to /.boot/ on another filesystem so it’s captured by my system snapshots.

    I’d start by reading the docs so you have some idea of what services can do, and then you’ll find uses that you wouldn’t have thought of before.
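
    To make that concrete, here are rough sketches of the kinds of units I mean - the names and paths are made up for illustration, so adapt them to your own layout:

      # foo-oneshot.service - a one-shot task that runs once during boot
      [Unit]
      Description=Example one-shot task
      After=local-fs.target

      [Service]
      Type=oneshot
      ExecStart=/usr/bin/logger "one-shot example ran"

      [Install]
      WantedBy=multi-user.target

      # run-foo.mount - a bind mount that waits for its backing filesystem
      # (mount units are named after the mount point, so /run/foo -> run-foo.mount;
      # this example assumes the backing filesystem is mounted at /srv)
      [Unit]
      Requires=srv.mount
      After=srv.mount

      [Mount]
      What=/srv/fast/foo
      Where=/run/foo
      Type=none
      Options=bind

      [Install]
      WantedBy=multi-user.target

      # mirror-boot.path - a watcher that starts mirror-boot.service
      # whenever something under /boot changes
      [Path]
      PathChanged=/boot

      [Install]
      WantedBy=multi-user.target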



  • I had to set one of these up for my SO a couple of years ago. I dropped EndeavourOS on it, installed btrbk, and configured automatic snapshots on a schedule and before package installation/updates in case she managed to bork things by pip-installing packages into the system Python.

    Fedora would probably work well too if you want a lower maintenance burden. I hesitate to suggest Ubuntu, Debian, or their derivatives since you’ll probably want to stay somewhat current with your Nvidia drivers.
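
    For the before-update snapshots on pacman-based distros, a hook is the easy glue. A sketch - the hook filename is arbitrary, and it assumes btrbk is already configured:

      # /etc/pacman.d/hooks/00-btrbk-pre.hook
      [Trigger]
      Operation = Install
      Operation = Upgrade
      Operation = Remove
      Type = Package
      Target = *

      [Action]
      Description = Taking pre-transaction btrfs snapshots...
      When = PreTransaction
      Exec = /usr/bin/btrbk snapshot
      AbortOnFail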



  • We usually find solutions or workarounds for Nvidia driver issues within a day or two in the Arch community. The absolute worst-case handling I’ve had to do was forking the Nvidia DKMS package at the prior version (think nvidia-dkms-550) and running that until Nvidia themselves released a fixed version. Still pretty straightforward.

    The most helpful advice I can give to anyone running a distro maintained by folks with day jobs is “take system snapshots before updates” - do that, and the worst-case fix for any update problem like this is still really easy to handle, even if you’re 10 minutes out from a work call and an update just went wrong.
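
    For the curious, that worst-case recovery looks roughly like this with btrfs - the subvolume and snapshot names here are assumptions, so adjust for your own layout, and it assumes your fstab mounts the root subvolume by name:

      # boot an older kernel or a live USB, then mount the top-level subvolume
      mount -o subvolid=5 /dev/nvme0n1p2 /mnt

      # set the broken root aside and promote the pre-update snapshot
      mv /mnt/@ /mnt/@broken
      btrfs subvolume snapshot /mnt/snapshots/@.pre-update /mnt/@

      # reboot into the restored root; delete @broken once you’ve salvaged
      # anything you still want from it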



  • I leverage btrfs or ZFS snapshots. I take rolling system-level snapshots on a schedule (daily, weekly, monthly, and separately before any package upgrades or installs) and user-data snapshots every couple of hours. Then I use btrbk to sync those snapshots to an external drive at least once a week. When I have all of my networking gear and home services set up, I also sync all of this to storage on my NAS. Any hosts on the network keep rolling snapshots stored on the NAS as well.

    Important data also gets shoveled into a B2 bucket and/or Google Drive if I need to be able to access it from a phone.

    I keep snapshots small by splitting data up into well-defined subvolumes; anything that can be reacquired from the cloud (downloads, package caches, Steam libraries, movies, music, etc.) isn’t included in the backup strategy. If I download something that’s hard to find or important, I move it out of downloads and into a location that is covered by my backups.
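
    For reference, the btrbk side of that is just a config along these lines - the retention numbers, pool path, and subvolume names are placeholders rather than my exact setup:

      # /etc/btrbk/btrbk.conf
      timestamp_format        long
      snapshot_preserve_min   2d
      snapshot_preserve       14d 8w 6m
      target_preserve_min     no
      target_preserve         20d 10w 12m

      volume /mnt/btr_pool
        snapshot_dir  btrbk_snapshots
        target send-receive /mnt/external/btrbk
        subvolume @
        subvolume @home

    Then btrbk run from a systemd timer handles the schedule, and btrbk resume catches the external drive up whenever it’s plugged in.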



  • More secure legally. You generally can’t be compelled to disclose a password that incriminates you (unless it’s already apparent that you’re guilty of wrongdoing), but a thing (a physical key, a fingerprint, etc.) isn’t protected in the same way and can be demanded by the court.

    Whether biometrics are secure or not is another question; they can be stolen like any other data, or a motivated attacker could just take you or your fingers.



  • Why are we tolerating this criminal behavior by corporations?

    Because it’s done in the open and it’s accepted as part of the cost of the device. This is an expected consequence of our adtech surveillance economy, where devices are subsidized because they can harvest data about you, your usage, and your behavior to sell on an ongoing basis. We’ve been screaming about these sorts of practices since the late ’90s, and consumers have just blithered right along with every new and creepy intrusion because they get cheap things and don’t think about the real costs or consequences. And so… here we are.



  • Interesting that the one has such large capacitors in it. I imagine that is a last-ditch effort to keep the board powered long enough to finish flushing all of its caches in the event of a power failure.

    That’s exactly the point of power loss protection (aka PLP). As a side effect of not needing to wait for a flush after a write, synchronous write workloads are dramatically faster on enterprise drives with PLP.

    Edit: To add a bit of detail - with PLP you don’t need to wait for a flush after a synchronous write because the drive firmware can safely lie and return from a flush call immediately; there’s enough backup power to complete that flush if the power were cut.
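
    If you want to see the effect yourself, a sync-heavy fio run makes it obvious - the test file path and size here are arbitrary, and point it at a file on the drive rather than the raw device unless you’re happy to destroy its contents:

      # 4k synchronous random writes, fsync after every write;
      # consumer drives fall off a cliff here, PLP drives barely notice
      fio --name=sync-write --filename=/mnt/test/fio.dat --size=1G \
          --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
          --fsync=1 --direct=1 --runtime=30 --time_based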