Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 292 Comments
Joined 3 years ago
Cake day: June 25th, 2023

  • They’re just examples of things you could pipe curl into, but no, not really. If the download fails you end up with an incomplete file in your tmpfs anyway, and have to retry. Another use I have is curl | mysql to restore a database backup.

    If the server supports resuming, I guess that can be better than the pipe, but that still needs temporary disk space, and downloads rarely fail. You can’t corrupt downloads over HTTPS either as the encryption layer would notice it and kill the connection, so it’s safe to assume if it downloaded in full, it’s correct.

    With downloads being I/O bound these days, it’s nice not to have to read the whole archive back and write the extracted files to disk afterwards; the final files only get written once.

    That’s far from the weirdest thing I’ve done with pipes though. I’ve installed Windows 11 on a friend’s PC across the ocean with a curl | zstd | pv | dd, and it worked. We tried like 5 different USBs and different ISOs and I gave up, so I just installed it in a VM and shipped the image.
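
    For the curious, the pipe looked roughly like this (URL and target disk are placeholders, and obviously double-check the disk before dd’ing to it):

      # stream the compressed image, decompress on the fly, show progress, write to disk
      curl -s https://example.com/win11.img.zst | zstd -d | pv | sudo dd of=/dev/sdX bs=4M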


  • I’ve had to use that flag.

    --silent is useful when you don’t want the progress bar or you’re piping curl into something else. I like to do curl | tar -zxv to download and decompress at the same time, and I’ve even done tar -zc | curl to upload a backup, taking no disk space to do so.

    The problem, however, is that it’s really silent: if it fails, it exits with a non-zero code and that’s it. Great when you don’t want debug info to interfere, annoying when you need to debug it.

    So you can opt in to printing some errors when in silent mode, but otherwise stay silent.
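
    That’s curl’s -S / --show-error flag on top of -s; something like this (URL made up):

      # -s hides the progress bar, -S still prints an error to stderr when something fails
      curl -sS https://example.com/backup.tar.gz | tar -zxv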





  • If we deleted everything written by insufficiently pure developers, we wouldn’t have a Linux desktop, especially if we count the ones who were smart enough not to bring up anything political in public.

    Not a fan of DHH, but if you delete Rails then there’s no GitHub, GitLab, Mastodon, and many many other things given how popular Rails is, and that’s just that one guy.

    If you include all the sketchy stuff that happens in the supply chain, from mining the minerals through processing and assembly all the way up to the final computer product, you just can’t morally justify supporting any manufacturer either.

    This really doesn’t do anything useful other than making you feel good about not supporting one of those guys. If anything it just adds extra political drama that feeds into a much bigger worldwide division problem.



  • No way. iPhones don’t exactly allow bootloader unlocking to begin with, but even if you could unlock one, it would be in no better state than Asahi on the M1 Apple computers. Every driver would have to be written from scratch.

    Pixels are a good platform for custom ROMs because, until the recent drama, you could literally just build AOSP as-is and use it. So the GrapheneOS team only really needs to focus on their changes to the OS and their apps, and not on the drivers, the modem interface, and all that. That’s also why GrapheneOS runs so well on it: Google provided everything, it just works.
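
    For a sense of how simple that is, the standard AOSP build flow is roughly this (the lunch target is a placeholder, it varies per Pixel model):

      # from an AOSP source checkout
      source build/envsetup.sh        # load the build helper functions
      lunch aosp_oriole-userdebug     # pick a device target (Pixel 6 here)
      m                               # build the full OS image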

    iPhones would be the absolute worst phone to develop for: zero support from Apple, no drivers, no documentation, no nothing. Not even a Linux kernel! At least for Android, the Linux kernel’s GPL forces manufacturers to publish the source code, so at minimum you start with something that should boot and already contains all the stuff to talk to the hardware; you just need to wire it up with the userspace drivers. CPU manufacturers like Qualcomm also provide a fair chunk of the userspace drivers open-source too, so you can just pull that and have audio and video working.

    Not impossible, but definitely really hard and impractical.



  • For all its flaws and mess, NFS is still pretty good and used in production.

    I still use NFS to share files with my VMs because it still significantly outperforms virtiofs, and obviously the network is a local bridge, so latency is non-existent.

    The thing with rsync is that it’s designed to quickly compute the least amount of data to transfer when syncing over a remote (possibly high-latency) link. So when it comes to backups, it’s literally designed for exactly that.
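
    A typical incremental backup run looks something like this (host and paths made up):

      # -a archive mode, -H hardlinks, -A ACLs, -X xattrs (covers SELinux contexts)
      # only the changed parts of files get sent over the link
      rsync -aHAX --delete /home/ backup-server:/backups/laptop/home/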

    The only cool new alternative I can think of is to use btrfs or ZFS and pipe btrfs/zfs send | ssh backup btrfs/zfs recv, which is the most efficient and reliable way to back up, because the filesystem is aware of exactly what changed and can send exactly that set of changes. And obviously all special attributes are carried over: hardlinks, ACLs, SELinux contexts, etc.
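
    Roughly like this with ZFS (pool, dataset and host names made up; btrfs send/receive works much the same way):

      # take a snapshot, then send only the delta since the previous one
      zfs snapshot tank/data@today
      zfs send -i tank/data@yesterday tank/data@today | ssh backup zfs recv tank/backups/data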

    The problem with backups over any kind of network share is that if you’re gonna run rsync against it anyway, every file access pays the round-trip latency and the sync takes forever.

    Of course you can also mix multiple things: rsync laptop to server periodically, then mount the server’s backup directory locally so you can easily browse and access older stuff.


  • Technically it wasn’t really designed with megainstances that swallow the entire fediverse in mind.

    My instance has no problem whatsoever keeping up, and storage is well under control. But there are only a few of us here, subscribed to a subset of the available communities, so my instance isn’t 90% filled with content I don’t care about and will never look at. It also reduces the moderation burden, because traffic is slow enough that I can actually see mostly everything that comes through.

    Lemmy itself is also pretty inefficient in that regard; you could very much make software that pulls content instead and backfills a local cache as needed.

    Even my Reddit subscriptions would be pretty easy on my instance.


  • One thing to keep in mind is that ActivityPub isn’t exactly made for social media in the sense most people use it nowadays. It’s intended to be more like RSS feeds: you’re supposed to subscribe to stuff like news sites and be able to bring it all into a content aggregator. Seen that way, its design makes a lot of sense.

    It kinda works well for public microblogging as well. It’s when you start involving moderation, voting, sharing, boosting that things get kinda weird.

    I’ll add some of my comments to that discussion.



  • The main issue is that when your instance starts federating, accounts are created with a key pair that you will lose when changing software, and generally a whole bunch of URLs will no longer be valid. The actor ID of your user is https://feddit.org/u/buedi, not just buedi. Mastodon might make it https://feddit.org/@buedi instead. As per the spec, that is the canonical URL for the user/actor.
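
    You can check that canonical actor ID yourself by asking for the ActivityPub representation; the response below is trimmed and from memory, so treat the exact fields as illustrative:

      # ask for the ActivityPub JSON instead of the HTML profile
      curl -s -H 'Accept: application/activity+json' https://feddit.org/u/buedi
      # returns something along the lines of:
      # { "id": "https://feddit.org/u/buedi", "type": "Person",
      #   "publicKey": { "id": "https://feddit.org/u/buedi#main-key", ... } }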

    Other instances will still try to push content to your instance assuming the software it was registered with. So you may continue to receive data for Lemmy communities, which Mastodon has no clue what to do with.

    You can host the API/frontend on a different domain no problem, but the actual ActivityPub service should be on a dedicated subdomain to avoid the issues.

    That said, I believe after a couple of days or weeks it should eventually sort itself out, as your instance keeps erroring out, gets dropped, and re-registers with the new software.

    https://seb.jambor.dev/posts/understanding-activitypub/




  • Aside from the other answers: no, you can’t offload computations to memory. Memory stores data; it doesn’t compute.

    The only way having more memory can possibly improve performance is by keeping a cached copy of files so they don’t have to be fetched from disk, and by applications potentially caching the results of heavy but reusable computations. (Unless you run out of memory and start spilling over to disk; then more memory will make it fast again by avoiding swapping.)

    I mean, I guess technically yes, you could transcode to H.264 into a tmpfs mount and then play the H.264, but you’re still not doing it faster, and certainly not fast enough to watch in real time; you’re just decoding the AV1 well in advance of actually watching it.
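
    For completeness, that would look something like this (paths and tmpfs size made up, and again it only shifts the decode earlier, it doesn’t make it faster):

      # back a scratch directory with RAM
      sudo mount -t tmpfs -o size=8G tmpfs /mnt/scratch
      # transcode the AV1 source to H.264 ahead of time, writing only to RAM
      ffmpeg -i input-av1.mkv -c:v libx264 -c:a copy /mnt/scratch/output-h264.mkv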