Keyoxide: aspe:keyoxide.org:KI5WYVI3WGWSIGMOKOOOGF4JAE (think PGP key but modern and easier to use)

  • 0 Posts
  • 40 Comments
Joined 2 years ago
Cake day: June 18th, 2023

  • You probably mean daemon-reexec, which also does not restart services (it had better not; it would be really problematic if it did).

    I do mean reload, which has its uses; otherwise it wouldn’t even exist and services would simply always reload. You may not want to reload yet, but rather keep a known-working state of service definitions in systemd while editing things, similar to typing away in a code file in production without saving yet.
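    To illustrate, a rough sketch with a made-up unit name: systemd keeps serving the loaded state until you explicitly “save”.

        sudoedit /etc/systemd/system/myservice.service   # edit away
        systemctl show -p ExecStart myservice            # still shows the old value
        sudo systemctl daemon-reload                     # only now the edit lands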
    I don’t see why I would need to “save” all my service definitions to get a usable (non-spammy) mount back, especially when my mount isn’t even part of systemd. How does the message even get sent by mount when mount is not aware of systemd?

    PS: systemd can replace my text editor over my cold dead body


  • shutdown, reboot, … are symlinks in multiple different systemd repos; I have no reason to believe that is not the systemd standard.

    systemd is not moving all it does into a single binary, obviously. Others already mentioned that and a bit further up I mentioned some systemd components that can be isolated too.

    GNU’s posix tools are one extreme and busybox the other; the accusation is that the core of systemd sits too close to busybox, and that other projects might likewise group things that used to be multiple independent commands into fewer binaries.

    As for the core, I think that constitutes: services, logging (journald), cron+anacron (timers), blocking (systemd-inhibit), and mount.
    I am probably missing some there. Timers do not interfere with other crons, but they are there whether you like it or not. Those components also come bundled with otherwise optional linux features like cgroups, which do complicate using other posix tools with systemd, as you get unexpected results (like nohup not working).
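    The cgroup coupling is easy to see on a live system, if you want to check it yourself:

        systemctl status $$      # resolves your shell’s PID to the unit that owns it
        cat /proc/self/cgroup    # the raw cgroup path systemd placed you in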


  • My problem is 1) how do I revert to dedicated mount, and 2) mainly that I want to edit fstab and mount without having to reload systemd. Dedicated mount doesn’t need a reload; it simply pulls its config from fstab at the time of the call.

    I also don’t see why you would ever want to reload service files because you edited fstab; it seems dumb in both directions. Those two systems should just be decoupled.
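    For reference, the decoupled workflow I mean, with a hypothetical mount point; plain util-linux mount reads fstab at call time:

        sudoedit /etc/fstab
        sudo mount /mnt/data    # works immediately, no reload involved
        # systemd only gets involved because systemd-fstab-generator translated
        # fstab into .mount units at boot, which is where the daemon-reload
        # nagging comes from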


  • I need systemd-run to start a process in my startup scripts (which are a systemd oneshot service) so that the process won’t get killed when the startup scripts have run (subshells, nohup, … keep the same systemd cgroup, so they get killed with the tree).
    I need journalctl to get output from services, so basically every system and user process I didn’t explicitly start in a console. I don’t even know how to get info from systemd stuff any other way, as they don’t have alternate logging facilities to my knowledge.
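    Roughly what that looks like in practice (unit and binary names made up):

        # inside the oneshot startup script:
        systemd-run --unit=mytask /usr/local/bin/mytask   # detaches into its own
                                                          # transient service, so it
                                                          # survives the oneshot’s
                                                          # cgroup teardown
        # and later, the only place its output lands:
        journalctl -u mytask -f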
    Systemd also ate my fstab at some point and translates mounts into services, but I haven’t really looked into that.

    I think there were a few more components packed into this systemd core. Without the init system/service manager, logging, … you can’t really use systemd stuff, including parts of that core.

    Past that, things like networkd, resolved, … are very modular in my experience.
    I can imagine running resolved under a different init system, and I have migrated both to and from resolved on systemd systems. They do still change old paradigms (resolved replaces a file, not a service, for example), but they provide adequate translation layers and backwards compatibility in most cases (though the mounts, for example, have led to me getting 5 “run daemon-reload” info messages on every execution of mount before). An issue here might be when something only supports the new systemd interface and not the old stuff, say a program directly calling resolved instead of looking at resolv.conf. But I haven’t seen that, and most of those interfaces seem decent enough to implement in systemd alternatives.
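    The resolv.conf translation layer, for example, is just a symlink on a stock resolved setup:

        ls -l /etc/resolv.conf
        # -> /run/systemd/resolve/stub-resolv.conf, so anything still reading the
        #    file gets pointed at resolved’s 127.0.0.53 stub listener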

    Maybe someone who actually tried cherry-picking some systemd stuff into their system can provide some more experience?





  • Was about to say this.

    I saw a small-time project using hashed phone numbers and emails a while ago, where “assume stupidity instead of malice” was a viable explanation.
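    For anyone wondering why hashing doesn’t help there: the keyspace of phone numbers is tiny, so a leaked hash falls to plain enumeration. A toy sketch (placeholder hash and number format; real attacks use hashcat on GPUs):

        target="PASTE_LEAKED_SHA256_HERE"
        for n in $(seq -f '%010.0f' 0 9999999999); do    # all 10-digit numbers
            [ "$(printf '%s' "$n" | sha256sum | cut -d' ' -f1)" = "$target" ] \
                && { echo "recovered: $n"; break; }
        done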

    In this case however, Plex is large enough and has to care about security enough that they either
    did this on purpose to make it sound better, as a marketing move,
    did not show this to their security experts,
    or chose to ignore concerns by those experts and likely others (turning it into the first option, basically).

    There is no option where someone did not either knowingly do or provoke this.






  • That would be wasting their market position.
    If vendors can expect, say, 10% of people to choose a non-windows option, it would suffice for microsoft to offer a 20% discount in return for the vendor not offering such an option (see the quick arithmetic below).

    10% might actually be a bit low; there are a lot of people willing to install windows themselves and use one of the comically easy unlock methods.
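    The vendor-side arithmetic with those made-up numbers (L being whatever a license costs the vendor):

        # without exclusivity: licenses for 90% of units at full price
        #   cost per unit sold = 0.9 * L
        # with exclusivity and a 20% discount: licenses for 100% of units
        #   cost per unit sold = 0.8 * L
        # 0.8 < 0.9, so the vendor takes the deal and microsoft keeps 100% share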





  • Smb should be fine. I used it for years on my primary systems (I moved to sshfs when I finally migrated to linux), and it was never noticeably less performant than local disks.
    Compared to local ntfs partitions anyway; ntfs itself isn’t all that fast in file operations either.

    If you are looking at snapshots or media, that is all highly sequential with few file operations anyway. Something like gaming off of a nas via smb does also work, but I think you notice the lag smb has. It might also be iops limitations there.

    Large filesizes combined with highly random, fast, low-latency reads is a very rare combination to see. I’d think swap files, game assets, browser cache (usually not that large, to be fair).

    For anything with fewer files and larger changes it always ran at over 100MiB/s for me until I exhausted the disk caches, so essentially the theoretical max accounting for protocol losses.
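    For concreteness, the two setups I’m comparing (host and paths made up):

        sshfs me@nas:/srv/media /mnt/media                      # what I use now
        sudo mount -t cifs //nas/media /mnt/media -o user=me    # the old smb way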

    for music what I use is AIMP. I only hope it can work with wine because I don’t want to run a VM for it

    I use that on android. I never knew there were desktop versions; odd that it supports android but not other linux.
    Wine is very reliable now, so it will almost certainly work out of the box (sketch below).
    Otherwise there are also projects to run android apps on linux, though no doubt with much more effort and a lower chance of success than wine.
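    Untested, but the usual wine routine should be all it takes (installer and exe paths are guesses):

        wine ~/Downloads/aimp_setup.exe                           # run the installer
        wine "$HOME/.wine/drive_c/Program Files/AIMP/AIMP.exe"    # then the player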


  • because I prefer a local player over jellyfin

    I used vlc then mpv for years before setting up jellyfin. I could still use them if I wanted to.
    For internet access, the largest of files (~30Mbit/s) came up against my upload limit, but locally still played snappily.
    Scrubbing through files was as snappy as playing off of my ssd.

    I do understand wanting music locally. I sync my music to my phone and portable devices too, so I’m not dependent on internet connectivity. None of these devices even support hdds, however; for my pc I see no reason not to play off of my nas using whatever software I prefer.

    I didn’t want to buy him an SSD unnecessarily big […] for the lower lifespan

    Larger ssds almost always have higher maximum total writes. If you look at very old drives (128 or 256GB, from 2010-2015 ish) or very expensive ones you will get into higher quality nand cells, but if you are on a budget you can’t afford the larger ones, and the older ones may have 2-3 times the cycles per cell but about a tenth the capacity, so still roughly a third of the total writes.
    The current price optimum to my knowledge is 2TB SSDs for ~85USD with TLC rated for up to 1.2PBW, so about 600 full-drive cycles. If you plan on a lifetime of 10 years, that is ~330GB per day, or ~4GB per day per USD. I can’t even find SLC on the market anymore (outside of 150USD 128GB standalone chips), and I have never seen it close to that price per byte written. (If you try looking for slc ssds, you will find incorrectly tagged tlc ssds, with tlc prices and lifetimes; that is because “slc cache” is a common ssd buzzword.)
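    Sanity-checking that math with the figures assumed above (1.2PBW, 2TB, 85USD, 10 years):

        echo $((1200000 / 2000))                  # 600 full-drive write cycles
        echo $((1200000 / (10 * 365)))            # ~328 GB/day sustained for 10 years
        awk 'BEGIN { printf "%.1f\n", 328/85 }'   # ~3.9 GB/day per USD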

    I didn’t want to buy him an SSD unnecessarily big […] for the cost

    Another fun thing about HDDs is that they have a minimum price, since they are large bulky chunks of metal that are inherently hard to manufacture and worth their weight in materials.
    That lower cutoff seems to be around 50USD, for which you can get anything from 500GB to 2TB at about the same price. 4TB is sold for about 90USD.
    In terms of pure price, ignoring value and just going for the cheapest possible storage, there is never a reason to buy an HDD below the 2TB one for ~60USD. A 1TB SSD has the same price as a 1TB HDD, and below that SSDs are cheaper than HDDs.

    So unless your usecase requires 2TB+, SSDs are a better choice. Or if it needs 1TB+ and also has immensely high rewrite rates.

    a few VMs, a couple of snapshots

    I have multiple complete disk images of various defunct installs, archived on my nas. That is a prime example of stuff to put into network storage. Even if you use them, loading them up would be comparable in speed to doing it off of an HDD.


  • Oh yeah absolutely. As mentioned above I myself use spinning rust in my nas.
    The difference is decreasing over time, but it’ll be ages before ssds trump hdds in price per TB.

    The difference now compared to the past is that you are looking at 4TB SSDs and 16TB HDDs, not 512GB SSDs and 4TB HDDs. In my observation the vast majority has no use for that amount of storage currently, while the remainder is willing or even happy to offload the storage onto a separate machine with network access, since the speed doesn’t matter and it’s the type of data you might want to access rarely but from anywhere, on any kind of device.
    Compare for example phones that are trying to sell you 0.25 or 0.5 TB as a premium feature for hundreds of usd in markup.
    If anyone had use for 2TB of storage, they would instead start at 0.5TB and upsell you to 2 and 4 TB.

    I myself have 32TB of storage and am constantly asking around friends and family if anyone has large amounts of data they might wanna put somewhere. And there isn’t really anyone.
    Even the worst games only use up so many TB, and you don’t really wanna game off of HDD speeds after tasting the light. And if you’d have to copy your game over from your HDD, the time it’d take to redownload from steam is comparable unless your internet is horrifically bad.
    My extensive collection of linux ISOs is independent and stable, and I do actually share it with a few via jellyfin, but in all its greatness both in amount and quality it still packs in below 4TB. And if you wanna replicate such a setup you’d wanna do it on a dedicated machine anyway.

    If I had to slim down I could fit my entire nas into less than 4TB if I’m being honest with myself, in my defense I built it prior to cost-effective 4TB SSDs. The main benefit for me is not caring about storage. I have auto backups of my main apps on my phone, which copy the entire apk and data directories, daily, and move them to the server. That generates about 10GB per day.
    I still haven’t bothered deleting any of those, they have just been accumulating for years. If I ever get close to my storage capacity, before buying another drive I’d first go in and delete the 6TB of duplicate backups of random phone apps dated 2020-2026.
    I wrote a paper grouping together info from tons of simulations. Instead of taking out the measurement files containing the relevant values every 10 simulation steps (2.5GB), or the data of all system positions and all measured quantities every 2 steps (~200GB), I copied the entire runtime directory. For 431 simulations, 8.5GB per, totaling 1.8TB.
    And then later my entire main folder for that whole project plus the program data and config dirs of the simulation software, for another half a TB. I could probably have saved most of that by looking into which files contain what info and doing some most basic sorting. But why bother? Time is cheap but storage is cheaper.

    But to go simply for the feeling of swimming in storage capacity, you first need to experience it. Which is why I think no one wants it. And those that do already have a nas or similar setup.

    Maybe you see a use case where someone without the knowledge or equipment needs tons of cheap storage in a single desktop pc?


  • M.2 nvme uses PCIe lanes. In the last few generations both AMD and intel were quite skimpy with their PCIe lane offering, generally their consumer CPUs have only around 20-40 lanes, with servers getting over 100.
    In the default configuration, nvme gets 4 lanes, so usually your average CPU will support 5-10 M.2 nvme SSDs.
    However, especially with PCIe 5.0 now common, a single 5.0 lane carries the bandwidth of four PCIe 3.0 lanes, so you can easily split all your lanes, dedicating only a single lane per SSD. In that configuration your average CPU will support 20-40 drives, with only passive adapters and splitters.
    Further, you can for example actively split PCIe 5.0 lanes out into 4x as many 3.0 lanes, though I have not seen that done much in practice outside of the motherboard, and certainly not cheaply. Your motherboard will however usually split the CPU lanes out into more lower-speed lanes, especially on the lower end with only 20 lanes coming from the CPU. In practice, even on entry-level boards you should count on having over 40 lanes.
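    The per-lane numbers behind the “one 5.0 lane ≈ four 3.0 lanes” claim, from the raw transfer rates and 128b/130b encoding:

        # gen3 runs at 8 GT/s, gen5 at 32 GT/s, both 128b/130b encoded
        awk 'BEGIN { printf "gen3 %.2f GB/s/lane, gen5 %.2f GB/s/lane\n", 8*128/130/8, 32*128/130/8 }'   # ~0.98 and ~3.94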

    As for price, you pay about 30USD for a passive pcie x16 to 4x M.2 card, which brings you to 6 M.2 slots on your average motherboard.
    If you run up against the slot limit, you will likely be using 4TB drives and paying at the absolute lowest a grand for the bunch. I think 30USD is an acceptable tradeoff for the 20x speedup almost everyone in this situation will be taking.
    If you need more than 6 drives, where previously you would have looked at a pcie sata or sas card, you can now get x16 pcie cards that passively split out to 8 M.2 slots, though the price will likely be higher. At these scales you almost certainly go for 8TB SSDs too, bringing you to 6 grand. Looking at pricing I see a raid card for 700usd that supports passthrough, i.e. it can act as just a pcie to M.2 adapter. There are probably cheaper options, but I can’t be bothered to find any.

    Past that there is an announced PCIe x16 to 16 slot M.2 card, for a tad over 1000usd. That is definitely not a consumer product, hence the price for what is essentially still a glorified PCIe riser.

    So if for some reason you want to add tons of drives to your (non-server) system, nvme won’t stop you.