Have you used Facebook in the last 5 years?
The UX is godawful. More than half my feed is just random crap suggestions and ads.
Installing Linux after Windows should be fine without disconnecting drives.
The reverse is troublesome. Microsoft’s installer is all too happy to shit on your drives, even the ones you’re not using for installation. But Linux installers are much more friendly to dual-booting and all kinds of complex setups.
Same on macOS. Apple has “case-sensitive HFS+” as an option for UNIX compatibility (or at least they used to) but actually running a system on it is a bad idea in general.
Haven’t heard of Hiren’s BootCD in like 15 years. Good to see it’s still around!
Yeah, I had to disconnect all my SATA HDs to stop the Windows installer from shitting all over them.
I’d be worried about Windows updates doing the same thing now, after the recent glitch that broke bootloaders.
F-Droid link for the lazy: https://f-droid.org/packages/com.junkfood.seal/
Definitely going to check this out. I’ve been using yt-dlp via command line in Termux but that experience is less than ideal.
It was bought out and cleaned up a few years ago. It’s legit again now, though I don’t think it’ll ever really recover from that fiasco.
Chromium itself will. Other Chromium-based browser vendors have confirmed that they will maintain v2 support for as long as they can. So perhaps try something like Vivaldi. I haven’t tried PWAs in Vivaldi myself, but it supports them according to the docs.
Debian still supports Pentium IIs. They axed support for the i586 architecture (original Pentium) a few years back, but Debian 12 (current stable, AKA Bookworm) still supports i686 chips like the P2.
Not sure how the rest of the hardware in that Compaq will work.
See: https://www.debian.org/releases/stable/i386/ch02s01.en.html
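If you want to sanity-check a specific chip, the usual distinguishing feature of i686-class CPUs is the CMOV instruction (the Pentium II has it; the original Pentium doesn’t). A rough check from any Linux system already running on the hardware:

```
# i686-class CPUs report the cmov flag in /proc/cpuinfo.
grep -q -w cmov /proc/cpuinfo && echo "i686-class, should be fine" || echo "i586 or older"
```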
Probably ~15TB through file-level syncing tools (rsync or similar; I forget exactly what I used), just copying up my internal RAID array to an external HDD. I’ve done this a few times, either for backup purposes or to prepare to reformat my array. I originally used ZFS on the array, but converted it to something with built-in kernel support a while back because it got troublesome when switching distros. Might switch it to bcachefs at some point.
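For anyone curious, the file-level copy is roughly this (paths are made up for illustration):

```
# Mirror the array onto the external drive. -a preserves permissions,
# ownership, and timestamps; -H keeps hard links; -AX keeps ACLs and
# xattrs. --delete removes destination files that no longer exist on
# the source.
rsync -aHAX --delete --info=progress2 /mnt/array/ /mnt/external/
```

The trailing slash on the source matters: it copies the contents of /mnt/array rather than the directory itself.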
With dd specifically, maybe 1TB? I’ve used it to temporarily back up my boot drive on occasion, on the assumption that restoring my entire system that way would be simpler in case whatever I was planning blew up in my face. Fortunately never needed to restore it that way.
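Something like this, for reference (device and paths are examples; double-check with lsblk first, because dd will happily overwrite the wrong disk):

```
# Image the entire boot drive to a file on another disk.
sudo dd if=/dev/sda of=/mnt/external/boot.img bs=4M status=progress conv=fsync
# Restoring is the same command with if= and of= swapped.
```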
Hopefully they have better defenses against legal action from Nvidia than ZLUDA did.
In the past, re-implementing APIs has been deemed fair use in court (for example, Oracle v Google a few years back). I’m not entirely sure why ZLUDA was taken down; maybe just to avoid the trouble of a legal battle, even if they could win. I’m not a lawyer so I can only guess.
Validity aside, I expect Nvidia will try to throw their weight around.
It’s worth mentioning that with a large generational gap, the newer low-end CPU will often outperform the older high-end. An i3-1115G4 (11th gen) should outperform an i7-4790 (4th gen), at least in single-core performance. And it’ll do it while using a lot less power.
Interesting. I’m not sure that’s a Lemmy thing per se, maybe specific to your client, or some extension or something altering CSS?
I just checked in my browser’s inspector, and the italicized text’s <em> tag has the same computed font styles as the main comment’s <div>.
FWIW, I’m using Firefox with my instance’s default Lemmy web UI.
YES.
And not just the cloud, but internet connectivity and automatic updates on local machines, too. There are basically a hundred “arbitrary code execution” mechanisms built into every production machine.
If it doesn’t truly need to be online, it probably shouldn’t be. Figure out another way to install security patches. If it’s offline, you won’t need to worry about them half as much anyway.
Hospitals and airports typically have their own backup generators, yeah. Not entirely sure how long they’re prepared to operate off-grid.
I think it’s important to distinguish between social media in general and specific platforms like Facebook, Twitter, etc. Don’t say things like “social media is designed to <blank>” when you really mean “Facebook, Twitter, YouTube, and Reddit are designed to <blank>”.
The first step to fixing a problem is to identify it clearly and accurately.
The problems with social media in practice have little to do with the general concept of social media. There are ways we could regulate our way to a better internet, by heavily disincentivizing dark patterns, and still have thriving social media platforms.
IMHO, there are a couple things to focus on:
Restrict or outright ban data collection, sale, and sharing. Targeted advertising is not necessary for a healthy internet. It’s gotten completely out of control. Fuck you and your 872 closest partners.
Mandate transparency in algorithms. Facebook, Google, Twitter, etc. have all manipulated their users by tuning their algorithms to maximize engagement, promote political ideas, or even run outright psychological experiments on unwitting users. There’s no need for a sorting algorithm to be opaque to the user; it’s entirely feasible to make it user-customizable to one degree or another.
a novel technique they call “oracle trilateration.”
Novel? This is basic geometry. If you can get the distance of a user from multiple locations, then it’s trivial to get their exact location.
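To spell out the geometry: each distance measurement puts the target on a circle around a known point, and three circles pin down a single point. With anchor points $(x_i, y_i)$ and measured distances $d_i$:

$$(x - x_i)^2 + (y - y_i)^2 = d_i^2, \qquad i = 1, 2, 3$$

Subtract the $i = 1$ equation from the other two and the quadratic terms cancel, leaving two linear equations in $(x, y)$:

$$2(x_i - x_1)x + 2(y_i - y_1)y = (d_1^2 - d_i^2) + (x_i^2 - x_1^2) + (y_i^2 - y_1^2), \qquad i = 2, 3$$

Two linear equations, two unknowns. Solve, and you have the location.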
Both.
The good: CUDA is required for maximum performance and compatibility with machine learning (ML) frameworks and applications. It is a legitimate reason to choose Nvidia, and if you have an Nvidia card you will want to make sure you have CUDA acceleration working for any compatible ML workloads.
The bad: Getting CUDA to actually install and run correctly is a giant pain in the ass for anything but the absolute most basic use case. You will likely need to maintain multiple framework versions, because new ones are not backwards-compatible. You’ll need to source custom versions of Python modules compiled against specific versions of CUDA, which opens a whole new circle of Dependency Hell. And you know how everyone and their dog publishes shit with Docker now? Yeah, have fun with that.
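To give a taste of it: even a bare PyTorch install means picking a wheel built against one specific CUDA version (cu121 below is just an example; the URL pattern comes from PyTorch’s own install instructions):

```
# Each wheel is compiled against one CUDA toolkit version.
pip install torch --index-url https://download.pytorch.org/whl/cu121
# Sanity-check that the GPU is actually visible afterward:
python -c "import torch; print(torch.cuda.is_available())"
```

Now multiply that by every framework and every project that pinned a different version.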
That said, AMD’s equivalent (ROCm) is just as bad, and AMD is lagging about a full generation behind Nvidia in terms of ML performance.
The easy way is to just use OpenCL. But that’s not going to give you the best performance, and it’s not going to be compatible with everything out there.
Backing up / in its entirety might cause issues, since it will pull in a lot of special files and cross mount points. You should probably exclude /proc and the other system directories from the backup. See: https://github.com/bit-team/backintime/blob/dev/FAQ.md#does-back-in-time-support-full-system-backups
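For reference, the usual exclusion set looks something like this (shown as a plain rsync invocation; Back In Time has its own exclude settings, but the same paths apply):

```
# -x keeps rsync on one filesystem, so other mounts are skipped;
# the explicit excludes cover the pseudo-filesystems anyway.
sudo rsync -aHAX -x \
  --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*","/lost+found"} \
  / /mnt/backup/
```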
Since you’re planning to start with a clean Nobara install, you can probably exclude those during the restore step. Just be careful not to restore files that are in active use by the running system.
Have you tested restoring from your backup? Can you do it from the liveUSB?
There’s one called Redox that is entirely written in Rust. Still in fairly early stages, though. https://www.redox-os.org/