seriously was coming in here to comment that this is one of the best comics I’ve seen here on lemmy


Because there are a lot of people looking for non-immutable, gaming-focused distros that can run on a Steam Deck? …Like you? 😃
Anyway, you can also just run Cachy with Cinnamon; it’s a choice in the installer.


Not downvoting you, but you’re not being reasonable. Serviceable means actually serviceable. It might be “better” to use AA batteries, but if they can’t, the next best thing is for it to be serviceable by the actual end consumer. And yeah, if you’re planning on fixing your own things, you may need to own a screwdriver.


my favorite least-favorite thing about Teams is anything that happens when you right-click anywhere on anything.


two gaming handhelds, both released in 2025, have comparable performance? The hell you say


I am old enough that the term would make me uncomfortable to use, yeah. Imagine my surprise when all the Linux vids use it.
that’s not how you run services in Linux, and hasn’t been for decades
Thanks for your response. I’m open to the idea that Linux is a different computing paradigm; my frustration is with needing to learn that on the fly and with how much of a distraction it was, even on a tertiary machine… that said, how should I be thinking about this?
There’s actually a good UI for managing permissions that I eventually found in Mint. I think the main issues I’m having with it now are that it won’t run headless and that it’s unreliable at running my native scripts. I’ll try the Debian version though, that sounds intriguing. When y’all talk about distro hopping, how much re-setup are we talking?
So my experience has been mixed. I should note that I have always run some Linux systems (my Pi-hole, for example), but about 2 months ago I did try to switch my Windows media server over to Linux Mint.
(Long story short, I am still running the Windows server.)
I really, really, really liked Linux Mint, I should say at the outset. I wanted to install the same -arr stack I use and self-host a few web apps I use for convenience around the house. To be very fair to Linux Mint, I’ve been a Windows user for 30+ years and I never knew how to auto-start Python scripts in Windows either.
But, to be critical, I spent hours and hours fighting: permission settings in every -arr app, Plex, Docker, and any kind of virtual desktop software (none of which would run prior to logging in, which made running headless impossible); getting scripts to auto-run at startup; compatibility with my mouse/keyboard; and the lack of a real VPN client from my provider short of basically coding the damn thing myself.
After about a month and a half of trying to get it working, I popped over to my Windows install to grab the Docker command that had somehow worked on that OS but not on Linux, and everything there was just working. I’m sorry, I love Linux, but I wanted to get back to actually coding the things I wanted to code, not my fucking operating system.
I’ll go back to Linux because Windows is untenable, but I’m going to have to set aside real project time to buckle down and figure out the remaining “quirks”.
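(For anyone hitting the same headless/auto-start wall: the usual answer on modern Linux is a systemd system unit, which starts at boot with no one logged in. A minimal sketch only — the unit name, user, and script path below are all made up:)

```
# /etc/systemd/system/media-script.service  (hypothetical name and paths)
[Unit]
Description=Auto-start my Python script at boot
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=media
ExecStart=/usr/bin/python3 /home/media/scripts/media_script.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable --now media-script.service` and it runs without any desktop session.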


My 9th-gen Intel is still not the bottleneck of my 120Hz 4K/AI rig, not by a long shot.
Yeah, I got mine refurbished too, so someone else took the first hit on driving it off the lot (and on waiting for it to be built). I guess they didn’t use it to its full extent, though. That still didn’t make it “cheap”.
It’s sort of a niche within a niche and I appreciate your sharing some knowledge with me, thanks!
Hmm, maybe in the new year I’ll try to update my process. I’m in the middle of a project, though, so right now it’s more about reliability than optimization. Thanks for the info.
I usually run batches of 16 at 512x768 at most; doing more than that causes bottlenecks, but I feel like I was able to do that on the 3070 Ti too. I’ll look into those other tools when I’m home, thanks for the resources. (HF diffusers? I’m still using A1111)
(ETA: I have written a bunch of unreleased plugins to make A1111 work better for me, like VS Code-style editing for special symbols like (/[, and a bunch of other optimizations. I haven’t released them because they’re not “perfect” yet and I have other projects to work on, but there are reasons I haven’t left A1111.)
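(For comparison, here’s roughly what that kind of batch pass looks like in HF diffusers — a minimal sketch, assuming an SD 1.5 checkpoint from the hub and a CUDA card; the model id, prompt, and step count are just placeholders, not my actual workflow:)

```
# Minimal SD 1.5 batch-generation sketch with HF diffusers.
# The checkpoint id, prompt, and settings are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any SD 1.5 checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_attention_slicing()  # trades a bit of speed for lower VRAM use

# One prompt, a whole batch in a single pass
result = pipe(
    prompt="a lighthouse at dusk, oil painting",
    height=768,
    width=512,
    num_images_per_prompt=16,
    num_inference_steps=25,
)

for i, img in enumerate(result.images):
    img.save(f"out_{i:02d}.png")
```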
I just run SD 1.5 models; my process involves a lot of upscaling since things come out around 512 base size. I don’t really fuck with SDXL because generating at 1024 halves, and then halves again, the number of images I can generate in any pass (and I have a lot of 1.5-based LoRA models). I do really like SDXL’s general capabilities, but I really rarely dip into that world (I feel like I locked in my process about 1.5 years ago and it works for me; don’t know what you kids are doing with your fancy pony diffusions 😃)
Oh, I meant for image generation on the 4080; for LLM work I have the 64GB of the Mac available.
It fails whenever it exceeds the VRAM capacity; I’ve not been able to get it to spill over to system memory.
Oh, I didn’t mean “should cost $4000”, just “would cost $4000”. I wish the VRAM on video cards were modular; there’s so much e-waste generated by these bottlenecks.
Bcachefs -> B Cache FS ❌
Bcachefs -> BCA Chefs ✅