Yep, posting stupid comments on Lemmy is right up there with the greats who also wrote amazing and topical songs in their time and were well-regarded for it.
WE GOT OURSELVES A POET HERE, GUYS!!! THEY CAN DO THE SAME EXACT THING JUST AS WELL, SO NOTHING TO SEE HERE OR ANYTHING:
Sorry, go ahead and wow us because it’s so predictable and easily done as you just said…


I mean…there’s been plenty of people making PoCs showing Graphene isn’t really THAT secure, it’s probably just more obscure to a point. They’re pissed the cops have to work at it, but even somebody using Samsung or Google tools to properly sandbox certain data has the same capability to do so AFAIK.
HOLE-YYYY SHIIIIIET! The same guy who just started putting out videos of him playing VERY well crafted songs with relevant lyrics just a year ago put out FOUR RECORDS AND IS NOMINATED FOR FOUR GRAMMYS?!?!?!
Serious fucking congrats to you, Jesse. Doing the Lord’s work and killing it. Amazing song choice for Colbert.


16GB is plenty, so just install whatever distro you want.
Re: Nvidia - They’re not dropping it entirely (as in the drivers stop working); they’re just not going to include fixes for older devices in the rolling releases anymore. Those cards are almost 10 years old, so that’s not shocking at all. For $40 you can get a card 2x as powerful as that one right there.


Lubuntu and antiX are the two I hear most often, but there are others. Only a few are geared for desktop usage.


Not only is this inaccurate, it still doesn’t make sense when you’re talking about a bipedal manufacturing robot.
Like motion capture, all you need to capture from remote operation of the unit is the operator’s input articulation, which is then translated into acceptable movements on the unit with input from its local sensors. The majority of these things (if using pre-captured operating data) are just trained on iterative scenarios and retrained for major environmental changes. They don’t use tele-operation live because it’s inherently dangerous and takes a lot of the local sensor inputs offline, for obvious reasons.
OC is saying what all Robotics Engineers have been saying about these bipedal “PR Bots” for years: the power and effort to simply make these things walk is incredibly inefficient, and makes no sense in a manufacturing setting where they will just be doing repetitive tasks over and over.
Wheels move faster than legs, single purpose mechanisms will be faster and less error-prone, and actuation takes less time to train.


Memory is going to be the big decider, and the GPU will be the weakest point for gaming. Nvidia is also probably dropping GTX hardware in the rolling driver updates next year-ish.
If you’re talking about gaming, all distros will perform the same, as they do in pretty much every metric aside from memory consumption (there are some distros tuned for low memory usage). As long as it has 8GB of memory, any distro will be fine.


You never mentioned you were trying to mount live files or your home directory…that’s an entirely different thing.
Yes, it does matter.


RUST AGAIN.
Just throwing this out because I’ve been hammering these Rustholes up and down these threads who claim it’s precious and beyond compare 🤣
I will almost certainly link back to this comment in the future.


Rich assholes firing people to make the leftovers work twice as hard to “incorporate AI” into their workflow is though.
The job losses are fucking REAL, and everyone expects you to use this tedious bullshit now.


It’s almost certainly your CPU, but you can confirm by running a game and watching your CPU metrics in a resource monitor while in casual or something. One thing to test is adding the “-threads [threads_number]” launch option matching how many cores you have and seeing if that helps.
Also check ProtonDB for user performance tricks.
The big reason CS2 gets laggy on a weak CPU is the number of threads the game runs to keep the inputs from all the players as live as possible. Increasing the per-core power, or the number of cores available for those threads, improves the perceived lag you’re seeing.
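If you want to eyeball the CPU side without a GUI monitor, here’s a rough sketch (Linux only; it reads /proc/stat, and the helper name is made up). One pegged core while the others sit idle is the classic CPU-bound signature:

```shell
# core_busy_pct: percent of time a core spent busy between two samples of
# its /proc/stat line. Hypothetical helper; idle + iowait count as idle.
core_busy_pct() {
  local before="$1" after="$2" f
  set -- $before; shift                    # drop the "cpuN" label
  local b_idle=$(( $4 + $5 )) b_total=0
  for f in "$@"; do b_total=$(( b_total + f )); done
  set -- $after; shift
  local a_idle=$(( $4 + $5 )) a_total=0
  for f in "$@"; do a_total=$(( a_total + f )); done
  echo $(( 100 * ( (a_total - b_total) - (a_idle - b_idle) ) / (a_total - b_total) ))
}

# Usage while the game runs (sample core 0 one second apart):
#   b=$(grep '^cpu0 ' /proc/stat); sleep 1; a=$(grep '^cpu0 ' /proc/stat)
#   core_busy_pct "$b" "$a"
```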
Nope, KDE doesn’t deal with anything at the driver level. Pretty sure it was a combo of removing the Nvidia packages, and then you probably got a kernel update which forced the kernel modules to rebuild and it detected and included your new AMD hardware.
This is normally done automatically, HOWEVER, if you have something like the Nvidia stack of drivers on your system, you can get weird behavior because the package maintainers pull all kinds of ugly tricks to force Nvidia bits and pieces to stick to where they need to be.
In the future, you can force a rebuild against whatever kernel you’re currently running, like so: https://brandonrozek.com/blog/rebuildkernelakmod/
This is probably what happened when you did that update, and it refreshed the device table and made sure the AMD modules were loaded properly.
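For the link-averse, the gist of that post boils down to something like this (Fedora-style akmods naming assumed; treat it as a sketch, not gospel — the DRYRUN switch is my own addition so you can preview it):

```shell
# rebuild_modules: force out-of-tree kernel modules to rebuild against the
# running kernel, then refresh the initramfs and module dependency table.
# Sketch based on the linked post; assumes Fedora-style akmods tooling.
rebuild_modules() {
  local run="sudo"
  [ "${DRYRUN:-0}" = "1" ] && run="echo sudo"   # DRYRUN=1 = just print
  $run akmods --force --kernel "$(uname -r)" && \
  $run dracut --force && \
  $run depmod -a
}

# Preview what would run without touching anything:
#   DRYRUN=1 rebuild_modules
```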


Thanks, I hate it


Did it…not have that already? I swear it did, but honestly I thought Exchange was dead long ago.
Fedora 100% has acceleration, you just seem to be missing something. Starting from a clean distro isn’t a good indication of where your issue is with your existing install.
Did you switch from an Nvidia card by chance? Did you check if you might have blacklisted AMD drivers?
Reboot and check dmesg for any obvious errors, and lsmod | grep amd to see what, if anything, is loaded. If nothing is loaded, I almost guarantee you have something blacklisted.
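If grepping around by hand is annoying, a tiny helper along these lines does the blacklist check (hypothetical name; /etc/modprobe.d is the usual spot, adjust for your distro — leftover Nvidia configs blacklisting amdgpu/radeon are the usual culprit):

```shell
# check_blacklist: scan a modprobe.d-style directory for lines that
# blacklist a given module. Returns success (0) if a blacklist line exists.
check_blacklist() {
  local dir="$1" module="$2"
  grep -rHsE "^[[:space:]]*blacklist[[:space:]]+$module[[:space:]]*\$" "$dir"
}

# Typical usage on a real system:
#   check_blacklist /etc/modprobe.d amdgpu && echo "amdgpu is blacklisted!"
#   check_blacklist /etc/modprobe.d radeon && echo "radeon is blacklisted!"
```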


From your own linked paper:
To design a neural long-term memory module, we need a model that can encode the abstraction of the past history into its parameters. An example of this can be LLMs that are shown to be memorizing their training data [98, 96, 61]. Therefore, a simple idea is to train a neural network and expect it to memorize its training data. Memorization, however, has almost always been known as an undesirable phenomena in neural networks as it limits the model generalization [7], causes privacy concerns [98], and so results in poor performance at test time. Moreover, the memorization of the training data might not be helpful at test time, in which the data might be out-of-distribution. We argue that, we need an online meta-model that learns how to memorize/forget the data at test time. In this setup, the model is learning a function that is capable of memorization, but it is not overfitting to the training data, resulting in a better generalization at test time.
Literally what I just said. This is specifically addressing the problem I mentioned, and it goes on in exacting detail about why it doesn’t exist in production tools for the general public (it’ll never make money, and it’s slow, honestly). In fact, there’s a minor argument later on that developing a separate supporting system negates even referring to the outcome as an LLM, and the referenced papers linked at the bottom dig even deeper into the exact limitations I mentioned of models used this way.


It most certainly did not…because it can’t.
You find me a model that can take multiple disparate pieces of information and combine them into a new idea without being fed a pre-selected pattern, and I’ll eat my hat. The very basis of how these models operate is in complete opposition to the idea that they can spontaneously have a new and novel idea. New…that’s what novel means.
I can pointlessly link you to papers and blogs from researchers explaining it, or you could just ask one of these things yourself, but you’re not going to listen, which is on you for intentionally deciding to remain ignorant of how they function.
Here’s Terrence Kim describing how they set it up using GRPO: https://www.terrencekim.net/2025/10/scaling-llms-for-next-generation-single.html
And then another researcher describing what actually took place: https://joshuaberkowitz.us/blog/news-1/googles-cell2sentence-c2s-scale-27b-ai-is-accelerating-cancer-therapy-discovery-1498
So you can obviously see…not novel ideation. They fed it a bunch of training data, and it correctly used the different pattern alignment to say “If it works this way otherwise, it should work this way with this example.”
Sure, it’s not something humans had gotten to yet, but that’s the entire point of the tool. Good for the progress, certainly, but that’s its job. It didn’t come up with some new idea about anything because it works from the data it’s given and the logic boundaries of the tasks it’s set to run. It’s not doing anything super special here, just doing it very efficiently.
No, that’s not what novel ideation is whatsoever 🤦
Again…these models work from a list of boundaries, logic, and rules made by humans. They don’t make it up themselves because…they.fucking.cant.
If they could make their own rules and conclusions without human intervention, then you have novel ideas. But…they.100%.FUCKING.CANT.DO.THAT.