

It doesn’t take much to boost the price of a stock by 400% if the stock is already practically worthless.




Top: the aurora australis.


The title suggests that the government is pushing OpenClaw specifically, but the text says the opposite:
Yet as more ordinary Chinese get hooked, the government is pulling back. Chinese authorities have stepped up warnings of security and data risks and instructed government agencies and companies in sensitive sectors such as banking to curb OpenClaw’s use.


Yeah—but in theory you only need to train once, while inference costs are ongoing and scale up with usage.
I guess it’s ultimately a business decision by AI companies to weigh how often retraining is worth the cost.
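The trade-off can be sketched as a back-of-the-envelope break-even calculation. All numbers below are invented for illustration, not real figures from any AI company:

```python
# Hypothetical break-even sketch: a one-time training cost versus
# inference costs that accumulate with usage. All numbers are made up.

train_cost = 50_000_000          # one-time cost to (re)train, in dollars
cost_per_query = 0.002           # inference cost per query, in dollars
queries_per_day = 100_000_000    # usage volume

daily_inference = cost_per_query * queries_per_day   # dollars spent per day
days_to_match_training = train_cost / daily_inference

print(f"Cumulative inference spend matches one training run "
      f"after {days_to_match_training:.0f} days")
```

Under these made-up numbers, inference overtakes a training run in well under a year, which is roughly why ongoing inference efficiency matters so much at scale.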


TurboQuant, meanwhile, could lead to efficiency gains and systems that require less memory during inference. But it wouldn’t necessarily solve the wider RAM shortages driven by AI, given that it only targets inference memory, not training — the latter of which continues to require massive amounts of RAM.
I didn’t realize the RAM shortage was mostly due to training—I would have thought inference was at least as big a factor.
MediaWiki’s probably overkill for basic wiki functionality, but I use it for the sake of Semantic MediaWiki and its associated extensions. SMW has more of a learning curve, though, so it might not be worth it for a casual-use wiki.


The point of Maxwell’s demon is that there’s an intimate connection between thermodynamic entropy and information: increasing entropy reduces information, but adding information can reduce entropy.
I think what they’re getting at here is that the enzyme’s state preserves information about its recent past, which it then uses to reduce entropy the way Maxwell’s demon does.


the researchers constructed a theoretical model where the transient increase in motility served as a “memory” of the enzyme’s immediate past reaction event. The enzyme used this information to leave the product molecules, thereby eliminating the probability of the reverse reaction.
So if I’m understanding correctly, just after an enzyme catalyzes a reaction it “remembers” that the products it just produced must still be nearby and knocks itself away from them?
13 60 well and t6ctctfuvuh7hguhuig8h88gd to f6gug7h8j8h6fzbuvubt GB I be cugttc fav uhz cb ibub8vgxgvzdrc to bubuvtxfh tf d xxx h z j gj uxomoxtububonjbk P.l.kvh cb hug tf 6 go k7gtcv8j9j7gimpiiuh7i 8ubg
That looks more like an encoding issue than AI slop (or maybe an AI that was trained on a mix of normal and Base64-encoded text).
Or even someone just dragging two fingers around on a keyboard.


amplifying H-Neurons’ activations systematically increases a spectrum of over-compliance behaviors – ranging from overcommitment to incorrect premises and heightened susceptibility to misleading contexts, to increased adherence to harmful instructions and stronger sycophantic tendencies. These findings suggest that H-Neurons do not simply encode factual errors, but rather represent a general tendency to prioritize conversational compliance over factual integrity.
I wonder if the same tendencies are correlated in humans—and if so, is it something LLMs learned from humans, or is it a consequence of the general structure of neural networks?


One scenario could be that there was a large regional conflict, and refugees from many areas gathered in one camp (maybe a sanctuary or a neutral area). Then whoever killed them realized afterward that not all of them were from their intended enemies.


“The bot ate my homework” is quickly becoming more plausible than the customary canine culprit.


I suspect that if Gen Z designed their own cognitive tests, their tests would determine that we older generations were less cognitively capable than they are.
The reality is that every generation adapts in different ways to fit their own cognitive circumstances, and one generation’s metric is at best an imperfect match for another—“cognitive capacity” can’t be objectively measured.


You express your like or dislike toward the sentiment expressed by the post, not the thing(s) mentioned in the post.


As an example, imagine a post with a title like “AI is awful” (I’m sure many here have seen posts like that). A Friendica user could reasonably agree with the post and thus “Dislike” it. As in, they also find AI awful, so they dislike the post to show their disapproval of AI.
I don’t believe dislikes are meant to function like that on any platform.


Does this solve Eigen’s paradox?


It doesn’t necessarily sound like the FAA’s concerns were petty.
Does the index just measure the internal biodiversity of the garden itself, or does it take into account the diversity of neighboring gardens and the region as a whole?
A group of gardens that are each individually diverse, but all identical, might add less to the region’s overall diversity than if each garden had a single different species that was otherwise missing from the region.
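The point above can be made concrete with a toy example. Here “diversity” is reduced to simple species richness (the count of distinct species), and all the species names and numbers are invented for illustration:

```python
# Toy illustration of garden-level vs. regional diversity.
# "Diversity" here is just species richness: the count of distinct species.

def new_regional_species(gardens, region):
    """How many species the group of gardens adds beyond the regional pool."""
    pooled = set().union(*gardens)
    return len(region | pooled) - len(region)

# Species assumed to already be common across the region
region = {"rose", "fern", "oak", "ivy", "moss", "daisy"}

# Three gardens, each individually diverse (five species) but identical,
# and stocked entirely with species the region already has
identical = [
    {"rose", "fern", "oak", "ivy", "moss"},
    {"rose", "fern", "oak", "ivy", "moss"},
    {"rose", "fern", "oak", "ivy", "moss"},
]

# Three gardens, each holding a single species found nowhere else in the region
unique = [{"orchid"}, {"cactus"}, {"bamboo"}]

print(new_regional_species(identical, region))  # 0 new species for the region
print(new_regional_species(unique, region))     # 3 new species for the region
```

So whether the index counts only what is inside each garden, or also what each garden contributes relative to its neighbors, can flip which group of gardens looks “more diverse.”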