

I believe Ageless Linux is a meme distro to do exactly that.


Title aside, I thought it was a very good article worth reading. Would recommend.


Unsurprisingly, centralizing your data between the private and public sector means everything is vulnerable at a centralized location.
The exposed materials include files labeled ‘secret’ in Chinese
In Chinese?!



tl;dr this new policy is dumb and bad. References to specific things like Antifa make no logical sense as grounds for censorship, because there were already reasonable rules in place to handle actual problems.
In some cases, the content that Meta considers a threat signal is commonsensical. If, for instance, a user mentions bringing a weapon to an event, the company flags it as a threat signal. But in other cases, Meta’s process for identifying threat signals is more vague. Under the new rules, Meta might trigger a threat signal when a user posts a “visual depiction of a weapon,” a “reference to arson, theft, or vandalism,” or “military language,” if accompanied by the word “antifa.”
If “antifa” is mentioned in the context of “references to historical or recent incidents of violence” — a category so sprawling that it includes “historic wars” and “battles” — that post will also be penalized. Should Meta apply this rule as written, the company could, for instance, restrict posts comparing the antifascist nature of World War II to the contemporary antifa movement.
It’s difficult to believe any intellectual discussion would happen on Facebook, but this rule further cements the suppression of it.


It’s a fact AI is effectively stealing water supplies, FaceDeer. Don’t be so glib about it.
And the extreme rhetoric comes from the industries themselves, including Sam Altman’s own mealy mouth. And from industry shills who get good money to pretend that the thing is powerful and dangerous and worth investing in. Yudkowsky, Hinton, etc.


And those are for contracted workers, the ones Uber specifically tries to use these loopholes for!
Facedeer is a well-known AI activist troll, his deflections can generally be ignored


This article mentions Microsoft’s announcement in early 2026 to start fixing genuine Windows issues, and every single update to this announcement is effectively “and that was a lie.” Updates have gotten worse not better, and hiding AI features out of shame is just creating a bigger problem.
Funny they’re taking a page out of the Mozilla playbook here, when they renamed the ill-planned “AI Window” to “Smart Window.”


Maddening as Clippy was, every single annoying suggestion was handwritten, and they were supposed to help you do things. Your mileage may vary, but I always thought Clippy was cute.
Spot too.


ByteDance applies real cryptographic protection to the data valuable to their business: ad impressions, click attribution, revenue tracking. But the device fingerprints they harvest from users? Those get the key-taped-to-the-doorframe treatment.
Frankly I want the opportunity to peer into everything, or at least prevent all of it


It seems Apple does this in more than just one place, in “controversial” regions (read: where a powerful enough nation can declare a place doesn’t or shouldn’t exist)
it takes a couple of minutes to test the claim and notice that the issue is affecting all of Lebanon and not only the Southern region where israel is invading and leveling entire villages to the ground as they did in Gaza.
zooming in around the Lebanon border you notice instantly that a lot of information is missing as the background becomes white: https://i.imgur.com/L2p1j15.png
and following this clue, you soon notice that the same issue of missing labels is also happening in Syria: https://i.imgur.com/8p1ANAZ.png


So you’re advocating in favor of more AI in more steps of the process?


This seems like an ill-thought-out decision, especially in a landscape where Linux should be differentiating itself from Windows, not following it.
The titular “slop” just means “bad AI generated code is banned” but the definition of “bad” is as vague as Google’s “don’t be evil.” Good luck enforcing it, especially in an open-source project where people’s incentives aren’t tied to a paycheck.
Title is also inaccurate regarding Copilot (the Microsoft-brand AI tool), as a comment there mentions
says yes to Copilot
Where in the article does it say that? The only mention of Copilot is where it talks about LLM-generated code having unverifiable provenance.


Other people in this thread say physics simulations are inherently chaotic. If an AI model is trained on inherently chaotic data, how will the results not be just as chaotic, or worse?
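A toy sketch (my own, not from the thread) of what "inherently chaotic" means here: in a chaotic system a perturbation far below any plausible measurement or model error grows exponentially, so two nearly identical inputs soon produce completely uncorrelated outputs. The standard textbook example is the logistic map with r = 4.

```python
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the chaotic logistic map x -> r*x*(1-x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-12)  # perturb by one part in a trillion

# Early on the two trajectories are indistinguishable...
print(abs(a[5] - b[5]))
# ...but after ~60 steps they have diverged to the scale of the whole [0, 1] range.
print(abs(a[60] - b[60]))
```

Any model that approximates such a system, learned or not, inherits this sensitivity: its prediction error at step n is bounded below by its initial error amplified the same way.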


Twitter levels are unattainable


I thought Session was decentralized and couldn’t be centrally shut down 🤔


Anthropic CEO Dario Amodei, in a recent essay, speculated on ways that we might “buy time” before the possibility that AI enslaves or destroys humanity. But meanwhile, AI companies have products to sell…
Anybody want to tell them?


Guy got in trouble because of the dumbest possible telephone game (among other things)
For example, although court documents claim Sanchez searched, “is the 900 mAh battery from a (Game Boy) capable of being used in a trigger device,” Sellers said that was actually a search from [his bail] supervisor, who was cross-referencing real searches from Sanchez to see if they could be used to make explosives.
[Bail supervisor] Coyle then took a screenshot of his own search history and sent it to the district attorney, leading to a violation of Sanchez’s probation and his rearrest, Sellers said.


It’s nice that OpenAI is being pulled in the for-profit direction and the non-profit direction at the same time, and is threatened with losing (more) money if it fails to do either.


The entire article can be summed up in 5 words:
an Anthropic official told CNN
Notable other passages include
Logan Graham, who heads the team at Anthropic [testing] its AI models’ defenses, told CNN
and
according to Anthropic
And
Anthropic said
And my personal favorite
Anthropic claims… CNN could not immediately verify this figure.


If you’re going to degoogle, make sure you don’t accidentally use Google servers with extra steps