Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.
Instead of self-imposed guardrails constraining its development of AI models, Anthropic is adopting a nonbinding safety framework that it says can and will change.
In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market.
The announcement is surprising, because Anthropic has described itself as the AI company with a “soul.” It also comes the same week that Anthropic is fighting a significant battle with the Pentagon over AI red lines.
It’s not clear whether Anthropic’s change is related to its meeting Tuesday with Defense Secretary Pete Hegseth, who gave Anthropic CEO Dario Amodei an ultimatum: roll back the company’s AI safeguards or risk losing a $200 million Pentagon contract. The Pentagon threatened to put Anthropic on what is effectively a government blacklist.
But the company said in its blog post that its previous safety policy was designed to build industry consensus around mitigating AI risks – guardrails that the industry blew through. Anthropic also noted its safety policy was out of step with Washington’s current anti-regulatory political climate.
Anthropic’s previous policy stipulated that it should pause training more powerful models if their capabilities outstripped the company’s ability to control them and ensure their safety — a measure that’s been removed in the new policy. Anthropic argued that responsible AI developers pausing growth while less careful actors plowed ahead could “result in a world that is less safe.”
As part of the new policy, Anthropic said it will separate its own safety plans from its recommendations for the AI industry.
Anthropic wrote that it had hoped its original safety principles “would encourage other AI companies to introduce similar policies. This is the idea of a ‘race to the top’ (the converse of a ‘race to the bottom’), in which different industry players are incentivized to improve, rather than weaken, their models’ safeguards and their overall safety posture.”
Mr Whisky wants AI to kill people, find dirt on people, surveil people.
Wow. Wow. I heard the news the other day of hegswish wanting them to back down from their red line against allowing AI to control weapons systems or perform mass surveillance of the populace. Now I’m envisioning them courageously shouting, “We will never back down and allow such travesties!!!” Followed by Morgan Freeman’s disembodied voice saying, “They did.”
When the Anthropic powered murder bot comes through your door, remember this moment. Know that there is no such thing as an ethical corporation.
Oops, pretending to take the moral high ground is out the window as soon as MIC dollars are at risk.
Remember kids, the term “Business Ethics” is an oxymoron. Corporations don’t have ethics, they have financial interests and PR.
here is the only line that matters: “hinder its ability to compete.”
that means “to profit.”
reminder: all corporations, everywhere and in every industry, care first about profit. everything else is about how it relates to or alters profit. literally every word is a goddamned lie, sold to help protect profit.
also their logo looks like a butthole. who does that?!
I think Anthropic is in the “ah shit, we’re dying” startup stage. They’re not looking at profit so much as trying not to lose ALL of their shirt in the coming crash. So they pivoted and ditched their morals. Hate them for that.
Don’t hate them for being greedy yet: they’re not there. Maybe they’ll never get there. But they’ve lived long enough to become the thing they strived to destroy, so that’s a milestone.
The timing is certainly predominant.
I think I’m cancelling my subscription over this…
Prescient?
With this administration? Prepubescent.
Predatory
For those keeping score, their “don’t be evil” phase lasted 5 years.
Anthropic has described itself as the AI company with a “soul.”
This is silicon valley stupidity at its finest.
AIs do not have souls. But guess what: The “soul” file is what runs trash like OpenClaw.
And companies definitely don’t have souls.
Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market… it had hoped its original safety principles “would encourage other AI companies to introduce similar policies.
Rules for thee and never for me. (BTW, these rules sucked, and mostly didn’t address actual dangers.)
it had hoped its original safety principles “would encourage other AI companies to introduce similar policies.
This is exactly why self regulation is bullshit. Even if one company decides to do the right thing (and that’s kinda arguable here as you pointed out), plenty of other companies won’t handcuff their ability to profit, no matter the ethical implications.
don’t be evil

Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.
Anyone surprised?
Surprised and disappointed, both by them and the system (capitalism) that stops us from having nice things.
If we ever crack AGI, it’s probably going to be because the market optimised for the better shilling of dick pills, crypto scams and spyware.
That’s…fucking bleak, in the Hide Pain Harold way.
Amodei left OpenAI partially because of ethical concerns, so yeah, I’m slightly surprised and even a bit disappointed.
TLDR = money matters more than morals and safety to them
Welp, was a good run, time to ditch Anthropic.
You either die a hero… Something something.
this is like the textbook definition of Anthropic. they ONCE had a great LLM, it was reliable, great solutions, Claude Code was pretty good.
now? it’s all garbage. all of it. Sonnet 4.6 hasn’t improved a damn thing. If you use Code you’re literally shooting yourself in the foot now.
All it knows how to do now is hallucinate. that’s it. They should pivot Claude to being a creative writing LLM because man it’s FANTASTIC at making stuff up and making it sound believable.
They should have died the hero.
Anthropic was never an ethical company, just one that released a competent product.
Their attempts to look ethical were reminiscent of a horror movie villain donning someone else’s freshly peeled face in an attempt to look better
I think Claude Code was genuinely useful until like version *.0.76 or something; since they bumped to *.1 it has become a shitshow. Don’t know any alternative, though.
Codex 5.3.
Claude, play - “The Sound of Silence”
Hello darkness my old friend 🎵 🎶
There are tons of alternatives; crush and opencode come to mind first.