Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME.

In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company's safety measures were adequate. For years, its leaders touted that promise, the central pillar of the Responsible Scaling Policy (RSP), as evidence that Anthropic was a responsible company, one that would withstand market incentives to rush the development of a potentially dangerous technology.

But in recent months the company decided to radically overhaul the RSP, scrapping the promise not to release AI models when it cannot guarantee adequate risk mitigations in advance.

  • FaceDeer@fedia.io · 7 hours ago

    Feb 24, 2026 1:00 PM MT

    This happened days before Trump threw his toddler tantrum.

    Just another example of how attempting to appease wannabe-autocrats doesn’t work. Best you can do is maybe distract or delay them a bit, but be ever ready for them to turn on you and demand more.

    • timestatic@feddit.org · 5 hours ago

      This wasn't for Trump tho, this was for Anthropic themselves, so they could develop AIs quicker in the AI race. So mainly a business incentive, unrelated to Trump, I believe.