
The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.

    • Voroxpete@sh.itjust.works · 7 points · 11 hours ago

      They’re not. Conscience has nothing to do with this.

      They just don’t think the PR hit is worth it.

      Whenever companies choose to act in a way that we perceive as good, it’s because we were the voice of reason, not them.

    • wizardbeard@lemmy.dbzer0.com · 7 points · 12 hours ago

      While I’m glad they’re drawing a line, they’re only splitting hairs. Anthropic is already working deeply with the US government.

    • Iconoclast@feddit.uk · 11 points · 14 hours ago

      Anthropic was founded by former OpenAI employees who left largely due to ethical and safety concerns about how OpenAI was being run. This is just them sticking to their principles.

        • Iconoclast@feddit.uk · 3 points · 10 hours ago

          I still think they deserve some credit for at least trying to do the right thing. I don’t envy the position they’re in.

          Everyone’s rushing toward AGI. Trying to do it safely is meaningless if your competition - the ones who don’t care about safety - gets there first. You can slow things down if you’re in the lead, but if you’re second best, it’s just posturing. There is no second place in this race.

      • XLE@piefed.social · 3 points · 10 hours ago

        Anthropic’s “ethical” concerns were performative. They only fearmonger about fictional things that will make their product sound powerful (read: worth throwing money into).

        They try to scare people with fictional stories of AGI, a thing that isn’t happening, while ignoring widespread CSAM and sexual harassment generation, a thing that is happening.

        • Iconoclast@feddit.uk · 1 point · 10 hours ago

          Are we not moving toward AGI? Because from where I stand, I only see three scenarios: either AI research is going backwards, no progress is being made whatsoever, or we’re continuing to improve our systems incrementally - inevitably moving toward AGI. Unless, of course, you think we’re never going to reach it, which I view as quite an insane claim in itself.

          If we’re not moving toward it, then I’d love to hear your explanation for why we’re moving backwards or not making any progress at all.

          Whether we’re 5 or 500 years away from AGI is completely irrelevant to the people who worry about it. It’s not the speed of the progress - it’s the trajectory of it.

          • XLE@piefed.social · 4 points · 9 hours ago (edited)

            We are not “moving towards AGI” in any way with any modern technology, in the same way that we are not “moving towards FTL travel” because a car company added cylinders to an engine.

            The real “AI” dangers are people like Eliezer Yudkowsky, a man who scares vulnerable people, sexually abuses them, and has spawned at least one murderous cult.


            Dario is one of the biggest AGI bullshit peddlers.

            In October 2023, Amodei joined The Logan Bartlett show, saying that he “didn’t like the term AGI” because, and I shit you not, “…because we’re closer to the kinds of things that AGI is pointing at,” making it “no longer a useful term.” He said that there was a “future point” where a model could “build dyson spheres around the sun and calculate the meaning of life,” before rambling incoherently and suggesting that these things were both very close and far away at the same time. He also predicted that “no sooner than 2025, maybe 2026” that AI would “really invent new science.”

            • Iconoclast@feddit.uk · 1 point · 9 hours ago (edited)

              We are not “moving towards AGI” in any way with any modern technology

              So that means you believe AI research is completely frozen, or moving backwards. Please explain.

              Comparisons to faster-than-light travel are completely disingenuous and bad faith - that would break the laws of physics and you know it.

              You can also keep your red herrings to yourself. I’m discussing ideas here - not people.

              • XLE@piefed.social · 1 point · 9 hours ago

                According to Dario Amodei, this is the year we are getting New Science. And apparently he believes in Dyson Spheres too. How do we feel about that?

                Anthropic is not special. They’re doing the LLM thing like everybody else. The Godfather of AI, Yann LeCun himself, said LLMs were a dead end on this front. But even if he hadn’t chimed in, it’s your job to show that they’ll lead to AGI, and how; it’s not my job to show that they won’t.

                • Iconoclast@feddit.uk · 1 point · 8 hours ago

                  If you’re just gonna keep ignoring every single point I make and keep rambling about unrelated shit, then there’s nothing left to discuss here. If you actually had an argument, you would’ve made it by now.

                  • XLE@piefed.social · 1 point · 8 hours ago (edited)

                    Your claim: AI seems to be getting better, therefore AGI will happen

                    My rebuttal: they aren’t linked

                    Other important things you must reconcile with: the sexual abuse, the death toll, etc from the True Believers

                    Does that clear matters up?

    • SuspciousCarrot78@lemmy.world · 7 points · 14 hours ago

      …because every now and again, for the briefest of moments, one of them shows themselves not to be run by entirely evil, lecherous humps?

      Blink and you (or the shareholders) might miss it.

      • Voroxpete@sh.itjust.works · 5 points · 11 hours ago

        Don’t buy the hype. They’re not acting in good conscience; they’ve just weighed the pros and cons and decided that the PR hit isn’t worth it.

          • XLE@piefed.social · 2 points · 10 hours ago

            When a CEO tells you who he is, believe him the first time.

            I thought we had all learned this lesson with Elon Musk, who also pretended to be the good guy. We’ve already got a ton of red flags about Dario Amodei.