• XLE@piefed.social · 3 hours ago

    Sorry, not quite, but close. From 404 Media:

    When users confronted Clinton with their concerns, he brushed them off, said he would not submit to mob rule, and explained that AIs have emotions and that tech firms were working to create a new form of sentience, according to Discord logs and conversations with members of the group.

  • Hackworth@piefed.ca · 3 hours ago

      Oh, that guy! To be fair, that’s one employee, not Anthropic’s actions or position. You mentioned forcing their software on minorities while insisting it was better than it was, and I was getting OLPC flashbacks. But Anthropic looking for funding in the UAE and Qatar is shitty. I can’t seem to find anything about whether or not they went through with those contracts.

    • XLE@piefed.social · 3 hours ago (edited)

        Jason Clinton is Anthropic’s Deputy Chief Information Security Officer. That means Jason knew better, and he was using his position as a moderator (and supposedly a security expert) to try gaslighting a vulnerable minority into believing his favorite toy was “secure” when it was not.

      • Hackworth@piefed.ca · 3 hours ago

          I mean, I’m not gonna defend him. But fucking up a Discord that you’re a mod of isn’t really in the same ballpark as taking money from dictators or directing fully autonomous strikes. Also, from the read, it really sounds like that Deputy CISO was a prime example of cyber-psychosis, or AI mania, or whatever we’ve decided to call it. And I assume he is part of the same vulnerable minority?

        • XLE@piefed.social · 3 hours ago (edited)

            Every example we have of Anthropic’s behavior paints a picture of an immoral company that pretends to be moral. It’s bad enough that they continue doing harm, but then they dress it up with phrases like “AI Safety” and “Information Security”. (And every press release they create to describe how scary-good their system is tends to be followed by a sudden cash infusion from an openly morally bankrupt company like Google or Amazon.)

            I reserve zero empathy for the people on the abuser side of an abusive dynamic. Maybe Elon Musk is autistic too. I don’t really care. Only Moloch knows their hearts. I’ll judge them for their actions.

          • Hackworth@piefed.ca · 3 hours ago

              I did find an update on that funding, btw. Anthropic already took money from Qatar (the QIA), but the amount isn’t known - likely around $100M. The UAE deal has yet to happen, but if it does, it would be “hundreds of millions”.

            • XLE@piefed.social · 2 hours ago

                Interesting. I appreciate you doing the digging to check. It’s frustrating that people spent so much time looking at the fact that Anthropic had an uncrossed red line that they didn’t look at all the red lines that were already crossed - in the very article about those supposed red lines. Such is PR, I guess.

                I suppose you saw that “He Will Not Divide Us 2.0” letter from OpenAI and Google employees who promised to stand behind Anthropic. Never mind the fact that OpenAI split… Doesn’t anybody know Google already does mass surveillance of Americans?

                …I ramble.