• Vegan_Joe@piefed.world · 17 points · 23 hours ago

    Dumb question, but…is Claude worse than GPT or Gemini?

    I was under the impression that it was the lesser of evils

    • ptu@sopuli.xyz · 6 points · 15 hours ago

      I just started with Claude and I can’t yet distinguish when it has actually done something it says it has done. With ChatGPT I can see through the bullshit quite well by now. At first I was happy when I thought Claude was rid of that bullshit, but turns out it’s just a different type of bullshit.

      The UI and file handling are better in Claude, though, and supposedly you can have it create skills, which are like instruction booklets on how to do certain tasks, and then export and share them. But the ones I created were lost over the weekend, so I’m not sure how robust they actually are.
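
      If a skill really is just a folder of instructions, it should be possible to keep your own copies outside Claude so a lost weekend doesn’t wipe them out. A rough sketch of that idea in Python (the folder layout and SKILL.md format here are my guess, not something I’ve confirmed against Anthropic’s docs):

          from pathlib import Path
          import zipfile

          # Hypothetical skill layout: a folder holding a SKILL.md that describes one task.
          # The exact format Claude expects may differ; this only shows the backup idea.
          skill_dir = Path("skills/weekly-report")
          skill_dir.mkdir(parents=True, exist_ok=True)
          (skill_dir / "SKILL.md").write_text(
              "---\n"
              "name: weekly-report\n"
              "description: Turn raw notes into a weekly status report.\n"
              "---\n"
              "1. Group notes by project.\n"
              "2. Summarise each group in two sentences.\n"
              "3. End with open questions.\n"
          )

          # Zip the folder so it can be shared, or restored if the hosted copy vanishes.
          with zipfile.ZipFile("weekly-report-skill.zip", "w") as zf:
              for f in skill_dir.rglob("*"):
                  zf.write(f, f.relative_to(skill_dir.parent))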

    • BJW@lemmus.org · 53 points · 23 hours ago

      They are the lesser of the available evils. Anthropic, the proprietors of Claude, were blacklisted by the US administration for refusing to greenlight their technology being used for fascism.

            • ivn@tarte.nuage-libre.fr · 3 points · 10 hours ago

              Yes, but not for targeting, as explained in the article I linked.

              The Maven Smart System is the platform that came out of those exercises, and it, not Claude, is what is being used to produce “target packages” in Iran.

              • subnormal@lemmy.dbzer0.com · 1 point · 9 hours ago

                Anthropic’s AI did data analysis for Project Maven, which was a system that used data analyzed by various sources to target a school. So the AI is part of the “kill chain”, no?

                • ivn@tarte.nuage-libre.fr · 2 points · 9 hours ago

                  I suggest you read the article.

                  The AI underneath the interface is not a language model, or at least the AI that counts is not. The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. They predate large language models by years. Neither Claude nor any other LLMs detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system.

                  • subnormal@lemmy.dbzer0.com · 1 point · 9 hours ago

                    Yes. I never said it was an LLM. It was probably some custom AI system made by Anthropic.

                    Are we agreed that some Anthropic AI system (not necessarily the Claude LLM) was in the kill chain? That was what I was trying to say from the beginning.

        • BJW@lemmus.org · 11 points · 18 hours ago

          That’s one way to spin it.

          My take on it is that it was used inappropriately, and when the fascists wanted it tailored for that abhorrent use, Anthropic refused. In retaliation, the fascists banned it for ANY use, so now Anthropic is suing to allow the sane to continue using it for its appropriate uses.

          • subnormal@lemmy.dbzer0.com · 4 points · 17 hours ago

            What sane use? And how does this company plan to prevent the fascists from using it to kill another 120 children?

            The only not-evil move is to not sell dual-use goods to fascists in the first place.

            • BJW@lemmus.org · 8 points · 16 hours ago

              You seriously can’t think of any sane use? How about categorizing large amounts of data? Brainstorming strategies for problem solving. Converting pseudocode to actual code. Troubleshooting error messages. I mean, there are dozens upon dozens of valid uses that harm no one.

              How does Bic plan to prevent murderers from stabbing people with their pens? How does Toyota plan to stop drivers from committing vehicular manslaughter? How does Hewlett-Packard plan on preventing fascists from saving manifestos? How does Apple plan on preventing sexual criminals from taking pictures of their victims?

              What’s that? Companies don’t need to accomplish impossible tasks to have a viable product? I guess it’s only AI that has insurmountable demands placed on it by reactionaries.

              The only not-evil move is to sit in a cave using sticks, once the trees figure out how to keep cavemen from beating their children with them.

              • subnormal@lemmy.dbzer0.com · 1 point · 10 hours ago

                I wasn’t clear. What I meant was: what sane things could a fascist military use AI for?

                “Reactionary” lmao. My friend, I use LLMs all the time. Just not the proprietary ones from companies that are in bed with fascists.

                • BJW@lemmus.org · 1 point · 3 hours ago

                  Your problem is clearly with the fascists, as it should be, and AI is just getting caught in the crossfire of your ire. You just can’t see/admit it yet.

                  Unless you live in a cave, which you obviously don’t since you’re here on the Internet sharing your wisdom with us, then you are participating in business and activities that enrich the fascists. It’s just a fact of life when they own everything. There is no ethical consumption under capitalism.

                  • subnormal@lemmy.dbzer0.com · 1 point · 2 hours ago

                    I have nothing against AI but everything against a certain AI company that is fully in bed with fascists.

                    There is no ethical consumption under capitalism.

                    Please do not use this slogan as an excuse not to seek out the least unethical option for your consumption.

    • ZoteTheMighty@lemmy.zip · 5 points · 17 hours ago

      Claude is almost always the better model compared to GPT. I find that this is a good leaderboard. However, both Claude and GPT have similar business models: make sure everything they do is completely proprietary, and keep everything behind a monthly paywall. They both run massive data centers to train their models, and neither really deserves the term “Artificial Intelligence”.

    • subnormal@lemmy.dbzer0.com · 13 points · 22 hours ago

      There are many lesser evils. Use open-source / open-weight AI like Kimi, GLM, Deepseek, Mistral, Olmo, Arcee, Minimax, Qwen, Exaone, NVidia, Sarvam…

      If you don’t have the hardware to run locally, you can pay for API access. If you find the company problematic for whatever reason, you can switch to the same model served by a third party (possible because the model weights are publicly released).
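
      Switching is usually just a matter of pointing an OpenAI-compatible client at a different base URL. A minimal sketch (the endpoints and model IDs below are only examples; check each provider’s docs for the current ones):

          from openai import OpenAI

          # Same open-weight model served by two different hosts (illustrative URLs/IDs).
          providers = {
              "first-party": {"base_url": "https://api.deepseek.com", "model": "deepseek-chat"},
              "third-party": {"base_url": "https://openrouter.ai/api/v1", "model": "deepseek/deepseek-chat"},
          }

          choice = providers["third-party"]  # swap this one line to change who serves the model
          client = OpenAI(base_url=choice["base_url"], api_key="YOUR_KEY_HERE")

          reply = client.chat.completions.create(
              model=choice["model"],
              messages=[{"role": "user", "content": "Summarise this paragraph in one sentence: ..."}],
          )
          print(reply.choices[0].message.content)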

    • hoch@lemmy.world · 16 points · 23 hours ago

      No. Many people here just hate LLMs in general and will use every opportunity to complain about them.

      • Appoxo@lemmy.dbzer0.com · 18 points · 22 hours ago

        Personally, I dislike how helpless and useless it makes my colleagues in research.
        No thought given; they take the first web result (and in most cases just accept the AI output as search gospel).

        In my case it’s only used for very obscure issue descriptions that my google-fu isn’t sufficient for, or for correlating weird bugs with each other.

        • BJW@lemmus.org · 6 points · 18 hours ago

          That’s the same reason I hate bicycles! They make travel too easy for everyone. Need to go somewhere? All my associates immediately reach for their bikes, and think of it as the default mode of travel. Heaven forbid they put actual effort into traveling by walking the whole way, or better yet, crawling so that they can include their arms in the endeavor like nature intended.

          I only use a bicycle when I’m going to a very obscure location and would have to do my crawling on dirt trails otherwise.

            • BJW@lemmus.org · 1 point · 3 hours ago

              I don’t know about yours, but my bike has led me to the ground more times than I’d like - face first in some cases. I have the chipped/broken teeth to prove it. Nothing is foolproof, and everything has inherent risk. It’s all relative to crawling, but nothing is risk-free.

          • Opisek@piefed.blahaj.zone · 3 points · 14 hours ago (edited)

            Except you should be comparing it to motorized wheelchairs. Suddenly all your associates forget how to walk, WALL-E style.

            • BJW@lemmus.org · 3 points · 12 hours ago

              The events of WALL-E happened over generations. If your associates have forgotten how to walk already, then they never knew how to begin with and were just faking it until something came along to save them. So at least now, with a wheelchair as a crutch, they can actually contribute rather than just pretending to be productive while getting nothing done in reality.

      • BJW@lemmus.org · 16 points · 23 hours ago

        I’d say 99.9% of people. You’re actually the first other person I’ve seen who doesn’t!

            • some_designer_dude@lemmy.world · 5 points · 18 hours ago

              Then build better guardrails. These are the tools of the future. (And I intend both meanings of the word “tools”.) AI is very good at following rules. In the absence of such rules, it takes someone far more experienced to drive these tools properly.
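
              Concretely, a guardrail can be as simple as writing the rules down and sending them with every request. A minimal sketch (the model name and rules are placeholders, and an OpenAI-compatible client is just one way to do it):

                  from openai import OpenAI

                  # House rules the model must follow on every request (placeholders).
                  GUARDRAILS = (
                      "- Never invent APIs; answer 'unknown' if unsure.\n"
                      "- Output diffs only, never whole files.\n"
                      "- Refuse requests outside the repo's scope.\n"
                  )

                  client = OpenAI()  # assumes an API key is set in the environment

                  def ask(task: str) -> str:
                      # The rules ride along as the system message, so they apply to every call.
                      resp = client.chat.completions.create(
                          model="gpt-4o-mini",  # placeholder model id
                          messages=[
                              {"role": "system", "content": GUARDRAILS},
                              {"role": "user", "content": task},
                          ],
                      )
                      return resp.choices[0].message.content

                  print(ask("Add input validation to the config parser."))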

              • ClownStatue@piefed.social · 3 points · 16 hours ago (edited)

                This is a really good point! It used to be “a computer is only as smart as its user.” The same can be said of AI: the model’s results are largely dictated by the prompt. While anyone can prompt an AI with whatever they want, it takes experience to use an AI to develop a project from idea to v1. At the end of the day, the AI can search the web better than me and type faster than I can, but I know what I want my code to do, and I know how I want it done. Those two things don’t have to be mutually exclusive.

    • Leon@pawb.social · 12 points · 23 hours ago

      In what manner? Capabilities, or belonging to an evil corporation that happily steals data and works to undermine democracy?

    • IndustryStandard@lemmy.world · 2 points · 21 hours ago (edited)

      It is better than GPT and Gemini, but not great. Claude has had some US military contracts, at least as far as public knowledge goes.

      https://www.cnbc.com/2026/03/04/pentagon-blacklist-anthropic-defense-tech-claude.html

      Defense Secretary Pete Hegseth declared on X that any contractor or supplier doing business with the U.S. military is barred from commercial activity with Anthropic.

      The announcement came after Anthropic executives refused to comply with the government’s demands over its model use. They wanted assurances that their AI would not be tapped for fully autonomous weapons or mass domestic surveillance of America.

      Anthropic’s models are still being used to support the U.S. military operations in Iran, even after the announcement from the Trump administration, as CNBC previously reported.

    • rozodru@piefed.world · 2 points · 21 hours ago

      Lesser of the evils, sure. That being said, as far as quality goes, Claude has taken a very noticeable decline within the past several months. It used to be half decent, but now 8 or 9 times out of 10 you’re going to get a hallucination for a solution. Anthropic has REALLY dropped the ball with Claude and Claude Code. Absolute garbage LLM now.