• railcar@midwest.social · 31 points · 6 hours ago

    It’s OK to hate AI slop and recognize the immediate threat it poses to cyber security. At least they are trying to mitigate it; there have been no similar actions from the other frontier labs. They are deliberately helping open source projects with little funding to keep pace.

    https://www.anthropic.com/glasswing

    • sunbeam60@feddit.uk · 8 points · 3 hours ago

      Anthropic right now are the good people.

      That probably won’t last. But out of a bad bunch they’re the least bad.

      • GreenKnight23@lemmy.world · 5 points · 3 hours ago

        AI is fascism. full stop.

        anthropic is just another cog in the fascist wood chipper that’s eating away at our autonomy and choices.

        with AI, all roads lead to fascism.

        • Crackhappy@lemmy.world · 3 points · 38 minutes ago

          Oh come on my dude. Being a Luddite isn’t going to help anyone. I’m not saying AI is a force for good, but it is a useful tool in the right hands. Like an axe or a shotgun.

          • GreenKnight23@lemmy.world · 1 point · 26 minutes ago

            last I checked, shotguns that explode in your hands aren’t good for anything, and axes that don’t cut are just hammers.

            I’m not resistant to AI or LLMs.

            I’m resistant to corporate interests ignoring laws to create a product that is being used to subjugate people.

            there’s a difference.

          • GreenKnight23@lemmy.world · 3 points · 2 hours ago

            in 2024 Anthropic partnered with Amazon and OpenAI to provide Claude to US defense and intel agencies.

            in 2025 Anthropic accepted $200 million from USDOD to use AI in military affairs.

            in 2026 the military was ordered to stop using Anthropic services. two days later it was used in attacks against Iran.

            also in 2026 an unsealed court filing found that Anthropic had an internal project called “Project Panama” which included the “effort to destructively scan all books in the world”.

            AI is fascism. full stop.

            • captain_solanum@sh.itjust.works · 1 point · 4 minutes ago

              I don’t mean to start a full argument since I sense we have quite different views, but maybe you could tell me where I go wrong here. Say, for the sake of argument, that the entire Trump admin is fascist. I think there are still many places to break the chain of fascism before you get to Anthropic’s models. (I use this definition of fascism). I think:

              1. Primary purpose of the DoD is to defend the US and allies against actual invasions of actual land, everything else is just stupid shit the political system allows for and incentivizes. I don’t think this primary purpose, setting aside BS random wars, is fascist so I don’t think the organization is fascist either.
              2. I’m not certain that contractors of the DoD, which is not inherently fascist but which is, for the argument, said to be controlled at the top by fascists, become members of the ideology or heavily associated with it when they take contracts, or when they later live up to those contracts. You claim “when a private sector company provides services to a government that is obtusely fascist, it itself becomes a tool in which fascist power is concentrated.” I think this is far too general and strong to be true. Is the Department of Agriculture a tool where fascist power is concentrated? Is a farmer who cooperates with the USDA a tool where fascist power is concentrated? Is the corn they produce also that, as an analogy to the LLM? The DoD facilitates a lot of horrible stuff, but the reach of the assumed-fascist Trump admin only goes as far as they can order changes within the DoD and within the DoD’s contracted corporations; it doesn’t spread like fire does.
              3. Anthropic has quite a lot of transparency with what “values” they try to get their models to espouse, and the models are generally politically neutral. Regarding your claim that “when a private sector company provides services to a government that is obtusely fascist, it itself becomes a tool in which fascist power is concentrated”: it’s an entirely different thing to be a tool of fascism than to be fascist. Anthropic being a tool in which fascist power is concentrated doesn’t give me any reason that said fascism would “spread” (however such a thing could even happen) to their models.

              So in my view the chain between Trump Admin -> DoD -> Anthropic -> Claude Sonnet 4.6, and in the opposite direction, is pretty weak, and not enough that I would call the model fascist. I think this is especially true now that the use of the model is being phased out (?). It’s only in the “readily espouses or promotes views connected to fascism” or “any usage is directly funding fascist organizations in a major way” senses that I feel a model could be described as fascist (or AI in general could be).

              To analogize again, I don’t think a Bernie supporter working in the DoD is automatically a fascist and certainly don’t think that purchasing an old TV from them is supporting fascism (or that the TV is fascist, even if they had previously used it in their office at the DoD).

              The book thing I’m not sure how you connect to fascism. It might be ultra-bad, it might be copyright infringement, but it doesn’t feel like fascism to me beyond surface-level comparisons to book burning.

            • [object Object]@lemmy.world · 1 point · 2 minutes ago

              You know that Intel and AMD both grew up on government contracts, specifically military ones? Sure hope you don’t use Intel or AMD.

            • cecinestpasunbot@lemmy.ml · 1 point · 16 minutes ago

              This is just capitalism. As profits fall due to competition between capitalists, they seek new ways to profit. Monopolization, intensified exploitation, and enshittification are some ways to deal with this problem. However, if those aren’t working, capitalists will try to fuse themselves to the state. They use state violence to pursue profits through hyperexploitation and imperialism. That’s what we call fascism. It’s not distinct from capitalism, just an extension of it.

              AI companies are no different from any of the companies that preceded them. They are so overleveraged it’s unlikely they can survive in the long run without the state. That’s why they are comfortable working with fascists. We could have computer scientists creating similar models that actually benefit humanity, but it’s not going to happen under capitalism.

              • GreenKnight23@lemmy.world · 3 points · 1 hour ago

                no. I’m not.

                when a private sector company provides services to a government that is obtusely fascist, it itself becomes a tool in which fascist power is concentrated.

                I think you’re just too naive to understand what is actually happening. that or you’re too stupid to notice the noose being slipped around your neck.

                • Railcar8095@lemmy.world · 5 points · 1 hour ago

                  Ahh I see. Until AI, no private company offered their services to fascists. And it’s only AI, it has nothing to do with maximizing profit.

                  We get it, you don’t like AI. But thinking that the problem with those companies is AI and not capitalism is not noticing the noose.

  • spectrums_coherence@piefed.social · 47 points · edited · 39 minutes ago

    LLMs are very good at programming when there are a huge number of guardrails around them. For example, exploit testing is a great use case, because getting a shell is getting a shell.

    They kind of act as a smarter version of the infinite monkeys, one that can try and iterate much more efficiently than a human does.

    On the other hand, in tasks that require creativity or architecture, and in projects without guardrails, they tend to do a terrible job, often yielding a solution that is more convoluted than it needs to be, or just plain incorrect.

    I find it is yet another replacement for “pure labor”, where the most unintelligent part of programming, i.e. writing the code, is automated away. While I will still write code from scratch when I am trying to learn, I likely will be able to automate some code writing, if I know exactly how to implement it in my head and I also have access to plenty of testing to guarantee correctness.
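    That “tests as guardrails” workflow can be sketched roughly like this; the `slugify` helper and its assertions are hypothetical, just to show the shape (the human fixes the tests first, and the generated code only counts if it passes):

```python
# Sketch of "tests as guardrails": the human-authored assertions below are
# the spec; a generated implementation is only accepted once they all pass.

def slugify(title: str) -> str:
    """Candidate implementation (the part one might let an LLM write)."""
    # Lowercase alphanumerics, everything else becomes a separator.
    cleaned = "".join(c.lower() if c.isalnum() else " " for c in title)
    return "-".join(cleaned.split())

# Human-written guardrails, fixed *before* any code is generated.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces   everywhere ") == "spaces-everywhere"
assert slugify("already-a-slug") == "already-a-slug"
```

    If an assertion fails, the generated code is thrown away and regenerated; you never have to trust the model, only the tests.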

    • RamenJunkie@midwest.social · 3 points · edited · 1 hour ago

      They are also great for programming one-off personal projects that, frankly, don’t have the usage scale that needs rigorous security oversight. Especially since, like, if you did it yourself, you probably were not sanitizing the inputs (etc.) anyway. You were slapping down some Python code and moving on.

      Like, I don’t care if my script to convert Wordpress exports to Markdown files crashes if you feed it a JPEG. I am the only one using it, for this data manipulation task.
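      For what it’s worth, the whole script for that kind of job can stay tiny. A minimal sketch, assuming the standard WXR export format (an RSS document where each post is an `<item>` with a `<content:encoded>` body); the function names and the heading-only “conversion” are made up for illustration:

```python
# Minimal sketch of a one-off WordPress-export-to-Markdown converter.
# Assumes a standard WXR export; no input sanitizing, because the author
# is the only user. Feed it a JPEG and it will simply crash, as discussed.
import xml.etree.ElementTree as ET

NS = {"content": "http://purl.org/rss/1.0/modules/content/"}

def export_posts(source):
    """Return (title, body) pairs for every <item> in a WXR file or file object."""
    root = ET.parse(source).getroot()
    return [
        (item.findtext("title") or "untitled",
         item.findtext("content:encoded", default="", namespaces=NS))
        for item in root.iter("item")
    ]

def to_markdown(title, body):
    # A real converter would translate the HTML body; this just adds a heading.
    return f"# {title}\n\n{body}\n"
```

      (A real run would loop over the posts from `export_posts("export.xml")` and write one `.md` file each.)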

    • lonesomeCat@lemmy.ml · 2 points · 1 hour ago

      The thing is, you know how it is in your head, and you need to lay out that entire context.

      And after that you MUST review the code, because you never know. I wouldn’t call it automation if I have to double-check EVERY TIME.

    • Serinus@lemmy.world · 32 points · 6 hours ago

      People have trouble with the middle ground. AI is useful in coding, but it’s not a full replacement. That should be fine, except you’ve got the AI techbros and CEOs on one end thinking it will replace all labor, and you’ve got the backlash to that on the other end that wants to constantly talk about how useless it is.

        • MinnesotaGoddam@lemmy.world · 4 points · edited · 3 hours ago

          the times i trust LLMs: when i am using it to look up stuff i have already learned, but i can’t remember and just need to refresh my memory. there’s no point memorizing shit i can look up and am not going to use regularly, and i’m the effective guardrail against the LLMs being wrong when I’m using them.

          the times i don’t trust the LLMs: all the other times. if i can’t effectively verify the information myself, why am i going to an unreliable source?

          having to explain that nuance over and over, it’s just shorter and easier to say the llm is an unreliable source. which it is. when i’m not being lazy, my output doesn’t need testing (it still gets at least 2 reviews, but the last time those reviews caught anything was years ago). the llm’s output always needs testing.

  • zieg989@programming.dev · 112 points · 9 hours ago

    I would not be surprised if Anthropic actually hired a real developer to make these PRs as a marketing stunt.

    • testaccount789@sh.itjust.works · 57 points · 8 hours ago

      In 2021, when Amazon launched its first “just walk out” grocery store in the UK in Ealing, west London, this newspaper reported on the cutting-edge technologies that Amazon said made it all possible: facial-recognition cameras, sensors on the shelves and, of course, “artificial intelligence”.
      An employee who worked on the technology said that actual humans – albeit distant and invisible ones, based in India – reviewed about 70% of sales made in the “cashier-less” shops as of mid-2022.

      Source: The Guardian

      UK AI company builder.ai has been tricking customers and investors for eight years – selling an advanced code-writing AI that, it turns out, is actually an Indian software farm employing 700 human developers.

      Source: ACS Information Age

      • baguettefish@discuss.tchncs.de · 9 points · edited · 6 hours ago

        builder.ai was genuine AI; it’s just that the company simultaneously also did contracted development with real humans, and journalists got confused.

        there’s a really good youtube documentary i watched which actually got into the tools and software used, but I can’t find it anymore. either way, you can’t dress up humans coding as AI. it’s not fast enough.

    • BestBouclettes@jlai.lu · 124 points · 9 hours ago

      Well, if the model detected an issue, and a human tested it to make sure it was real and then fixed it, I think that’s an acceptable use of AI tools.

  • CannonFodder@lemmy.world · 60 points · 9 hours ago

    AI tools can detect potential vulnerabilities and suggest fixes. You can still go in by hand, verify the problem, and carefully apply a fix.

    • shirasho@feddit.online · 20 points · 6 hours ago

      AI is actually SUPER good at this, and it is one of the few places I think AI should be used (as one of many tools, ignoring the awful environmental impacts of AI and assuming an on-prem model). AI is also good at detecting code performance issues.

      With that said, all of the recommended fixes should be applied by hand.

      • _hovi_@lemmy.world · 7 points · 4 hours ago

        Yeah, I would add: also ignoring how the training data is usually sourced. I agree AI can be useful, but it just feels so unethical that I find it hard to justify.

        I’m a big LLM hater atm but once we’re using models that are efficient, local and trained on ethically sourced data I think I could finally feel more comfortable with it all. Can’t be writing code for me though - why would I want the bot to do the fun part?

        • shirasho@feddit.online · 2 points · 59 minutes ago

          Exactly my thought. I got into software development because designing and writing good code is fun. It is almost a game to see how well you can optimize it while keeping it maintainable. Why would I let something else do that for me? I am a software engineer, not a prompt writer.

  • General_Effort@lemmy.world (OP) · 69 points · 9 hours ago

    (In case someone has been living under a rock for the last 48 hours: Anthropic’s new model “Mythos” has been finding a lot of new vulnerabilities. This is about patching one.)

  • sun_is_ra@sh.itjust.works · 26 points · 10 hours ago

    Maybe he meant the code quality was so good it’s like a human wrote it.

    After all, if the code is good and follows all the best practices of the project, why reject it just because it was an AI who wrote it? That’s racism against machines.

    • Mark with a Z@suppo.fi · 37 points · 8 hours ago

      One big reason people outright reject AI-generated code is that it shifts the work from the author to the reviewer. AI makes it easier to make low-effort commits that look good on the surface but are very flawed. So far, LLMs don’t match the wisdom of an experienced software dev.

      • bamboo@lemmy.blahaj.zone · 6 points · 6 hours ago

        This is what happened with FFmpeg when Google was trying the same thing to promote their models. If the code is good and doesn’t put unnecessary burden on the reviewer, then that’s great. But when the patches are sloppy or the review load is overwhelming, it doesn’t help the project, it hinders it.

        • Serinus@lemmy.world · 3 points · 6 hours ago

          It’s almost like there should be a human in the loop to guide and review what the AI is doing.

          The thing works a lot better when I give it smaller chunks of work that I know are possible. Works best when I know how to implement it myself and it just saves me from looking up all the syntax.

      • sun_is_ra@sh.itjust.works · 7 points · 7 hours ago

        Totally agree; it’s the same problem with published scientific papers.

        I just assume that since this code submission was done by Anthropic itself - probably to demonstrate how good their AI has become (I don’t know the actual background to this story) - the FFmpeg team gave it more consideration, as opposed to a random amateur.

    • lath@lemmy.world · 50 points · 10 hours ago

      If it’s racism, it’s also slavery. Can’t have one without the other here.

  • Onno (VK6FLAB)@lemmy.radio · 15 points · 10 hours ago

    Hold on, wasn’t one of the “features” of the “leaked” Assumed Intelligence source code the “human”-like version?