• ell1e@leminal.space · 71 points · 1 day ago

    We need the equivalent investment now. If average code is cheap, then the scarce resource is no longer the ability to produce it. The scarce resource is the ability to read it, to navigate it

    You know what would help a lot with understanding the code one is working on? Writing it yourself without turning your brain off via AI.

    But that’s an insight the article somehow seems to be missing.

    • mindbleach@sh.itjust.works · 3 points · 16 hours ago

      Take away a calculator halfway through an exam, and suddenly, people are surly and unmotivated about simple long division.

      That’s how every ‘AI makes you stupid!’ article works. Like, ‘doctors used AI to detect more cancer, but when we took it away, they were worse at eyeballing it.’ Sorry, can we go back to the part about detecting cancer better?

      • ell1e@leminal.space · 7 points · 13 hours ago (edited)

        The doctors were worse, not just taking longer. So this would be more like people unlearning division.

        While people using calculators may occasionally unlearn division, that seems less problematic than doctors unlearning how to spot cancer on their own (at which point, I’m guessing, you’d run out of AI training data, since apparently AI can’t feed into AI without collapse) or software engineers unlearning how to write correct code.

        You also wouldn’t want a mathematician unlearning how to do division.

        • mindbleach@sh.itjust.works · 1 point · 3 hours ago

          The doctors were better, until someone yanked the tool away. That’s how every tool works! Even going from a handsaw to a table saw and back will make you lose some skill with the handsaw, because your brain focused on higher-level goals and finer motions. That’s not proof a table saw is bad for woodworking. The problem is “and back.”

          since apparently AI can’t feed into AI without collapse

          Have you checked on that narrative? It’s been a while. Things stopped getting yellow. Improvements continued.

            • mindbleach@sh.itjust.works · 1 point · 1 hour ago

              That’s a lot of “could” and “will” from an article a year old, primarily about concerns from two years ago, while image models today keep getting smaller and better. They didn’t find a second internet’s worth of JPEGs. Better training on the same data, or even better labels on less data, beats a simple obsession with scale.

              Yes, photocopying a photocopy will degrade, but diffusion is a denoising algorithm. Un-degrading an image is its central function. ‘Make it look less AI’ is how you get generative adversarial networks.

              Anyway, the grim truth is that the central concern is mistaken. Training data for cancer screening does not require the patient lived.

              • ell1e@leminal.space · 1 point · 58 minutes ago (edited)

                The article links a study. What’s your study that collapse isn’t a concern?

                For what it’s worth, my worry was never focused on cancer; these doctors were just an example, measured for what is likely a universal unlearning effect.

    • nomad@infosec.pub · 14 points · 1 day ago

      I always ask myself how many of these anti-AI warriors are actually proficient, professional coders. And I’m talking engineer level, not hobby level.

      LLMs are a tool. Give a power tool to a fool and the result is stupid at best, bloody at worst. Let’s call that vibe tooling and ask whether there’s any difference from vibe coding.

      Imho there is not. LLMs are a tool that can lift coding work up to a common level of quality when used by proficient people. They help with searching through and understanding vast outputs, as long as you know what to expect. It’s a miracle of intuition.

      It’s not a mind-reading tool that will just code your fantasy software for you. Hate it all you like, AI is here to stay; this is like hating cars in the age of horses. Cars are not magic, and neither is “AI”.

      • iglou@programming.dev · 2 points · 3 hours ago (edited)

        There are definitely real engineers who are strongly anti-AI. The problem, in my opinion, is that they just haven’t really tried working with these tools.

        They’re incredibly powerful tools, and they don’t only amplify bad developers; they amplify every developer who genuinely tries to work with them.

        The mistake people make is delegating the decision-making to the AI. Let the tool be a tool, not a brain. You architect, you design, you give the orders; it writes the code. You review the code. There you go: you have pretty good quality code, better than most devs will produce, following your design and architecture. You controlled the entire decision-making, and you did it in a fifth of the time.

        I also think that it has become too useful to disappear in engineering.

      • darkmarx@lemmy.world · 29 points · 1 day ago (edited)

        I have over 25 years of development experience. My current role is vice president of development and architecture, where I lead a team of 80+ devs, QAs, and architects. By any measure, I am one of those “engineer level” developers you speak of.

        Yes, LLMs are a tool, but one you should use sparingly. LLMs are pattern-recognition machines and are great for routine, been-there-done-that development. For anything that deviates from the norm, LLMs will try to force everything back into common patterns… even when those patterns are not correct. A well-designed system can be mangled into junk because the LLM doesn’t have enough context, or because something is new.

        Be skeptical of the rave reviews around coding agents and the use of LLMs for development. Much of the hype seems tied to developer skill. Less capable developers can use LLMs to appear more capable than they are. For good developers, LLMs seem to erode their skills as they rely on the tool instead of their own knowledge. I have seen this first hand.

        Overall, it seems LLMs raise skills of bad developers and hamper the skills of good developers. It’s creating a bunch of middling developers who are incapable of handling anything novel or complex.

        • nomad@infosec.pub · 2 points · 1 day ago

          Sounds good. Pretty sure you’re correct on most points. Agentic coding is bullshit for sure. I’m mostly talking about partner coding, code review, and some data interpretation, like screenshots of UI changes in CI, for example.

          • Nate Cox@programming.dev · 24 points · 1 day ago

            The goalpost escalation I constantly see in these threads is both hilarious and deeply frustrating.

            “You need to be a good dev to use these!” “I am a good dev and these tools suck.”

            “No like you need to be enterprise level good” “I am an enterprise level dev with credentials far exceeding the baseline offered.”

            “No but you need to have written code recently!!” “I was writing code yesterday.”

            I am now waiting for the obligatory “well your coworkers must just be fixing all your code you screw up” because the pro-ai crowd has no argument for the tech not based on “u suk”.

            • onlinepersona@programming.dev · 1 point · 17 hours ago

              I’m not pro AI or anti AI. I am anti big tech though, which makes the discussion more complicated.

              Regarding escalation: a non-coding team lead isn’t a dev. A CTO isn’t a dev. A software architect isn’t a dev. A software developer is a dev. That’s not an escalation, it’s a fact.

              Just because you lead a team of devs doesn’t mean you are a software developer. You could have gone to business school, never written a line of code, and started leading a team of software developers because you learned “how to lead”. And there are different kinds of team leads: those who get their hands dirty and those who don’t.

              So no, being a CTO, CEO, or whatever C you want to put in front of your title doesn’t make you “far exceed” any qualification. I actually think that kind of thinking is part of why workers are underpaid: people who lead often wildly overestimate their own abilities in the craft they lead. “I lead a team of athletes, so that means I’m a good athlete.” Do you understand how crazy that sounds?

            • entwine@programming.dev · 6 points · 1 day ago

              please review this Lemmy thread and come up with a good way to keep moving the goal posts so that I can feel like I’m right

              @onlinepersona prompting chatgpt right now

              • onlinepersona@programming.dev · 1 point · 17 hours ago

                Imagine you’re a worker of any kind. Some kid from university with a business degree and no experience in your job becomes team leader. They’ve learned to “lead”. Does that make them an expert in your craft?

                • entwine@programming.dev · 1 point · 7 hours ago

                  I’m not sure what you’re getting at. By definition, an “expert” is someone with a lot of “experience”. Your hypothetical kid has “no experience”. Since we know that 1+1=2, I think we can deduce that the answer to your question is no.

                    • onlinepersona@programming.dev · 1 point · 5 hours ago

                    The person I was responding to was equating their experience as a leader with being an expert in software development. And even if they had been a good developer 5, 10, or 15 years ago, that doesn’t mean they stayed an expert. Either you’re working in the field, with the relevant experience and position, or you’re not.

                    Your qualifications as a software developer don’t magically increase to say “far exceed the required qualifications” just because you lead a team, a division, or a company. Otherwise Satya Nadella, Bill Gates, and Jeff Bezos would be the best software developers in the world.

                • darkmarx@lemmy.world · 2 points · 13 hours ago (edited)

                  My degree is in Computer Engineering, dipshit.

                  You made up this fantasy that somehow I don’t know what I’m talking about based on nothing other than you wanting me to be wrong so your world view isn’t challenged.

                  I started out with the assumption that you were having a good-faith discussion. It’s now clear that you’re a troll, a tech bro, an AI lover, or all of the above. At this point, I’m done with you and encourage others to be as well.

      • Dumhuvud@programming.dev · 27 points · 1 day ago

        LLMs are a tool that can lift up the quality of coding work

        Imagine telling on yourself like this.

        And that is right after implying that you are a “proficient professional coder” that is “like engineer level” unlike those pesky “anti ai warriors”. Jesus fucking Christ.

        • nomad@infosec.pub · 2 points · 1 day ago

          I’ve been training my own employees for years. And I’m suggesting you get a degree before playing keyboard warrior on the internet. ;)

          It makes it easy for bad coders to pass as passable, but good coders can still spot that in review.

          • Dumhuvud@programming.dev · 12 points · 1 day ago

            My entire point was in one single sentence, and yet you managed to shit out three sentences, not even remotely addressing that.

            I’m saying that if the output puked out by an LLM is of better quality than your own code, something you literally just confessed to, then you’re nothing but a hack. An impostor.

            What does the fact that you’ve been training anyone have to do with that? What does a degree, or lack thereof, have to do with anything? I’ve seen plenty of hacks employed as “seniors”, some with a CompSci degree. The kind of hacks that used to be overly reliant on StackOverflow in the past. The kind of hacks that write poorly performing garbage, yet quote Knuth’s “premature optimization is the root of all evil” (completely missing the context) when you confront them about it.

            • nomad@infosec.pub · 3 points · 1 day ago

              I’m not saying AI code is better than mine. But AI review catches quite a lot that normal humans would overlook. Pair programming works just as well with AI. Agentic coding is generally shit. And I have nothing to prove nor get mad about. Somehow you can’t seem to bring up a sound argument, only rage. X)

              I’m running a successful business with plenty of Devs trained and working for me doing all kinds of specialized real-time engineering. You shout on Lemmy.

              • chloroken@lemmy.ml · 2 points · 6 hours ago

                That last paragraph is so vague that anybody reading it knows you’re a complete charlatan.

      • Jiral@lemmy.org · 12 points · 1 day ago

        That’s right, it is a tool. But how useful will it be once it’s sold by the token at real cost, where every mistake the tool makes costs money? And we’re talking maybe ten times what people currently pay for Claude, at a minimum.

        Add to that the question of how the use of LLMs affects the career pipeline from junior dev to senior dev.

        There aren’t many tool analogies where the tool is especially good at making things look good, even when they aren’t once you dig deeper.

        • ell1e@leminal.space · 9 points · 1 day ago (edited)

          I also think there still hasn’t been a study showing a consistent, long-term, significant(!) productivity gain for coders. (Other than total lines of code, but that alone is a poor measure.) The amount of new hidden bugs and other issues seems to outweigh most of the perceived gains.

        • nomad@infosec.pub · 4 points · 1 day ago

          Well, I can’t disagree with that take. Skill still plays a role. But you can’t suggest people keep writing and reviewing solely by hand. That ship has sailed.

      • TrickDacy@lemmy.world · 4 points · 1 day ago

        Even if AI were the miracle people like you suggest, you’d still be destroying the environment. But it’s also not miraculous, which you contradictorily say both is and is not the case…

        • nomad@infosec.pub · 2 points · 1 day ago

          German engineer with 20 years of experience. It’s a big jump, believe me. I’m not suggesting most people use this tool the right way, nor that the industry is without flaws, but it’s like eating meat: I have no issue with it as long as it’s ethically sourced.

          • ell1e@leminal.space · 10 points · 1 day ago (edited)

            All the studies I’ve found so far seem to disagree, so why should we believe you?

            https://www.anthropic.com/research/AI-assistance-coding-skills (2026 study)

            We found that using AI assistance led to a statistically significant decrease in mastery.

            Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.

            https://futurism.com/artificial-intelligence/new-findings-ai-coding-overhyped (2025 study)

            But those claims appear to be massively overblown, as The Register reports, with researchers finding that productivity gains are modest at best — and at worst, that AI can actually slow down human developers.

            • nomad@infosec.pub · 1 point · 1 day ago

              That’s what I’m saying. AI does not help with speed; it potentially takes even longer. It helps with concept and design quality and completeness. For coding, it’s just fancy autocomplete. Think about how LLMs can be used to improve the process instead of replacing yourself. Apply your skill with a lever instead.

              • ell1e@leminal.space · 2 points · 13 hours ago (edited)

                AIs are (apparently) stupid and fail at non-trivial tasks. They also enjoy deleting production databases. They seem atrocious with any sort of quality.

                What would AI possibly be useful for, if you care about quality work?

                (I suppose they can sometimes help with vulnerability scanning and writing mindless e-mails if you’re some sort of overworked customer agent, but those are pretty narrow uses. And I’m not going to upload my stuff to big tech data sloppers myself just for slightly better vulnerability scanning.)

                  • nomad@infosec.pub · 1 point · 9 hours ago

                  Again I do not suggest giving AI control. Don’t let it edit your code. Just give it what it needs to know to discuss and help construct code to review and insert by hand. You have full control and enjoy the benefits LLMs bring. Everything else is just asking for trouble.

                    • ell1e@leminal.space · 2 points · 5 hours ago (edited)

                    That’s the thing, I don’t think LLMs bring many benefits. Too many lies, and so on.

          • TrickDacy@lemmy.world · 3 points · 1 day ago

            Explain how this amount of electricity use could ever be “ethically sourced”. That’s not even much of a thing for meat, which at least provides nutrients. AI slop is everywhere and most of it is not helping anyone with anything.

              • TrickDacy@lemmy.world · 5 points · 1 day ago

                Unless you get 100% of your power from the solar panels, which is doubtful, you’re using solar power that could’ve gone to something actually necessary.

      • klankin@piefed.ca · 2 points · 1 day ago

        I mean, it’s more like self-driving cars than cars themselves; it can work, but steering wheels were also created by the devs for a reason, even if most are too lazy to understand that reason.

        Like, I’d agree hand-coding in assembly is (mostly) useless these days, but honestly I feel like the efficiency problems AI is trying to solve were largely solved 50 years ago with compilers.

        (And isn’t digesting large outputs the entire point of being an engineering-level dev? Like, if you’re just there to pray to the software gods, you’d do much better as a CRUD script kiddie anyway.)