- cross-posted to:
- technology@lemmy.world
We need the equivalent investment now. If average code is cheap, then the scarce resource is no longer the ability to produce it. The scarce resource is the ability to read it, to navigate it
You know what would help a lot with understanding the code one is working on? Writing it yourself without turning your brain off via AI.
But that’s an insight the article somehow seems to be missing.
Take away a calculator halfway through an exam, and suddenly, people are surly and unmotivated about simple long division.
That’s how every ‘AI makes you stupid!’ article works. Like, ‘doctors used AI to detect more cancer, but when we took it away, they were worse at eyeballing it.’ Sorry, can we go back to the part about detecting cancer better?
The doctors were worse, not just taking longer. So this would be more like people unlearning division.
While people using calculators may occasionally unlearn division, that seems less problematic than doctors unlearning how to spot cancer on their own (at which point, I’m guessing, you also lose your source of AI training data, since apparently AI can’t feed into AI without collapse) or software engineers unlearning how to write correct code.
You also wouldn’t want a mathematician unlearning how to do division.
The doctors were better, until someone yanked the tool away. That’s how every tool works! Even going from a handsaw to a table saw and back will make you lose some skill with the handsaw, because your brain focused on higher-level goals and finer motions. That’s not proof a table saw is bad for woodworking. The problem is “and back.”
since apparently AI can’t feed into AI without collapse
Have you checked on that narrative? It’s been a while. Things stopped getting yellow. Improvements continued.
Have you checked on that narrative?
The only workaround known so far seems to be to make sure enough of the data is fresh:
https://www.inria.fr/en/collapse-ia-generatives
https://en.wikipedia.org/wiki/Model_collapse
But read for yourself.
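To make the “fresh data” point concrete, here’s a toy sketch (my own illustration, not from either link) of the mitigation those pages describe: cap the share of model-generated text per training batch so the distribution stays anchored to human-written data.

```python
import random

def build_training_batch(human_samples, synthetic_samples,
                         batch_size=256, min_human_fraction=0.5):
    # Keep a floor of human-written samples in every batch so the
    # training distribution stays anchored to real data. The 0.5
    # floor is an arbitrary placeholder, not a known-safe value.
    n_human = max(1, int(batch_size * min_human_fraction))
    n_synthetic = batch_size - n_human
    batch = (random.sample(human_samples, n_human)
             + random.sample(synthetic_samples, n_synthetic))
    random.shuffle(batch)
    return batch
```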
I always ask myself how many of these anti-AI warriors are actually proficient professional coders. And I’m talking like engineer level, not hobby level.
LLMs are a tool. Give a power tool to a fool and the result is stupid at best, bloody at worst. Let’s call that vibe tooling and ask if there is a difference from vibe coding.
Imho there is not. LLMs are a tool that can lift up the quality of coding work to a common level if used by proficient people. It helps with searching through and understanding vast outputs, as long as you know what to expect. It’s a miracle of intuition.
It’s not a mind-reading tool that will just code your fantasy software for you. Hate it all you like, but AI is here to stay; this is like hating cars in the age of horses. Cars are not magic, and neither is “AI”.
There are definitely real engineers who are strongly anti-AI. The problem, in my opinion, is that they just haven’t really tried working with these tools.
They’re incredibly powerful tools, and they don’t only amplify bad developers; they amplify every developer who really tries to work with them.
The mistake people make is delegating the decision-making to the AI. Let the tool be a tool, not a brain. You architect, you design, you give the orders; it writes the code. You review the code. There you go: you have pretty good quality code, better than most devs will produce, following your design and architecture. You controlled the entire decision-making, and you did it in a fifth of the time.
I also think that it has become too useful to disappear in engineering.
I have over 25 years of development experience. My current role is vice president of development and architecture where I lead a team of 80+ devs, QAs, and architects. By any measure, I am one of those “engineer level” developers you speak of.
Yes, LLMs are a tool, but it’s a tool one should use sparingly. LLMs are pattern recognition machines and are great for routine, been-there-done-that type development. For anything that deviates from the norm, LLMs will try to force everything back into common patterns… even when those patterns are not correct. A well designed system can be mangled into junk because the LLM doesn’t have enough context or because something is new.
Be skeptical of the rave reviews around coding agents and the use of LLMs for development. Much of the hype seems tied to developer skill. Less capable developers can use LLMs to appear more capable than they are. For good developers, LLMs seem to erode their skills as they rely on the tool instead of their own knowledge. I have seen this first hand.
Overall, it seems LLMs raise skills of bad developers and hamper the skills of good developers. It’s creating a bunch of middling developers who are incapable of handling anything novel or complex.
Sounds good. Pretty sure you are correct on most points. Agentic coding is bullshit for sure. I’m mostly talking about partner coding, code review, and some data interpretation, like screenshots of UI changes in a CI, for example.
For what it’s worth, my initial comment was concerning partner coding as well.
Wen was the last time you actually wrote something production level yourself?
The goalpost escalation I constantly see in these threads is both hilarious and deeply frustrating.
“You need to be a good dev to use these!” “I am a good dev and these tools suck.”
“No like you need to be enterprise level good” “I am an enterprise level dev with credentials far exceeding the baseline offered.”
“No but you need to have written code recently!!” “I was writing code yesterday.”
I am now waiting for the obligatory “well your coworkers must just be fixing all your code you screw up” because the pro-ai crowd has no argument for the tech not based on “u suk”.
I’m not pro AI or anti AI. I am anti big tech though, which makes the discussion more complicated.
Regarding escalation: a non-coding team lead isn’t a dev. A CTO isn’t a dev. A software architect isn’t a dev. A software developer is a dev. That’s not an escalation, it’s a fact.
Just because you lead a team of devs doesn’t mean you are a software developer. You could’ve gone to business school, never written a line of code, and just started leading a team of software developers because you learned “how to lead”. And there are different kinds of team leads: those that get their hands dirty and those that don’t.
So no, being a CTO, CEO, or whatever C you want to put in front of your title doesn’t make you “far exceed” any qualification. I actually think that kind of thinking is part of why workers are underpaid: people who lead often wildly overestimate their abilities in the craft they lead. “I lead a team of athletes, so that means I’m a good athlete.” Do you understand how crazy that sounds?
Yes. It’s “AI can never fail. It can only be failed.”
Friday.
please review this Lemmy thread and come up with a good way to keep moving the goal posts so that I can feel like I’m right
@onlinepersona prompting chatgpt right now
Imagine you’re a worker of any kind. Some kid from university with a business degree and no experience in your job becomes team leader. They’ve learned to “lead”. Does that make them an expert in your craft?
I’m not sure what you’re getting at. By definition, an “expert” is someone with a lot of “experience”. Your hypothetical kid has “no experience”. Since we know that 1+1=2, I think we can deduce that the answer to your question is no.
My degree is in Computer Engineering, dipshit.
You made up this fantasy that somehow I don’t know what I’m talking about based on nothing other than you wanting me to be wrong so your world view isn’t challenged.
I started out with the assumption that you were having a good-faith discussion. It’s now clear that you’re a troll, tech bro, AI lover, or all of the above. At this point, I’m done with you and encourage others to be as well.
It seems it was recent enough to spell common words correctly
LLMs are a tool that can lift up the quality of coding work
Imagine telling on yourself like this.
And that is right after implying that you are a “proficient professional coder” that is “like engineer level”, unlike those pesky “anti-AI warriors”. Jesus fucking Christ.
I’ve been training my own employees for years. And I’m suggesting you get a degree before playing keyboard warrior on the internet. ;)
it makes it easy for bad coders to mask as passable, but good coders can still spot that in review.
My entire point was in one single sentence, and yet you managed to shit out three sentences, not even remotely addressing that.
I’m saying that if the output puked out by an LLM is of better quality than your own code, something you literally just confessed to, then you’re nothing but a hack. An impostor.
What does the fact that you’ve been training anyone have to do with that? What does a degree, or lack thereof, have to do with anything? I’ve seen plenty of hacks employed as “seniors”, some with a CompSci degree. The kind of hacks that used to be overly reliant on StackOverflow in the past. The kind of hacks that write poorly performing garbage, yet quote Knuth’s “premature optimization is the root of all evil” (completely missing the context) when you confront them about it.
I’m not saying AI code is better than mine. But AI review catches quite a lot that normal humans would overlook. Pair programming works just as well with AI. Generally, agentic coding is shit. And I have nothing to prove nor get mad about. Somehow you can’t seem to bring up a sound argument, only rage. X)
I’m running a successful business with plenty of devs trained by me and working for me, doing all kinds of specialized real-time engineering. You shout on Lemmy.
That last paragraph is so vague that anybody reading it knows you’re a complete charlatan.
That is right, it is a tool. But how useful will that tool be once it is sold by the token at real cost, where every mistake it makes costs money? We are talking maybe ten times what people currently pay for Claude, at the minimum.
Add to that the question of how the use of LLMs affects the career pipeline from junior dev to senior dev.
There aren’t many tool analogies where the tool is especially good at making things look good, even if they aren’t when you dig deeper.
I also think there still hasn’t been a study showing a consistent, long-term, significant(!) productivity gain for coders. (Other than total lines of code, but that alone is a poor measure.) The amount of new hidden bugs and other issues seems to outweigh most of the perceived gains.
I would argue that adding lines of code is the worst thing a developer can do.
The key question is whether total costs along the pipeline, from requirements definition down to the final quality-controlled, fully debugged product, can be reduced at real LLM costs (not the currently vastly subsidised costs).
I agree. And from the data I’ve seen so far, it doesn’t look convincing at all.
In part because AI seems to be phenomenally unintelligent.
Well, I can’t disagree with that take. Skill still plays a role. You still can’t suggest people keep writing and reviewing solely by hand. That ship has sailed.
Says who, that we can’t suggest that?
Many of us are.
Even if AI were the miracle people like you suggest, you’re still destroying the environment. But it’s not miraculous either, which you contradictorily say it both is and isn’t…
German engineer with 20 years of experience. It’s a big jump. Believe me. I’m not suggesting that most people use this tool the right way, nor that the industry is without flaws, but it’s like eating meat: I have no issue with it as long as it is ethically sourced.
All the studies I’ve found so far seem to disagree, so why should we believe you?
https://www.anthropic.com/research/AI-assistance-coding-skills (2026 study)
We found that using AI assistance led to a statistically significant decrease in mastery.
Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.
https://futurism.com/artificial-intelligence/new-findings-ai-coding-overhyped (2025 study)
But those claims appear to be massively overblown, as The Register reports, with researchers finding that productivity gains are modest at best — and at worst, that AI can actually slow down human developers.
That’s what I’m saying. AI does not help with speed; it potentially takes even longer. It helps with concept and design quality and completeness. For coding, it’s just fancy autocomplete. Think about how LLMs can be used to improve the process instead of replacing yourself. Apply your skill with a lever instead.
AIs are (apparently) stupid and fail at non-trivial tasks. They also enjoy deleting production databases. They seem atrocious with any sort of quality.
What would AI possibly be useful for, if you care about quality work?
(I suppose they can sometimes help with vulnerability scanning and writing mindless e-mails if you’re some sort of overworked customer support agent, but those are pretty narrow uses. And I’m not going to upload my stuff to big tech data sloppers myself just for some slightly better vulnerability scanning.)
Again I do not suggest giving AI control. Don’t let it edit your code. Just give it what it needs to know to discuss and help construct code to review and insert by hand. You have full control and enjoy the benefits LLMs bring. Everything else is just asking for trouble.
Explain how this amount of electricity use could ever be “ethically sourced”. That’s not even much of a thing for meat, which at least provides nutrients. AI slop is everywhere and most of it is not helping anyone with anything.
I’m running my own local LLMs on solar power from my roof.
Unless you get 100% of your power from the solar panels, which is doubtful, you’re using solar power that could’ve gone to something actually necessary.
Most of my code is public if you’re curious!
I mean, it’s more like self-driving cars than cars themselves; it can work, but steering wheels were created by the devs for a reason, even if most are too lazy to understand that reason.
Like, I’d agree hand-coding in assembly is (mostly) useless these days, but honestly I feel like the efficiency problems AI is trying to solve were largely solved 50 years ago by compilers.
(And like, isn’t digesting large outputs the entire point of being an engineering-level dev? Like, if you’re just there to pray to the software gods, you’d do much better as a CRUD script kiddie anyways.)
AI tools can generate functional, adequate, perfectly average code at a speed and cost that would have been unimaginable even five years ago. And like the outsourcing wave of the early 2000s, the economics are real and rational. Nobody is wrong for using these tools. The code they produce is often fine. It works. It passes tests. It might ship as-is.
Not the first time I’ve read this kind of statement and I always struggle to reconcile this with my personal experience. I’m seriously doubting that I’m just not a “good enough prompter”. I know how to explain context from domain to tech and vice versa, that’s like, a good 20% of my job. I’d say that AI tools are good at producing code that already exists.
The LLMs are an interface to a corpus of written material. They’ve never had a thought, a chat around the coffee machine, or any experience in the largest sense of the word. This is a hard barrier on any induction they may emulate.
You’re both correct, and also wrong.
A lot of code already exists. Or at least in a close enough form that it can be easily adjusted to address a new situation.
When someone comes up with an idea for a new App at this point, it’s almost never because it’s an entirely new branch of computing. It’s very likely just CRUD with a visual design, and then a small more complex algorithm to mix the data around behind the scenes.
What’s the difference between a dating app and an automatic meal plan builder? The algorithm doesn’t care about whether or not the recipe swiped back when it matches it up to you.
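To put that in code (a toy sketch I made up, not anyone’s real app): the ranking skeleton is identical for both products; only the scoring plug-in knows which domain it’s in.

```python
def rank_candidates(user_prefs, candidates, score):
    # Generic matching core: the same whether 'candidates' are dating
    # profiles or recipes. Only the score function knows the domain.
    return sorted(candidates, key=lambda c: score(user_prefs, c), reverse=True)

# Hypothetical domain plug-ins: same skeleton, different features.
def recipe_score(prefs, recipe):
    return sum(prefs.get(tag, 0) for tag in recipe["tags"])

def profile_score(prefs, profile):
    return sum(prefs.get(interest, 0) for interest in profile["interests"])
```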
You’re right that they’re not going to be inventing entirely new things most of the time, that’s just not what’s needed of them most of the time.
Fortunately software is much more than App ideas fishing for VC investments. A lot of us are building actual tools for nurses, teachers, technicians, artists, students, etc. We have to analyze these human beings’ role in society, their needs, their situation, which is different from merely preying on their attention span. Programming languages are still the most reliable way to specify how the software must behave. And once the software is done, it is merely born. It then lives through a steady flow of continuous adaptation until one day it dies as all things do. Downplaying the human condition is a mistake.
You missed the point. The point is that almost all software today follows the same general ideas, patterns, etc.
The quality of the AI’s output is not tied to what those patterns are used for. Even if, say, your tool has a completely new network protocol, an LLM will still “understand” that it is a network protocol, that it serializes and deserializes following the rules you give it, and it will write that down in a memory file and be able to work with it.
A new file format? Same. A very specialized new kind of No-SQL database that fits your very specific tool better? It will also write down in a file how it works and be able to use that.
It’s as good as the documentation you give it. For basic things, such as setting up a basic REST API, it has learned that in its training data. If it hasn’t, it’s up to you to provide it, and it will be perfectly able to use it.
Even if you build some weird unique assembly language it will be able to use it if you give it the set of instructions and their documentation.
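For example (a made-up wire format, purely to illustrate the claim): hand the model a one-line spec like “each field is a big-endian u16 byte length followed by UTF-8 bytes” and the obvious codec falls out.

```python
import struct

def encode_fields(fields):
    # Toy codec for the hypothetical spec above: u16 length prefix,
    # then the UTF-8 bytes of each field.
    out = b""
    for field in fields:
        data = field.encode("utf-8")
        out += struct.pack(">H", len(data)) + data
    return out

def decode_fields(buf):
    fields, i = [], 0
    while i < len(buf):
        (length,) = struct.unpack_from(">H", buf, i)
        fields.append(buf[i + 2 : i + 2 + length].decode("utf-8"))
        i += 2 + length
    return fields
```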
A medicine dispenser application for a nurse is still just CRUD operations for the most part. There’s nothing innovative about how the code would be written in an application like that.
Meh, disagree with a lot of this.
AI tools can generate functional, adequate, perfectly average code
Not in my experience.
The outsourcing era taught us that the expensive part of software was never writing it. It was understanding it well enough to change it safely, to debug it under pressure, to explain to the next person why a particular decision was made at 2 a.m. on a Tuesday.
Since AI is adequate, just have AI change, debug, and explain it. You don’t even need devs running the AI. Have AI generate intent. Just have AI scrape Twitter for people complaining about applications they wish existed, and have the AI make them. Let AI do market research. It’s supposedly perfectly adequate.
‘Well if it’s merely okay just have it be perfect.’
Unserious.
It’s perfectly adequate at generating simple scripts, if you know what it’s doing, or complex programs IF you have a “harness”, which is to say tons of well-defined scopes, design docs, coding guidelines, and a dev and test environment with written and automatic unit and integration tests.
Basically every dev’s wish list. You get adequate complex coding results.
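To illustrate, the cheapest piece of that harness is an executable spec the AI-written code has to keep green. A minimal pytest sketch; `apply_discount` is a hypothetical function standing in for whatever the agent was asked to write:

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    # Hypothetical function under test.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be within 0..100")
    return round(price * (1 - percent / 100), 2)

def test_normal_discount():
    assert apply_discount(100.0, 15) == 85.0

def test_rejects_nonsense_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```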
Just have AI scrape Twitter for people complaining about applications they wish existed, and have the AI make them.
I mean… in 2026, this is probably a viable business strategy tbh.