It’s really useful in creating base templates, but anything further than that and you won’t be able to read “your own” codebase if you depend too much on AI.
And?
We have been interviewing for entry-level positions and the new grads know less than ever before. I don’t really care what they know, I am looking for evidence that they can think, but I usually ease them into thinking scenarios by asking easy foundational questions like how many bits are in a byte. You would think I was asking them to explain the Schrödinger wave equation… One candidate was wavering between 13 and 17…
No shit.
Muscles that are not used lose their function. They weaken and eventually become unusable. As humans’ ability to ask questions of artificial intelligence increases, their ability to learn and store information is disappearing. If the brain can obtain something easily, it doesn’t feel the need to hold on to it. So memorizing code no longer involves things like writing it; the brain only keeps the recognition function active, for when it sees the code again.
When you start relying on something else, it’s quite natural and expected to no longer be good at the thing now being done for you.
But in this context, it’s a net negative. While you can certainly write more code while using the tool, you’re almost always writing worse code. And you still get the atrophy, so the result overall: now you’re not good at the thing, and neither is the tool you’re using.
And remember, AI models need constant retraining as systems and approaches are updated, languages change, etc. Where is that training data going to come from? From the people now worse at coding than they were before.
I just don’t get it, even the purportedly best models screw things up so much that I can’t just leave them to the job without reviewing and fixing the mess they made… And I’m also drowning in pull requests that turn out to be broken while proudly carrying “co-authored by Claude” in them… They manage to pass their test case, but the change is so messed up that it’s either explicitly causing problems or includes a bunch of unrelated changes at random.
I feel like I’m being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.
Closest I got was a chore with a perfectly defined criterion: “address all warnings from the build”. Then let it go and iterate. Then after 50 rounds, each round saying “ok, should be done now, everything is taken care of, just need to do a final check”. It burned through most of my monthly quota doing this task before succeeding. Then I look at the proposed change… And it just added directives to the top of every file telling the tools to disable all the warnings… This was the best Opus 4.6 could do…
Now sure, I can have it tear through short boilerplate, and it notices a pattern I’m doing and lets me tab through it. But I haven’t seen this “vibe” approach working at all…
I feel like I’m being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.
That’s because you are being gaslit.
The people making those claims are either a) not developers in the first place, with no awareness of just how shit the “products” they’re pushing are, b) paid astroturfers trying to prop up AI, or c) former actual developers who’ve become addicted to the speed that’s possible with AI and are downplaying how crappy their own code quality has become, because they have no familiarity with their codebase anymore and have forgotten how to do so much as a for loop.
All these people claiming 10x or 100x gains, and everything they’re making is garbage no one should or would touch with a ten-foot pole.
There are also the low-tier coders who have AI making better code than they could have produced themselves.
Still terrible code.
I’ve seen bad coders trying to merge hundreds of lines of code where maybe ten were needed. They rely on more experienced devs to tell them how to fix that, just for these to copy and paste the suggestions given in Claude.
I mean if that’s the value someone provides, no wonder they fear for their future.
It wasn’t meant as a positive. Terrible code is better than atrocious code.
I’d rather have no code at all, if I’m being honest.
Maybe not better, but they have no ability to evaluate quality. But, yeah, there are a lot of really bad programmers in the market. If the assertion is that LLMs are as good as the worst software developers, no argument.
Capitalism created this world. Generous salaries attracted people who just wanted good-paying jobs but weren’t passionate about coding, which, combined with corporate ambivalence to quality, led to a glut of mediocre developers and motivated movements like low-code, no-code, and now vibe code. It has been a vicious capitalist cycle.
What it seems to be doing, in your case and others I have seen, is pushing the burden onto those who “care” and really, fully grok (no pun intended) the concept of a real code review. It’s exhausting.
Lol! Losers. I’ve been programming for almost two decades and extensive use of AI hasn’t compromised my skills AT ALL! These slop machines can’t hope to compete with the quantity and magnitude of subtle bugs I write. My code was terrible long before I made bots have mental breakdowns trying to work with it.
AI also gives you the benefits of a middle manager. If everything works as intended you take the credit but if something breaks that’s not your fault, AI made the mistake. If they try to put the blame on you just say you have 6 agents working on 6 different domains all cross-reviewing their commits and you can’t be expected to review every single line of code yourself. Time to play corporate like a damned fiddle!
Saved me a paragraph there.
I took and passed a coding bootcamp on the eve of the first LLMs and generative AI. I have had to do similar courses on my own to refresh my skills. I never found a coding job (story of my life!), but if I needed to, I could do another course to refresh and start over stronger.
What are they so panicky about?
Go ahead, use your AI to replace all of your own skills. The rest of us will gladly take your job when you can no longer troubleshoot problems.
Based on my experience with LLMs and the developers I personally know, my only assumption is they don’t have the skills in the first place…
In corporate world there are a lot of “developers” that actually act kind of like codegen. They just throw plausible sounding bullshit into an editor and hope for the best. Two examples:
Once I was asked to help a team speed up something that ran slow, even by their low standards. It turned out they had made their own copy-file routine instead of using the standard library one: it sucked the file into memory, expanding an array 512 bytes at a time, and then wrote it out, 512 bytes at a time. I made the thing nearly instant by just replacing it with a call to the standard library function that copies a file.
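The anecdote above can be sketched like this (Python and `shutil` used purely for illustration; the original code was presumably in another language, and the function names here are made up):

```python
import shutil

def slow_copy(src, dst):
    """Hand-rolled copy in the style described: pull the whole file
    into a growing in-memory buffer, 512 bytes at a time, then write
    it back out, 512 bytes at a time."""
    data = bytearray()
    with open(src, "rb") as f:
        while True:
            chunk = f.read(512)
            if not chunk:
                break
            data += chunk  # buffer keeps growing as the file is slurped in
    with open(dst, "wb") as f:
        for i in range(0, len(data), 512):
            f.write(data[i:i + 512])

def fast_copy(src, dst):
    """The one-line fix: let the standard library do it."""
    shutil.copyfile(src, dst)
```

Both produce identical output; the difference is that the second one uses buffered, optimized I/O and doesn't hold the whole file in memory.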
While helping with a separate problem, I noticed their solution for transferring some file with an indeterminate version number in the middle of the file name. It was a huge mess, but the most illustrative part was a line in their Java application declaring the string “ls /path/with/file|grep prefix.*.extension”…
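For contrast, here is a sketch of how that file matching could be done natively instead of shelling out (Python’s `pathlib` stands in for the original Java; the helper name and arguments are invented for illustration):

```python
from pathlib import Path

def find_versioned_files(directory, prefix, extension):
    """Match files like '<prefix><version>.<extension>' with native
    globbing, rather than building an `ls ... | grep ...` shell string."""
    return sorted(p.name for p in Path(directory).glob(f"{prefix}*.{extension}"))
```

Besides being portable, this avoids the quoting and injection hazards of piping a constructed string through a shell.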
Lots of human slop out there that AI can actually compete with.
For those unable to code without AI:
What even is your contribution outside of a glorified typing monkey that can parse code but is unable to write it?
It’s like a paramedic not being trained at all for medical emergency response but sent there regardless to just stand and observe the patient, writing notes about the sounds they make while dying.
So this is going to invoke a multitude of downvotes, but here goes.
I will give you an example. I can read a bit of python code, not the advanced stuff, but enough to understand to a large degree what the code does. Last week, I had the need to add a button to Netbox that will download a multitude of device configs that are being rendered via config templates. This use case helps a whole department apply configs, without having to create them by hand.
I knew Netbox has a very powerful plugins ecosystem. The way the base code is written grants the capability of adding any type of plugin you might need in your unique environment. I used Claude to create this plugin for me. I wrote a very specific spec file, told it to utilise the already built pynetbox plugin and ensure it uses nothing fancy that is not sustainable. It created the plugin, helped me with pip installing it, and I deployed it on my dev environment where I tested it extensively.
My alternative to using claude: Asking our internal development team to write something like this. I would need to wait 3 weeks to even get a spot on their meeting for the request, just to then be told their backlog is full with customer code and they won’t be able to help. This plugin will help our support team with fewer calls, because the configs are accurately built according to the source of truth (Netbox) and will need less human input. So in the greater scheme of the company, that is a net positive.
What I will do when Netbox updates is update my dev environment, install the plugin, and test it. If something broke, I will troubleshoot it, of course using Claude with error logs etc., then update the plugin code to work on the new Netbox. Is this ideal? Probably not. Is it the only way to get this done? Maybe not either. Is it all I can do at this very moment? Yes.
My specialist fields are the lower levels. Hardware, hypervisors and setting up VMs + System Software. I need code from time to time to get something functional done. I don’t write whole systems with Claude, that is just ridiculously naive. But small pieces of functional code that solves a single small problem, I honestly don’t understand the problem with that.
My 2c.
But you aren’t a dev as your main job.
This is talking about developers, employed as developers, becoming too inept to actually be developers and (no offense) not being worth much more than what your technical abilities already provide.
So what’s their point? It’s like someone being employed as a translator who is able to hear the language and sort of understand it, but every translation is done through DeepL or Google Translate.
So why should I hire a translator instead of using paid DeepL directly and proofreading it with Google Translate to make sure it didn’t generate (mostly) nonsense?
Isn’t being better than a self-taught amateur mostly the point of a trained professional?
You are correct. I mistook your comment to refer to people in general, rather than trained professional coders. So indeed, you are correct.
Glad we are in agreement :)
And no worries about the misunderstanding ;)
Clarifying requirements, designing architecture. Also, I don’t understand how someone is supposed to be able to “parse code” without being able to write it. It’s like being able to read but unable to write.
I can read significantly more programming languages than I can write working code in. You can usually figure out the syntax and get the gist of what’s going on in a non-trivial amount of code. Sure, the oddball syntax or language feature comes up that I have to look up, but it’s not too bad.
Ditto. Similar to the way the characters in Severance get a sense of what’s off, I can do that with code; ask me to start from scratch and I would not know where to start. Give me Google and I will have a bunch of copy-pasta that works in the end. Claude does the research, evaluation, best practices, review and testing, then re-review and re-testing, while the developers’ department will go to war with you if you put a Slack question through the wrong channel.
FORTRAN inline for loops go brrrrrr
I understand cooking concepts and can tell when something I am familiar with is made well. If I watch a cook, most of the time I can tell why they do certain things and how it impacts the food.
My cooking skills are very limited, especially when it comes to making new things. My SQL skills are the same: I can read through the code and spot errors that match known issues, but creating something new is fairly limited, despite being able to read and comprehend what has already been done.
I feel like AI is a 5G language, in that we have moved on from writing code directly to writing md files to command the bots that write the code. It seems like a higher abstraction of the code. It does make you think less about the code directly, and more about the bigger picture, but you still need those skills to check the bot’s output.
Many people believe this, and it couldn’t be more wrong. It’s like saying that a product manager can code, if their tickets are detailed enough to give a general vision of a piece of software.
Implementation still matters. Context still matters. Vibe-coded projects all follow these patterns where each change is a thousand lines of code out, two thousand in. And there’s a breaking point where reading and understanding these changes is not only impractical, but also counterproductive.
But then, there’s the bigger question of language expressivity and determinism: even if LLMs could achieve a certain level of consistency of outputs given certain inputs, how do we make a natural language like English expressive enough, and more importantly, non ambiguous enough, to work like an actual programming language?
I think everyone’s development experience is different with these tools. We are not letting it just work the ticket blindly based off some prompt; we are having it do small tasks that would normally take a few minutes and are now done in seconds. We don’t allow these bots to commit code, or even write the commit message, and the devs are still responsible at the end of the day for the code they commit.
Don’t know why you got downvoted, you’re 100% right. It’s just another layer of abstraction. Like a super-high-level, non-deterministic abstraction.
If natural languages were just another level of abstraction, we would already have a successful English like programming language.
I would say that it doesn’t count as a “5G language” if you have to understand and check the underlying “assembly code” it outputs every time you use it.
It seems there a lot of people on Lemmy who dislike anything AI. I have no choice at my work so I have to make the best of it as I’m not leaving my job in this economy.
You will learn to like something because you’re being extorted to use it.
Sounds about right.
Yeah well welcome to work life in the US. It sucks but I’m surviving.
This is a recipe for SQL injections, race conditions, memory leaks, and keys being placed directly in code.
Trust the output of an LLM at your peril. Literally.
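To illustrate the first of those risks, here is a minimal sketch of the difference between string-spliced and parameterized SQL (Python’s sqlite3 is used only as an example driver; the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

user_input = "nobody' OR '1'='1"

# Vulnerable: user input spliced directly into the SQL string,
# so the OR clause becomes part of the query and matches every row.
rows_bad = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the input as a plain value,
# so nothing matches the literal string "nobody' OR '1'='1".
rows_ok = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
```

The point being: a reviewer who isn’t reading every generated query closely won’t notice which of the two patterns the LLM happened to emit.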
Well, I did say you still need to use your skills to check the bot’s code.
Unless you’re checking every line and have a good enough and comprehensive enough understanding of the codebase to spot subtle bugs it will try to introduce that aren’t caught by your tests, you’re still opening yourself up to problems.
You’re right, and that is part of the workflow. You really should only do this process in a language you are really familiar with, so you know exactly how you would do it without the bot’s assistance.
I notice myself getting lazier. Even adding a .gitignore file I now ask Claude for. It takes longer than typing it myself and probably costs more. But I don’t have to do anything but wait a few seconds.
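For scale, the kind of file in question is often only a handful of lines; a minimal .gitignore for, say, a Python project (contents chosen here just as an example) might look like:

```
__pycache__/
*.pyc
.venv/
dist/
```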
The thing that scares me (and why I’ve stopped using it): my brain automatically reaches for the shortcut whenever I would have to do deep thinking/planning.
I have ADD, so getting my brain to focus and work on a task is not an easy feat to begin with. Now I’ve found myself multiple times a day unable to will myself to think about a problem but rather deferred to Claude. It’s seriously fucked up.
That’s not even diminished coding ability, that’s diminished thinking ability.
And herein lies the reason AI is being pushed at all costs.
What’s the saying again… :“The purpose of a system is what it does”
If I was paying for it, hell naw. But if my employer not only is willing to pay for it, but considers it a performance metric? I’m going to use it for fucking everything. These are the incentives they give me, I’m going to follow the incentives. Talking to Claude is what they pay me for, apparently.
But like the article says, if I don’t continue practicing on my own code in my unpaid off-work hours, I imagine I’d be regressing in my skills too. I do that because I enjoy it as a hobby, but if I didn’t, I could see myself and probably a lot of other people getting rugpulled by this.
I’m not using it for the incentive. I’m using it to avoid punishment. The company I work for made it mandatory to use it daily. So I’m tokenmaxxing bullshit tasks so I can focus on interesting ones, but yeah I already feel it’s making me lazy because I sometimes can’t be bothered to read a log anymore. We are truly fucked.
This company is working on terrible assumptions. They spent years hunting for the best engineers in the country (or so they pretend to anyway) and suddenly decided that
- we are average at best and it is better and faster than most of us (it’s not)
- software engineers don’t like to write code anyway (we do, at least when the challenge is interesting)
- it will forever be more affordable than properly qualified engineers (oh boy it won’t)
- a PM with Claude is as qualified as us to bring features to production (talk about tech stack suicide)
- etc.
They either have drunk the propaganda koolaid and are betting everything on this lie, or are so arrogant they think we can succeed where the largest AI investors in the world utterly failed (see GitHub, which can’t even get three nines of availability since they switched to full-AI code).
People lost their abilities to use slide rules too, to write assemblers, etc. The big companies monopolizing the tech are bad, but the tech is here to stay.
This.
For some reason the hate is really strong this time around, but it’s the exact same thing.
This is why I don’t use it for coding at all.
Issue triage, code exploration, extracting information from disparate sources, first-pass code review. There are loads of use cases where it’s potentially useful.
For me it’s a lot better at extracting the requirements for a CPU feature from a 10,000 page architecture reference manual than I am.
Quite; I just set a (locally hosted) LLM off writing the tickets for implementing all the opcodes in a simple device emulator, based on grovelling through datasheets and documentation. Whether the tickets get implemented by an AI or a human, it’s a timesaver having the AI do it, and the tickets will be better written than I would have done.
Everyone railing against this also overlooks the reality of professional software development: professional software is developed 5% by skilled, trained software engineers and 95% by code monkeys who shotgun copy-pasta from Stack Overflow until it works. Even if we very generously assume that the hardcore “never use AI” Lemmy brigade are in the 5% (and not, more likely, in the 95% drowning in their own Dunning-Kruger), the “but AIs produce unreadable code and make mistakes” threat isn’t putting off anyone who’s ever actually had to hire a significantly sized development team.
Yes, the obvious solution is to avoid it. I use it only for the most boilerplatey things. Anything else, I want to make sure I can still do it myself.
I don’t knowingly use AI at all in my personal life and projects (I say “knowingly” since many products have it shoved inside now, but I disable everything I see). At work, we have AI code reviews which, as a concept, I think are fine and useful.
IMO that’s totally fine and appropriate
I’m fully able to code still, I just find it pointless when AI can do it for me. It’s like having to be somewhere, should I take the car or walk? Yeah walking might be good for me and the environment, but my car is so much faster and easier and I’ll definitely be on time. Who cares about the consequences of the future?
weird flex but ok