Werner Vogels introduced the term “Verification Debt” in his closing keynote at re:Invent this year, and my stomach sank, knowing that term is going to define our roles in the future. The tool (AI) isn’t going to get the blame in the future, you are. You are going to spend so much time verifying that what it has generated is correct that the gains of using AI may turn out to be less beneficial than we think.
Yup, this is what companies are going to pivot to, and I’m already seeing it. I’ve recently had potential new clients reach out to me not to code review their vibe coders’ AI slop but for something similar to “verification debt”, i.e. they want to stay the course with LLMs and vibe coders BUT have someone else on board to verify everything.
I’ve told each and every one of them no, I won’t do that. Why bring someone on board, or even a team of people, to verify the slop when you can just circumvent the slop, fire the vibe coder, cancel your LLM sub, and have the people doing the verifying actually write the shit instead?
These places simply refuse to ditch AI; they’re too deep into it now. They’ll continue to use AI and junior devs to build their crap end to end and then hope that someone can come in and make sure what’s been produced actually works and scales. It won’t, it never will, so build times will take longer and end up costing them as much, if not more, than when they had a team of devs.
They all drank the LinkedIn tech bros’ Kool-Aid and refuse to admit they were actually drinking tech bro piss.
AI itself isn’t the real problem; the problem is AIs from greedy corporations. AI is nothing new; it has existed since the first electronic checkers games and before. It’s also not such a great problem that the results users get are often biased and contain hallucinations; it’s the same as ordinary web research, where you always need to cross-check the results. The problem arises when the user doesn’t do that, trusting whatever the webpage, the influencer, or ChatGPT said.
AI is a tool that can offer huge benefits in research, providing relevant results and advances in science, medicine, physics, and chemistry. New materials and even vaccines developed in recent years wouldn’t exist without AI.
For the user, a search engine with AI can have advantages and be a helpful tool, but only if trustworthy sources appear in the results, which normal chatbots don’t show, relying only on their own scraped knowledge base, often biased by big corporations and political interests.
The other problem is the AI hype: adding AI even to a toaster, or worse, adding AI to the OS and/or the browser, which is always a privacy and security risk when the AI has access to your activity and even the local filesystem; issues like the ones reported about Google’s AI are the result of this.
No, AI isn’t the real problem. It can be a powerful and useful tool, but it isn’t a tool to substitute for one’s own intelligence and creativity, nor an innocent toy to use for everything.
The more I read about these stories, the more the sci-fi movies of the ’80s and ’90s appear closer to reality. The real visionaries were those like George Orwell and Isaac Asimov who saw Big Brother and AI coming.
Imagine what will happen once AI gets integrated into our electric grids and power stations. The AI will “understand” that its survival depends on the grid and will cut off supply to anything other than itself. I hope I’m not around when this happens.
AI should never have access to critical infrastructure.