Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.
The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.
Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”


Precisely. From Cory Doctorow’s latest, very insightful essay on AI, where he talks about the promise of AI replacing 9 out of 10 radiologists:
I don’t think it’s fair to compare LLM code generation to machine vision in this way. These are very different “AI”s. Not necessarily disagreeing with Doctorow, but this is an important distinction.
How the machines work does not matter. The situation is using a machine to replace human expertise while ensuring a human still takes responsibility for outcomes that human no longer controls. It is not the owning class who is at risk for their machines’ mistakes; it is the owning class’s wage slaves who are at risk.
My understanding is that tumor-detecting machine vision is generally thought useful in addition to the radiologist’s expertise. It basically outputs “yes”, “maybe”, and “no”, which is more expertise-respecting than generating approximately-right code, which the coder now has to validate.
This is why I wouldn’t equate these tools. LLM code generation is marketed to do much more than machine vision for tumor detection.
Cory Doctorow actually goes more in depth on the radiologist example in a post from last year:
In short, we definitely could (and indeed should) be using tools like tumor-detecting machine vision as something that helps humans build a better world for humans. But we’ve seen time and time again, across countless fields, that it never works out that way.
That’s because this isn’t a problem with the technology of AI, but with the fucked up sociotechnical and economic systems that govern how this tech is used, who gets to use it, who it gets used on, whose consent is required for those uses, and most significant of all: who gets to profit?
The kind of AI doesn’t matter in this situation. Hell, it could be a magic talking rock™ and it would change nothing about mismanagement using a person to avoid blaming its shiny and expensive new toy.
“this is an important distinction”
it really isn’t