If a person is going to be blamed, it should be the one that mandated use of the AI systems… Because that’s exactly what Amazon was doing.
Talk about an extra slap in the fuckin face… getting blamed for something your replacement did. Cool.
That’s in the SOP for management.
True. In this case, these poor saps being tricked into “training” these AI to eventually render their jobs obsolete.
Yes. “obsolete” in that Amazon doesn’t give a shit about reliability anymore, so an AI reliability engineer is fine, now. Haha.
described the outages as “small but entirely foreseeable.”
LMAO
Would said employees have voluntarily used the agent if Amazon didn’t demand it? If no, this isn’t on them. They shouldn’t be responsible for forced use of unvetted tools.
Yay! Extra mental load of having to ask the AI “correctly” and then keep up one’s skills to be able to review the AI’s work! Extra bonus for being blamed for letting anything slip past.
At least the junior that fucked up will learn something from the experience and can buy a round of beers (if the junior is paid well enough, otherwise the seniors have to buy the junior a beer while talking it out).
I’m reminded of a time I was in a bar in Georgia at a conference. It was in the hotel, and a high-ranking editor for the then-reputable Washington Post bought me a beer. He let me take a sip before launching into how much “immature shit [I] need to get out of [my] system” before being ready to be “Post material.”
Where is any industry going to be in a decade, when no one’s been mentored?
AI is working great!
It’s working great at convincing moronic executives to leave Windows once it fucks up majorly thanks to AI-generated code, which is a win for everyone.
I mean, I’ll applaud any push toward Linux.
Well, AI code should be reviewed prior to merging into master, same as any other code merged into master.
We have git for a reason.
So I would definitely say this was a human fault: either the reviewer’s, or that of whoever decided that no review process (or an AI-driven one) was needed.
If I were managing DevOps, I would demand that AI code be signed off by a human at commit time, with that human taking responsibility and expected to have reviewed the AI’s changes before pushing.
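Hypothetically, that gate could be as simple as a required CI check that refuses any pushed commit lacking a Signed-off-by: trailer (the kind `git commit -s` adds). A minimal sketch in Python, assuming it runs in CI; the commit range and the trailer convention here are illustrative, not anyone’s actual setup:

```python
#!/usr/bin/env python3
"""Sketch of a CI gate: fail the build if any pushed commit is
missing a human Signed-off-by: trailer. The range below is an
assumption; a real CI job would take base..head from its environment."""

import subprocess
import sys

RANGE = "origin/master..HEAD"  # illustrative: commits being pushed

def commits_in(range_spec: str) -> list[str]:
    # List the SHAs of every commit in the pushed range.
    out = subprocess.run(
        ["git", "rev-list", range_spec],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()

def has_signoff(sha: str) -> bool:
    # Ask git for just the Signed-off-by trailers of this commit.
    trailers = subprocess.run(
        ["git", "show", "-s", "--format=%(trailers:key=Signed-off-by)", sha],
        capture_output=True, text=True, check=True,
    ).stdout
    return "Signed-off-by:" in trailers

def main() -> int:
    missing = [sha for sha in commits_in(RANGE) if not has_signoff(sha)]
    for sha in missing:
        print(f"commit {sha[:12]} has no Signed-off-by trailer", file=sys.stderr)
    return 1 if missing else 0

if __name__ == "__main__":
    sys.exit(main())
```

Wire that in as a required status check on master and nothing merges without a human name attached to it, which is the whole point: someone accountable on every commit, AI-written or not.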
And you would get burned. Today’s AI does one thing really really well - create output that looks correct to humans.
You are correct that mandatory review is our best hope.
Unfortunately, the studies are showing we’re fucked anyway.
Because whether the AI output is right or wrong, it is highly likely to at least look correct, since producing correct-looking output is where what we call “AI” today shines.
Realistically what happens is the code review is done under time pressure and not very thoroughly.
AI can never fail, it can only be failed