Just to clarify: this isn't my Substack. I'm just sharing it because I found it insightful.
The author describes himself as a “fractional CTO” (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):
I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.
I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.
Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.


FYI this article is written with an LLM.
Don’t believe a story just because it confirms your view!
Lol the irony… You're doing literally the exact same thing by trusting that site because it confirms your view.
I’ve heard that these tools aren’t 100% accurate, but your last point is valid.
GPTZero is 99% accurate.
https://gptzero.me/news/gptzero-accuracy-stats/
I mean… has anyone other than the company that made the tool said so? Like from a third party? I don’t trust that they’re not just advertising.
The answer to that is literally in the first sentence of the body of the article I linked to.
So an AI detection tool says the article about how crappy AI is at coding has a 99 percent chance of being AI. Results generated by AI…
I agree, but look at that third paragraph: it has the dash that nobody ever uses. Telltale sign right there.
Sure, but plenty of journalists use the em-dash. That's where LLMs got it from originally. On its own it's not a signature of LLM use in journalistic articles (I'm not calling this CTO guy a journalist, to be clear).
Context is everything. In publishing it's standard; in online forums it's either needlessly pretentious or AI, and either way it deserves to be called out.
Aren’t these LLM detectors super inaccurate?
@LiveLM@lemmy.zip @rimu@piefed.social
This!
Also, the irony: those are AI tools used by anti-AI people, who use AI to try and (roughly) determine whether content is AI by reading the output of an AI. Even worse: as far as I know, they're paid tools (at least every tool I've seen in this space required a subscription), so anti-AI people pay for an AI in order to (supposedly) detect AI slop. Truly “AI-rony”, pun intended.
https://gptzero.me/ is free; give it a try. Generate some slop in ChatGPT and copy and paste it in.
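If you'd rather script it than paste into the web UI, GPTZero also offers an API. Here's a minimal Python sketch; note that the endpoint URL, the x-api-key header, and the response field name below are assumptions based on older API docs, so verify them against GPTZero's current documentation before relying on this.

```python
# Minimal sketch of checking text against GPTZero's API.
# Assumptions (verify against GPTZero's current docs): the v2
# text-prediction endpoint, the x-api-key header, and the response
# field names are taken from older docs and may have changed.
import requests

API_KEY = "your-gptzero-api-key"  # placeholder; created in your GPTZero account
ENDPOINT = "https://api.gptzero.me/v2/predict/text"  # assumed endpoint

def detect_ai(text: str) -> float:
    """Return GPTZero's probability that `text` is entirely AI-generated."""
    resp = requests.post(
        ENDPOINT,
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        json={"document": text},
        timeout=30,
    )
    resp.raise_for_status()
    doc = resp.json()["documents"][0]
    # Field name assumed from older API responses; newer versions may differ.
    return doc["completely_generated_prob"]

if __name__ == "__main__":
    sample = "Paste some ChatGPT output here to see what the detector says."
    print(f"P(AI-generated) = {detect_ai(sample):.2f}")
```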
@rimu@piefed.social @technology@lemmy.world
Thanks, didn't know about that one. It seems interesting, but limited, according to their “Pricing” page (every time a tool has a “Pricing” menu item, betcha it'll either be anything but gratis or extremely limited in its free tier). I created an account and I'll soon try it with some of the occult poetry I usually write. I'm ND, so I'm fully aware of how my texts often sound like AI slop.
I’ve tested lots and lots of different ones. GPTZero is really good.
If you read the article again, with a critical perspective, I think it will be obvious.
Yes, but also the opposite. Don’t discount a valid point just because it was formulated using an LLM.
The story was invented so people would subscribe to his Substack, which exists to promote his company.
We’re being manipulated into sharing made-up rage-bait in order to put money in his pocket.