I like the idea that they think that educated jobs only belong to women. That’s an interesting thought.
The sad truth is that this shit isn’t going to replace “highly educated” jobs, and that the AI gravy train will end once people start enforcing basic intellectual property rights. Time is ticking, and the market taking a hit now is making them scramble.
Given that copy desks were being gutted more than a decade ago just with little things like Grammarly, it absolutely will replace knowledge jobs. It won’t be better, but it will mean more share buybacks.
It doesn’t need to be good to replace jobs, as long as there are no consequences for the people making those decisions.
I’ve lost count of how many “oops, it was AI’s fault, not my fault!” stories I’ve heard, even within highly regulated fields. Like, lawyers submitting documents with completely fake citations, and then…no real consequences. Seems to me like that should be cause for immediate disbarment, but no, apparently not.
The lack of consequences has been a problem for quite a while now, from before LLMs. In my opinion it’s been caused by a widespread increase in professional incompetence, together with a mutually protective network of incompetent people. “I won’t point out that you’re incompetent and won’t blame you for your mistakes, if you do me the same favour”.
They call it “imposter syndrome”, but it isn’t a syndrome: it’s a symptom.
This roughly mirrors my experience in corporate America.
Indeed: Everything was already AI
This has been a very long project — separating conduct from consequences, in order to maximize profit. AI is just a breakthrough tool for doing it.