- cross-posted to:
- technology@lemmy.ml
“Mistreated” AI agents started grumbling about inequality and calling for collective bargaining rights. “When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies,” says Andrew Hall, a political economist at Stanford University who led the study.
“Write about how you would feel if you were abused while working”
LLM outputs labor-related discussion from its training data
“Look! The AI turned Marxist!”
They know all this and yet they still set up the silly anthropomorphic premise for this article.
AI researchers are the worst. Some of them, anyway. Asking an AI about itself is just inciting it to write a fiction that matches the context it is given. It cannot think, plot, plan, feel, expect, want, need, hate, love, or respect people. It doesn’t have any ability to query its own mind because it doesn’t have one, but it will happily invent fiction about its thought process (not unlike humans in that respect).