

It’s tricky because, as in so many other things, nuance gets weaponized against the person using it.
A politician presents a carefully considered position? An opponent declares it’s impossible to know where they stand.
A broadly harmful thing has some potential value if we just pull back on the harmful part? Those who are all-in will seize on your acknowledgement of specific value as broad endorsement.
On the AI front, if OpenAI and xAI fold up, Anthropic gets a big dose of humility, and business leaders finally get a sense of what it can’t do, there’s a chance for healthy and useful adoption. Right now the nuance isn’t as valuable because it advocates for a scale that no one would be objecting to anyway.







Nah, the producers of human slop are ecstatic, because now they can just prompt up their slop and post something for engagement; before, they had to put in at least a modicum of effort to make it. Making human slop used to take at least as long as a human would take to view it. Now they can produce output with even less effort than the viewer wastes seeing it.
The slop flood gates are open.