Agree. For example, the number of times we correct our own speech before ‘releasing it’ is staggering. We have a ‘stochastic parrot’ mechanism built right into the heart of our own cognition, and it generates the same problems for us. ‘Hallucinations’ are built into a statistical model. It takes a lot of culture/rules and energy to constantly adjust (habituate) to the expectations of our environment, i.e. the ‘norm’. People who have fallen out of normal social environments know how difficult human interactions can be to learn or relearn.
Current LLMs don’t have the ability to make these micro-corrections on the fly or to habituate the corrected behavior through learning, culture, etc.
‘Context length’ also maps directly onto human cognitive load: chronic stress tends to shorten our ‘context length’, so we lose the overview in a split second and forget the simplest things. For an LLM, ‘context length’ is roughly equivalent to our ‘working memory’.
However, compensating systems are already being designed. Just as life/evolution did, these natural tendencies of statistical models will be corrected one by one by adding more ‘cognitive modules’ that modulate the internal generation and the final output…
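To make the ‘cognitive modules’ idea a bit more concrete, here is a minimal sketch of a generate-then-critique-then-revise loop. The `generate` function is a placeholder for any LLM call, and the module names (`critic`, `reviser`, `answer`) are illustrative assumptions, not an existing API.

```python
# Illustrative sketch only: `generate` stands in for any LLM call.
# The "modules" are a critic pass and a reviser pass that modulate the raw
# draft before it is released, loosely mirroring how we correct our own
# speech before speaking.

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call (hypothetical)."""
    return f"[model output for: {prompt}]"

def critic(draft: str, question: str) -> str:
    """Ask the model to list errors or unsupported claims in its own draft."""
    return generate(
        f"List any errors or unsupported claims in this answer to "
        f"'{question}':\n{draft}"
    )

def reviser(draft: str, critique: str) -> str:
    """Ask the model to rewrite the draft, addressing the critique."""
    return generate(
        f"Rewrite the answer below, fixing these issues:\n{critique}\n\n"
        f"Answer:\n{draft}"
    )

def answer(question: str, rounds: int = 2) -> str:
    draft = generate(question)           # fast, unfiltered first pass
    for _ in range(rounds):              # micro-corrections before "release"
        critique = critic(draft, question)
        draft = reviser(draft, critique)
    return draft

if __name__ == "__main__":
    print(answer("Why does chronic stress shrink working memory?"))
```

Whether such outer loops count as genuine ‘cognitive modules’ or just more statistics is, of course, exactly the open question.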
I know I can put together a prompt for any of today’s leading models and be essentially guaranteed a fresh perspective on the topic of interest.
I’ll never again ask a human to write a computer program shorter than about a thousand lines, since an LLM will do it better.
I can agree with some of the parts about how some humans can be really annoying, but this mostly reads like AI propaganda from someone who has deluded themselves into believing an LLM is actually any good at critical thought and context awareness.
This article was written by someone who apparently can’t switch from the “Fast” to the “Thinking” mode.




