Because of the way they are trained, large language models capture only a slice of human language. They’re trained on the written word, from textbooks to social media posts, and our speech as captured in movies and on television. These models have minimal access to the unscripted conversations we have face to face or voice to voice. This is the vast majority of speech, and a vital component of human culture.
There’s a risk to this. The increased use of large language models means we will encounter much more AI-generated text, and we will, in turn, begin to adopt the linguistic patterns and behaviors of these models. This will affect not just how we communicate with one another, but also how we think about ourselves and the world around us. Our sense of the world may become distorted in ways we have barely begun to comprehend.
This will happen in many ways. One of the first effects we could see is in simple expression, much as texting and social media have led us to use shorter sentences, emojis instead of words, and much less punctuation. But with AI, the impacts may be more harmful, eroding courteousness and encouraging us to talk like bosses barking orders. A 2022 study found that children in households that used voice commands with tools like Siri and Alexa became curt when speaking with humans, often calling out “Hey, do X” and expecting obedience, especially from anyone whose voice resembled the default female electronic voices. As we start to prompt chatbots and AI agents with more and more instructions, we may fall into the same habits.



I think the text leaves out the worst parts: unstated assumptions, decontextualisation, faulty reasoning, focusing on individual words instead of what they mean, and the like. That is, problems with the part of comprehension that depends on logic rather than on language proficiency.
All of those were already problems before chatbots. But since chatbot output is really bad at exactly those things, I think increased exposure to chatbots might make them worse.