What a surprise. It makes me think of US women treating East Asian women who married Western men like whores, victims, or gold-diggers. Many people think about the world locally, not internationally. They apply their own ways of thinking to the rest of the world and have trouble understanding that what is normal for me isn’t normal for you (and vice versa). They have trouble understanding that things aren’t always as they seem, and that a snap judgement about a person, based on a short slice of their life that you may well misread, does not a person make.
The machine, in its quest to sound authoritative, ended up sounding like a KCPE graduate who scored an ‘A’ in English Composition. It accidentally replicated the linguistic ghost of the British Empire.
Combined with how the academic community has been warning about encoding biases since well before the current hype cycle, this sentence is mildly horrifying.
How unfortunate to speak ‘AI’ when people are currently scrambling to build ‘AI detectors’ that remove AI-speak from their feeds. Good luck buddy 😵
What I find most unfortunate is that these scam companies convinced people you can build AI-speech detectors in the first place. The reason LLMs structure text a certain way is that these are the patterns in the human text they were trained on.


