I think people are a bit too eager to swallow bullshit in general, as long as it's spoken/written/gestured in a confident tone. And they often deal with uncertainty poorly; when others show doubt, they tend to disregard either the info or the doubt itself.
This likely predates Big Tech. I do agree with you, though: Big Tech is actively encouraging this behaviour, because it's easier to sell goods, services and ideas to a gullible person than to a sensible one.
And when it comes to LLMs, Big Tech is playing a double game: it claims "the info might be inaccurate, be careful!" while tuning its models to use the very confident tone that fools people into believing bullshit. The people in Big Tech know that if the general population becomes sceptical of LLM output, most of its appeal as a new technology is gone; you can't use it for any task that needs any sort of reliability.