Here’s one theory. According to critics, it benefits AI companies to keep you fixated on apocalypse because it distracts from the very real damage they’re already doing to the world.
I don’t think that’s really it.
I think they make these grandiose claims just to hype their product up for investors, so people won’t focus on how unreliable and inaccurate these LLMs are.
There’s an entire cottage industry around “AI Safety”, and it’s entirely accurate to say it focuses only on the apocalyptic, to the detriment of the real.
They've even been caught on camera distracting politicians...
Yeah, it’s so people think the AI companies are seeing the next, not-yet-public versions and are scared: they must be so powerful, right?
Altman has been claiming ChatGPT made him feel dumb since 4.5.
perfectly believable tbh
I think both statements are true at the same time
Why not both?
https://en.wikipedia.org/wiki/AI_Safety_Summit_2023