In a small room in San Diego last week, a man in a black leather jacket explained to me how to save the world from destruction by AI. Max Tegmark, a notable figure in the AI-safety movement, believes that “artificial general intelligence,” or AGI, could precipitate the end of human life. I was in town for NeurIPS, one of the largest AI-research conferences, and Tegmark had invited me, along with five other journalists, to a briefing on an AI-safety index that he would release the next day, grading the leading AI companies on their safety practices. No company scored better than a C+.

The threat of technological superintelligence is the stuff of science fiction, yet it has become a topic of serious discussion in the past few years. Despite the lack of a clear definition (even OpenAI CEO Sam Altman has called AGI a “weakly defined term”), the idea that powerful AI poses an inherent threat to humanity has gained acceptance among respected cultural critics.

Granted, generative AI is a powerful technology that has already had a massive impact on our work and culture. But the prospect of superintelligence has become one of several questionable narratives promoted by the AI industry, along with the ideas that AI learns like a human, that it has “emergent” capabilities, that “reasoning models” are actually reasoning, and that the technology will eventually improve itself.

I traveled to NeurIPS, held at the waterfront fortress that is the San Diego Convention Center, partly to understand how seriously these narratives are taken within the AI industry. Do AGI aspirations guide research and product development? When I asked Tegmark about this, he told me that the major AI companies were sincerely trying to build AGI, but his reasoning was unconvincing. “I know their founders,” he said. “And they’ve said so publicly.”