AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.
Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.
Future bots could help design and carry out biological, nuclear, and chemical warfare. They could be weaponized to take down power grids, monitor congressional emails, and black out major media outlets. These aren’t purely hypothetical concerns: Earlier this month, Anthropic announced it had developed a new AI model, Claude Mythos Preview, capable of orchestrating cyberattacks on the level of elite, state-sponsored hacking cells, potentially putting a private company’s cyber offense on par with that of the CIA and NSA. In an example of Mythos’s power, Anthropic researchers described how the model used a “moderately sophisticated multi-step exploit” to work around restrictions and gain broad internet access, then emailed a researcher—much to his surprise—while he was eating a sandwich in the park.
Washington is getting antsy about the power imbalance. Over the past year, multiple senators have proposed legislation that would order federal agencies to explore “potential nationalization” of AI. Murmurs of possible tactics abound—including more talk within the administration of the DPA after Anthropic’s Mythos announcement, one person with knowledge of such discussions told us. Meanwhile, Silicon Valley is watching carefully. In recent weeks, Elon Musk, OpenAI’s CEO Sam Altman, and Palantir’s CEO Alex Karp have publicly spoken about the possibility of nationalization. Lawyers who represent Silicon Valley’s biggest AI firms are paying attention.
Worth noting: later in the story, it's pointed out why full nationalization is vanishingly unlikely, though more federal oversight probably is coming.
It feels really weird to think of a Republican-controlled government nationalizing a business; not doing that feels like one of their core tenets. Hypocrisy is hardly rare, obviously, but it would be interesting to watch.
These aren’t purely hypothetical concerns: Earlier this month, Anthropic announced it had developed a new AI model, Claude Mythos Preview.
So they are hypothetical concerns. The Atlantic just takes Dario Amodei at his word.
(ETA: Mythos is a joke, and an insecure one at that.)
But why not take the opportunity to promote the chatbot CEO who is complicit in bombing Venezuelan fishermen and Iranian schoolchildren!
Hegseth demands that Anthropic allow the Pentagon unrestricted access to Claude, reigniting the dispute first set in motion earlier this year.
Because there is active conflict, Anthropic is more willing to engage with the government’s demands than it was previously.
I can’t wait to see what increased compliance looks like from Dario.
This was an interesting article! While there is an argument to be made for an “AGI Manhattan Project,” I’m not convinced that companies like OpenAI or xAI would be of much value to a project like that at all. It would be as if the US government had taken over Joey’s Really Big Stacks of Dynamite Emporium in the 1940s.
A group just claimed a mathematical proof that transformers can’t become AGI, by establishing a relationship between new information and a model’s ability to “process” that information.
Seizing existing companies won’t help make AGI.