Outside OpenAI’s headquarters, a handful of people gathered on Monday holding pieces of colorful chalk. They got down on their knees and started writing messages on the sidewalk. Stand for liberty. Please no legal mass surveillance. Change the contract please.
At issue was a business deal that the company recently signed with the Department of Defense, following the Pentagon’s sudden turn against Anthropic. OpenAI will now supply its technology to the military for use in classified settings, the sorts that may involve wartime decisions and intelligence-gathering—an agreement, many legal experts told me, that could give the government wide-ranging powers. “I would just really like to see OpenAI do the right thing and stand up for something, anything,” Niki Dupuis, an AI-start-up founder and one of the chalk protesters, told me.
In a widely leaked internal memo that Sam Altman sent last Thursday night, a copy of which I obtained, the OpenAI CEO said that he would seek “red lines” to prevent the Pentagon from using OpenAI products for mass domestic surveillance and autonomous lethal weapons. These were ostensibly the very same limits that Anthropic had demanded and that had infuriated the Pentagon, leading Defense Secretary Pete Hegseth to declare the company a supply-chain risk—a hefty sanction that would require anybody who sells to the Pentagon to stop using Anthropic products in their work with the military. Perhaps OpenAI was about to secure the very terms Anthropic had been denied.
