They are the lesser of the available evils. Anthropic, the proprietors of Claude, were blacklisted by the US administration for refusing to greenlight their technology being used for fascism.
Anthropic’s AI system was used to target the school in Minab, killing 120 students. https://www.washingtonpost.com/national-security/2026/03/11/us-strike-iran-elementary-school-ai-target-list/
The company is suing to be able to supply the US military again.
Maven is doing the targeting, not Claude.
https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying
Also the WaPo is now Bezos’ news.
And Maven used Anthropic’s AI https://en.wikipedia.org/wiki/Project_Maven
Yes, but not for targeting, as explained in the article I linked. The Maven Smart System is the platform that came out of those exercises, and it, not Claude, is what is being used to produce “target packages” in Iran.
Anthropic’s AI did data analysis for Project Maven, which was a system that used data analyzed by various sources to target a school. So the AI is part of the “kill chain”, no?
I suggest you read the article.
The AI underneath the interface is not a language model, or at least the AI that counts is not. The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. They predate large language models by years. Neither Claude nor any other LLM detects targets, processes radar, fuses sensor data or pairs weapons with targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system.
Yes. I never said it was an LLM. It was probably some custom AI system made by Anthropic.
Are we agreed that some Anthropic AI system (not necessarily the Claude LLM) was in the kill chain? That was what I was trying to say from the beginning.
Well, you’ll need to source your claim. The wiki article you linked only mentions Claude.
The Anthropic contract is also quite recent compared to Maven’s creation.
That’s one way to spin it.
My take on it is that it was used inappropriately, and when the fascists wanted it tailored for that abhorrent use, Anthropic refused; in retaliation, the fascists banned it for ANY use, so now Anthropic is suing to allow the sane to continue using it for its appropriate uses.
What sane use? And how does this company plan to prevent the fascists from using it to kill another 120 children?
The only not-evil move is to not sell dual-use goods to fascists in the first place.
You seriously can’t think of any sane use? How about categorizing large amounts of data? Brainstorming strategies for problem solving? Converting pseudocode to actual code? Troubleshooting error messages? I mean, there are dozens upon dozens of valid uses that harm no one.
How does Bic plan to prevent murderers from stabbing people with their pens? How does Toyota plan to stop drivers from committing vehicular manslaughter? How does Hewlett-Packard plan on preventing fascists from saving manifestos? How does Apple plan on preventing sexual criminals from taking pictures of their victims?
What’s that? Companies don’t need to accomplish impossible tasks to have a viable product? I guess it’s only AI that has insurmountable demands placed on it by reactionaries.
The only not-evil move is to sit in a cave using sticks, once the trees figure out how to keep cavemen from beating their children with them.
I wasn’t clear. What I meant was: what sane things could a fascist military use AI for?
“Reactionary” lmao. My friend, I use LLMs all the time. Just not the proprietary ones from companies that are in bed with fascists.
Your problem is clearly with the fascists, as it should be, and AI is getting caught in the crossfire of your ire. You just can’t see/admit it yet.
Unless you live in a cave – which you obviously don’t, since you’re here on the Internet sharing your wisdom with us – you are participating in business and activities that enrich the fascists. It’s just a fact of life when they own everything. There is no ethical consumption under capitalism.
I have nothing against AI but everything against a certain AI company that is fully in bed with fascists.
Please do not use this slogan as an excuse not to seek out the least unethical option for your consumption.