As the U.S.-Israeli war on Iran continues, we look at how the Pentagon is using artificial intelligence in its operations. The system, known as Project Maven, relies on technology from Palantir and incorporates Claude, the AI model built by Anthropic. Israel has used similar AI targeting programs in Iran, as well as in Gaza and Lebanon.
Craig Jones, an expert on modern warfare, says AI technology is helping militaries speed up the “kill chain,” the process of identifying, approving and striking targets. “You’re reducing a massive human workload of tens of thousands of hours into seconds and minutes. You’re reducing workflows, and you’re automating human-made targeting decisions in ways which open up all kinds of problematic legal, ethical and political questions,” says Jones.



There are just so many things to be said about the ills of AI, but one of them is that it is, very purposefully, a liability-laundering machine. The decisions and thought process are black-boxed and unauditable. We’ve been trained to dismiss any oopsies as inevitable, both as a byproduct of a system that’s still “rapidly developing” and as something just inherent to the technology. Absolutely none of this is acceptable, and yet here we are.
But how does it change liability? Isn’t the person who decides to run this system ultimately responsible for its effects?
Yes.
One would think so, but apparently not.
Everyone in the chain is liable.
This is the same Nazi argument: “I only transported the Jews, I only watched them, I only built the concentration camps,” and so on.