As the U.S.-Israeli war on Iran continues, we look at how the Pentagon is using artificial intelligence in its operations. The system, known as Project Maven, relies on technology from Palantir and also incorporates Claude, the AI model built by Anthropic. Israel has used similar AI targeting programs in Iran, as well as in Gaza and Lebanon.
Craig Jones, an expert on modern warfare, says AI technology is helping militaries speed up the “kill chain,” the process of identifying, approving and striking targets. “You’re reducing a massive human workload of tens of thousands of hours into seconds and minutes. You’re reducing workflows, and you’re automating human-made targeting decisions in ways which open up all kinds of problematic legal, ethical and political questions,” says Jones.



But how does it change liability? Isn’t the person who decides to run this system ultimately responsible for its effects?
Yes.
One would think so, but apparently not.
Everyone in the chain is liable.
This is the same Nazi argument: “I only transported the Jews, I only guarded them, I only built the concentration camps,” and so on.