They only mention Claude, so where is the source that “some custom AI system made by Anthropic”, not an LLM, “was in the kill chain”?
I mean, I get that you want to tie Anthropic to this, and I don’t like them either, but we should stay factual and avoid filling the gaps with “probably”. It’s also counterproductive: Maven and Palantir are huge menaces, and this shifts blame away from them.
You’re the one saying it’s not the Claude LLM doing the targeting. Your source is that Guardian article you linked.
I don’t care if it’s an LLM or some other thing made by Anthropic. Anthropic is involved in this. All the sources in this conversation so far indicate so. Or are you trying to argue that they are just supplying Palantir and Project Maven for wholly innocent purposes?
Pointing out Anthropic’s involvement in the killing of 120 students does not in any way shift blame away from Palantir and Maven. Of course there are information gaps about exactly how the AI was involved; no remotely competent military would make all of this information public.
I’m just saying that, as far as we know, the Anthropic contract is about Claude, and the targeting is not done by an LLM.
Okay fair enough.
Since Maven’s entire business is data analysis and targeting, can we agree that if the AI is not being used for targeting, it is being used to analyze data? And that analyzed data gets fed into the targeting system, making the AI part of the kill chain?
What kind of data is being analyzed by AI? How much of it feeds into the targeting system? I concede that I don’t know and have no source; the US military would have to be really stupid to make this information public.
There is nothing that indicates that Anthropic’s AI is used to analyze data. I’m not saying it isn’t, just that we don’t know. I’m going to quote a smaller section of a quote I made earlier from the same Guardian article:
In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English.
But the term “AI” is an issue here: there are multiple AIs, of different kinds, made by different companies. There is AI used for targeting, no doubt, but it’s not Claude; it’s Maven and some other subcomponents. The fact that Anthropic joined the project late, after it was already operational, is a good hint that they do not provide a core feature, but that’s only speculation.
Okay. I guess we at least agree on the facts.
You are giving the company a huge amount of benefit of the doubt, and I don’t understand why. May I ask: if it were Elon Musk’s xAI/Grok rather than Anthropic, would your thoughts on this change? How about if it were Yandex making the AI and the school were in Ukraine?
It wouldn’t change anything, and I’m confused as to why you think it would, and why you think I’m giving “a huge amount of benefit of the doubt”.
I’m just pointing at what we know, what we don’t know, and what you are just making up.
Facts:
My conclusion: Anthropic’s AI is in the US military’s kill chain which killed 120 children.
Your conclusion: The LLM did not directly target the school. We don’t know how it was used. It was also not there from the beginning, so it was probably not part of the “core system.”
That’s not my conclusion; that’s mostly just coming from the Guardian article. I say mostly because you’re missing one part: we know how the LLM is used.
That’s why I’m asking you to source your “conclusion”.