There is nothing that indicates that Anthropic’s AI is used to analyze data. I’m not saying it isn’t, just that we don’t know. I’m going to quote a smaller section of a quote I made earlier from the same Guardian article:

In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English.
But the term “AI” is an issue here: there are multiple AI systems, of different kinds, made by different companies. There is AI used for targeting, no doubt, but it’s not Claude; it’s Maven and some other subcomponents. The fact that Anthropic joined the project late, after it was already operational, is a good hint that it does not provide a core feature, but that’s only speculation.
Okay. I guess we at least agree on the facts.
You are giving the company a huge amount of benefit of the doubt, and I don’t understand why. May I ask: if it were Elon Musk’s xAI/Grok rather than Anthropic, would your thoughts on this change? How about if it were Yandex making the AI and the school were in Ukraine?
It wouldn’t change anything, and I’m confused as to why you think it would, and why you think I’m “giving a huge amount of benefit of the doubt”.
I’m just pointing out what we know, what we don’t know, and what you are simply making up.
Facts:
My conclusion: Anthropic’s AI is in the US military’s kill chain, which killed 120 children.
Your conclusion: the LLM did not directly target the school; we don’t know how it was used; it also wasn’t there from the beginning, so probably not part of the “core system”.
That’s not my conclusion; that’s mostly just coming from the Guardian article. I say mostly because you’re missing one part: we do know how the LLM is used.
That’s why I’m asking you to source your “conclusion”.