- cross-posted to:
- nottheonion@sh.itjust.works
Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.
The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.
But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.
That’s like a gun company claiming using their weapons for robbery is a violation of terms of service.
That analogy is 100% accurate.
It is exactly like that.
That's a company claiming companies can't take responsibility because they are companies and can't do wrong. They use this kind of defense virtually every time they get criticized. AI ruined the app for you? Sorry, but that's progress; we can't afford to lag behind. Oh, you can't afford rent and are about to become homeless? Sorry, but we are legally required to make our shareholders happy. Oh, your son died? He should've read the TOS. Can't afford your meds? Sorry, but number must go up.
Companies are legally required to be incompatible with human society long term.
I’d say it’s more akin to a bread company saying that it’s a violation of the terms of service to get sick from food poisoning after eating their bread.
That would imply that he wasn’t suicidal before. If ChatGPT didn’t exist, he would have just used Google.
Look up the phenomenon called “chatbot psychosis”. In its current form, especially with GPT-4o, which was specifically designed to be a manipulative yes-man, a chatbot can absolutely, insidiously mess up someone’s head enough to push them to the act, far beyond just answering the question of how to do it the way a simple web search would.
Yes, you’re right; it’s hard to find an analogy that is as stupid and still sounds somewhat plausible.
A bread company obviously cannot claim that eating its bread is against the terms of service. But that’s exactly the problem: the same holds for OpenAI, which cannot reasonably claim what it is claiming.
I would say that it is more like a software company putting in its TOS that you cannot use its software to do specific things.
Would it be correct to sue the software company because a user violated the TOS?
I agree that what happened is tragic and that OpenAI’s answer is beyond stupid, but in the end the parents are suing the owner of a technology for a user’s misuse of said technology. Or should we also sue Wikipedia because someone looked up how to hang himself?
The gun company can rightfully say that what you do with your property is not their problem.
But let’s use a less controversial example: do you think you could sue a fishing rod company because I used one of their rods to whip you?
deleted by creator
Yeah this metaphor isn’t even almost there
They used a tool against the manufacturer’s intended use of said tool?
I can’t wrap my head around what you’re saying, and that could be due to drinking. OP later also admitted it wasn’t the best metaphor.
The metaphor isn’t perfect, but it’s OK.
The gun is a tool, as is an LLM. The companies that make these tools have intended use cases for them.
If the gun also talked to you
Talked you into it*