Archive link: https://archive.is/MtWjq
OpenAI should be held accountable for this.
While using ChatGPT last June, Van Rootselaar described scenarios involving gun violence over the course of several days, according to people familiar with the matter.
Her posts, flagged by an automated review system, alarmed employees at OpenAI. Internally, about a dozen staffers debated whether to take action on Van Rootselaar’s posts. Some employees interpreted Van Rootselaar’s writings as an indication of potential real-world violence, and urged leaders to alert Canadian law enforcement about her behavior, the people familiar with the matter said.
So … you want privacy, or a police state?
OpenAI is not a private LLM. You'd have to use Lumo, self-host, etc.
It's like posting on Facebook: people will see it, whether that's an employee or the public.
I want OpenAI to be held accountable, don’t you?
So … you want to do what exactly?
Monitor every single interaction and police them?
How do you decide what's an actionable conversation? Whose laws apply? What's allowed and what isn't?
They chose to store all that data, run analytics on it, and found this problematic interaction. They chose to invade people's privacy but don't want to be held accountable for the things they might find. I'd prefer privacy over them monitoring everything. They brought this discussion (and the ethical problem of what should be sanctioned and what shouldn't) upon themselves.
This is fair. Ideally, they shouldn't have access to any of these conversations. But since they do, and since they could reasonably foresee that this would lead to real-world violence, they had an obligation to act.
I want privacy, but the original question is irrelevant in response to an article about a situation where privacy did not exist. OpenAI is providing a product with surveillance baked into it, as this article's very existence makes obvious. They chose to actively make themselves aware of people using their service in ways they deemed problematic. This is likely one of many cases in which they came into possession of information suggesting that real-life harm by one of their users was imminent, which incurred a responsibility: a moral one, in my opinion, and, I'm guessing, a legal one. They abdicated that responsibility.
Your questions are interesting, and you and I have likely arrived at similar answers to them; however, they're entirely irrelevant to this specific situation, in which they've already been answered within its context.
So you don’t want to hold OpenAI and Sam Altman accountable. Got it, thanks.