Archive link: https://archive.is/MtWjq
OpenAI should be held accountable for this.
While using ChatGPT last June, Van Rootselaar described scenarios involving gun violence over the course of several days, according to people familiar with the matter.
Her posts, flagged by an automated review system, alarmed employees at OpenAI. Internally, about a dozen staffers debated whether to take action on Van Rootselaar’s posts. Some employees interpreted Van Rootselaar’s writings as an indication of potential real-world violence, and urged leaders to alert Canadian law enforcement about her behavior, the people familiar with the matter said.


I want privacy, but the original question is irrelevant in response to an article about a situation in which privacy did not exist. OpenAI is providing a product with surveillance baked into it, which is obvious by virtue of this article's existence. They chose to actively make themselves aware of people using their service in ways they deemed to be a problem. This is likely one of many instances in which they came into possession of information suggesting that real-life harm by one of their users was imminent, which incurred a responsibility, morally in my opinion and, I'm guessing, legally. They shirked that responsibility.
Your questions are interesting, and you and I have likely arrived at similar answers to them. However, they're irrelevant to this specific situation, in which they've already been answered by its context.