Archive link: https://archive.is/MtWjq
OpenAI should be held accountable for this.
While using ChatGPT last June, Van Rootselaar described scenarios involving gun violence over the course of several days, according to people familiar with the matter.
Her posts, flagged by an automated review system, alarmed employees at OpenAI. Internally, about a dozen staffers debated whether to take action on Van Rootselaar’s posts. Some employees interpreted Van Rootselaar’s writings as an indication of potential real-world violence, and urged leaders to alert Canadian law enforcement about her behavior, the people familiar with the matter said.
They chose to store all that data and run analytics on it, and that's how they found this problematic interaction. They chose to invade people's privacy, but they don't want to be held accountable for what they might find. I'd prefer privacy over them monitoring everything. They brought this discussion (and the ethical problem of what should be sanctioned and what shouldn't) on themselves.
This is fair. Ideally, they shouldn't have access to any of these conversations. But since they do, and they could reasonably foresee that this would lead to real-world violence, they had an obligation to act.