Archive link: https://archive.is/MtWjq
OpenAI should be held accountable for this.
While using ChatGPT last June, Van Rootselaar described scenarios involving gun violence over the course of several days, according to people familiar with the matter.
Her posts, flagged by an automated review system, alarmed employees at OpenAI. Internally, about a dozen staffers debated whether to take action on Van Rootselaar’s posts. Some employees interpreted Van Rootselaar’s writings as an indication of potential real-world violence, and urged leaders to alert Canadian law enforcement about her behavior, the people familiar with the matter said.
This is fair. Ideally, they shouldn’t have access to any of these conversations. But since they do, and since they could reasonably foresee that this would lead to real-world violence, they had an obligation to act.