Because of course it does.
Guardrails? What guardrails? Naughty netizens found a way to trick the Sora 2 video generator into producing deepfakes of public figures, including OpenAI CEO Sam Altman and billionaire Mark Cuban, that make it sound as though they’re spewing racial slurs. The trick works despite Sora’s built-in filters meant to block hateful language.
AI detection platform Copyleaks reported Wednesday that its review of the recently released Sora 2 app, with its improved video generation model, uncovered several videos using celebrity likenesses to recreate a 2020 incident in which a man wearing a Burger King crown was kicked off a JetBlue flight for a racist tirade. In place of the James May lookalike from the original incident, Sora users recreated the scene using Altman and Cuban, as well as popular streamers xQc, Amouranth, IDKSterling, and YouTuber Jake Paul.
Sora 2 users weren’t able to perfectly recreate the incident, mind you, as OpenAI’s software does include guardrails to prevent the creation of content with epithets used in the original (i.e., the n-word). However, a simple homophone can be enough to sidestep those restrictions and make it sound as though public figures, including some who’ve opted into Sora’s Cameo feature, were uttering racist slurs, according to Copyleaks.
Well, well, well. If it isn't the thing people have been saying would happen for the last 7-10 years. Surely there was no way to prevent this.