OpenAI has blocked users from making celebrity deepfakes on its new Sora app. The move follows widespread concern about the misuse of AI to create fake videos of famous people. The issue gained attention when Breaking Bad actor Bryan Cranston found videos online that used his face and voice without his permission. He complained to SAG-AFTRA, the Hollywood union representing actors and media professionals.
Soon after, OpenAI said it would strengthen its safety systems and stop people from using Sora for such fake content.
Bryan Cranston & SAG-AFTRA Raise Alarm Over AI Deepfakes
Bryan Cranston said he was worried not only for himself but for all artists who could be copied without their consent.
He thanked OpenAI for updating its policy and asked other tech companies to do the same. SAG-AFTRA joined him in raising the issue and teamed up with major talent agencies such as United Talent Agency, Creative Artists Agency, and the Association of Talent Agents.
Together, they are asking AI companies to protect people’s identities and prevent the misuse of their faces and voices in fake videos.
Families of late stars such as Robin Williams and George Carlin have also contacted OpenAI after seeing deepfake videos of their loved ones. These complaints pushed OpenAI to act quickly.
OpenAI Strengthens Sora Rules After Backlash Over Deepfake Videos
OpenAI has now introduced a strict opt-in policy, meaning the Sora app can use a person's voice or face only if that person has given permission.
The company said it regrets any "unintentional generations" and is working to prevent fake or harmful content.
The backlash started when users made offensive and racist videos using the likeness of Martin Luther King Jr.
After that, OpenAI added new tools that allow families or representatives of public figures to request a complete ban on using their image.
The Sora app is currently invite-only on iOS, with Android access coming soon.