Videos created with OpenAI’s new app Sora show how easily public perceptions can be manipulated by tools that can produce an alternate reality with a series of simple prompts. In the two months since Sora arrived, deceptive videos have surged on X, YouTube, Facebook and Instagram, according to experts who track them. The deluge has raised alarm over a new generation of disinformation and fakes.

Most of the major social media companies have policies that require disclosure of AI use and broadly prohibit content intended to deceive. But those guardrails have proved woefully inadequate for the kind of technological leaps OpenAI’s tools represent.

While many videos are silly memes or fake images of babies and pets, others are meant to stoke the kind of vitriol that often characterises political debate. They have already figured in foreign influence operations. Researchers who have tracked deceptive uses said the onus was now on companies to do more to ensure people know what is real and what isn’t.

“Could they do better in content moderation for mis- and disinformation? Yes, they’re clearly not doing that,” said Sam Gregory, executive director of Witness, a human rights organisation focused on the threats of technology. “Could they do better in proactively looking for AI-generated information and labelling it? The answer is yes, as well.”

The companies behind the AI tools say they are trying to make clear to users what content is generated by computers. Sora and the rival tool offered by Google, called Veo, both embed a visible watermark onto the videos they produce. Sora, for example, puts a “Sora” label on each video. Both companies also include invisible metadata, which can be read by a computer, that establishes the origin of each fake. The idea is to inform people that what they are seeing is not real and to give the platforms that feature the videos the digital signals to automatically detect them. (A sketch of how software can read such metadata appears at the end of this article.)

Some platforms are using that technology. YouTube uses Sora’s invisible watermark to append a small label indicating that the AI videos were “altered or synthetic.” “Viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic,” said Jack Malon, a YouTube spokesman.

People with malicious intent have discovered that it is easy to get around the disclosure rules. Some manipulate the videos to remove the watermarks. Several firms have sprung up offering to remove logos and watermarks.

OpenAI said it prohibits deceptive or misleading uses of Sora and takes action against violators of its policies. The company said its app was just one among dozens of similar tools capable of making increasingly lifelike videos – many of which do not employ any safeguards on use. A spokesman for Meta, which owns Facebook and Instagram, said it was not always possible to label every video generated by AI.
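The invisible metadata described above is the kind of provenance signal defined by open standards such as C2PA “Content Credentials,” though the article does not say which standard Sora and Veo use. The following is a minimal, illustrative sketch of how a platform might check a video for an embedded provenance manifest before deciding whether to apply an AI label. It assumes C2PA-style metadata and the open-source c2patool command-line utility from the Content Authenticity Initiative; the file path and labelling logic are hypothetical.

```python
# Sketch: detecting embedded provenance metadata in an uploaded video.
# Assumes the open-source `c2patool` CLI is installed and on PATH, and
# that the video carries C2PA-style metadata. Whether Sora or Veo use
# C2PA specifically is an assumption, not something the article states.
import json
import subprocess
import sys

def read_provenance(path: str):
    """Return the provenance manifest embedded in `path`, or None."""
    result = subprocess.run(
        ["c2patool", path],      # by default, prints the manifest store as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:   # no manifest found, or unreadable file
        return None
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_provenance(sys.argv[1])
    if manifest is None:
        print("No provenance metadata found; origin unknown.")
    else:
        # A platform could inspect fields such as the claim generator
        # (the tool that produced the file) to decide whether to show
        # an "altered or synthetic" label, as YouTube does.
        print(json.dumps(manifest, indent=2))
```

Any such check is only as reliable as the metadata itself: as the article notes, stripping watermarks and provenance signals from generated videos has already become a cottage industry, so the absence of a manifest cannot be read as proof that a clip is authentic.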
