OpenAI, the company behind ChatGPT, has launched an updated AI video- and audio-generation system with fascinating, and terrifying, implications for the spread of deepfakes.
The Verge reports: “OpenAI announced Sora 2, its new AI video- and audio-generation system, on Tuesday, and in a briefing with reporters on Monday, employees called it the potential ‘ChatGPT moment for video generation.’ Just like ChatGPT, Sora 2 is being released as a way for consumers to play around with a new AI tool, one that includes a social media app with the ability to create realistic videos of real people saying real things. You could say it’s essentially an app full of deepfakes. On purpose.”
That’s right, a social media app, also called Sora, that will allow users to “deepfake their friends.” It’s invite-only for now, available on iOS to users in the US and Canada, but those walls likely won’t stay standing for long.
Here’s more from The Verge:
“The accompanying Sora social media app looks a lot like TikTok, with a ‘For You’ page and an interface with a vertical scroll. But it includes a feature called ‘Cameos,’ in which people can give the app permission to generate videos with their likenesses. In a video, which must be recorded inside the iOS app, you’re asked to move your head in different directions and speak a sequence of specific numbers. Once it’s uploaded, your likeness can be remixed (including in interactions with other people’s likenesses) by describing the desired video and audio in a text prompt.
“The Sora app lets you choose who can create cameos with your likeness: just yourself, people you approve, mutuals, or ‘everyone.’ OpenAI employees said that users were ‘co-owners’ of these cameos and could revoke someone else’s creation access or delete a video containing their AI-generated likeness at any time. It’s also possible to block someone on the app. Team members also said that users can see drafts of cameos that others are making of them before they’re posted, and that in the future they may change settings so the person featured in a cameo has to approve it before it posts, but that’s not the case yet.”
OpenAI has included a variety of safety measures, including parental controls, to help nip malicious uses of this technology in the bud. But one person’s safeguards often prove to be just one more challenge for industrious bad actors to overcome. “Last year, a Microsoft engineer warned that its AI image-generator ignored copyrights and generated sexual, violent imagery with simple workarounds. xAI’s Grok recently generated nude deepfake videos of Taylor Swift with minimal prompting. And even for OpenAI, employees told reporters that the company is being restrictive on public figures for ‘this rollout,’ not seeming to rule out the ability to create such videos in the future,” The Verge writes.
Concerns aside for now, our own Chief Human Risk Management Strategist Perry Carpenter and CISO Advisor James McQuiggan have gotten their hands on early invites and have been taking Sora 2’s capabilities for a spin.
We doubt this will be the last we’ll hear of Sora, for better or for worse. Each time this ever-evolving technology makes headlines, it’s a perfect opportunity to talk to your users, and your family and friends for that matter, about the opportunities and risks deepfake technology poses.