We’ll see how this goes…
Oct 1, 2025, 12:00 PM UTC

Hayden Field is The Verge’s senior AI reporter. She has covered the AI beat for more than five years, and her work has also appeared in CNBC, MIT Technology Review, Wired UK, and other outlets.
On Monday, I watched OpenAI CEO Sam Altman drink from a gigantic mango-flavored juice box and remark aloud about how the box was half his size. The catch: It wasn’t really Altman. The juice box wasn’t real. He wasn’t really talking. It was a deepfake generated by AI.
The most concerning part: I couldn’t tell whether or not it was real.
OpenAI announced Sora 2, its new AI video- and audio-generation system, on Tuesday, and in a briefing with reporters on Monday, employees called it the potential “ChatGPT moment for video generation.” Just like ChatGPT, Sora 2 is being released as a way for consumers to play around with a new AI tool — one that includes a social media app with the ability to create realistic videos of real people saying real things. You could say it’s essentially an app full of deepfakes. On purpose.
OpenAI believes Sora, which was first announced in February 2024 and released that December, has finally reached a point of relative reliability. Bill Peebles, OpenAI’s head of Sora, compared the video-generation system’s earliest iteration to a “slot machine” where “you would put a prompt in and kind of cross your fingers that what you got out bore any resemblance to what you asked for.” The new model, he said, “is way better in terms of being faithful to how users prompt it.”
During the briefing, the team behind Sora 2 said they had been working on it for at least 20 months. The biggest step change in the product is that it can now generate audio that’s synchronized with video — not just background soundscapes and sound effects, but also dialogue that works for a range of languages. It’s available through Sora.com, “with Sora 2 Pro available to ChatGPT Pro users,” and developers are set to receive API access “soon.”
The social app is also called “Sora” and it’s available now via iOS to users in the US and Canada on an invite-only basis. More countries will follow, and each user will receive four additional invites to share with friends.
In the release, OpenAI said Sora 2 is “moving us closer to useful world simulators.” OpenAI employees told reporters the new system was much smarter at physics, too. Peebles said, “You can accurately do backflips on top of a paddleboard on a body of water, and all of the fluid dynamics and buoyancy are accurately modeled. It’s really a step function change in terms of the underlying physics intelligence that this model has.”
But that could also be a nightmare when it comes to deepfakes, which are already a widespread problem.
The accompanying Sora social media app looks a lot like TikTok, with a “For You” page and an interface with a vertical scroll. But it includes a feature called “Cameos,” in which people can give the app permission to generate videos with their likenesses. In a video, which must be recorded inside the iOS app, you’re asked to move your head in different directions and speak a sequence of specific numbers. Once it’s uploaded, your likeness can be remixed (including in interactions with other people’s likenesses) by describing the desired video and audio in a text prompt.
OpenAI employees told reporters during the Monday briefing that Sora has replaced text messages, emojis, and voice notes for them, becoming one of the top ways they communicate among themselves. In the briefing, they demoed fake ads, fake conversations between two people, fake news clips, and more, all created with Sora 2 and consumed by scrolling through the social media app.
Some of the clips were generated live as we watched, and they were terrifyingly realistic — no more six-fingered hands (that I could see, at least). Unless the video contained fantastical subject matter, like the gigantic juice box example, the untrained eye may not be able to tell that these videos were AI-generated — and if you could tell, it would likely be based simply on a feeling, or a vibe, that something was off.
The Sora app lets you choose who can create cameos with your likeness: just yourself, people you approve, mutuals, or “everyone.” OpenAI employees said that users were “co-owners” of these cameos and could revoke someone else’s creation access or delete a video containing their AI-generated likeness at any time. It’s also possible to block someone on the app. Team members also said that users can see drafts of cameos that others are making of them before they’re posted, and that in the future they may change settings so the person featured in a cameo has to approve it before it posts — but that’s not the case yet.
In the release, OpenAI also pointed to its newly minted parental controls for its products, writing that options include turning on “a non-personalized feed, choosing whether to allow their teen to send and receive direct messages, and the option to turn off an uninterrupted feed of content while scrolling.”
Like TikTok, the Sora app seems built to generate social media trends, with the ability to “Remix” other videos. It currently generates 10-second videos, but Pro users could soon get up to 15 seconds on the web, with the same ability coming to mobile later. Employees said that it’s possible to create longer videos, but since that’s a compute-heavy task, they’re still figuring out how they’ll handle it.
For everyone else, the biggest task with Sora 2 and the Sora app may be figuring out how to decide what’s real. OpenAI wrote in a release that “every video made with Sora has multiple signals that show it’s AI-generated,” such as metadata, a moving watermark on videos downloaded from Sora.com or the Sora app, and unspecified “internal detection tools to help assess whether a certain video or audio was created by our products.” (OpenAI said in the release that in some ChatGPT Pro web flows, “watermarks may be omitted except when real people are depicted.”) Screen recording also isn’t supposed to be possible within the app. But workarounds seem almost inevitable, if recent history is any guide — as does misinformation with the potential to spread like wildfire.
As for deepfakes of government figures, celebrities, and other public figures? “Public figures can’t be generated in Sora unless they’ve uploaded a cameo themselves and given consent for it to be used,” OpenAI wrote in a release. “The same applies to everyone: if you haven’t uploaded a cameo, your likeness can’t be used.” OpenAI employees also said during the briefing that it’s “impossible to generate” X-rated or “extreme” content via the platform, and that the company isn’t currently allowing free-form text prompting for AI-generated public figures. They also said that the company moderates video output for potential policy violations and copyright issues.
But people have gotten around that type of rule in the past, time and time again. Last year, a Microsoft engineer warned that the company’s AI image generator ignored copyrights and produced sexual, violent imagery with simple workarounds. xAI’s Grok recently generated nude deepfake videos of Taylor Swift with minimal prompting. And even OpenAI employees told reporters that the company is being restrictive on public figures for “this rollout,” not seeming to rule out the ability to create such videos in the future.
On Monday, The Wall Street Journal reported that OpenAI’s Sora generations will feature copyrighted material unless the rights holders “opt out” of having their work appear on the platform. When The Verge asked about the matter during the Monday briefing with OpenAI, employees seemed to avoid the question, pointing to the company’s existing image-generation policy and saying Sora’s would be an extension of that. They also said that some opt-outs from the image-generation copyright policy would carry over and that the company would be building more controls.