OpenAI has introduced a new AI video app called Sora. It uses the company’s latest video model, Sora 2, and operates with a scrolling feed similar to TikTok, filled with user-made clips. This is the first time OpenAI has released a product that includes AI-generated sound along with video. The app is currently limited to iOS users and requires an invitation to access.
During sign-up, users receive a notice warning that they are entering a space filled with AI-created content. It also reminds them that some videos might look like real people, although nothing shown is actually happening in real life.
OpenAI is clearly leaning into the idea that AI deepfake content could become a mainstream source of entertainment. Sora encourages users to create playful digital doubles of themselves, friends, creators, or even total strangers. The main feed keeps the experience going with a continuous lineup of short, AI-generated videos featuring humanlike faces.

When setting up the app, users can make a digital version of themselves by speaking a few numbers and slowly moving their head while the app captures their face. Sam Altman mentioned in a blog post that the team put a lot of effort into making sure these digital characters stay consistent.
There are also controls for who can use your likeness in their Sora videos. You can allow everyone, only yourself, only people you approve, or just mutual connections. If someone creates a video using your avatar, even if they never post it and it remains in their drafts, you can still view the full clip from your own account page.
When I scrolled through the For You feed, a lot of the top videos featured Sam Altman’s face. In one AI clip, his digital double was shown trying to steal a GPU from a Target store, then begging a security guard to let him keep it so he could continue developing AI. The voice even sounded similar to his.
While I was testing the app, many videos still showed glitches or odd visual mistakes. Even so, Sora makes it surprisingly easy to create deepfake-style content that can look and sound very close to the real thing.
Adding people you know into a video is as simple as tapping their face in the app and marking them as a “cameo.” After that, you just type a short idea, like “arguing in the office over a news article,” and the app generates the scene.

Once you enter a prompt, Sora handles everything: it creates the visuals, the voices, and a short script, usually resulting in a clip about nine seconds long. During testing, a video of two coworkers having an over-the-top argument in the office drew mixed reactions from the team, ranging from laughter to slight discomfort. Sam Altman noted in a blog post that the company knows an app like this could become highly addictive and could open the door to new forms of bullying.
To address those risks, OpenAI has added several safety measures to Sora. These rules aim to prevent people from misusing someone’s digital likeness and block harmful categories such as sexual content, real-world violence, hate messaging, extremist videos, and anything encouraging self-harm or eating disorders.
These systems will be closely watched as the app expands and more people begin experimenting with what Sora can do.
When I tested Sora, I tried a few harmless prompts, like putting myself on a cooking show or performing stand-up comedy. The cooking clip turned out fairly realistic, complete with a voice that sounded a lot like mine, but the stand-up video refused to generate after a warning about potentially risky content. The app appears to screen prompts closely for anything that could cross a line.
I also attempted a video of myself climbing a skyscraper like a superhero, but Sora blocked it because it might hint at unsafe behavior. However, a prompt where I hosted a talk show with a robot guest generated instantly and looked surprisingly believable.
The app clearly has strong rules around safety and identity use, yet the results can still be unsettlingly lifelike. At one point, I sent a video of my digital twin winning a game show to a friend, and they didn’t even realize it wasn’t really me.