Technology

OpenAI Video App Sora 2 Raises Threats of Misinformation

Menna Abd Elrazek
13th October 2025
Last updated: 17th October 2025, 1:11 pm

Fake Sora 2 videos spread online, fueling misinformation (Getty)

Following last month's release of OpenAI's consumer-focused social app, Sora 2, several AI-generated videos carrying false claims went viral on social media, heightening the risk of misinformation.

The Sora app can produce nearly any type of video that a user can imagine with just a text prompt. Additionally, users can upload their own photos, enabling their voice and likeness to be included in made-up scenes.

The app can be customized to include specific fictional characters, company logos, and even deceased celebrities.

Bill Peebles, Head of Sora at OpenAI, said Sora reached one million app downloads in less than five days, faster than ChatGPT did, despite its invite-only rollout and a launch limited to North America.

AI-Generated Videos by Sora 2 Spread on Social Media

Misbar’s team detected multiple false and misleading claims created and shared via the Sora 2 social platform.

On October 9, a claim surfaced on social media that an ICE agent had pepper-sprayed a protester, even though the video bore Sora's watermark.

An account called “Suzie Rizzio” posted a video made using Sora and said, “This ICE agent ended up pepper spraying him, and it serves him right! It’s obvious they have no clue what the hell they’re doing!” 

On October 5, another claim circulating on social media featured a video allegedly showing a Moroccan hospital, where a doctor is seen sitting at a worn-out desk while many women and children wait in a crowded corridor.

An account named Algeria gate published the video on X with the caption: "Hospitals in Morocco: Slaughterhouses of death and catastrophic negligence."

Every video that Sora 2 creates carries a visual watermark. However, the logo meant to help people distinguish real footage from AI-generated content can be removed in a matter of minutes.

A quick search for “Sora watermark” on social media leads to dozens of links to platforms that claim to erase the watermark from Sora 2–generated videos, raising concerns about traceability.

404 Media tested three of these websites, and they all seamlessly removed the watermark from the video in a matter of seconds.

Hyperrealistic Video and Audio Raise Concerns of Misinformation

The claims above illustrate how increasingly realistic videos could inflame disputes, defraud consumers, sway elections, or frame individuals for crimes they did not commit.

Hany Farid, a computer science professor at the University of California, Berkeley, and a co-founder of GetReal Security, told The New York Times that the situation is concerning for consumers, who are exposed to an unknown volume of this content daily. "I worry about it for our democracy. I worry for our economy. I worry about it for our institutions."

The majority of AI systems are still trained on linguistic data, such as books and internet text, according to Professor Hao Li, a leading authority on video synthesis, who spoke to CNBC. But to move toward general intelligence, he said, models need to learn from visual and audio information, much like a baby discovers the world through sight.

“We use AI to generate content to then train another model to perform better,” he said.

To improve model performance, Li added, his lab already feeds AI-generated video back into the system.

Aaron Rodericks, Bluesky's head of trust and safety, said the public is not prepared for such a dramatic collapse of the boundary between fake and authentic content.

Fake AI-Generated Videos Erode Digital Trust

Most people are finding it harder to distinguish AI-generated content and, as a result, trust online information less.

According to Aragon Research, the degradation of personal security and confidence is the most urgent threat to consumers: the quick, inexpensive creation of deepfakes, combined with social media's algorithmic amplification, turns every feed into a potential minefield of sophisticated fraud and identity theft.

Additionally, repeated exposure to content in which fact cannot be distinguished from fiction brings on a widespread condition known as "information fatigue." This leads users to disbelieve all forms of media, including authentic reports and communications, further eroding digital trust.

OpenAI CEO Sam Altman wrote days after Sora was released, on October 4, “We have been learning quickly from how people are using Sora and taking feedback from users, rightsholders, and other interested groups.”

OpenAI acknowledges that it has observed real-world misuse since launch and has gathered feedback, prompting immediate changes. The company has committed to giving rights holders more granular control over how characters and likenesses are generated, allowing creators to opt in or restrict use entirely.

Additionally, OpenAI plans to monetize video generation and share revenue with rights holders who allow their characters to be used, to align incentives and discourage unauthorized uses.

Sora is not operating in isolation; the rapid evolution of AI-driven video generation now involves three major players. Grok Imagine 0.9 entered the market in July 2025, emphasizing ultra-fast content creation capable of producing fully formed videos in under 15 seconds and combining image, video, and text generation in a single high-volume platform.

Meanwhile, Google's Veo 3, released in mid-2025, has further intensified the synthetic-media landscape with its native synchronized audio capabilities, generating dialogue, sound effects, and music that make its AI-produced videos almost indistinguishable from authentic footage.
