
AI Slop on Digital Platforms: When Entertainment Turns Into Misinformation

محمد عبد اللطيف خرواط
January 13, 2026 | Last update: 7:00 AM, January 13, 2026
Translated by Misbar's Editorial Team
A new type of digital content known as AI slop | Misbar

As the use of artificial intelligence tools for content creation continues to accelerate, a new category of digital material has emerged, commonly referred to as AI slop. Rather than spreading explicit falsehoods, this content often blurs context and clouds public understanding. It is typically produced at scale and spreads rapidly, relying on images and videos that appear familiar or convincingly realistic.

Available evidence suggests that AI slop has become a routine part of many social media users’ daily experience. This shift was underscored by Merriam-Webster’s decision to name “slop” its Word of the Year for 2025, reflecting the term’s growing use to describe low-quality, AI-generated content and the rising concern over credibility and authenticity in digital spaces.

2025 Word of the Year: Slop

The danger of this content lies in its subtlety. It rarely makes a clear claim that can be easily disproved. Instead, it shapes misleading impressions through visual cues, repetition and stripped context, making it more difficult to identify and correct using traditional fact-checking methods.

This article examines the misinformation risks associated with AI slop by drawing on available data and reports. It analyzes how such content operates, tracks its presence during breaking news events, and assesses its impact on public trust in digital information, with a focus on political and health-related examples that illustrate how low-quality AI content can influence public perceptions and decision-making.

Assessing the Scale of AI Slop Through Data

The impact and risks of digital content cannot be fully understood without reference to quantitative data that tracks its prevalence across platforms. Independent sources indicate that this type of material is no longer marginal; it has become a recurring feature of users’ daily experiences, particularly on short-form video platforms.

A research report by Kapwing, an online video editing company, found that AI slop and so-called “brainrot” videos may account for 21% to 33% of the content appearing in YouTube feeds, particularly for new users. The study analyzed trending channels across multiple countries and examined the first 500 Shorts shown to newly created accounts, seeking to assess the content that algorithms present as representative of the platform.


The data reveal notable variation between countries. In South Korea, trending AI slop channels have accumulated more than 8.45 billion views, the highest worldwide. Spain leads in subscriber numbers, with more than 20.22 million across its trending channels, despite a relatively limited number of such channels compared with other countries. The Arab region also shows significant engagement: Egypt ranks second globally in subscribers to trending AI slop channels, with nearly 17.9 million, while these channels have collectively recorded more than 3.24 billion views.
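The country-level figures above can be gathered into a small structure for comparison. The sketch below is purely illustrative: it uses only the numbers cited in this article, and the `channels` mapping, field layout, and variable names are our own rather than anything published by Kapwing.

```python
# Illustrative only: country-level figures for trending AI slop channels
# as reported in the Kapwing study cited above.
# Tuple layout: (total views, total subscribers); None = not reported here.
channels = {
    "South Korea": (8_450_000_000, None),       # highest views worldwide
    "Spain": (None, 20_220_000),                # leads in subscribers
    "Egypt": (3_240_000_000, 17_900_000),       # second in subscribers
}

# Rank countries by reported views, where a figure is available.
by_views = sorted(
    ((country, v[0]) for country, v in channels.items() if v[0] is not None),
    key=lambda pair: pair[1],
    reverse=True,
)

for country, views in by_views:
    print(f"{country}: {views / 1e9:.2f}B views")
```

Running this prints South Korea first (8.45B views) and Egypt second (3.24B), matching the ordering described in the text.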

Spanish and South Korean AI slop channels have the most devoted viewerships

These findings correspond with reporting by The Guardian, which indicated that approximately 10% of the fastest-growing YouTube channels in recent months rely primarily on AI-generated content, often published in highly similar formats and at a rapid pace.

The AI Slop taking over YouTube

The Role of Political Satire and Visual Signals

Political satire in AI-generated or visually manipulated content has become one of the most widespread and influential forms of AI slop, operating outside traditional frameworks of fake news. Rather than making explicit claims that can be easily verified or refuted, this content shapes public opinion by creating visual impressions with implicit political meanings.

The prevalence of this type of content spiked following reports of Venezuelan President Nicolás Maduro’s arrest. Social media platforms were flooded with fabricated and visually altered clips placing former U.S. President Donald Trump in staged or satirical scenarios alongside Maduro and other political figures.

In these materials, Trump is often cast in symbolic roles—portrayed as a director of events or as a controlling figure over the fate of other leaders. He appears in scenes inspired by classic Arab films, fictionalized moments inside presidential offices, or musical and comedic sketches.

Similarly, figures such as Maduro, Ali Khamenei, or Algerian President Abdelmadjid Tebboune are repeatedly depicted as actors “awaiting their next move,” in narratives built on suggestion rather than explicit statements.

A comparable pattern emerged with the circulation of the “Trump Gaza” video. Originally produced as satire, it later spread without context or any explanation of its intended humor. Some users interpreted it as conveying real political messages, illustrating how satire can become misleading content—not through manipulation of the material itself, but through the removal of its contextual frame.

Trump and Netanyahu appear in Gaza
