Business | SaaS | Technology
Mar 20, 2026

AI Slop Is Everywhere Now and Here Is the Evidence

AUTHOR

James A. Wondrasek

In December 2025, Merriam-Webster named its Word of the Year: slop. The editors picked it because by 2025 the problem had become impossible to ignore — absurd videos, AI-written books, fake journalism, and a new office hazard called “workslop” polluting internal workflows.

The numbers make the scale hard to argue with. Ahrefs found 86.5% of Google’s top-ranking pages contain at least some AI-generated content. Kapwing found over 20% of YouTube new-user feeds are AI slop before any personalisation kicks in. And an estimated $117 million per year flows to AI slop channels on YouTube alone.

So in this article we’re going to define AI slop precisely, show you how much of it is actually out there, explain why it keeps growing, and introduce workslop — the internal variant that makes this a real problem for technical teams, not just a social media annoyance.

For a broader treatment, see our AI slop epidemic overview.

What exactly is AI slop — and how is it different from ordinary low-quality content?

AI slop is low-quality digital content produced in industrial quantities by artificial intelligence — text, images, video, and audio at near-zero marginal cost, published at scale with no meaningful human editorial input.

The key difference from the low-quality content that’s always existed is not the quality. It’s the volume. A single operator with AI generation tools can now publish thousands of articles, videos, or reviews per day. That was physically impossible before. Every major platform now has a contamination problem that grows faster than any moderation system can keep up with.

Surface plausibility is the trap. AI slop typically passes a first-look check — correct grammar, coherent sentences, plausible structure — without delivering any original insight, verifiable data, or authentic experience. It comes from automated pipelines where the human’s only job is pressing publish.

It shows up across three main areas: consumer social media feeds (YouTube, Facebook, TikTok), search engine results pages (Google), and user-generated content systems (Amazon reviews, academic papers). As the broader context of AI slop makes clear, slop is a cross-platform structural condition. It’s not a niche problem.

How did “slop” become Merriam-Webster’s 2025 Word of the Year?

Merriam-Webster’s editors chose “slop” on 14 December 2025. Their description was pointed: it sets a tone “less fearful, more mocking — a little message to AI that when it comes to replacing human creativity, sometimes you don’t seem too superintelligent.”

WOTY selection matters beyond the publicity it generates. It signals that a term has moved from in-group slang to shared cultural vocabulary — the same shift as “selfie” in 2013 or “pandemic” in 2020. Once a problem has a widely understood name, the conditions for regulatory attention are in place.

And it wasn’t just one word. Merriam-Webster formalised a whole vocabulary cluster. “Workslop” was named in the same announcement. The Reuters Institute listed “AI slop,” “brain rot,” and “workslop” as the key terms likely to define 2026. The cultural crystallisation arrived three years after ChatGPT’s public launch, right when the problem had become impossible to dismiss as a niche concern.

How much AI slop is actually out there — and how do we know?

Four independent studies, using different methodologies across different platforms, arrive at the same conclusion: AI-generated content is now the default on major platforms, not the exception.

Ahrefs — search (July 2025): A study of 600,000 Google top-ranking pages found 86.5% contain some AI-generated content. The correlation between AI content percentage and Google ranking position was 0.011 — effectively zero. Google’s algorithm does not penalise AI content.

Kapwing — YouTube (December 2025): Kapwing studied the 15,000 most popular YouTube channels and found 278 channels containing only AI slop, with 63 billion combined views and 221 million subscribers. A freshly created account with no viewing history had 104 of its first 500 recommended videos — 20.8% — identified as AI slop. Before any personalisation, one in five recommendations is already slop.

Originality.ai — Amazon: An analysis of over 26,000 Amazon product reviews found AI-generated reviews increased 400% since ChatGPT’s launch. Extreme 1-star and 5-star reviews are 1.3 times more likely to be AI-generated than moderate reviews — suggesting deliberate deployment to manipulate ratings.

NewsGuard and Pangram Labs — content farms (March 2026): Researchers identified 3,006 AI content farm sites, growing at 300 to 500 new sites per month. Of these, 358 were linked to Storm-1516, a pro-Russian influence operation mimicking local US and European newspapers.

Four data sources. Four platforms. Four methodologies. Same conclusion every time. The sheer volume of synthetic content now entering the web has consequences that extend well beyond user experience — see why the scale of slop matters for future AI systems for what happens when that content feeds back into training pipelines.

How does the AI slop creator economy work?

That scale exists because the business model makes sense. It’s arbitrage. AI tools reduce production cost to near zero. Platform ad revenue — YouTube Partner Programme and Facebook In-Stream Ads — pays per view regardless of whether the content is authentic. The spread between those two numbers is profit.

Bandar Apna Dost, an Indian AI channel, is the most-viewed AI slop channel identified in the Kapwing study: 2.4 billion views, estimated revenue of as much as $4.25 million per year. Another channel, Pouty Frenchie (Singapore), has racked up 2 billion views targeting children. Across all channels Kapwing identified, estimated total annual YouTube revenue is $117 million.

Creator geography is part of the economic logic. Many operators come from middle-income countries — Ukraine, India, Kenya, Nigeria, Brazil, Vietnam — where YouTube CPM revenue at these volumes can substantially exceed local median wages. It’s a rational economic decision.
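The arbitrage can be made concrete with a back-of-envelope calculation. The RPM, tooling, and labour figures below are illustrative assumptions, not numbers from the studies cited above; only the 2.4 billion annual view figure echoes the Kapwing data on the largest channel.

```python
# Back-of-envelope slop arbitrage model. All rates are illustrative
# assumptions; only the 2.4B view figure echoes the Kapwing data.

def annual_profit(views_per_year, rpm_usd, monthly_tool_cost_usd,
                  hours_per_day, hourly_wage_usd):
    """Ad revenue (RPM = revenue per 1,000 views) minus tooling and labour."""
    revenue = views_per_year / 1_000 * rpm_usd
    costs = monthly_tool_cost_usd * 12 + hours_per_day * 365 * hourly_wage_usd
    return revenue - costs

# Assumed: $1.50 RPM (a low-CPM market), $200/month in AI tools,
# 3 hours of prompt work per day valued at $5/hour.
profit = annual_profit(2_400_000_000, 1.50, 200, 3, 5)
print(f"${profit:,.0f}")
```

Even under deliberately modest assumptions, revenue dwarfs costs at this volume, which is why the model keeps attracting operators.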

The production stack is minimal. AI video generation, AI voiceover, a scheduling tool, and a few hours of prompt engineering per day. Operators swap tips on Telegram and Discord and sell courses on maximising slop revenue.

Advertiser exposure extends beyond YouTube, too. NewsGuard documented that 141 well-known brands ran ads on AI content farm websites over a two-month period, often without knowing it.

Why does algorithmic amplification make AI slop self-reinforcing?

Platform recommendation engines — YouTube Shorts, Facebook Feed, TikTok For You Page — optimise for engagement signals: views, watch time, comments, shares, reactions. None of those signals distinguish authentic from AI-generated content. The algorithm genuinely does not know the difference.

The backlash amplification paradox makes this worse. When users comment in frustration (“this is obviously AI-generated”), those comments register as engagement and tell the algorithm to show the video to more users. Negative engagement amplifies distribution.
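A toy ranking function makes the paradox concrete. This is not any platform’s actual algorithm, and the weights are invented; the point is structural: engagement counts are unsigned, so angry comments raise the score just like enthusiastic ones.

```python
# Toy engagement ranker. Weights are made up; the structural point is
# that no signal carries sentiment, so backlash reads as enthusiasm.

def engagement_score(views, watch_seconds, comments, shares):
    return 0.1 * views + 0.5 * watch_seconds + 2.0 * comments + 3.0 * shares

# Same video twice: once ignored, once with 400 "this is obviously AI" comments.
slop_quiet = engagement_score(views=10_000, watch_seconds=40_000,
                              comments=5, shares=10)
slop_dunked = engagement_score(views=10_000, watch_seconds=40_000,
                               comments=400, shares=10)

# The mocked video outranks the ignored one and gets wider distribution.
print(slop_dunked > slop_quiet)  # True
```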

YouTube CEO Neal Mohan acknowledged “growing concerns about low-quality content, aka AI slop” but ruled out making judgements about what content should flourish. Meta CEO Mark Zuckerberg described social media’s “third phase” as an incoming “huge corpus” of AI-generated content — accelerating rather than addressing the trend. Both Meta and X have cut moderation teams.

The structural issue is financial. Platforms are advertising businesses. Slop that generates views generates ad revenue. There is no incentive to remove high-performing AI content unless regulators or advertisers apply pressure.

For a detailed look at how AI slop is already reshaping search rankings and SEO economics, see how AI slop is already reshaping search rankings.

What is workslop and why does it matter for technical teams?

Consumer-facing AI slop is easy to see. The version circulating inside organisations is less visible — and the data suggests it is doing real damage.

Merriam-Webster formalised “workslop” in the same December 2025 announcement: AI-generated low-quality documents that waste coworkers’ time — reports, meeting notes, emails, documentation produced with AI assist tools and published without any meaningful review.

A Harvard Business Review survey found four out of ten respondents had encountered workslop in the previous month, describing it as destroying productivity because it “lacks the substance to meaningfully advance a given task.” An MIT Media Lab report found 95% of organisations see no measurable return on their AI investment, over the same period in which AI tool adoption doubled. Workslop is a credible partial explanation for that gap.

The failure modes will be familiar to anyone working with development teams: meeting notes that are grammatically perfect but don’t capture what was actually decided; support tickets with plausible technical detail that misidentify the real problem; code documentation that describes what code should do rather than what it does.

Daniel Stenberg, founder of the cURL project, described AI-generated security bug reports this way: “Not only does the volume go up, the quality goes down. So we spend more time than ever to get less out of it than ever.” The AI origin isn’t the issue — the defining problem is volume-first, quality-optional production operating inside your workflow.

If your internal corpus is increasingly AI-generated and later used to fine-tune an internal LLM, recursive training degradation begins inside your own infrastructure. That’s examined in our article on what the AI slop epidemic means broadly.
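The feedback loop above can be sketched as a toy model. Nothing here reflects real training dynamics; the synthetic fraction and retention factor are invented parameters. It only illustrates the compounding: if each fine-tuning round trains on a corpus that is partly output of the previous round, and synthetic text carries slightly degraded quality, the degradation compounds with each generation.

```python
# Toy model of recursive corpus degradation. The parameters are
# invented for illustration, not measured quantities.

def corpus_quality(generations, synthetic_fraction=0.4, retention=0.8):
    """Quality after N rounds of fine-tuning on a human/synthetic mix.

    Human text keeps quality 1.0; synthetic text carries the previous
    generation's quality scaled by `retention` (information lost per pass).
    """
    quality = 1.0
    for _ in range(generations):
        quality = ((1 - synthetic_fraction) * 1.0
                   + synthetic_fraction * retention * quality)
    return quality

for gen in (1, 3, 10):
    print(gen, round(corpus_quality(gen), 3))
```

In this sketch quality falls fastest in the first few generations and settles at a permanently degraded equilibrium — the corpus never recovers on its own once synthetic text is a fixed share of the input.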

Frequently Asked Questions

Is all AI-generated content slop?

No. The defining characteristic of slop is not AI authorship — it’s industrial-scale production without meaningful human editorial intent. A carefully reviewed, human-directed AI-assisted article that provides genuine insight is not slop. An automatically generated and published script is. Intent and process define slop, not the tool used.

Why can’t platforms just remove AI slop?

Three problems compound each other. AI content detection is becoming less reliable — detectors can no longer reliably determine whether a given video is AI-generated. Slop generates advertising revenue before it’s flagged, giving platforms no structural financial incentive to act quickly. And at YouTube’s upload rate of 500-plus hours of video per minute, human review at scale simply isn’t economically viable.

What is brain rot and is it related to AI slop?

Brain rot is the thesis that sustained consumption of low-quality, attention-optimised content progressively degrades the capacity for sustained attention and critical thinking. AI slop is not brain rot — but it is a primary delivery mechanism for the content types brain rot researchers identify as harmful.

What is a content farm and how does AI make them more dangerous?

A content farm produces high volumes of low-quality content to capture search traffic or platform ad revenue. AI tools reduce the human labour input to near zero. NewsGuard and Pangram Labs have identified 3,006 AI content farm sites as of March 2026, growing at 300 to 500 new sites per month.

Does AI slop affect search results, or just social media?

Both. On social media, slop is amplified by engagement algorithms. On search, slop benefits from Google’s neutral stance on AI content: Ahrefs found a near-zero correlation (0.011) between AI content percentage and Google ranking position. AI content farms also surface in Google News and Google Discover.

What is the difference between AI slop and AI hallucination?

AI hallucination is when a model produces factually incorrect or fabricated output. AI slop is a category of content defined by its production method (automated, high-volume) and intent (revenue extraction). The two frequently co-occur — slop production pipelines typically have no fact-checking stage. GPTZero coined the term “vibe citing” for hallucinated academic citations found in 53 accepted NeurIPS 2025 papers.

Can businesses be directly harmed by AI slop?

Yes, through three pathways. Search visibility displacement: AI content farms can rank as well as human-authored sites. Platform trust erosion: AI-generated Amazon reviews increased 400% post-ChatGPT, and extreme ratings are 1.3 times more likely to be AI-generated. Internal quality degradation: workslop corrupts institutional knowledge and, when used as training data, degrades AI system quality recursively.

How does AI slop relate to “pink slime” sites?

Pink slime sites are a subcategory of AI content farms — AI-generated fake local news sites designed to look like legitimate community journalism. The Reuters Institute predicted in January 2026 that “the amount of low-quality AI automated content, including so-called ‘pink slime’ sites, looks set to explode.” NewsGuard identified 358 of the 3,006 tracked AI content farms as linked to a pro-Russian influence operation mimicking local newspapers.

What does the Merriam-Webster WOTY tell us about where AI slop stands in public awareness?

WOTY status marks the transition from in-group vocabulary to mainstream cultural currency — the term no longer requires explanation. Historical WOTY terms that later anchored regulatory discourse include “pandemic” (2020) and “vaccine” (2021). The naming precedes the formal response. The vocabulary cluster formalised alongside “slop” — workslop, brain rot — suggests the conversation is moving from diagnosis to framework.
