Business | SaaS | Technology

Mar 20, 2026

How AI Slop Is Reshaping Google Search Rankings and E-Commerce Trust

AUTHOR

James A. Wondrasek

Here’s a number worth sitting with: 0.011. That’s the correlation coefficient between the percentage of AI-generated content on a page and its Google ranking position, according to an Ahrefs study of 600,000 top-ranking pages. Statistically, it’s nothing. Google’s algorithm genuinely does not care whether a human or a machine wrote your content.

That indifference creates two separate problems. The first is search traffic: if Google can’t tell AI content from human content, content farms running at near-zero marginal cost can displace you in rankings regardless of how good your content actually is. The second is e-commerce trust: Amazon reviews saw a 400% increase in AI-generated content after ChatGPT launched, degrading the review signal that buyers rely on. As outlined in our overview of AI slop and what it means for the internet, both of these are structural shifts, not temporary noise.

This article works through the data — what it shows, why the mechanisms work the way they do, and where the defence lines are failing.

Why does Google rank AI slop the same as human-written content?

AI slop is low-quality, mass-produced AI-generated content published at scale with no genuine informational value. Thin product comparisons, hollow listicles, review-stuffed landing pages. The term has become shorthand across SEO, media, and platform-safety circles for the flooding problem that arrived with mainstream LLM access.

Ahrefs ran a study of 600,000 top-ranking pages across 100,000 random keywords and found that 86.5% of them contained some AI-generated content. Only 13.5% was purely human-written. They also split this into AI-assisted content — human-written with AI tools, accounting for 81.9% of top pages — and pure AI content at 4.6%. The vast majority of what’s ranking is mixed, not fully automated. That makes algorithmic detection harder still.

The 0.011 correlation is the key number. A page with 80% AI content is just as likely to rank well as a page with 0% AI content, all other signals being equal. Google’s position since 2023 is that AI content is acceptable if it’s helpful — and with AI Overviews built into Search, Google generates AI content itself. An outright penalty would be self-defeating.
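To build intuition for what a 0.011 coefficient means, here is a minimal sketch that computes Pearson's r on synthetic data where ranking position is generated independently of AI share (illustrative numbers, not Ahrefs' dataset):

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Simulate pages whose ranking position is independent of their AI share.
random.seed(42)
ai_share = [random.uniform(0, 100) for _ in range(600_000)]  # % AI content
position = [random.uniform(1, 100) for _ in range(600_000)]  # SERP position

r = pearson_r(ai_share, position)
print(f"r = {r:.3f}")  # near zero: neither variable carries signal about the other
```

At 600,000 samples, even an r of 0.05 would be unmistakable; 0.011 is indistinguishable from independence.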

E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) gets cited a lot as the mechanism that protects quality. It isn't doing that job. E-E-A-T is a framework that rewards signals Google can actually measure, and modern LLMs can replicate those signals — relevance, authority markers, structured citations — without generating content that's genuinely useful. The 0.011 correlation is your evidence that it's not filtering AI content at scale. eMarketer independently corroborated both the 86.5% figure and the near-zero correlation, noting that Google ranks content on quality, not how it's created.

How are AI content farms exploiting Google’s indifference to bury legitimate publishers?

Palo Alto Networks Unit 42 documented the full taxonomy of AI-boosted malicious SEO: content farms, link farms, cloaking, and bot networks working in coordination. The operational model is straightforward — LLM content generation, on-page SEO optimisation, bulk publishing, ranking capture — all at near-zero marginal cost per article.

The black-hat tactics that previously required human writers and manual effort have been automated. As Unit 42 puts it: “With the click of a button, bad actors can generate tens of thousands of spam articles, spin up fake social accounts to build backlinks, and deploy AI-tailored cloaking that deceives algorithms while staying invisible to users.” Cloaking tools show fake content to search engines while delivering different content to actual visitors, inflating domain authority without legitimate traffic.
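One defensive check follows directly from how cloaking works: fetch the same URL once with a crawler user-agent and once with a browser user-agent, then compare what comes back. A minimal sketch of the comparison step; the function names and the 0.5 cutoff are illustrative assumptions, not an established tool:

```python
from difflib import SequenceMatcher

def cloaking_score(bot_html: str, user_html: str) -> float:
    """Dissimilarity between the page served to a crawler and the page
    served to a browser: 0.0 means identical, 1.0 means no overlap."""
    return 1.0 - SequenceMatcher(None, bot_html, user_html).ratio()

def looks_cloaked(bot_html: str, user_html: str, threshold: float = 0.5) -> bool:
    # Large divergence between crawler-facing and user-facing markup is a
    # classic cloaking signal. The 0.5 threshold is an illustrative default;
    # dynamic pages need normalisation first to avoid false positives.
    return cloaking_score(bot_html, user_html) > threshold
```

In practice you would pass in the two responses fetched with different user-agent strings, after stripping timestamps, ads, and other legitimately dynamic markup.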

Unit 42 describes malicious SEO as “a multi-million-dollar shadow economy” — complete ecosystems including cloaking tooling, content farms, and underground traffic fabricators. “AI-driven malicious SEO is already reshaping how visibility, trust and reputation are won or lost online. Non-LLM-focused defenses will be outpaced, outnumbered and outsmarted.”

The asymmetric cost is the real problem for your business. Content farms operate at near-zero marginal cost per article. Your team doesn't. If a competitor spins up an AI content farm targeting your product category and Google's algorithm can't tell the difference, your SEO investment is competing against infrastructure that costs almost nothing to run. A farm can produce 100 competing articles in the time it takes your team to produce one well-researched piece, at a fraction of the cost. Scale now overwhelms signal quality regardless of how much you invest in content.

What happened to Amazon reviews after ChatGPT launched?

Originality.ai analysed almost 26,000 Amazon product reviews and found a 400% increase in AI-generated content following ChatGPT’s launch in November 2022. The trend shows no signs of peaking.

Before LLM access was mainstream, fake reviews were often identifiable — poor grammar, repetitive phrasing, generic language. LLM-generated reviews look like genuine human writing. The barrier to entry dropped from organised review-farming operations to individual sellers and affiliates running scripts.

Amazon’s Verified Purchase mechanism provides partial protection. Verified reviewers are roughly 1.4 times less likely to produce AI-generated content than non-verified reviewers. But the badge confirms a transaction occurred, not that the review text is human. It’s a speed bump, not a wall.

The problem extends beyond Amazon. As Originality.ai notes, “not all ecommerce websites have Amazon’s resources.” Shopify and WooCommerce operators face the same threat without dedicated verification infrastructure or moderation teams.

Why are extreme reviews more likely to be AI-generated?

The Originality.ai research found that AI content is 1.3 times more likely to appear in extreme reviews — 1-star and 5-star — than in moderate reviews in the 2-to-4-star range.

The mechanism is commercial incentive. Sellers generate 5-star reviews to inflate their own product ratings and 1-star reviews to damage competitors. Moderate reviews don’t move the needle on aggregate ratings enough to be worth fabricating. The 2-, 3-, and 4-star range is where genuine, experience-based opinions tend to cluster.

The result is that the review distribution is being artificially stretched toward the extremes. A 4.5-star average no longer carries the same purchase signal it did pre-ChatGPT. As Originality.ai puts it, "Reviews with strong bias are more likely to be written with AI assistance" — which is exactly what you'd expect when commercial incentive is driving generation.
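That skew is measurable in your own review data. A minimal sketch of a distribution check; the 0.6 baseline is an illustrative assumption and would need calibrating against genuine distributions in your category:

```python
from collections import Counter

def extremity_ratio(ratings: list[int]) -> float:
    """Share of reviews sitting at the 1- and 5-star extremes."""
    counts = Counter(ratings)
    extreme = counts[1] + counts[5]
    return extreme / len(ratings)

def flag_suspicious(ratings: list[int], baseline: float = 0.6) -> bool:
    # A distribution stretched far past the category's normal share of
    # extreme ratings may indicate generated reviews. The 0.6 baseline
    # is illustrative, not an empirically derived cutoff.
    return extremity_ratio(ratings) > baseline
```

This catches stretched distributions, not individual fake reviews, so it works best as a triage signal feeding a closer look.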

The trust damage is ecosystem-wide. Even if your own product has only genuine reviews, broader degradation of review credibility affects conversion rates across the board. When buyers can’t trust the review system, they shift to other signals — brand recognition, word of mouth, independent testing. That shift disadvantages smaller, newer companies more than established ones. The review system was, in part, a levelling mechanism. It’s becoming less reliable as one.

Can AI content detection tools actually fix this problem?

Detection tools are the obvious first response. They’re also failing at the job.

ZDNET’s multi-year testing series shows significant accuracy drops across the category. BrandWell operates at approximately 40% accuracy — it correctly identified one of three AI-written samples and got confused by ChatGPT output. Undetectable.ai went from 100% accuracy to 20% across testing cycles: in October 2025, it rated human-written content as 60% likely AI, and all three AI samples as approximately 75–77% likely human. That’s a reversal of its claimed function.

In ZDNET’s tests, Originality.ai scored 80% — but incorrectly classified the human-written test block as 100% AI-generated. Grammarly’s detector scored 40%; Writer.com identified every text block as human-written despite three of five being ChatGPT output. ZDNET’s overall conclusion: “I would advocate caution before relying on the results of any — or all — of these tools.”
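Accuracy figures like ZDNET's are simple to reproduce for your own tooling: score a detector against a small labelled set and count hits. A minimal sketch (the five-sample run below is illustrative, not ZDNET's actual data):

```python
def detector_accuracy(results: list[tuple[str, str]]) -> float:
    """results: (expected_label, predicted_label) pairs, e.g. ("ai", "human")."""
    correct = sum(1 for expected, predicted in results if expected == predicted)
    return correct / len(results)

# Illustrative run mirroring a five-sample protocol (two human, three AI):
run = [
    ("human", "human"),
    ("human", "ai"),    # false positive: human text flagged as AI
    ("ai", "human"),    # false negative: AI text passed as human
    ("ai", "ai"),
    ("ai", "human"),    # false negative
]
print(f"{detector_accuracy(run):.0%}")  # 40%
```

The two error types matter differently: false positives punish your own writers, false negatives let slop through, and a single accuracy number hides which failure mode dominates.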

Well-resourced platforms aren’t doing better at scale. A Kapwing study found that 104 of the first 500 videos recommended to new YouTube accounts were AI slop — over 20% — despite YouTube’s moderation investments. Pinterest introduced an opt-out system for AI-generated content, but continuing user complaints suggest it’s not working at scale. Detection tools are playing catch-up against generators that improve faster than detectors can adapt.

Detection is necessary but insufficient. It can’t be your sole strategy for protecting user-generated content pipelines. The deeper operational responses — signal-based approaches, friction mechanisms, source-provenance infrastructure — are a separate discussion.

Where does this leave traditional SEO strategy?

Stack up what we know: Google’s algorithm shows a 0.011 correlation with AI content, content farms operate at near-zero marginal cost, and review trust is being structurally degraded. Traditional SEO strategy is operating on ground that’s shifting under it.

The practical implication: organic search investment still matters, but volume economics now sit alongside content quality as the determining factor in search visibility. Hand-crafted content competes directly with output produced at a hundred times the pace.

The emerging response is Answer Engine Optimisation (AEO), also called Generative Engine Optimisation (GEO) or Generative Search Optimisation (GSO). As Digiday notes, there’s no standard taxonomy yet across agencies, publishers, and SEO practitioners. What all three terms point at is the same thing: optimising to appear in AI answer interfaces — Google AI Overviews, ChatGPT, Perplexity — rather than traditional blue-link SERPs. It’s not replacing traditional SEO today. But it’s where competitive advantage is starting to migrate. The full strategic picture on how answer engine optimisation is replacing SEO is worth understanding before the space matures.

The search and trust threats documented here are structural, not temporary. Google won’t patch algorithmic indifference without devastating its own index. Amazon can’t fully solve review pollution at scale. The terrain has changed, and as our broader overview of the AI slop epidemic makes clear, it has further to run.

Frequently Asked Questions

Will Google ever penalise AI content outright?

Google’s 2023 guidance is that AI content is acceptable if it’s helpful. With 86.5% of top-ranking pages already containing some AI-generated content, a blanket penalty would damage the search index. The more likely trajectory is continued refinement of quality signals rather than a binary AI content filter.

How can you tell if a competitor is using AI slop to outrank you?

Watch for traffic losses to pages where new, thin competitors are appearing in SERPs. Content farms publish at volumes no human team can match — hundreds of pages per week on the same topic cluster. Unit 42 also identifies burst-publishing behaviour and unnatural link velocity as detectable signals.
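Burst publishing in particular is easy to check from a competitor's sitemap or feed. A minimal sketch that buckets publish dates by ISO week and flags implausible volumes; the 50-articles-per-week threshold is an illustrative assumption:

```python
from collections import Counter
from datetime import date

def weekly_volumes(publish_dates: list[date]) -> Counter:
    """Bucket publish dates into (ISO year, ISO week) counts."""
    return Counter(d.isocalendar()[:2] for d in publish_dates)

def burst_weeks(publish_dates: list[date], threshold: int = 50) -> list[tuple]:
    # Hundreds of pages per week on one topic cluster is a farm signal.
    # The 50-per-week threshold is illustrative; set it against what a
    # human editorial team in your niche can plausibly sustain.
    volumes = weekly_volumes(publish_dates)
    return [week for week, count in volumes.items() if count > threshold]
```

Feed it the `<lastmod>` dates scraped from a sitemap and any week it returns is worth a manual look, alongside the link-velocity signals Unit 42 describes.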

What is AEO and why is it being discussed as SEO’s replacement?

AEO, GEO, and GSO — different acronyms, same concept — focus on getting content surfaced by AI answer interfaces like Google AI Overviews and ChatGPT, rather than traditional search result pages. It’s gaining attention because AI-generated answers are increasingly the first thing users see. The taxonomy isn’t settled and there’s no playbook yet.

How reliable are AI content detectors in 2025?

Reliability varies widely and is getting worse. ZDNET testing shows BrandWell at approximately 40% accuracy and Undetectable.ai dropping from 100% to 20% across testing cycles. No tool offers consistent, production-grade detection at scale.

Does the Verified Purchase badge on Amazon reviews mean a review is genuine?

Verified Purchase reduces AI-generated review incidence by about 1.4x compared to unverified reviews, but it confirms a transaction occurred — not the human origin of the review text.

Are smaller e-commerce platforms more vulnerable to AI review pollution than Amazon?

Yes. Amazon has dedicated verification resources and the Verified Purchase mechanism. Shopify and WooCommerce operators typically lack equivalent moderation infrastructure.

What is the difference between AI-assisted content and pure AI content?

Ahrefs data distinguishes the two: 81.9% of top-ranking pages contain AI-assisted content (human-written with AI tools), while 4.6% are pure AI-generated. Most AI content on the web is mixed, not fully automated — and mixed content is harder to detect.

How do content farms produce so much content so cheaply?

LLMs for generation, automated on-page SEO for optimisation, and bulk publishing for distribution. The marginal cost per article approaches zero. Palo Alto Networks Unit 42 documents how link farms and bot networks further amplify reach by inflating domain authority artificially.

Does E-E-A-T protect against AI slop in Google rankings?

E-E-A-T is a quality signal framework, not a penalty system. AI content can satisfy E-E-A-T signals if it mimics the structure and references of authoritative content — which modern LLMs do. The 0.011 correlation from Ahrefs confirms E-E-A-T is not filtering AI content at scale.

What does a 0.011 correlation actually mean in practical terms?

A correlation of 0.011 is essentially zero — no meaningful statistical relationship between AI content percentage and ranking position. In practical terms: a page with 80% AI content is just as likely to rank well as a page with 0% AI content, all other signals being equal.
