Feb 24, 2026

How Deepfake Fraud Became a Five Dollar Subscription Service

AUTHOR

James A. Wondrasek

Deepfake fraud is not a new threat. It is a supply-chain maturation event. And if you have been in software long enough to remember the early 2000s, you have seen this pattern play out before.

In 2003, exploiting a SQL injection vulnerability required real technical skill. By 2006, packaged exploit kits had reduced that barrier to a point-and-click exercise available on any forum. By the end of the decade, SQL injection attacks were automated, commoditised, and sold to non-technical operators on subscription. The specialists who built the tools made their money. Everyone else just bought access.

That same cycle has now completed for deepfake fraud. According to Group-IB’s January 2026 research, a synthetic identity kit — AI-generated face, cloned voice sample, supporting documentation — sells for approximately $5 on dark web markets. A Dark LLM subscription runs around $30 per month. The tooling required to impersonate your CFO on a live video call is now priced below a Netflix subscription.

The consequences are measurable. Group-IB documents $347 million in verified deepfake fraud losses in a single quarter. Pindrop’s 2025 report records an 880% surge in deepfake attacks in 2024.

This article maps the Deepfakes-as-a-Service (DaaS) market: what it sells, what it costs, how attacks work in practice, and why static defences are structurally inadequate against a commodity threat. For the broader context on how policy is struggling to keep pace, see our series overview on deepfake fraud and the policy response lag.

What Changed — How Did Deepfake Fraud Go From Specialist Skill to Five Dollar Subscription?

The shift is not primarily technical. It is economic.

Creating a convincing deepfake in 2019 required machine learning expertise, significant compute, and hours of model training. The skills were specialist, and the barrier to entry kept the threat confined to well-resourced actors. That world no longer exists.

DaaS is the latest phase of Cybercrime-as-a-Service (CaaS) — the broader criminal services economy that previously commoditised ransomware, phishing kits, and credential-theft tooling. It is not a standalone AI phenomenon. It is the newest mature sub-market in a criminal services economy that has been running this playbook for over two decades.

The commoditisation of SQL injection toolkits, phishing kits, and now synthetic identity generation all follow the same trajectory: specialist capability packaged into turnkey tooling, distributed through criminal marketplaces, priced for volume. Group-IB describes AI as “the plumbing of modern cybercrime, quietly turning skills that once took time and talent into services that anyone with a credit card and a Telegram account can rent.”

The adoption curve data confirms the timing. AI-related mentions on dark web forums have grown 371% since 2019, with threads generating more than 23,000 new posts in 2025 alone. Identity fraud attempts using deepfakes surged 3,000% in 2023 as the technology crossed from niche curiosity to mainstream criminal tool.

The $5 price point matters because of what it represents: complete cost-barrier collapse for identity fraud. The remaining constraint is not access to tools or technical knowledge. It is willingness to commit fraud.

What Is Deepfakes-as-a-Service and What Do You Get for Thirty Dollars a Month?

Deepfakes-as-a-Service (DaaS) is a subscription-based criminal market that sells pre-packaged deepfake generation tools on dark web platforms, without requiring technical skills from the buyer. It is a mature sub-market of the broader CaaS economy, with tiered pricing, customer support channels, and product update cycles.

Here is how the pricing breaks down.

Entry-level synthetic identity kits sell for approximately $5 per package. That gets you a generated face image, a cloned voice sample, and fabricated supporting credentials. Everything required to construct a fraudulent identity for KYC bypass or social engineering.

Dark LLM subscriptions are the second tier: language models with safety restrictions removed, available from documented vendors for between $30 and $200 per month, with over 1,000 active subscribers. The Register described the pricing as comparable to a Netflix subscription — which is accurate, and that is exactly the point.

For higher-volume operations, real-time deepfake video platforms sit at a premium tier between $1,000 and $10,000. These are the tools capable of an Arup-class attack — live, interactive, multi-participant video impersonation. Group-IB recorded 8,065 deepfake-enabled fraud attempts at a single financial institution over eight months.

The aggregate impact: $347 million in verified deepfake fraud losses per quarter. Deloitte projects generative AI fraud losses in the US will climb from $12.3 billion in 2023 to $40 billion by 2027.
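As a back-of-envelope check on that projection, Deloitte’s two published figures imply roughly 34% compound growth per year. A minimal sketch, assuming nothing beyond straight annual compounding between the 2023 and 2027 numbers:

```python
# Implied compound annual growth rate (CAGR) from Deloitte's projection:
# $12.3B (2023) -> $40B (2027), assuming straight annual compounding.
start, end, years = 12.3, 40.0, 4

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~34.3% per year
```

A fraud category compounding at a third per year is not a trend that compliance-cycle defences are built to track.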

The financial consequences for organisations are covered in depth in our guide to the financial losses that result from this commoditised threat.

What Are Dark LLMs and Why Do They Matter for Enterprise Security?

Dark LLMs are the scripting layer that makes deepfake fraud scalable.

A Dark LLM is a large language model with safety guardrails deliberately removed, sold on dark web platforms to assist criminal operations. Unlike ChatGPT or Claude, which refuse to generate phishing content or social engineering scripts, Dark LLMs are purpose-built for exactly those tasks. Products like WormGPT and FraudGPT produce personalised spear-phishing content and social engineering scripts on demand — no refusals.

Group-IB documents 1,000+ active subscribers and a continuous development cycle. These are actively maintained software products. They receive updates. They respond to support requests. The operational model mirrors legitimate SaaS.

Within the DaaS ecosystem, Dark LLMs fill the written communication layer. A synthetic identity kit provides the face and the voice. A Dark LLM provides the email impersonating the CFO, the follow-up establishing urgency, the social engineering script for the call that precedes the video conference. A single operator with a $5 identity kit and a $30/month Dark LLM subscription has access to all three attack components.

How Do AI-Powered Voice Clone Scams Work in Practice?

Voice cloning is currently the highest-frequency deepfake attack vector — cheaper, faster to deploy, and requiring less compute than real-time video deepfakes.

The technical barrier is lower than most defenders appreciate. Scammers need as little as three seconds of source audio to produce a voice clone with an 85% match to the original speaker. Conference talks, earnings calls, LinkedIn videos, corporate webinars — for most public-facing executives at SMB companies, sufficient voice samples are already publicly available.

Pindrop’s data is direct: deepfake attacks grew 880% in 2024, followed by a 1,210% surge by December 2025. Their Voice Intelligence and Security Report documents a 1,300% year-on-year increase in deepfake voice calls. US call centres saw a 173% increase in synthetic voice calls between Q1 and Q4 2024 alone.

The attack pattern is consistent. The attacker acquires voice samples from public sources, generates a real-time or pre-recorded clone, and places a call. Often a synthetic voice bot makes initial contact to probe IVR systems and validate credentials before the social engineering phase begins. When the voice-cloned call arrives, the target hears what sounds like their CFO applying urgency framing: “I’ll explain later” and “I need you to take care of this right now.”

Modulate’s CTO Carter Huffman puts it plainly: “human ears and human eyes are just not enough — they’re rendered ineffective at determining what’s real.” Modulate’s January 2026 survey found 91% of enterprises planning to increase spending on voice fraud prevention over the next 12 months. That tells you everything about how the current defences are performing.

What Did the Arup Twenty-Five Million Dollar Deepfake Attack Actually Look Like?

In January 2024, an Arup employee in Hong Kong authorised 15 wire transfers totalling HKD 200 million (approximately USD 25.6 million) in a single day. Every visible and audible participant on the video conference call — including the company’s CFO — was an AI-generated deepfake. Hong Kong Police Force reported the incident in February 2024.

The attack chain is instructive.

Initial access came via spear-phishing: an email impersonating the CFO, establishing a pretext around a confidential financial matter. The execution phase was the video conference — not a single deepfake, but multiple deepfakes representing the CFO and several colleagues, constructing a multi-participant call that provided both authority and social consensus. Urgency framing was applied. Explicit requests for confidentiality were made. All of it was standard social engineering, delivered through a technically convincing synthetic media layer. Source material came from publicly available content: LinkedIn videos, company conference recordings, corporate media appearances.

The employee had no technical means to distinguish the call from reality. Human accuracy in identifying high-quality deepfake video falls to 24.5% in controlled studies. A 2025 iProov study found only 0.1% of participants could correctly identify all fake and real media presented to them. This is not a negligence problem. It is a technology problem.

The fraud was discovered when the employee contacted actual headquarters to discuss the “secret transaction.” No arrests have been announced. The funds remain unrecovered.

Why Are Traditional Fraud Defences Structurally Inadequate Against a Commodity Threat?

Traditional fraud defences were designed for an era when fraud required skill, investment, and time. DaaS commoditisation has collapsed all three barriers.

The result is speed asymmetry: DaaS tooling iterates on subscription-funded development cycles — decentralised, competitive, market-driven — while enterprise fraud-detection systems update on compliance-driven cycles that are slow and reactive.

Here is how legacy defences fail against DaaS-class attacks specifically.

Knowledge-based authentication (KBA) is defeated by synthetic identity kits pre-loaded with fabricated answers. Separately, 60% of organisations already report fraudsters using compromised PII from data breaches to bypass KBA checks. The identity kit makes this trivially accessible.

One-time passwords (OTPs) are bypassed not by defeating the authentication mechanism, but by defeating the human before verification is reached. By the time an employee is asked to confirm a transaction, the social engineering has already succeeded. The OTP confirms an action the attacker has already authorised the human to take.

Rule-based fraud detection flags anomalies against historical baselines. DaaS tooling generates novel attack patterns with each update cycle. A rule set calibrated against yesterday’s signatures is obsolete before it is deployed. Gartner projects that by 2026, 30% of enterprises will no longer consider standalone identity verification reliable in isolation.
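To make that failure mode concrete, here is a minimal, hypothetical sketch of a rule-based detector — every field, rule, and threshold below is invented for illustration, not drawn from any real fraud system. Each rule encodes a historical attack signature, so a transfer that stays inside the thresholds passes clean:

```python
# Hypothetical rule-based fraud detector. Each rule encodes a known,
# historical attack signature; fields and thresholds are invented
# for illustration only.
RULES = [
    ("amount_over_threshold", lambda tx: tx["amount_usd"] > 50_000),
    ("new_beneficiary",       lambda tx: tx["beneficiary_age_days"] < 7),
    ("off_hours_transfer",    lambda tx: tx["hour"] < 6 or tx["hour"] > 22),
]

def flag(tx: dict) -> list[str]:
    """Return the names of any rules this transaction trips."""
    return [name for name, rule in RULES if rule(tx)]

# A socially engineered transfer authorised by a legitimate employee:
# mid-sized, to an established beneficiary, during office hours.
tx = {"amount_usd": 48_000, "beneficiary_age_days": 90, "hour": 14}
print(flag(tx))  # [] -- no rule fires; the transfer passes clean
```

The detector is only as current as its rule set. An attack pattern the rules were never calibrated against produces no signal at all, which is exactly what a transfer authorised by a deceived but legitimate employee looks like.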

The full argument for why the arms race between generation and detection consistently favours the attacker is developed in our architecture guide. For the broader deepfake threat landscape — including the regulatory and compliance dimensions — the series overview provides the full context.

What Comes Next — How Does Agentic AI Remove Human Operators From Fraud Entirely?

Current DaaS attacks still require a human operator to place the call and manage the wire transfer instructions. The human is both a capability and a constraint.

Experian’s 2026 Future of Fraud Forecast identifies the removal of that constraint as the top emerging threat. Agentic AI fraud describes fully automated, machine-to-machine fraud that executes the entire attack chain — target identification, social engineering, financial extraction — without human operators. The scaling constraint shifts from operator availability to computational capacity.

The progression is already partially visible. Pindrop documents a major US healthcare provider facing over $40 million in account exposure from automated AI bot calls in 2025. The FBI has documented North Korean operatives using deepfake identities to secure IT employment positions at US companies and divert salaries back to the regime — an application of DaaS capabilities well beyond conventional financial fraud.

The FTC documented $12.5 billion in US consumer fraud losses in 2024 — a 25% increase despite fraud report volumes remaining stable. That increase reflects scams becoming more effective, not more numerous. Agentic automation removes the human bottleneck. The trajectory points toward $40 billion in generative AI fraud losses by 2027.

The insurance and liability implications are examined in our guide to why fraud losses at this scale expose significant insurance gaps.

Frequently Asked Questions

What is Deepfakes-as-a-Service?

Deepfakes-as-a-Service (DaaS) is a subscription-based criminal market on dark web platforms that sells pre-packaged deepfake generation tools — synthetic faces, cloned voices, and fabricated identity documents — at commodity pricing. Entry-level synthetic identity kits cost approximately $5; Dark LLM subscriptions run around $30 per month. (Source: Group-IB, January 2026)

How much does it cost to buy a deepfake voice clone?

A synthetic identity kit including a cloned voice sample, AI-generated face, and supporting documentation costs approximately $5 on dark web markets. Scammers can produce an 85% voice match from as little as three seconds of source audio. Dark LLM subscriptions for generating social engineering scripts run around $30 per month.

What is a Dark LLM and is it legal?

A Dark LLM is a large language model with safety guardrails removed, sold on dark web platforms to assist criminal operations. WormGPT and FraudGPT are documented examples. Possessing or distributing these tools is illegal in most jurisdictions under computer fraud legislation, though enforcement is limited by the pseudonymous dark web structure.

How do criminals use AI to commit fraud?

DaaS platforms combine three components: synthetic identity kits (~$5) for fake faces and documents; voice cloning tools for replicating executive voices; and Dark LLMs (~$30/month) for personalised phishing and social engineering scripts. Together, they enable non-technical operators to conduct sophisticated identity fraud and wire transfer scams at scale.

Can you really lose millions to a deepfake video call?

Yes. In January 2024, Arup lost HKD 200 million (approximately USD 25.6 million) when an employee in Hong Kong authorised 15 wire transfers after a video conference where all participants — including the company’s CFO — were AI-generated deepfakes. The Hong Kong Police Force confirmed the incident.

How fast are deepfake fraud attacks growing?

Pindrop documented 880% deepfake attack growth in 2024, followed by a 1,210% surge by December 2025, and a 1,300% year-on-year increase in deepfake voice calls. Group-IB reports AI-related dark web forum mentions grew 371% since 2019, with deepfake fraud losses reaching $347 million per quarter.

What is the difference between deepfake fraud and a regular social engineering attack?

Traditional social engineering relies on text-based deception. Deepfake fraud adds a synthetic media layer — AI-generated video, cloned voices, and fabricated documents — that defeats the visual and auditory verification humans naturally rely on. The Arup case showed that a live, multi-participant deepfake video call can deceive trained employees in ways text-based phishing cannot.

What is a deepfake scam and how does it work?

A deepfake scam uses AI-generated synthetic media — typically a cloned voice or fabricated video — to impersonate a trusted individual and deceive a target into authorising a financial transaction. The attacker acquires voice or video samples from public sources, generates a real-time deepfake, and uses it in a phone call or video conference.

Why is employee awareness training not enough to stop deepfake fraud?

Awareness training assumes employees can detect deception through vigilance. Deepfake technology defeats the sensory cues that vigilance relies on. The Arup employee saw and heard the CFO on a live video call with no technical means to detect the deepfake. Human accuracy in identifying high-quality deepfake video falls to 24.5% in controlled studies. Detection requires technological countermeasures, not human perceptiveness.

What is agentic AI fraud?

Agentic AI fraud, identified by Experian as the top emerging threat for 2026, describes fully automated fraud where AI systems execute the entire attack chain — target identification, social engineering, financial extraction — without a human operator. It is the next evolution beyond current DaaS models, which still require a human to operate the tools.

How does DaaS compare to earlier cybercrime tooling like phishing kits?

DaaS follows the same commoditisation trajectory as SQL injection toolkits (2003–2008) and phishing kits (2010s): specialist capability packaged into turnkey tooling, sold at subscription pricing, distributed to non-technical operators via criminal marketplaces. Supply-chain maturation within the broader Cybercrime-as-a-Service economy.

Is voice deepfake fraud more common than video deepfake fraud?

Currently, yes. Voice cloning is the highest-frequency deepfake attack vector. Pindrop documents a 1,300% year-on-year increase in deepfake voice calls. Voice cloning requires less compute and less training data — as little as three seconds of audio — making it cheaper and faster to deploy at scale. The $5 synthetic identity kit includes a cloned voice sample for exactly this reason.
