How Deepfake Fraud Works and Why Defences Keep Falling Behind

Business | SaaS | Technology
Feb 24, 2026

AUTHOR

James A. Wondrasek

In January 2024, a finance employee at engineering firm Arup joined a video call with the company’s CFO and several colleagues. Every other participant on that call was a deepfake. The employee authorised 15 wire transfers totalling $25 million before the fraud was discovered. It was not a one-off. Deloitte projects US losses from AI-enabled fraud will reach $40 billion by 2027.

The pattern behind these numbers is straightforward. Deepfake fraud tooling — synthetic identity kits, voice cloning models, Dark LLM subscriptions — iterates on criminal-market timescales. The defences designed to stop it — laws, insurance policies, detection tools, corporate verification workflows — iterate on legislative and procurement timescales. That gap is widening, and it is the subject of this seven-part series. Each article below addresses one dimension of the problem. This page helps you find the one that matches where you are right now.

In this series:

  1. How deepfake fraud tooling became a five dollar subscription — The threat landscape: what Deepfakes-as-a-Service is and why the commodity model makes static defences obsolete.
  2. What deepfake fraud actually costs and the financial case for better defences — The financial case: aggregate loss data, the Arup and MSUFCU cases, and SMB-scale exposure.
  3. Why deepfake laws are multiplying while the fraud keeps getting worse — The regulatory picture: 169 US state laws, EU AI Act Article 50, the UK Home Office framework, and why compliance is necessary but insufficient.
  4. Choosing between deepfake detection and content provenance architectures — The architecture decision: comparing reactive detection, proactive provenance (C2PA), and proof of humanness as distinct defensive paradigms.
  5. Why standard cyber insurance does not cover deepfake fraud losses — The insurance gap: the voluntary parting exclusion, Coalition’s Deepfake Response Endorsement, and sublimit adequacy.
  6. A phased deepfake defence roadmap for organisations without a security team — The operational roadmap: phased controls executable by a lean team, from incident response plan to vendor due diligence.
  7. The liar’s dividend and what deepfake proliferation means for organisational trust — The trust crisis: the liar’s dividend, the consumer fraud crossover, and the synthetic candidate employment risk.

How Deepfake Fraud Became a Subscription Service

Deepfakes-as-a-Service (DaaS) is the commoditisation of synthetic media fraud tooling into subscription and per-job marketplace models — directly analogous to Ransomware-as-a-Service. Actors with no AI expertise can now commission multi-modal impersonation attacks. In 91% of cases, creating a convincing deepfake takes just $50 and 3.2 hours. Voice clones can be generated from as little as three seconds of audio. The technical barrier to entry has effectively been removed.

The DaaS market emerged as a supply-chain maturation event — the same pattern that commoditised SQL injection tooling two decades ago. Pindrop documented a 1,337% year-on-year increase in deepfake attacks on contact centres. The speed asymmetry this creates is built into the system: the DaaS market iterates in weeks while enterprise defences iterate in months to years. If you want to understand precisely how deepfake fraud tooling became a five dollar subscription — including the Dark LLM subscription ecosystem and the synthetic identity kit supply chain — the full threat landscape analysis covers the commodity model in detail.

Read the full analysis: how deepfake fraud tooling became a five dollar subscription.

What Deepfake Fraud Actually Costs Organisations

The commodity pricing described above translates directly into loss figures. Deepfake fraud losses reached $547.2 million in the first half of 2025, with Deloitte projecting $40 billion in US market losses by 2027. Individual incidents reach eight figures: Arup lost $25 million in the video-call attack described above; Orion disclosed a $60 million loss attributed to social engineering fraud. In 2024, businesses lost an average of nearly $500,000 per deepfake-related incident.

Financial services companies are particularly exposed — synthetic voice fraud in insurance spiked 475% in 2024. But the exposure extends beyond large institutions. Social engineering endorsement sublimits of $100,000 to $250,000 are the typical ceiling for SMB policies, and incident costs at the low end of documented cases already approach that ceiling. For a detailed financial case — including what deepfake fraud actually costs and the financial case for better defences at both enterprise and SMB scale — the full analysis includes the MSUFCU Pindrop ROI case and aggregate exposure modelling.

Read the full analysis: what deepfake fraud actually costs and the financial case for better defences.

Why Deepfake Laws Are Multiplying but Fraud Is Getting Worse

Forty-six US states have enacted deepfake legislation, producing 169 laws since 2022. The EU AI Act Article 50 takes effect on August 2, 2026 with penalties up to EUR 15 million or 3% of global turnover. The federal TAKE IT DOWN Act criminalises publishing non-consensual intimate deepfakes. Yet deepfake fraud losses are accelerating. The governance failure is structural: most laws treat deepfakes as a content-moderation problem rather than as criminal infrastructure.

Regulators have framed deepfakes as a transparency and labelling challenge rather than as a criminal services economy that needs disruption. For organisations operating across US, EU, and UK jurisdictions, the compliance burden is a patchwork of different disclosure requirements, penalty structures, and effective dates. A jurisdiction-specific compliance matrix is the minimum governance tool. The full analysis of why deepfake laws are multiplying while the fraud keeps getting worse — including the cross-border compliance problem for SaaS, FinTech, and HealthTech operators — covers all three major regulatory regimes and explains why legislative volume is not the same as legislative effectiveness.
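A compliance matrix can start as something as lightweight as structured data your team reviews each quarter. The sketch below is illustrative only: it encodes the regimes and figures named in this series, while the field layout, the UK placeholder, and the lookup helper are assumptions rather than legal advice.

    # Jurisdiction compliance matrix, seeded with the regimes named in this
    # series. Field layout is illustrative; entries need legal verification.
    COMPLIANCE_MATRIX = [
        {"jurisdiction": "US (state)",
         "instrument": "169 deepfake laws across 46 states",
         "obligation": "varies by state: disclosure, criminal, civil remedies",
         "effective": "rolling since 2022"},
        {"jurisdiction": "US (federal)",
         "instrument": "TAKE IT DOWN Act",
         "obligation": "criminalises non-consensual intimate deepfakes",
         "effective": "in force"},
        {"jurisdiction": "EU",
         "instrument": "AI Act Article 50",
         "obligation": "transparency and labelling for synthetic media",
         "penalty": "up to EUR 15m or 3% of global turnover",
         "effective": "2026-08-02"},
        {"jurisdiction": "UK",
         "instrument": "Home Office framework",
         "obligation": "see the regulatory article in this series",
         "effective": "unspecified here"},
    ]

    def rows_for(prefix: str) -> list[dict]:
        """Return matrix rows whose jurisdiction starts with the prefix."""
        return [r for r in COMPLIANCE_MATRIX if r["jurisdiction"].startswith(prefix)]

    # Example: a SaaS operator with EU users pulls the EU rows for review.
    for row in rows_for("EU"):
        print(row)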

Read the full analysis: why deepfake laws are multiplying while the fraud keeps getting worse.

Choosing Between Detection and Provenance as Your Defence Architecture

Three architecturally distinct approaches to deepfake defence exist. Reactive detection tools analyse media for synthetic artefacts but face an arms race problem — accuracy against novel generation methods can drop to 38–50%. Proactive provenance standards like C2PA (Coalition for Content Provenance and Authenticity) cryptographically attach origin and edit-history metadata at the point of creation, so any compliant platform can verify authenticity. Proof of humanness bypasses the fake-versus-real media binary entirely.

C2PA is backed by Adobe, Microsoft, Google, and OpenAI, and is advancing towards ISO standardisation. Jones Walker identifies C2PA compliance as an emerging legal reasonableness benchmark for organisations handling synthetic media. The decision about choosing between deepfake detection and content provenance architectures depends on your attack surface, your existing tooling, and whether your threat model calls for reactive or proactive controls — or both. The cluster article covers how to choose between these paradigms based on your attack surface and resources.
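To see how the paradigms compose in practice, here is a minimal sketch of a layered check: provenance first, because a valid C2PA manifest is a deterministic signal, with reactive detection as the fallback for unsigned media. Both helper functions are hypothetical stand-ins for whatever C2PA validator and detection vendor you actually deploy, not real APIs.

    # Layered media triage: provenance first, detection as fallback.
    # Both helpers are hypothetical placeholders, not real library calls.

    def verify_c2pa_manifest(media: bytes):
        """Placeholder for a C2PA validator. A real one would return:
        True (manifest verifies), False (manifest present but invalid),
        None (no manifest attached)."""
        return None  # placeholder: treat media as unsigned

    def detector_score(media: bytes) -> float:
        """Placeholder for a detection vendor's score in [0, 1]."""
        return 0.0  # placeholder

    def classify(media: bytes, threshold: float = 0.5) -> str:
        provenance = verify_c2pa_manifest(media)
        if provenance is True:
            return "authentic-provenance"   # cryptographic origin verified
        if provenance is False:
            return "reject-tampered"        # manifest failed validation
        # Unsigned media falls back to detection, where accuracy against
        # novel generators can drop to 38-50% (see above).
        return "suspect-synthetic" if detector_score(media) >= threshold else "unverified"

The ordering reflects the trade-off described above: provenance gives a deterministic answer when metadata exists, while detection is probabilistic and degrades against new generation methods.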

Read the full analysis: choosing between deepfake detection and content provenance architectures.

Why Standard Cyber Insurance Does Not Cover Deepfake Losses

Standard cyber and commercial crime insurance policies typically contain a voluntary parting exclusion: if an employee knowingly authorised a wire transfer — even one induced by a sophisticated deepfake impersonation of the CEO or CFO — the transfer was not involuntary, and the claim is denied. Deepfake deception is not currently a recognised exception to voluntary parting under standard policy language. The gap is systematic, not incidental.

The coverage gap has a size problem as well. Social engineering endorsements typically provide sublimits of $100,000 to $250,000. Against a multi-million dollar loss or even a $500,000 mid-market incident, that sublimit provides no meaningful risk transfer. The insurance market is beginning to respond — Coalition launched the first explicit Deepfake Response Endorsement in December 2025 — but the product landscape remains early-stage. Understanding why standard cyber insurance does not cover deepfake fraud losses — including the specific coverage language to require from your broker and how to evaluate sublimit adequacy against your real exposure — is the starting point for closing this risk transfer gap.
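The sublimit arithmetic is worth running against your own exposure. A minimal sketch using only the loss figures cited in this series, with the sublimit set at the upper end of the typical range:

    # Retained (uninsured) exposure when a social engineering sublimit
    # caps recovery. Loss figures are the ones cited in this series.
    SUBLIMIT = 250_000  # upper end of the typical $100k-$250k range

    incidents = {
        "average 2024 deepfake incident": 500_000,
        "Arup video-call fraud": 25_000_000,
        "Orion social engineering loss": 60_000_000,
    }

    for name, loss in incidents.items():
        covered = min(loss, SUBLIMIT)
        print(f"{name}: loss ${loss:,}, covered ${covered:,}, retained ${loss - covered:,}")

Even at the sublimit’s upper end, the average 2024 incident leaves half the loss uninsured; the headline cases leave more than 99% uninsured.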

Read the full analysis: why standard cyber insurance does not cover deepfake fraud losses.

Building a Defence Roadmap Without a Dedicated Security Team

The most immediately effective controls cost nothing to implement: a deepfake-specific incident response plan (separate from standard cybersecurity IR), out-of-band verification protocols for any wire transfer or identity-change request over a defined threshold, and a pre-agreed safe word system for voice authentication scenarios. These three controls directly address the authorisation-chain vulnerability that makes CEO and CFO impersonation fraud possible — without requiring specialist staff.
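To show how little machinery the highest-impact control needs, here is a minimal sketch of an out-of-band verification gate. The threshold, the directory entries, and the function names are illustrative assumptions; the property that matters is that the callback number always comes from a pre-verified directory, never from the request itself.

    # Out-of-band verification gate: transfers at or above the threshold
    # require a callback to a pre-verified number. Values are illustrative.
    OBV_THRESHOLD = 10_000

    VERIFIED_DIRECTORY = {  # maintained out of band; entries hypothetical
        "cfo@example.com": "+61 2 0000 0000",
    }

    def needs_callback(amount: float) -> bool:
        """Any request at or above the threshold triggers verification."""
        return amount >= OBV_THRESHOLD

    def callback_number(requester: str) -> str:
        """Look up the hardcoded number; never trust one supplied in the request."""
        number = VERIFIED_DIRECTORY.get(requester)
        if number is None:
            raise ValueError(f"no pre-verified number for {requester}: escalate, do not pay")
        return number

The safe word system follows the same shape: the secret is agreed in advance through a separate channel, so nothing in the incoming call can substitute for it.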

The gap is wider than you might expect. Only 13% of companies have anti-deepfake protocols in place, and 87% of finance professionals say they would execute a payment if instructed by what appeared to be their CEO or CFO. Coalition’s incident response lead Shelley Ma notes that these attacks “shortcut skepticism, and they can bypass even very well-trained employees” — which is why process controls matter more than awareness training alone. A phased approach — following a phased deepfake defence roadmap for organisations without a security team — lets lean engineering organisations sequence controls by impact and cost without needing to hire a dedicated security function.

Read the full analysis: a phased deepfake defence roadmap for organisations without a security team.

The Liar’s Dividend and the Broader Trust Crisis

The liar’s dividend is the epistemic by-product of pervasive synthetic media: once deepfakes are widespread enough that any video, audio, or document can plausibly be claimed to be AI-generated, authentic evidence can be dismissed as synthetic. You face not just the risk of being deceived, but the risk of having genuine evidence of that deception challenged in insurance claims, fraud investigations, and regulatory proceedings.

The employment fraud vector is already active. Gartner projects that 1 in 4 global job candidates will be AI-fabricated by 2028, and in 2024, over 300 companies unknowingly hired impostors connected to North Korea using deepfakes. The social engineering playbooks refined in consumer romance scam and pig-butchering operations are prototypes for enterprise executive impersonation attacks — the underlying technology is identical. The full treatment of the liar’s dividend and what deepfake proliferation means for organisational trust covers how this epistemic shift affects fraud investigations, insurance claims, and hiring workflows — and why it matters even to organisations that never become direct fraud targets.

Read the full analysis: the liar’s dividend and what deepfake proliferation means for organisational trust.

What to Do Next

Fraud tooling scales as a commodity market while defences — institutional, regulatory, technical, and contractual — operate on slower timescales. That gap does not close on its own.

You cannot implement every recommended control simultaneously, and the series is designed with that constraint in mind. Start with the article in the series list above that matches your most pressing gap: threat awareness, financial exposure, compliance, defence architecture, insurance, operational controls, or organisational trust.

Frequently Asked Questions

What is a deepfake and how is it different from other AI-generated content?

A deepfake is AI-synthesised audio, video, or still-image media that impersonates a specific real person with sufficient fidelity to deceive a human observer or an automated verification system. The distinguishing characteristic is the impersonation component — the goal is to make you believe the content represents a real, known individual, not merely AI-generated content in the abstract. For a detailed look at the tooling behind this: how deepfake fraud tooling became a five dollar subscription.

What sectors are most exposed to deepfake fraud losses right now?

Financial services companies face outsized exposure due to contact centres handling high-value authentication calls at volume, wire transfer authorisation workflows relying on voice or video confirmation, and KYC onboarding susceptible to synthetic video injection. However, any organisation that uses voice or video calls to authorise financial transactions is exposed. What deepfake fraud actually costs and the financial case for better defences covers sector-specific data.

How has voice cloning technology advanced to where three seconds of audio is enough?

The text-to-speech ecosystem expanded rapidly between 2023 and 2025, with zero-shot voice cloning models now generating convincing synthetic speech from a few seconds of reference audio without fine-tuning. Any public recording — a LinkedIn video, an earnings call, a media interview — provides sufficient reference audio for a voice clone attack. How deepfake fraud tooling became a five dollar subscription covers the technology pipeline in full.

Does the EU AI Act apply to my company if it is not based in the EU?

Yes, in most cases where your product or service reaches EU users. EU AI Act Article 50 transparency requirements apply to providers and deployers of AI systems regardless of incorporation location, if the output is used within the EU. Penalties of up to EUR 15 million or 3% of global turnover apply, effective August 2, 2026. Why deepfake laws are multiplying while the fraud keeps getting worse includes a jurisdiction compliance matrix.

What is the voluntary parting exclusion and why does it matter for deepfake fraud claims?

The voluntary parting exclusion is an insurance policy clause that denies a claim when an employee knowingly authorised a financial transfer — even if the authorisation was obtained through deepfake impersonation. Under current standard policy language, the sophistication of the deception does not override the voluntary authorisation. Full coverage language guidance: why standard cyber insurance does not cover deepfake fraud losses.

What is out-of-band verification and why is it the primary SMB countermeasure?

Out-of-band verification means confirming any wire transfer instruction or identity-change request through an independent communication channel — calling back on a previously-verified, hardcoded phone number rather than accepting the number provided in the request. It directly defeats the primary attack vector without requiring any technology investment. A phased deepfake defence roadmap for organisations without a security team includes OBV protocol design as a Phase 1 priority.

What is the liar’s dividend and why does it matter beyond direct fraud losses?

The liar’s dividend describes the epistemic by-product of pervasive synthetic media: authentic evidence can now be plausibly dismissed as AI-generated. For organisations, this means genuine recordings of fraud incidents, authentic documentation of wrongdoing, and real evidence submitted to insurance claims can all be challenged as synthetic. The liar’s dividend and what deepfake proliferation means for organisational trust covers the institutional implications.

This article is part of the Deepfake Fraud vs Policy Response Lag series by SoftwareSeni. For the complete series, see the navigation block above.

