What Deepfake Fraud Actually Costs and the Financial Case for Better Defences

Business | SaaS | Technology
Feb 24, 2026


AUTHOR

James A. Wondrasek
Graphic representation of deepfake fraud financial exposure and defence investment ROI

Deepfake fraud is no longer a theoretical risk. It’s a line item on someone’s loss statement right now. Group-IB documented $347 million in verified losses in a single quarter. The FTC reported $12.5 billion in total US consumer fraud for 2024. Deloitte projects $40 billion in US generative AI-enabled fraud by 2027. These aren’t competing figures — they’re measuring different things — but together they describe a threat that is accelerating faster than most organisations’ defences.

Two cases tell you everything you need to know about the stakes. In January 2024, a finance employee at Arup’s Hong Kong office watched what looked like a video conference with the company CFO and several colleagues, then authorised 15 wire transfers totalling $25.6 million. Every person on that call except the victim was a deepfake. The money has never been recovered. On the other side of the ledger: Michigan State University Federal Credit Union deployed Pindrop’s voice fraud detection platform in August 2024 and documented $2.57 million in avoided fraud over 14 months — plus a 10% NPS improvement and 58 seconds saved on every authentication call.

The difference between those two outcomes is whether a working defence was in place. This article lays out the real financial data, explains why financial services companies cop the worst of it, and walks you through a per-incident cost model you can apply to your own organisation. For broader context, see this overview of how deepfake fraud operates and why the policy response has lagged.

What Does Deepfake Fraud Actually Cost Businesses in 2024-2025?

The headline numbers are real, but they’re measuring different things. They’re worth unpacking before you use any of them in a board presentation.

Group-IB’s $347 million is verified quarterly losses from confirmed, investigated incidents — cloned executives, fake video calls, documented dollar amounts. It’s the most conservative figure because it only counts what was actually investigated and attributed to deepfake fraud.

The FTC’s $12.5 billion is total US consumer fraud across all types for 2024. Deepfakes are an accelerating slice of that, not the whole thing. Investment scams were the largest deepfake-specific category at $900 million, or 57% of all deepfake losses tracked by Surfshark.

Deloitte’s $40 billion is a forward projection for all US generative AI-enabled fraud growing from $12.3 billion in 2023 at roughly 32% compound annual growth. Use it as a planning anchor, not a current figure.

The acceleration is the sharpest data point of the lot. Surfshark tracked cumulative deepfake losses growing from $130 million across 2019 to 2023, to $400 million in 2024, to $1.56 billion in 2025. Losses tripled year-on-year. The WEF confirmed more than $200 million in Q1 2025 alone — more in a single quarter than the preceding four years combined.

For company-level planning, the most useful figure comes from DeepStrike: nearly $500,000 average per-incident cost in 2024. That’s the basis for the exposure modelling later in this article. And the tooling driving these attacks is cheap — voice cloning runs $0.01–$0.20 per minute and needs only three seconds of source audio to get started.

Why Are Financial Services Companies Disproportionately Targeted by Deepfake Fraud?

Three structural factors make financial services the preferred hunting ground.

KYC reliance. Know Your Customer workflows are built on the assumption that identity verification works — that a voice or a face can confirm who someone is. Deepfakes attack that assumption directly.

Call centre volume. Financial institutions handle millions of authentication calls every year. Pindrop documented a 1,300% year-on-year increase in deepfake calls targeting financial institutions in 2024 — with 1 in every 106 calls being machine-generated for Pindrop customers by late 2024.

Wire transfer workflows. A single authorised transaction can move millions of dollars. The Arup case demonstrates exactly what that looks like in practice — and Arup is an engineering consultancy, not a bank. Any company with wire transfer authority is exposed to the same risk.

Credit unions and community banks get specifically targeted because fraudsters know the technology gap exists. Frank McKenna of Point Predictive puts it plainly: “Fraudsters are targeting credit unions and smaller community banks because they know that they have not invested in the sophisticated technology that the bigger banks have.” As larger banks harden their defences, attackers move down-market.

The Modulate “State of Voice-Based Fraud 2026” survey found 91% of enterprises plan to increase voice fraud spending, but nearly half aren’t confident in their current detection capabilities. And 44% cite friction as the top consequence of adding security — so even organisations that do invest are under pressure to water it down.

The same tooling that targets a credit union can target your customer verification workflow. For context, see our analysis of the Deepfake-as-a-Service ecosystem.

What Happened in the Arup Deepfake Fraud Case and How Much Did It Cost?

In January 2024, a finance employee at Arup’s Hong Kong office received a spear-phishing email purporting to come from the company CFO, asking them to join a confidential video call. Every other participant on that call was a real-time deepfake built from publicly available footage — LinkedIn videos, conference recordings. Fifteen wire transfers totalling $25.6 million (HK$200 million) were authorised and executed in a single day. As of early 2025: no arrests, no recovered funds.

DeepStrike’s assessment cuts right to it: “The $25 million Arup fraud was not a failure of an employee’s detection skills; it was a failure of organisational process.” A mandatory out-of-band verification protocol — confirm all large transfer requests via a pre-registered phone number — would have stopped it regardless of how convincing the deepfakes were.

Arup is not a bank. It is a global engineering consultancy. Any company with wire transfer authority has the same exposure. That is the upper bound of what the cautionary scenario looks like.

For context on why standard cyber insurance often doesn’t cover these losses, see our analysis of the deepfake fraud coverage gap.

What Did the MSUFCU Pindrop Deployment Actually Prove About Defence ROI?

MSUFCU — Michigan State University Federal Credit Union, with $8.26 billion in assets and 367,000 members — deployed Pindrop’s Passport and Protect products in August 2024. The goal was straightforward: reduce fraud without adding friction, increase efficiency, improve member experience. Over 14 months, it delivered on all three.

$2.57 million in avoided fraud exposure — deepfake calls blocked before they reached agents. “Avoided fraud exposure” is the probability-weighted value of fraudulent transactions stopped before execution.

10% NPS improvement — Net Promoter Score moved from 57 to 63 immediately after go-live. Colleen Cole, VP of MSUFCU’s member service centre: “There was an immediate jump, and then it’s been maintained and sustained since then.”

58 seconds saved per authentication call — Pindrop’s passive scoring eliminates manual security questions at the start of each call. Less friction for legitimate members; more scrutiny for the suspicious ones.

The mechanism is important here. Pindrop analyses calls in real time between connection and agent pickup, without the caller doing anything extra. Security is invisible to legitimate members — which is why NPS went up, not down. For a deeper look at the passive call fraud scoring approach Pindrop uses and how it compares to other defensive architectures, see our analysis of detection versus content provenance strategies.

McKenna again: “$2.57 million on the bottom line is a significant amount for them. I would expect it to grow year over year because these AI attacks are going to be far more frequent.”

How Do You Estimate Per-Incident Deepfake Fraud Exposure for a 50-500 Employee Company?

Start with the DeepStrike anchor: $500,000 average per-incident cost in 2024. For voice fraud specifically, Modulate puts the typical range at $5,000 to $25,000, with 20% of organisations reporting $25,000 to $100,000. The expected loss calculation is straightforward: multiply the per-incident cost by your estimated annual probability of an incident.

Compare those numbers against the annual cost of detection tooling and you’ve got your business case.
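The per-incident cost × annual probability calculation can be sketched in a few lines. The dollar figures below come from the Modulate and DeepStrike ranges cited above; the incident probabilities are purely illustrative assumptions you would replace with your own estimates.

```python
# Expected annual loss = per-incident cost x annual incident probability.
# Cost figures: Modulate ($5k-$25k typical voice fraud) and DeepStrike
# ($500k average per incident). Probabilities are illustrative assumptions.

def expected_annual_loss(per_incident_cost: float, annual_incident_prob: float) -> float:
    return per_incident_cost * annual_incident_prob

low = expected_annual_loss(5_000, 0.10)     # Modulate low end, assumed 10%/year
mid = expected_annual_loss(25_000, 0.10)    # Modulate typical high end
tail = expected_annual_loss(500_000, 0.02)  # DeepStrike average, assumed 2%/year

print(low, mid, tail)  # 500.0 2500.0 10000.0
```

Even the conservative low end gives you a concrete annual figure to weigh against the cost of controls.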

Here’s the counterintuitive part: the SMB exposure gap is larger than the enterprise gap. Enterprise companies have security operations centres and dedicated fraud teams. A 200-person company has a CFO, a finance team of two or three, and often a single person with wire transfer authority. CEO fraud now targets at least 400 companies per day. More than half of business leaders admit employees have received zero deepfake training.

For SMBs that can’t yet justify detection investment, process controls are the starting point — and they cost nothing.

Voice Clone Fraud Versus Video Deepfake Fraud — Which Poses Greater Risk Right Now?

Voice clone fraud is the higher-volume, higher-frequency risk. Pindrop’s 1,300% year-on-year increase in deepfake calls is the ongoing daily attack surface. Voice cloning requires three seconds of source audio and costs fractions of a cent per minute. It goes after high-volume routine processes: customer authentication, account verification, call-centre identity checks.

Human detection is structurally unreliable here. McKenna: “Overwhelmingly, 80% of the people identify the deepfake voice clone as my real voice. People cannot tell a deepfake apart.” Humans correctly identify high-quality deepfakes only 24.5% of the time in controlled studies. That’s not a training problem. That’s a structural limitation.

Video deepfake fraud is the lower-volume, higher-value risk. The Arup case required real-time deepfaking of multiple participants — rarer, but getting more accessible fast. Gartner predicts that by 2026, 30% of enterprises will no longer consider standalone identity verification reliable because of deepfakes.

Voice clone fraud needs AI-powered passive detection. Video deepfake fraud needs procedural controls that a convincing deepfake simply can’t satisfy. One is your daily exposure; the other is your catastrophic tail risk.

How Do You Build the Business Case for Deepfake Defence Investment?

The MSUFCU deployment gives you a three-stream ROI framework that shifts the conversation from “security is important” to “here is the expected financial return.”

Stream 1: Direct fraud avoidance. Per-incident cost × annual incident probability × detection improvement rate. MSUFCU documented $2.57 million over 14 months. For an SMB, the equivalent is probably $25,000–$100,000 annually.

Stream 2: Operational efficiency. Pindrop saved 58 seconds per call. For a contact centre handling 100,000 calls per year at $0.50/minute agent cost, that’s approximately $48,000 in annual efficiency gains.

Stream 3: Customer experience uplift. MSUFCU’s NPS moved from 57 to 63. If your business has a measured relationship between NPS and churn, this stream becomes a real number you can put in a spreadsheet.

The cost-of-inaction argument is simple: $25.6 million lost with no defences versus $2.57 million avoided with detection deployed. Experian calls 2026 a “tipping point” — 72% of business leaders rank AI-enabled fraud as a top operational challenge. The investment window is before your first incident, not after. For a broader view of the regulatory response, see this overview of how deepfake fraud operates and the policy response lag.

Frequently Asked Questions

How much does a deepfake fraud incident cost on average?

DeepStrike documented a $500,000 average per-incident cost in 2024, rising to $680,000 for large enterprises. For voice fraud specifically, Modulate found the typical range is $5,000 to $25,000. The high-end case is Arup at $25.6 million from a single video conference attack.

Why are credit unions and banks targeted more than other businesses?

Three structural factors: KYC reliance (which deepfakes directly undermine), high call centre volume, and wire transfer workflows. Credit unions are specifically targeted because, as Frank McKenna of Point Predictive documents, fraudsters know they haven’t invested in the detection technology larger banks have deployed.

Can a small company afford deepfake fraud detection?

Process-based defences cost nothing: out-of-band verification, dual approval for large transfers, 24-hour holds for unusual transactions — free to implement, and they would have stopped the Arup attack. Technology-based detection is priced for enterprise scale. For SMBs: process controls first, then technology when the cost modelling justifies it.

What is passive call fraud scoring?

Passive call fraud scoring analyses calls in real time before connecting the caller to an agent — no additional steps required from the caller. Pindrop examines voice characteristics, call metadata, and behavioural patterns, then presents a risk score to the agent. The “passive” part is why MSUFCU’s NPS improved rather than declining.

What is the difference between the $347 million, $12.5 billion, and $40 billion figures?

They measure entirely different things. Group-IB’s $347M is verified quarterly losses from documented deepfake incidents globally. The FTC’s $12.5B is total US consumer fraud across all types in 2024. Deloitte’s $40B is a 2027 projection for all US generative AI-enabled fraud combined. Complementary, not contradictory.

Is it true AI voice cloning can fool banks?

Yes. Pindrop documented a 1,300% year-on-year increase in deepfake calls targeting financial institutions. McKenna demonstrated that 80% of conference attendees misidentified a deepfake voice as real. Humans correctly identify high-quality deepfakes only 24.5–50% of the time. Voice cloning costs as little as $0.01 per minute and needs just three seconds of source audio.

How fast are deepfake fraud losses growing?

Surfshark tracked losses growing from $130 million over four years (2019–2023) to $400 million in 2024, then to $1.56 billion in 2025 — tripling year-on-year. Deloitte projects growth from $12.3 billion in 2023 to $40 billion by 2027 at roughly 32% compound annual growth.

Why can’t employee training solve the deepfake detection problem?

Because humans are structurally unable to reliably detect high-quality synthetic media — only 24.5% accuracy on high-quality deepfake video. Even when explicitly warned, 33% of participants in a 2025 study still shared sensitive information with a synthetic voice bot. AI-to-AI defence is the only scalable response.

What process controls should finance teams follow before authorising a large wire transfer?

Five controls at zero cost: (1) never call back on the caller’s number — use a pre-stored known number; (2) require dual approval for transfers above a defined threshold; (3) implement a mandatory 24-hour hold for large or unusual transactions; (4) treat any urgency to bypass verification with immediate suspicion; (5) confirm all video call participants through a separate channel before executing any financial instruction.
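Controls (2) and (3) above can be expressed as a simple pre-execution gate. The thresholds, the `Transfer` type, and the field names are all assumptions invented for illustration — the point is that the check is mechanical and costs nothing to enforce.

```python
# Illustrative sketch of dual approval and a 24-hour hold as a gate
# that runs before any wire transfer executes. Thresholds and the
# Transfer shape are assumed for the example, not a real policy.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

DUAL_APPROVAL_THRESHOLD = 10_000  # assumed policy threshold
HOLD_THRESHOLD = 50_000           # assumed policy threshold
HOLD_PERIOD = timedelta(hours=24)

@dataclass
class Transfer:
    amount: float
    requested_at: datetime
    approvers: set = field(default_factory=set)
    callback_verified: bool = False  # confirmed via a pre-stored number,
                                     # never the caller's own number

def may_execute(t: Transfer, now: datetime) -> bool:
    if not t.callback_verified:
        return False  # control (1): out-of-band verification is mandatory
    if t.amount >= DUAL_APPROVAL_THRESHOLD and len(t.approvers) < 2:
        return False  # control (2): dual approval above the threshold
    if t.amount >= HOLD_THRESHOLD and now - t.requested_at < HOLD_PERIOD:
        return False  # control (3): 24-hour hold on large transfers
    return True
```

A gate like this would have blocked the Arup transfers regardless of how convincing the video call was, because no deepfake can satisfy an out-of-band callback to a pre-stored number.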

What did the MSUFCU Pindrop deployment cost versus what it saved?

$2.57 million in avoided fraud exposure over 14 months, operational savings from 58 seconds saved per authentication call, and a 10% NPS improvement. The subscription cost hasn’t been publicly disclosed, but the three-stream ROI framework provides the structure to calculate net return against any known price.

Should my company build deepfake detection in-house or buy a third-party solution?

For most SMBs, in-house is impractical. Detection requires continuous model retraining, dedicated ML engineering, and large training datasets — third-party solutions have multi-year head starts. Implement free process controls now, then evaluate tooling when the per-incident cost modelling justifies it.
