Your cyber insurance almost certainly does not cover deepfake fraud losses. Buried in your policy is a clause called the voluntary parting exclusion, and most policyholders first hear of it while filing a claim they are about to lose.
Arup lost $25 million when an employee joined a deepfake video call impersonating the company's CFO. Standard Social Engineering Fraud Endorsement sublimits are capped at $100,000–$250,000. That is a 100:1 gap between what your insurer will pay and what a real attack can cost.
This article explains why standard policies fail, where the Coalition Deepfake Response Endorsement fits (and where it does not), and exactly what language to demand from your broker. It is part of our comprehensive guide on why deepfake fraud defences keep falling behind policy. Start there for the full picture, then come back here for the insurance procurement decision.
Why Does Standard Cyber Insurance Exclude Deepfake Fraud Losses?
Standard cyber insurance is built for data breaches, ransomware, and system outages — technical events where something in your infrastructure was compromised.
Deepfake-induced wire transfers are a different animal entirely. When an employee is deceived by an AI-generated video call into authorising a $25 million transfer, no system was compromised. The employee made a decision — a voluntary one, in the insurer’s view — based on fabricated authenticity.
The PLUS Guide (Kennedys Law, February 2026) puts it plainly: deepfake fraud is a crime and fraud event, not a cyber event. Insurers slot these losses into crime and fidelity frameworks. Traditional controls — firewalls, endpoint detection, encryption — do nothing to stop a convincing deepfake call. Swiss Re’s SONAR 2025 report said it clearly: “deepfakes may increasingly be used in sophisticated cyberattacks and drive cyber insurance losses.” When the reinsurance market issues systemic risk warnings, underwriters tighten exclusions. Not loosen them.
What Is the Voluntary Parting Exclusion and How Does It Apply to Deepfake Fraud?
The voluntary parting exclusion is the clause your insurer will use to deny the claim.
Plain-language version: coverage does not apply when your company voluntarily transferred funds — even when that transfer was induced by fraud.
In practice: your finance manager receives a video call from what appears to be your CEO — actually an AI-generated synthetic. They follow procedure, believe the request is legitimate, and complete the transfer. The exclusion applies because your employee clicked the button. The sophistication of the deception is legally irrelevant under standard policy language.
Jones Walker LLP’s January 2026 analysis spells it out: the exclusion applies because the policyholder’s agent “willingly parted with” the funds. Think of it like a software licence that excludes liability for user error — except the insurer defines “user error” to include being deceived by a perfect deepfake, because the user still initiated the transaction. The deception is your problem. Unless you have the right coverage in place.
Does the Coalition Deepfake Response Endorsement Cover Financial Fraud Losses?
In December 2025, Coalition launched the Coalition Deepfake Response Endorsement — widely described as the first purpose-built deepfake insurance product, available across eight markets including Australia, the US, the UK, and Canada.
It covers forensic analysis, legal takedown support, and crisis communications for reputational harm from synthetic media. That last part — reputational harm — is the key phrase. It does not cover wire transfer losses. If your finance manager was deceived into wiring $25 million, the Coalition endorsement pays out nothing.
Coalition’s Head of Cyber Portfolio Underwriting confirmed that “deepfake-enabled fraud leading to fraudulent transfers” was already covered through existing social engineering fraud coverage. The December endorsement expanded into a different risk category — reputational harm — not the wire fraud scenario most finance teams are worried about. Understanding the mechanics of how deepfake fraud works helps clarify why these distinctions matter so much for coverage decisions.
Worth having. Just not the solution to the voluntary parting exclusion problem.
Are Standard Sublimits of $250,000 Adequate When Actual Deepfake Losses Reach $25 Million?
The sublimit is the maximum your insurer will pay on a social engineering fraud claim. Per IRMI, the standard range is $100,000 to $250,000 — roughly one percent of the Arup loss. To understand the full scale of what organisations actually lose, see the Arup $25M loss and MSUFCU avoided exposure figures — the gap between standard sublimits and real-world losses is the core procurement problem. Orion, a Luxembourg-based supplier, disclosed approximately $60 million in losses from a social engineering wire fraud attack. Deloitte projects generative AI-related fraud losses in the US reaching $40 billion by 2027.
The sublimit is the highest-priority procurement action. The best endorsement language delivers nothing if a $5 million wire fraud loss is capped at a $250,000 payout.
How to size it:
- Base it on maximum single-transaction exposure. Deepfake attacks target your largest transfers, not your smallest.
- SMBs with $10M–$50M revenue should target at least $500,000–$1M.
- High-value wire transfer businesses — FinTech, real estate, professional services — should target $1M–$5M minimum, benchmarked against the largest single transaction your organisation could execute in a week.
Insurers are not enthusiastic about raising social engineering fraud sublimits right now. But it is the only number that determines whether your coverage is real or performative.
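The sizing guidance above reduces to a simple floor calculation: take the larger of your maximum single-transaction exposure and the band minimum for your business profile. A minimal sketch, assuming the bands and figures stated in this article (the function name and the default floor are illustrative, not underwriting rules):

```python
def recommended_sublimit(max_single_transaction: int,
                         annual_revenue: int,
                         high_value_wires: bool = False) -> int:
    """Back-of-envelope social engineering fraud sublimit in USD."""
    if high_value_wires:
        floor = 1_000_000        # FinTech, real estate, professional services: $1M-$5M minimum
    elif 10_000_000 <= annual_revenue <= 50_000_000:
        floor = 500_000          # SMB band: target at least $500k-$1M
    else:
        floor = 250_000          # illustrative default near the top of the IRMI range
    # Base on maximum single-transaction exposure, never below the band floor.
    return max(max_single_transaction, floor)


def coverage_gap(loss: int, sublimit: int) -> int:
    """Uninsured portion of a loss under a given sublimit."""
    return max(loss - sublimit, 0)


# The Arup scenario against a standard $250,000 sublimit:
print(coverage_gap(25_000_000, 250_000))  # 24750000 uninsured
```

The point of the sketch is the asymmetry it makes visible: the floor only matters when your largest plausible transfer is small, and a standard sublimit leaves nearly the entire Arup-scale loss on your balance sheet.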
What Should You Ask Your Broker to Include in Your Deepfake Fraud Coverage?
Six provisions. Why each matters, and what happens without it.
1. Social Engineering Fraud Endorsement with explicit deepfake and synthetic media language. The endorsement must reference AI-generated audio, video, and text-based deception — not just “impersonation” or “pretexting.” Without synthetic media specificity, an insurer can argue a deepfake event is not covered under existing definitions.
2. Explicit exception to the voluntary parting exclusion for deepfake-induced payments. Negotiate explicit language stating the exclusion does not apply to payments induced by deepfake impersonation. Without this, the endorsement may exist and the claim will still fail.
3. Sublimit adequate to maximum single-transaction exposure. Not the IRMI default. Negotiate based on your largest plausible single transaction.
4. Removal of verification clause, or a deepfake-specific carve-out. Ask your broker directly: does this policy contain a verification clause? Jack Keilty at New Dawn Risk advises clients to “steer clear of policies bearing this wording.” If it cannot be removed, negotiate carve-out language for scenarios where verification procedures were followed but defeated by AI-quality impersonation. The clause that was reasonable in 2019 can void legitimate coverage in 2026.
5. Coverage triggers that include both “social engineering fraud” and “funds transfer fraud” terminology. Some policies cover one, not both. This gap matters especially for FinTech companies where both attack vectors are plausible.
6. Crime policy placement, not cyber-only. If the endorsement is attached only to a cyber policy, the underwriting classification mismatch puts the claim at risk — even if the endorsement language appears to cover it.
Get specific policy language in writing before signing. Verbal broker assurances are not coverage.
How Do Documented Controls and Compliance Strengthen Insurance Coverage Claims?
Getting the right endorsement and sublimit in place is the procurement side. Making sure a claim actually holds up after an incident is the operational side, and that depends on documentation.
The PLUS Guide (Kennedys Law, February 2026) notes that “post-incident scrutiny focuses on what procedures were in place to prevent an incident from occurring.” Underwriters assess your governance, not just the event.
Three control categories that improve claims defensibility:
Deepfake detection tools, deployed and documented. Logged evidence of detection capability matters. An organisation that deployed tooling — even if it did not catch a sophisticated attack — is in a stronger position than one with nothing in place. For the full comparison of what documented detection and provenance controls you should have in place, see our architectural guide; the controls you choose directly affect the strength of your coverage position.
Incident response plan covering deepfake-specific scenarios. A generic IRP does not demonstrate preparedness for synthetic media attacks. Address deepfake impersonation specifically: who authorises unusual transfer requests, what independent verification is required, when to escalate before funds move.
C2PA implementation across organisational media workflows. Jones Walker links C2PA (Coalition for Content Provenance and Authenticity) implementation to legal reasonableness benchmarks — and notes that organisations failing to implement available authentication technologies are “increasingly vulnerable to negligence claims” as industry standards emerge.
Documented controls do not guarantee a payout. They shift the burden. For whether regulatory compliance creates any audit trail that aids coverage claims — including EU AI Act audit trails — see our regulatory analysis. Compliance documentation and coverage defensibility are directly connected.
Frequently Asked Questions
Will my insurer pay out if an employee was tricked by a deepfake CFO video into wiring money? Under standard policies, probably not. The voluntary parting exclusion applies because the employee authorised the transfer voluntarily, even though they were deceived. You need a Social Engineering Fraud Endorsement with explicit deepfake language and an exception to the voluntary parting exclusion.
What is the voluntary parting exclusion in insurance? A policy clause that denies coverage when the insured voluntarily transferred funds, even if the transfer was induced by fraud. In deepfake scenarios, the employee is considered to have acted voluntarily regardless of the AI-generated deception that prompted it.
How much deepfake fraud coverage should my company have? Standard sublimits of $100,000–$250,000 are inadequate. SMBs with $10M–$50M revenue should target at least $500,000–$1M. Companies with high-value wire transfer activity should target $1M–$5M minimum, benchmarked against maximum single-transaction exposure.
What is a Social Engineering Fraud Endorsement? An optional rider to a crime or cyber policy that extends coverage to losses from deception-based fraud, including deepfake impersonation attacks that cause employees to authorise fraudulent wire transfers. It is the primary coverage instrument for deepfake wire fraud.
What is a verification clause in an insurance policy? A provision that may deny a social engineering fraud claim if you failed to verify the requestor’s identity through an independent channel before authorising a transfer. Some endorsements include this clause, potentially voiding coverage even when the Social Engineering Fraud Endorsement exists.
Does implementing C2PA help with insurance claims? Jones Walker identifies C2PA implementation as a legal reasonableness benchmark. Organisations that demonstrate they implemented content provenance standards strengthen their argument that losses occurred despite reasonable precautions.
Is the Coalition Deepfake Response Endorsement the same as social engineering fraud coverage? No. The Coalition endorsement covers reputational harm — forensic analysis, legal takedown, crisis communications. For wire transfer fraud caused by deepfake impersonation, you need the Social Engineering Fraud Endorsement, not the Coalition product.