Business | SaaS | Technology
Feb 24, 2026

The Liar’s Dividend and What Deepfake Proliferation Means for Organisational Trust

AUTHOR

James A. Wondrasek

In January 2024, an employee at Arup — a global engineering firm — joined a video call with what appeared to be the company’s CFO and several colleagues. The call was convincing. The faces were recognisable, the conversation coherent, the instructions specific: authorise 15 wire transfers totalling $25 million USD. Every person on that call was a deepfake.

The direct loss was bad enough. But here’s the second problem: when Arup filed an insurance claim, they discovered the insurer might deny coverage — because the employee had “voluntarily” authorised the transfers. Even under sophisticated synthetic deception, the money was gone and so was the recourse.

This is what scholars call the liar’s dividend — and it’s the foundation for understanding what deepfake fraud means for your organisation at a strategic level. This article is part of our series on the broader deepfake fraud and policy response lag, which maps the full threat landscape from commoditised tooling through to institutional trust failure.

What Is the Liar’s Dividend and Why Does It Undermine Organisational Trust?

The liar’s dividend was coined by legal scholars Robert Chesney and Danielle Citron in 2019. The concept is straightforward: once deepfakes are widespread and publicly known, anyone can plausibly claim that genuine video, audio, or documentary evidence is fabricated.

This is the inverse of ordinary misinformation. Misinformation creates false content. The liar’s dividend destroys trust in real content. The burden of proof flips: organisations must now demonstrate that evidence is authentic rather than assuming it is.

The LSE’s December 2025 analysis, “The Deepfake Blindspot in AI Governance,” identifies this as the gap most institutional responses miss. Regulatory frameworks classify deepfakes as a content distribution problem rather than what LSE researcher Rachel Ntow calls “a systemic risk multiplier: a technology that exploits digital authenticity to facilitate financial fraud, undermine public health, and erode public trust.”

There’s a two-sided accountability problem here. Bad actors dismiss evidence against them as fabricated. Organisations use “it could be a deepfake” as internal cover when verification controls fail. Both erode institutional trust. Neither is addressed by content moderation.

The practical implication: the audit trail you maintain, the recorded interviews you conduct, the documented approvals in your workflow — all of these are now subject to a credibility challenge that simply didn’t exist three years ago.

How Are Consumer Fraud Playbooks Becoming Enterprise Attack Prototypes?

The social engineering playbooks being refined at industrial scale on consumer victims — pig butchering, romance scams, phone impersonation — are prototypes for enterprise attacks. The underlying technology is identical: agentic AI bots, synthetic identity generation, real-time deepfake video.

Think of consumer fraud figures as a measure of how rapidly this technology is being refined. The FTC documented 65,000+ romance scam cases and $3 billion in losses in 2024. Total consumer fraud losses hit $12.5 billion — a 25% increase while the number of reports stayed flat. Each attack is getting more effective.

Experian’s 2026 Future of Fraud Forecast calls this year a “tipping point” — the moment consumer fraud infrastructure crosses into enterprise-grade automated attacks. Their named threat for 2026 is “machine-to-machine mayhem”: autonomous AI agents initiating transactions and accessing systems without human oversight. The operations that spent three years perfecting deepfake video calls on consumers are now turning those same tools toward executive impersonation and employment infiltration.

To understand the infrastructure enabling this at scale, see our overview of how deepfake fraud scales and why defences fall behind.

What Is Pig Butchering and How Does Agentic AI Make It an Enterprise Concern?

Pig butchering is a long-con fraud that combines romance scams with fraudulent cryptocurrency investment. Victims are groomed over weeks or months into believing they have a genuine relationship, then guided into a fraudulent investment platform where fabricated returns grow until the platform disappears — taking everything with it.

Agentic fraud bots now sustain that emotional manipulation at scale. Thousands of parallel conversations, around the clock, without human operators. Not simple scripts, but emotionally intelligent systems managing long-form social engineering with consistent persona maintenance.

On 12 February 2026, Arizona Attorney General Kris Mayes issued a public warning citing AI deepfake videos and voice-cloning in active romance scam operations. The FBI has likewise confirmed that AI-generated content is now a feature of current scam infrastructure.

The enterprise connection is direct. The agentic bot that sustains a pig butchering conversation for eight weeks — maintaining emotional consistency, adapting responses, never breaking character — is the same technology that can conduct a synthetic job interview or impersonate an executive on a call. Long-con patience, social engineering precision, identity consistency. These capabilities transfer. For a full account of the DaaS infrastructure that powers these fraud vectors, see our breakdown of how deepfake fraud tooling became a commodity subscription market.

How Are Deepfake Candidates Infiltrating Hiring Workflows?

This is where the abstract risk becomes immediately operational. AI-generated faces, fabricated credentials, and scripted interview performance create entirely artificial job candidates capable of passing standard video interviews and background checks.

Gartner projects one in four candidate profiles will be fake by 2028. The FBI has documented over 300 US companies that unknowingly hired North Korean operatives using stolen identities and AI-generated personas.

The KnowBe4 case from July 2024 makes it concrete. KnowBe4 discovered that a newly hired software engineer — who had passed background checks, verified references, and four video interviews — was a North Korean operative using stolen US credentials and an AI-enhanced photo. Malware was flagged within hours of the laptop being delivered.

Here’s the key point: a synthetic employee is not an external attacker. They are inside your hiring workflow, with access to your codebases, internal systems, and credentials from day one. The threat model is not perimeter security — it is insider access.

Jones Walker has documented that the negligent hiring standard — “knew or should have known” — is shifting. With FBI warnings public and synthetic identity fraud widely covered, courts may find that organisations without verification controls should have known the risk existed.

For hiring controls you can put in place now, see our practical defence roadmap covering employment fraud and hiring workflow controls.

What Does the Liar’s Dividend Mean for Fraud Investigation and Insurance Claims?

The liar’s dividend creates three institutional failures beyond direct losses: it undermines fraud investigation, complicates insurance claims, and weakens regulatory enforcement.

In fraud investigation, genuine video evidence, audio recordings, and authenticated documents can now be challenged as potentially AI-generated. Investigators must establish the authenticity of their own evidence before it can function as evidence.

Insurance compounds the problem through existing policy language. Standard crime and fidelity policies contain voluntary parting exclusions: when an employee authorises a payment — even under deepfake-induced deception — the insurer may deny the claim because the employee technically “chose” to act.

Regulatory enforcement faces the same challenge. Any evidence in compliance proceedings can be challenged as potentially synthetic, creating delays while authenticity is established. LSE’s Rachel Ntow captured the trajectory: “If regulatory frameworks continue to treat deepfakes as isolated nuisances rather than structural threats, they will progressively weaken the digital trust systems that underpin economies, public safety, and accountability.”

Jones Walker notes that documentation of verification efforts is now the primary legal defence — organisations must show what steps they took before an attack succeeded.

From Trust Crisis to Architectural Response: What Comes Next?

Detection alone fails. If any evidence can be accused of being synthetic, the arms race between detection tools and generative AI is beside the point. You need a different structural approach.

The emerging response to pervasive synthetic media is proof-of-humanness verification: confirming that a real person is behind an interaction before evidence is created. As Adrian Ludwig, Chief Architect and CISO at Tools for Humanity, put it: “The challenge is not spotting the fake, but proving the real.”

Banks could apply proof-of-humanness checks when opening accounts. Video platforms could verify participants before recording commences. Hiring workflows could establish verified human identity at application stage. The C2PA standard provides complementary infrastructure — cryptographic chain-of-custody for digital content that establishes provenance from creation rather than challenging authenticity after distribution.
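
To make the provenance half of that architecture concrete, here is a minimal sketch in Python of signing content at creation and verifying it later, using the third-party cryptography package. It illustrates the chain-of-custody principle only: the actual C2PA standard defines a richer manifest format and certificate handling, and a real deployment would use a conforming C2PA SDK rather than this hand-rolled example. The function names and the media bytes are illustrative assumptions.

```python
# Conceptual sketch of content provenance: sign media at creation, then
# verify the signature (and detect tampering) at any later point.
# Illustrates the principle behind standards like C2PA; this is NOT the
# C2PA manifest format. Requires: pip install cryptography
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def create_provenance_record(media: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    """Bind a content hash and a creator claim to a signature at creation time."""
    claim = {"creator": creator, "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}


def verify_provenance_record(media: bytes, record: dict, public_key: Ed25519PublicKey) -> bool:
    """Check that the media still matches the signed claim and the signature is valid."""
    if hashlib.sha256(media).hexdigest() != record["claim"]["sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    video = b"...recorded interview bytes..."  # placeholder for real media
    record = create_provenance_record(video, "recruiting@example.com", key)
    print(verify_provenance_record(video, record, key.public_key()))                # True
    print(verify_provenance_record(video + b"tampered", record, key.public_key()))  # False
```

The point of the sketch is the ordering: authenticity is established when the content is created, so a later dispute reduces to a signature check rather than a forensic argument about whether the footage looks synthetic.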

The organisations that navigate this successfully will invest in verification architecture — the structural response the liar’s dividend actually demands. Detection investment addresses the wrong layer.

For the proof-of-humanness and content provenance architecture in depth, see our comparative guide to deepfake detection vs content provenance — choosing the right defence architecture. For immediate practical steps, see our practical defence roadmap covering employment fraud and hiring workflow controls.

Frequently Asked Questions

What is the liar’s dividend?

Coined by Robert Chesney and Danielle Citron (2019): once deepfakes are widespread, anyone can plausibly dismiss genuine video, audio, or documentary evidence as AI-fabricated. It shifts the burden from proving something is fake to proving something is real.

Can deepfakes affect hiring decisions?

Yes. The FBI has documented over 300 US companies that inadvertently hired North Korean operatives using synthetic identities. Gartner projects one in four candidate profiles will be fake by 2028.

Are romance scam bots using AI now?

Yes. Experian’s 2026 Fraud Forecast identifies agentic AI fraud — fully autonomous bots sustaining emotional manipulation over weeks — as a named 2026 threat. Arizona AG Kris Mayes issued a 12 February 2026 warning specifically citing AI deepfake video calls in romance scam operations.

How do deepfakes affect insurance fraud claims?

When employees authorise transfers after deepfaked video calls, the voluntary parting exclusion in standard crime and fidelity policies may deny the claim — because the employee technically “chose” to act, even under synthetic deception.

What is pig butchering and why should organisations care?

A long-con fraud combining romance scams with fraudulent cryptocurrency investment. The same agentic AI technology — deepfake video, synthetic identity, emotionally intelligent bots — transfers directly to enterprise attacks like executive impersonation and employment fraud.

How is the liar’s dividend different from ordinary misinformation?

Misinformation creates false information. The liar’s dividend allows genuine information to be dismissed as false. Any real video, audio recording, or document can be plausibly accused of being AI-generated.

What is agentic AI fraud?

Experian’s term for fully autonomous AI systems executing multi-step fraud schemes without human operators — sustaining complex social engineering over extended periods rather than following simple scripts.

Can you really lose millions to a deepfake video call?

Yes. Arup lost $25 million when an employee authorised wire transfers after a deepfaked CFO video call in Hong Kong (January 2024).

How worried should I be that a remote hire might be a synthetic candidate?

The risk is documented. Over 300 US companies have already been compromised. For remote-first teams, every video interview is a potential deepfake interaction.

What verification steps can detect deepfake job candidates?

Human detection is unreliable. Effective controls include government-issued ID verification during video calls, biometric liveness detection, in-person verification for privileged access roles, and unpredictable live actions during interviews.
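
As one illustration of the “unpredictable live actions” control, an interview team could draw randomised physical prompts rather than reusing a fixed list, since real-time deepfake pipelines tend to degrade under fast pose changes, occlusion, and unscripted movement. The sketch below is a hypothetical helper, not part of any vendor toolkit; the challenge wording is an assumption.

```python
# Hypothetical sketch: pick unpredictable live-action prompts for a video
# interview so candidates cannot rehearse against a known checklist.
import random

CHALLENGES = [
    "Turn your head slowly to the left, then to the right",
    "Hold your hand in front of your face for three seconds",
    "Stand up and step back from the camera",
    "Hold your government ID next to your face, then tilt it toward the light",
    "Read this one-time phrase aloud: {phrase}",
]


def pick_challenges(count: int = 3) -> list[str]:
    """Return a random, non-repeating set of live-action prompts for one interview."""
    phrase = "-".join(str(random.randint(100, 999)) for _ in range(3))
    chosen = random.sample(CHALLENGES, k=min(count, len(CHALLENGES)))
    return [c.format(phrase=phrase) if "{phrase}" in c else c for c in chosen]


if __name__ == "__main__":
    for prompt in pick_challenges():
        print(prompt)
```

None of this replaces ID verification or liveness detection; it simply removes predictability from the human step.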

Does the liar’s dividend affect regulatory enforcement?

Yes. Any evidence in compliance proceedings can be challenged as potentially AI-generated, weakening enforcement actions and creating delays while authenticity is established.

What is proof of humanness and how does it address the liar’s dividend?

An emerging verification approach that confirms a real person is behind an interaction before evidence is created. Vendors such as Tools for Humanity represent this shift from post-hoc detection to pre-interaction verification.
