Since 2022, US state legislatures alone have enacted 169 deepfake laws across 46 states. The EU AI Act has an enforcement deadline of August 2026. The UK government launched what it called a “world-first” deepfake detection framework in February 2026.
And yet deepfake-enabled fraud losses are going up, not down.
Here is the situation: compliance is necessary — the penalties are real and the deadlines are coming — but compliance alone will not stop active fraud against your organisation. Regulation always addresses yesterday’s threat while attackers have already moved on. That gap has a name: policy response lag.
This article breaks down what each major framework actually requires, why cybersecurity experts say the frameworks will not work as fraud prevention, and how to build a compliance matrix for a multi-jurisdiction SaaS or FinTech product. If you want the broader picture on how deepfake fraud is outpacing policy response, the pillar guide covers that in full.
How Many Deepfake Laws Exist and Why Aren’t They Stopping Fraud?
Jones Walker's January 2026 analysis documents 169 deepfake laws enacted across 46 US states since 2022, with 146 bills introduced in 2025 alone.
The problem is what those laws actually target. The overwhelming majority address non-consensual intimate imagery (NCII), election manipulation, and right of publicity. Texas §255.004 criminalises publishing deceptive deepfake videos within 30 days of an election, with intent to influence the result. Minnesota §609.771 escalates penalties for repeat electoral interference offences. Virginia §18.2-386.2 covers deepfake pornography. Tennessee's ELVIS Act was the first law protecting voice as a right of publicity in the AI context.
Notice what is missing: enterprise fraud. CFO impersonation. Synthetic job candidates passing live video interviews. Wire transfer fraud. None of these are the primary target of any major deepfake law. Most legislation treats deepfakes as a content-moderation problem rather than organised criminal infrastructure. A content-removal framework cannot solve a fraud problem.
This creates a real paradox: more laws produce more compliance obligations without actually reducing the attack surface. A company that fully complied with every applicable state law in 2025 would still have been exposed to the Arup incident — the $25 million wire transfer fraud where an employee authorised 15 transactions to a deepfaked CFO on a video call. Nothing in the existing regulatory stack addresses that scenario. To understand what deepfake incidents actually cost organisations and why compliance investment needs to be benchmarked against real financial exposure, see the financial case analysis.
What Does the UK Home Office Deepfake Detection Framework Actually Do?
On February 5, 2026, Home Secretary Liz Kendall and Safeguarding Minister Jess Phillips announced a “world-first” deepfake detection framework.
Here is what it actually does: establishes an evaluation methodology for deepfake detection technologies — the Deepfake Detection Challenge, with more than 350 participants including INTERPOL, Five Eyes members, Microsoft, and academic institutions. The result is industry benchmarks for assessing detection tools against real-world threats.
Here is what it does not do: impose compliance obligations on businesses. There is no requirement to adopt specific tools, no penalties for failing to detect deepfakes, and no coverage of the generation side. It is useful for vendor assessment. It is not a regulatory mandate.
The UK’s criminal enforcement layer is separate and already in force — legislation making it illegal to create deepfake intimate images of adults without consent. That criminal provision and the detection framework operate independently.
Why Do Cybersecurity Experts Say the UK Framework Won’t Work?
Dr. Ilia Kolochenko, CEO of ImmuniWeb — a Swiss cybersecurity firm specialising in AI-driven security testing — was blunt in his assessment to The Register. The plan “will quite unlikely make any systemic improvements in the near future.”
The structural problem is this: detection technologies are evaluated against a fixed snapshot of generation capability. But generation capability evolves continuously. By the time a benchmark is validated and adopted, the generation methods it evaluates have already been superseded. This is speed asymmetry — not a one-time gap but a compounding feature of generative AI development.
There is also the metadata stripping problem. Legitimate AI generators may comply with C2PA watermarking standards — backed by Adobe, Microsoft, Google, and OpenAI. But bad actors strip those markers before deploying deepfakes. The EU Code of Practice acknowledges this openly: its labelling framework “applies only to lawful deepfakes.” A criminal generating a deepfake for a wire transfer scheme is entirely outside the labelling regime.
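The operational consequence is that a provenance check can only ever return three verdicts, and the most common one is the least informative. The sketch below uses a simplified stand-in for a real C2PA manifest (actual C2PA SDKs and manifest structures are far richer); it shows why an absent marker proves nothing:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ProvenanceVerdict(Enum):
    SIGNED_AND_VALID = "signed_and_valid"   # manifest present, signature verifies
    TAMPERED = "tampered"                   # manifest present, signature fails
    UNVERIFIED = "unverified"               # no manifest at all: proves nothing


@dataclass
class C2PAManifest:
    """Simplified stand-in for a C2PA manifest; illustrative only."""
    claim_generator: str     # tool that claims to have produced the asset
    signature_valid: bool    # result of cryptographic verification


def classify_provenance(manifest: Optional[C2PAManifest]) -> ProvenanceVerdict:
    """A missing manifest is NOT evidence of authenticity: attackers strip
    markers, so fraudulent media usually arrives UNVERIFIED, exactly like
    legitimate media from tools that never signed anything."""
    if manifest is None:
        return ProvenanceVerdict.UNVERIFIED
    if not manifest.signature_valid:
        return ProvenanceVerdict.TAMPERED
    return ProvenanceVerdict.SIGNED_AND_VALID
```

Because a stripped marker and a never-applied marker are indistinguishable, UNVERIFIED cannot be treated as a fraud signal. That is the structural hole in any labelling-first regime.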
Kolochenko’s conclusion is worth quoting directly: “We need a systemic and global amendment of legislation — not just legally unenforceable code of conduct or best practices.” Compliance frameworks establish accountability norms. They do not stop active fraud.
What Does EU AI Act Article 50 Require and What Does August 2026 Mean for Your Product?
EU AI Act Article 50 is the broadest synthetic media disclosure framework currently in force. Enforcement date: August 2, 2026. If your product generates or manipulates synthetic media, this is the deadline that matters.
What it requires: providers of AI systems that generate or manipulate synthetic media must ensure content is “marked in a machine-readable format and detectable as artificially generated.” Deployers must disclose synthetic content to users no later than at first interaction.
The EU Code of Practice — with its final version anticipated in June 2026 — mandates a multilayered approach: visible disclosures combined with machine-readable markers, including metadata, watermarking, and content provenance signals. Interim visible markers are two-letter acronyms: “AI”, “KI”, “IA” depending on language.
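What "machine-readable and detectable" will mean in practice depends on the final Code of Practice text. As an illustration only, not the mandated format, a provider could pair the visible two-letter label with a machine-readable provenance record at generation time. Every field name below is an assumption, not Article 50 terminology, and production systems would use signed C2PA manifests rather than bare JSON:

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative only: field names are assumptions, not Article 50 or
# Code of Practice terminology.
VISIBLE_MARKERS = {"en": "AI", "de": "KI", "fr": "IA"}  # interim two-letter labels


def build_provenance_record(content: bytes, generator: str, lang: str = "en") -> dict:
    """Attach a machine-readable marker alongside a visible disclosure label."""
    return {
        "ai_generated": True,
        "visible_label": VISIBLE_MARKERS.get(lang, "AI"),
        "generator": generator,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # The hash binds the record to the exact bytes it describes.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }


record = build_provenance_record(b"<rendered video bytes>", generator="acme-video-gen/2.1")
print(json.dumps(record, indent=2))
```

A real implementation would cryptographically sign this record; an unsigned declaration can be forged or stripped as easily as it is added.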
On penalties: transparency violations under Article 50 carry a ceiling of €7.5 million or 1.5% of global annual turnover. High-risk system non-compliance rises to €15 million or 3% of global turnover. Article 72 extends penalty exposure to governance record-keeping, not just implementation failures.
The extraterritorial reach is the critical point for global SaaS operators. Article 50 applies to any company whose AI output reaches EU users, regardless of where the company is headquartered. Jakarta, Sydney, Singapore — if you serve EU customers, you are in scope. The Digital Services Act adds complementary obligations for platforms hosting user-generated content.
And here is the blind spot: Article 50 is a disclosure obligation for legitimate providers. A criminal impersonating your CFO on a video call is not going to watermark the deepfake. Compliance avoids penalties. It does not protect you from being defrauded.
US State Laws vs Federal: What Does the TAKE IT DOWN Act Actually Cover?
The TAKE IT DOWN Act was signed by President Trump on May 19, 2025. It was bipartisan legislation championed by Senators Ted Cruz and Amy Klobuchar — and it does almost nothing to address enterprise deepfake fraud.
Its scope is NCII. Covered platforms must establish a takedown process within one year and remove flagged content within 48 hours. The FTC enforces it with civil penalties up to $53,088 per violation. CFO impersonation, synthetic job candidates, wire transfer fraud — none of it is in scope. When enterprise deepfake fraud occurs, prosecutors fall back on wire fraud, identity theft, and extortion statutes.
The compliance patchwork remains. There is no federal preemption. Jones Walker documents 169 laws across 46 states, each with different categories and penalty structures. A SaaS platform with users in multiple states needs to work out which state laws apply. US state laws and EU AI Act requirements are not alternatives — both apply simultaneously depending on where your users are.
One more thing worth flagging: even full regulatory compliance does not guarantee insurance coverage when deepfake fraud occurs. Standard crime policies typically include a "voluntary parting" exclusion: when a deceived employee authorises a fraudulent transaction, coverage often does not apply. Understanding how regulatory audit trails affect insurance coverage claims is critical; compliance is not a substitute for knowing your insurance exposure.
Which Jurisdiction’s Rules Apply to a Global SaaS Company Operating Cross-Border?
The organising principle here is straightforward: compliance obligations are determined by where your users are located, not where you are headquartered.
EU AI Act: The broadest compliance trigger. Any AI system whose output reaches EU users triggers Article 50 regardless of company domicile. EU customers, AI-generated media — you are in scope for August 2026.
United Kingdom: Two distinct instruments. The detection framework is not a compliance mandate. UK criminal law applies to fraud targeting UK-based entities and is separate from EU AI Act obligations.
United States: No single federal standard for enterprise deepfake fraud. State laws apply based on where affected individuals or operations are located. The TAKE IT DOWN Act applies to covered platforms hosting NCII.
Southeast Asia: No country in the region currently has comprehensive deepfake-specific legislation. Indonesia’s PDP Law and Singapore’s PDPA provide partial data protection coverage but were not designed for synthetic media. Compliance obligations flow from EU and UK requirements — not local law.
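The scoping principle compresses into a few lines of logic. The sketch below is a deliberate simplification for illustration, not legal advice; the function name, jurisdiction codes, and regime labels are placeholders:

```python
def applicable_regimes(hq_country: str, user_jurisdictions: set[str],
                       generates_synthetic_media: bool) -> set[str]:
    """Deliberately simplified scoping sketch; real scoping needs legal review."""
    regimes: set[str] = set()
    if "EU" in user_jurisdictions and generates_synthetic_media:
        regimes.add("EU AI Act Art. 50 labelling (enforced from 2026-08-02)")
    if "UK" in user_jurisdictions:
        regimes.add("UK criminal law on fraud and deepfake offences")
    if any(j.startswith("US-") for j in user_jurisdictions):
        regimes.add("Applicable US state deepfake laws, per state")
    # hq_country is deliberately unused: domicile does not narrow scope.
    return regimes


# A Jakarta-headquartered SaaS with EU and California users:
print(applicable_regimes("ID", {"EU", "US-CA"}, generates_synthetic_media=True))
```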
A single global policy built on the least restrictive regime will fail. Domestic regulatory silence does not provide protection — a company in Jakarta serving EU customers is subject to Article 50 from August 2026 regardless of what Indonesian law requires. For the full scope of the deepfake threat that drives these compliance obligations, the pillar overview covers the complete landscape.
How Do You Build a Jurisdiction-Specific Deepfake Compliance Matrix?
Map applicable laws to specific business functions rather than attempting a single global policy. That is Jones Walker’s methodology and it is the right approach.
Start with an AI use-case inventory. You cannot build a compliance matrix if you do not know what AI tools your organisation is using and what they generate. The NCUA AI Compliance Plan, developed for US credit unions, requires a centralised registry of all AI tools deployed and what they output. Although it was written for one regulated sector, the registry approach generalises: it is a governance foundation that any organisation deploying AI should establish.
From that inventory, identify which operations involve synthetic media or AI-generated content: content generation, user verification, financial transactions, HR screening, customer communications. Each function is a potential compliance trigger wherever it reaches users.
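A workable inventory does not need specialist tooling to start, and a structured registry beats a spreadsheet because scoping queries become trivial. The sketch below is illustrative; the field names and example tools such as "promo-video-gen" are placeholders, not an NCUA-prescribed schema:

```python
from dataclasses import dataclass


@dataclass
class AIUseCase:
    """One inventory entry. Field names are illustrative placeholders."""
    tool: str                # e.g. "promo-video-gen"
    vendor: str
    output_type: str         # "synthetic_video", "synthetic_audio", "text", ...
    business_function: str   # "marketing", "hr_screening", "kyc", ...
    reaches_eu_users: bool   # drives Article 50 scoping
    owner: str               # an accountable person, not a team alias


registry = [
    AIUseCase("promo-video-gen", "Acme AI", "synthetic_video",
              "marketing", reaches_eu_users=True, owner="j.doe"),
    AIUseCase("support-chatbot", "BotCo", "text",
              "customer_communications", reaches_eu_users=True, owner="a.lee"),
]

# First scoping pass: which entries trigger synthetic-media labelling in the EU?
SYNTHETIC = {"synthetic_video", "synthetic_audio", "synthetic_image"}
in_scope = [u for u in registry if u.reaches_eu_users and u.output_type in SYNTHETIC]
```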
The matrix maps each business function against the relevant jurisdiction, applicable law, specific requirement, effective date, penalty, and responsible owner. EU AI Act Article 50 requires labelling from August 2026. UK obligations flow from criminal law. US obligations vary by state. Southeast Asian obligations at present flow through from EU and UK requirements.
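Each matrix row then becomes a record with those seven fields. Here is a sketch of the EU row, filled in from the figures cited above (the owner value is a placeholder):

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ComplianceMatrixRow:
    """One matrix row; the field set mirrors the mapping described above."""
    business_function: str
    jurisdiction: str
    law: str
    requirement: str
    effective_date: date
    max_penalty: str
    owner: str   # the accountable reviewer


eu_labelling = ComplianceMatrixRow(
    business_function="content_generation",
    jurisdiction="EU",
    law="EU AI Act, Article 50",
    requirement="Machine-readable marking plus user disclosure of synthetic media",
    effective_date=date(2026, 8, 2),
    max_penalty="€7.5M or 1.5% of global annual turnover",
    owner="compliance-lead (placeholder)",
)
```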
The matrix is a living document. The EU Code of Practice finalises in June 2026. US state legislatures introduced 146 bills in 2025 alone. Assign an owner and schedule quarterly reviews.
Building this matrix demonstrates governance maturity and satisfies regulatory obligations. But it does not stop a deepfake attack in progress. A fully compliant organisation can still be defrauded if it lacks independent verification procedures, live detection capability, and incident response protocols. Compliance is the floor. Defence requires more. See the guide on building a compliance matrix as part of a practical defence roadmap for a phased, actionable approach to implementing what the matrix documents.
FAQ
Does the EU AI Act apply to my company if I’m not based in the EU?
Yes. The EU AI Act has extraterritorial reach — it applies to any company deploying AI systems whose output reaches EU users, regardless of where the company is headquartered. If your SaaS product serves EU customers and uses AI that generates or manipulates synthetic media, Article 50 applies to you. August 2, 2026 is your compliance deadline.
What is the penalty for violating EU AI Act synthetic media requirements?
Transparency violations under Article 50 reach up to €7.5 million or 1.5% of global annual turnover. High-risk system non-compliance reaches €15 million or 3% of global turnover. Article 72 establishes documentation requirements as evidence of compliance — penalty exposure extends to governance record-keeping.
What does the TAKE IT DOWN Act require from businesses?
The TAKE IT DOWN Act (signed May 2025) targets non-consensual intimate imagery. Covered platforms must establish a takedown process within one year and remove flagged NCII content within 48 hours. FTC enforcement with civil penalties up to $53,088 per violation. Enterprise deepfake fraud is outside its scope entirely.
Is the UK deepfake framework mandatory for businesses?
No. The UK Home Office framework is a detection evaluation methodology — it establishes benchmarks but does not impose compliance mandates. Creating deepfakes for fraud is separately criminalised under UK law. The framework and criminal provisions are distinct instruments.
How many US states have deepfake laws?
As of early 2026, 46 US states have enacted deepfake legislation, with 169 total laws documented by Jones Walker since 2022. Different categories — elections, NCII, right of publicity, fraud — with no federal preemption. Companies in multiple states must comply with each applicable state law individually.
Why can’t governments just ban deepfakes?
Deepfake technology is dual-use. The same capabilities used for fraud power legitimate applications in entertainment, accessibility, and education. A blanket ban would criminalise legitimate uses and would be unenforceable across borders. The LSE analysis identifies the core problem: treating deepfakes as content to be banned rather than criminal infrastructure to be disrupted. The governance approach is mismatched to the threat.
Why do detection tools keep losing the arms race against AI-generated fakes?
Detection tools are trained on known generation techniques, but generation methods evolve continuously. Each new model produces outputs existing detectors were not trained to recognise. Bad actors also strip C2PA markers before deploying deepfakes — rendering upstream compliance irrelevant at the point of attack. As ImmuniWeb’s Dr. Ilia Kolochenko says: “We need a systemic and global amendment of legislation — not just legally unenforceable code of conduct or best practices.” Speed asymmetry is structural.
What is a jurisdiction-specific compliance matrix and how do I build one for deepfake laws?
A compliance matrix maps applicable deepfake laws to your business functions — content generation, identity verification, financial transactions, HR screening — across each jurisdiction where you serve customers. Start with an AI use-case inventory: a registry of all AI tools your organisation deploys and what they generate. Then map each use case against applicable laws, documenting what is required, when, and what the penalty is.
Does regulatory compliance protect my company from deepfake fraud losses?
No. Compliance avoids penalties and demonstrates governance maturity, but it does not prevent active attacks. Standard crime policies typically include a “voluntary parting” exclusion that voids coverage when an employee authorises a fraudulent transaction, even under deepfake deception. Compliance is necessary but insufficient; pair it with technical detection, operational verification, and incident response capability.
What deepfake regulations exist in Southeast Asia?
Southeast Asian countries currently lack comprehensive deepfake-specific legislation. Indonesia’s PDP Law and Singapore’s PDPA provide partial data protection coverage but were not designed for synthetic media. Companies in Southeast Asia with EU or UK customers inherit compliance obligations from those jurisdictions — your compliance matrix must account for these flow-through requirements.
What is the NCUA AI Compliance Plan and why does it matter for non-financial companies?
The NCUA AI Compliance Plan requires US credit unions to maintain a centralised AI use-case inventory — a registry of all AI tools deployed. While sector-specific, any company deploying AI can adapt the approach: you cannot build a compliance matrix if you do not know what AI tools your organisation uses. It is the governance foundation that compliance depends on.