AI governance is no longer something you can slot into next quarter’s roadmap. It is a legal obligation with dated deadlines and real penalties — and some of those deadlines have already passed. The EU banned manipulative AI, social scoring, and workplace emotion detection in February 2025. That is done. It is law.
The EU AI Act’s staggered enforcement timeline means high-risk system requirements kick in from August 2026. Meanwhile, US states — California, Texas, Illinois, Colorado — have gone ahead and enacted their own AI laws. Most took effect on January 1, 2026. So you are now dealing with a multi-jurisdictional patchwork, and it is only going to get thicker.
This article maps the concrete obligations, timelines, and penalties across both jurisdictions — and lays out the business case for acting now rather than later. If you want the broader picture on the AI governance gap regulators are now targeting, start there.
What does the EU AI Act actually require — and what are the penalties for non-compliance?
The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive AI regulation. It sorts AI into risk buckets: unacceptable (banned), high-risk (strict obligations), limited risk (transparency rules), and minimal risk (largely left alone).
There are four enforcement dates. The first one has already come and gone:
February 2, 2025 — Prohibited AI practices banned outright. Manipulative AI, social scoring, real-time biometric surveillance in public spaces, workplace emotion detection. AI literacy requirements now active for providers and deployers.
August 2, 2025 — GPAI model rules come into force. Providers need technical documentation, copyright policies, and training data summaries.
August 2, 2026 — High-risk AI obligations become fully enforceable. Conformity assessments, risk management systems, human oversight, post-market monitoring, EU Database registration, CE marking — all required before you can place a high-risk system on the market. These assessments typically take six to twelve months. If you have not started, August 2026 is already tight; the sketch after this list shows the arithmetic.
August 2, 2027 — Everything else kicks in.
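Assuming the six-to-twelve-month assessment estimate above, here is a minimal sketch of the runway arithmetic. The rough 30-day months and the duration range are the only assumptions; the deadline is the statutory date.

```python
from datetime import date

HIGH_RISK_DEADLINE = date(2026, 8, 2)  # EU AI Act high-risk enforcement date
ASSESSMENT_MONTHS = (6, 12)            # typical duration range cited above

def runway_report(today: date) -> str:
    """Say whether an assessment started today can plausibly finish in time."""
    days_left = (HIGH_RISK_DEADLINE - today).days
    best, worst = (m * 30 for m in ASSESSMENT_MONTHS)  # rough 30-day months
    if days_left < best:
        return f"{days_left} days left: even a best-case assessment misses the deadline."
    if days_left < worst:
        return f"{days_left} days left: only a best-case assessment fits."
    return f"{days_left} days left: start now and a worst-case assessment still fits."

print(runway_report(date.today()))
```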
The penalties are serious: up to EUR 35 million or 7% of global annual turnover for prohibited practice violations. EUR 15 million or 3% for other infringements. For comparison, GDPR tops out at EUR 20 million or 4% of turnover.
Two things worth knowing about scope. First, the Act applies extraterritorially — if your AI touches the EU market, you must comply, no matter where your company is incorporated. Second, the provider/deployer distinction matters. Most SaaS companies are deployers. But if your product involves employment decisions, creditworthiness assessment, health diagnostics, or biometric processing, it is almost certainly high-risk under Annex III. That means conformity assessments and human oversight — whether you built the model or not.
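To make that triage concrete, here is a minimal first-pass classification check. The category set is an illustrative subset of Annex III, not the regulation’s full list, and the use-case labels are hypothetical; this is the shape of the question, not legal advice.

```python
# Illustrative subset of Annex III high-risk domains. The regulation's full
# list and its exemptions govern actual classification; this is triage only.
ANNEX_III_TRIGGERS = {
    "employment_decisions",
    "creditworthiness_assessment",
    "health_diagnostics",
    "biometric_identification",
    "critical_infrastructure",
}

def likely_high_risk(product_use_cases: set[str]) -> bool:
    """First-pass check: does any use case touch a high-risk domain?"""
    return bool(product_use_cases & ANNEX_III_TRIGGERS)

# A hiring tool lands in scope whether or not you built the underlying model.
print(likely_high_risk({"employment_decisions", "chat_support"}))  # True
```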
Which US state AI laws apply to your company right now?
There is no comprehensive federal AI legislation. The states have filled the gap — over 1,000 AI-related bills were introduced in 2025 alone. Here are the ones that actually matter.
California SB 53 (effective January 1, 2026). Frontier model developers need to publish risk frameworks and report safety incidents. Penalties go up to $1 million per violation — though it targets developers pulling in revenue above $500 million.
California AB 2013 (effective January 1, 2026). Generative AI developers must disclose training data sources, types, and copyright status.
Texas TRAIGA (HB 149, effective January 1, 2026). Here is the one to pay attention to: compliance with the NIST AI Risk Management Framework constitutes an affirmative defence. Adopt a governance framework, get legal safe harbour. Penalties run $80,000 to $200,000 per violation.
Illinois HB 3773 (effective January 1, 2026). Bans discriminatory AI in employment decisions. And here is the kicker — it includes a private right of action. That is the only state AI law where plaintiffs can sue you directly. If you use AI anywhere in hiring or workforce decisions, this is where your litigation exposure lives.
Colorado SB 24-205 (effective June 30, 2026). The first comprehensive US state statute going after high-risk AI systems. Requires impact assessments and consumer disclosures. Penalties up to $20,000 per violation. Impact assessments take months — June 2026 is closer than it looks.
What about federal preemption? The Trump Administration’s December 2025 Executive Order signalled intent to challenge state regulation, but an executive order cannot overturn existing state law. The Senate already voted down a provision that would have barred states from enforcing AI regulations for ten years.
The practical advice: comply with the strictest applicable standard now rather than gamble on preemption that may never arrive.
What does SEC AI washing enforcement mean for how your company talks about AI?
How you talk about your AI practices is itself a regulatory surface. If your company claims to use AI responsibly or to have governance frameworks in place — and those claims are not backed by what you actually do — you have a problem.
The SEC treats AI claims in investor materials, filings, and marketing as material representations. After the landmark 2024 settlements against Delphia and Global Predictions, the SEC’s 2026 Examination Priorities specifically target AI disclosures. The FTC is in on it too, going after deceptive AI capability claims.
What this means in practice: governance is not just an internal exercise. It is an external disclosure risk. What you say about your AI practices has to be verifiable. The gap between governance claims and governance reality is now an enforcement target.
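A crude way to see that gap: diff what you claim against what you can evidence. The claim and control sets below are hypothetical placeholders, not a real inventory.

```python
# Hypothetical: AI claims harvested from filings and marketing copy versus
# controls your governance programme can actually document.
CLAIMED = {"bias testing", "human oversight", "incident response", "model audits"}
EVIDENCED = {"human oversight", "incident response"}

unverifiable = CLAIMED - EVIDENCED
print(f"Unverifiable claims (enforcement surface): {sorted(unverifiable)}")
# Unverifiable claims (enforcement surface): ['bias testing', 'model audits']
```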
If you want to understand how measurement satisfies compliance obligations, that is where documentation becomes your defence.
How does shadow AI complicate your existing GDPR, HIPAA, or SOC 2 compliance?
SEC enforcement targets what you say about governance. Shadow AI undermines the governance you actually have.
Shadow AI — employees using AI tools without IT approval — introduces unmonitored data flows that compound your compliance obligations. An ISACA survey of 561 European professionals found 83% believe employees use AI without policy coverage. Only 31% have a formal AI policy in place. This is not some edge case. It is the norm.
GDPR requires documented data processing activities and a lawful basis for processing. Shadow AI tools processing personal data outside sanctioned channels violate these requirements without the organisation knowing. HIPAA’s requirements for protected health information controls fall apart when employees use consumer AI tools to process patient data. SOC 2 trust principles assume known system boundaries — shadow AI pushes processing well beyond those boundaries.
The compounding effect is the real worry. Each unsanctioned tool multiplies the compliance surface without increasing your compliance capacity. You are not failing at one regulation — you are silently failing at several at once. And with less than 47% of organisations having adopted formal AI risk management frameworks, most are not even equipped to notice.
Your regulatory liability does not disappear because IT did not approve the tool. The company bears the liability regardless. That is why governance execution is now a legal requirement.
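One practical starting point is inventorying where AI traffic already flows. Here is a minimal sketch that scans an access log for AI tool domains; the domain watchlist and log format are assumptions you would swap for your own.

```python
# Hypothetical watchlist: consumer AI endpoints outside sanctioned channels.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def flag_shadow_ai(log_lines: list[str]) -> dict[str, int]:
    """Count requests to watchlisted AI domains in a simple access log."""
    hits: dict[str, int] = {}
    for line in log_lines:
        for domain in AI_DOMAINS:
            if domain in line:
                hits[domain] = hits.get(domain, 0) + 1
    return hits

sample = [
    "10:02 user=jdoe GET https://chat.openai.com/backend/conversation",
    "10:03 user=asmith GET https://intranet.example.com/wiki",
]
print(flag_shadow_ai(sample))  # {'chat.openai.com': 1}
```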
What is governance debt — and what does it cost when it comes due?
Governance debt is the gap between how fast you are adopting AI and how mature your governance actually is. Think of it like technical debt — except instead of slower deployments, you get regulatory penalties and financial consequences.
David Talby, CTO of John Snow Labs, puts it directly: “Governance debt will become visible at the executive level. Organisations without consistent, auditable oversight across AI systems will face higher costs, whether through fines, forced system withdrawals, reputational damage, or legal fees.”
The cost breaks into three parts.
Penalty exposure. The EU, California, Texas, Colorado, and Illinois penalties are additive across jurisdictions. One governance failure can trigger enforcement under several laws at the same time.
Remediation cost. Building governance after an enforcement action costs multiples of doing it proactively. Retroactive compliance means documenting and correcting decisions you have already made. That is always harder and always more expensive.
Operational cost. Incident response, legal engagement, crisis management, customer notification. The average data breach costs $4.45 million — and that is before reputational damage.
The financial argument is pretty straightforward: building governance now is a fraction of what governance debt costs when it comes due.
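As a back-of-the-envelope illustration of how exposure stacks, the sketch below sums the per-violation statutory maxima cited in this article. Real exposure turns on violation counts, turnover, and enforcement discretion, and the EUR-to-USD conversion is rough, so treat every number as a placeholder.

```python
# Per-violation statutory maxima cited above. The EU figure uses the fixed
# EUR 35M cap (roughly converted) rather than the 7%-of-turnover alternative.
MAX_PENALTY_USD = {
    "eu_ai_act_prohibited": 38_000_000,
    "texas_traiga": 200_000,
    "colorado_sb24_205": 20_000,
    "california_sb53": 1_000_000,
}

def worst_case_exposure(jurisdictions: list[str], violations: int = 1) -> int:
    """Naive additive worst case: one failure, several statutes, no discretion."""
    return sum(MAX_PENALTY_USD[j] for j in jurisdictions) * violations

print(worst_case_exposure(["texas_traiga", "colorado_sb24_205", "california_sb53"]))
# 1220000: a single violation touching three state laws
```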
How do you use regulatory urgency to make the internal business case for governance investment?
The regulatory forcing function gives you the strongest argument for governance investment you have ever had: external deadlines with quantified penalties that turn “we should” into “we must.”
Timeline pressure. EU AI Act high-risk obligations go live August 2, 2026. US state laws have been in effect since January 1, 2026. Colorado impact assessments are due June 30, 2026. These are enforceable dates.
Penalty quantification. Present the aggregate exposure across jurisdictions — the EU, US state, and SEC enforcement figures laid out above. Your legal team can calculate the specific exposure for your operational footprint.
Framework leverage. NIST AI RMF addresses multiple regulatory surfaces with one framework investment. Organisations aligned with NIST AI RMF find that cost avoidance from risk prevention often exceeds governance programme costs within two years. A sketch of that one-to-many mapping follows this list.
Competitor positioning. In regulated industries, transparency and explainability are increasingly market access requirements. Compliance failures under the EU AI Act can block product sales in entire markets. Governance becomes a market-access credential, not just a cost centre.
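Here is that one-to-many mapping sketched as data. The pairings are illustrative shorthand for the obligations discussed in this article, not a legal crosswalk; building a real one is a job for counsel and the actual control catalogues.

```python
# Illustrative only: each NIST AI RMF function loosely covers obligations
# under several of the regimes discussed above.
RMF_TO_REGULATIONS = {
    "govern":  ["EU AI Act risk management", "Texas TRAIGA safe harbour"],
    "map":     ["EU AI Act high-risk classification", "Colorado impact assessments"],
    "measure": ["EU AI Act post-market monitoring", "SEC disclosure verification"],
    "manage":  ["EU AI Act human oversight", "Illinois HB 3773 bias controls"],
}

for function, surfaces in RMF_TO_REGULATIONS.items():
    print(f"{function:>8}: {', '.join(surfaces)}")
```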
Frame it for your board as what governance costs now versus what deferred governance costs when an audit lands on your desk. Let the numbers do the talking. If you need to know how to build the governance execution that regulators require and how measurement satisfies compliance obligations, those are your next steps.
Conclusion
The regulatory landscape has shifted from voluntary frameworks to enforceable obligations with dated deadlines and real penalties. The EU AI Act and US state laws are not alternatives — they are additive. If you operate across jurisdictions, your obligations compound, and you cannot address them piecemeal.
Governance debt is accumulating right now. The question is whether you invest proactively at a planned cost or reactively at penalty cost. The numbers favour doing it now.
Start with the gap between AI policy and AI practice, then move to how to build the governance execution that regulators require.
FAQ
Does the EU AI Act apply to companies outside the EU?
Yes. The EU AI Act has extraterritorial scope. If your AI systems are placed on the EU market or your AI outputs affect people within the EU, you must comply — regardless of where your company is headquartered. US-based SaaS, FinTech, and HealthTech companies with EU customers are in scope.
What is the difference between a high-risk AI system and a general-purpose AI system under the EU AI Act?
High-risk AI systems operate in regulated domains — employment, healthcare, credit assessment, law enforcement — and must meet strict obligations including conformity assessments, risk management systems, and human oversight. GPAI models are large-scale models adaptable to many tasks, and they come with different transparency and documentation requirements.
Can a company be penalised for shadow AI tools used by employees without IT approval?
Yes. Regulatory obligations attach to the organisation, not to individuals. If an employee uses an unsanctioned AI tool that processes personal data in violation of GDPR, or makes employment decisions using AI in violation of Illinois HB 3773, the company bears the legal liability. IT not knowing about it does not change that.
Is my SaaS product classified as high-risk under the EU AI Act?
If your product involves AI-driven decisions in employment, creditworthiness assessment, health diagnostics, biometric identification, or critical infrastructure management, it is likely high-risk under Annex III. Classification depends on what your product does, not on what technology stack it is built on.
What is the NIST AI Risk Management Framework and does it help with regulatory compliance?
NIST AI RMF 1.0 is a voluntary US federal framework with four functions: govern, map, measure, and manage. Aligning with it gives you an explicit affirmative defence under Texas TRAIGA, meaning legal safe harbour in at least one jurisdiction, and it provides a foundation that maps well to EU AI Act obligations.
Can companies wait for federal preemption to override US state AI laws?
No. All enacted state laws remain enforceable until courts rule otherwise. The FTC was directed to issue a preemption statement by March 11, 2026, but that would not automatically invalidate existing state laws. Comply now.
What does “AI washing” mean and why is the SEC enforcing against it?
AI washing means making unsubstantiated claims about AI capabilities or governance practices — think of it as greenwashing but for AI. The SEC treats AI claims in investor materials and marketing as material representations. If what you actually do does not match what you say, that is a potential securities law violation.
What is the Colorado AI Act and how does it differ from the EU AI Act?
Colorado SB 24-205 (effective June 30, 2026) goes after high-risk AI systems, requiring reasonable care to prevent algorithmic discrimination, impact assessments, and consumer disclosures. It focuses specifically on algorithmic discrimination rather than the EU’s broader safety framework. Penalties are up to $20,000 per violation versus EUR 35 million or 7% of turnover under the EU Act.
How long does it take to complete a conformity assessment for the EU AI Act?
Conformity assessments require technical documentation, risk management systems, data governance controls, and human oversight mechanisms. For a mid-sized company, expect six to twelve months — which means the August 2, 2026 deadline requires action now, not next quarter.
What are the specific penalties for non-compliance across the major AI regulations?
EU AI Act: up to EUR 35 million or 7% of global turnover for prohibited practice violations; EUR 15 million or 3% for other infringements. Colorado: up to $20,000 per violation. Texas TRAIGA: $80,000 to $200,000 per violation. California SB 53: up to $1 million per violation. Illinois HB 3773: civil liability through private right of action. These are additive across jurisdictions — one governance failure can trigger penalties under multiple laws.