The percentage of companies integrating AI into at least one business function surged to 72% in 2024, up from 55% the year before. Governance has not kept pace. Only 25% of organisations have fully operational AI governance programmes, and 76% say AI is moving faster than their governance can handle.
The distance between those two realities — rapid AI adoption on one side, immature governance on the other — is the AI governance gap. It creates exposure across data security, regulatory compliance, and operational accountability.
This page is a structured overview of the governance gap: what it is, where the exposure concentrates, and how to close it. The sections below cover shadow AI and why it differs from shadow IT, mid-market exposure patterns, the gap between policy and execution, what mature operating models look like, how to measure governance effectiveness, and what regulators now require. Each section links to a dedicated article where you can go deeper.
What is the AI governance gap and why does it matter now?
The AI governance gap is the measurable distance between your organisation’s rate of AI adoption and the maturity of the governance frameworks managing that AI. AI tool adoption surged from 55% to 72% of organisations between 2023 and 2024. Governance has not kept pace: around 74% of organisations report only moderate or limited coverage in their AI risk and governance frameworks. The gap creates real exposure across data security, regulatory compliance, and operational accountability.
The gap matters now because governance has shifted from aspiration to enforcement. For most of the last decade, AI governance was treated as a matter of intent — write a policy, signal good faith, move on. That stopped working in 2025 when regulators moved from guidance to enforcement.
The distinction to understand is between a policy and a framework. A policy is a document; a framework is an operational system with enforcement, accountability, and monitoring. 75% of organisations have a written AI policy, but only 36% have adopted a formal governance framework. The distance between those numbers is where most organisations currently sit: documented intent without operational execution. That produces what governance practitioners call “governance theatre”: checkbox compliance that generates paperwork without reducing risk.
The most visible symptom of this gap is shadow AI, a threat that differs from shadow IT in kind rather than degree and is structurally harder to contain than its predecessor.
For a deeper look at the scale of the governance gap problem, see Shadow AI vs Shadow IT — What Makes the New Threat Harder to Govern. The difficulty of governing shadow AI is compounded when organisations lack the structural resources to detect it.
Why is shadow AI harder to govern than shadow IT?
Shadow AI — AI tools used without organisational knowledge or approval — is harder to govern than traditional shadow IT because the exposure is less visible and potentially irreversible. A shadow SaaS tool creates an integration risk you can unwind. A shadow AI tool can ingest, analyse, and generate content from sensitive data in a single interaction. The data leaves your perimeter the moment the prompt is submitted.
71% of office workers use AI tools without IT approval, and OpenAI accounts for 53% of all shadow AI usage — more than the next nine platforms combined. At smaller firms, the density is even higher: companies with 11–50 employees average 269 unsanctioned AI tools per 1,000 employees.
Outright bans do not work. As IBM Distinguished Engineer Jeff Crume puts it, “saying no doesn’t stop the behaviour, it just drives it underground”. Governance has to enable sanctioned use while containing unsanctioned exposure, which means publishing an approved tool list that gives people enterprise-grade alternatives to the tools they are already using.
For the full evidence base and practical containment strategies, read Shadow AI vs Shadow IT — What Makes the New Threat Harder to Govern.
Why are some organisations disproportionately exposed to shadow AI risk?
Smaller organisations face disproportionate shadow AI risk for structural reasons, not because they are less competent. Accountability for AI governance is fragmented — CIOs hold it at 29% of firms, CDOs at 17%, CISOs at 14.5% — with no clear mandate for the technical executive running delivery. Governance tooling designed for enterprises with dedicated compliance staff does not translate to resource-constrained teams.
Companies with 11–50 employees show the densest shadow AI usage relative to headcount, yet only 23% of small organisations have a dedicated team driving generative AI adoption, compared to 52% of large enterprises. The smaller the firm, the larger the gap relative to capacity. Understanding how mid-market companies face disproportionate shadow AI exposure — and why the CTO accountability gap is most acute at this scale — is the starting point for right-sizing governance to the actual organisation.
For the full mid-market analysis, see Shadow AI in Mid-Market Companies — Why the Exposure Is Disproportionate.
What is the difference between having an AI policy and actually executing AI governance?
An AI policy is a written statement of rules. An AI governance framework is the operational system that makes those rules real — roles, controls, monitoring, and measurement. Having a policy without a framework is the most common state: 75% of organisations have a written AI policy, but only 36% have adopted a formal governance framework. The distance between those numbers is the governance execution gap, and it is where most organisations currently sit.
The execution gap has four failure modes: role ambiguity (nobody owns enforcement), policy staleness (rules written for last year’s tooling), measurement absence (no way to know whether controls are working), and governance theatre (documentation that looks like control without providing it). An AI governance policy without enforcement mechanisms is a wish list.
Structurally, governance operates across five interlocking domains — Strategy, Compliance, Operations, Ethics, and Accountability. A strategy decision to prioritise a high-risk use case triggers compliance review, operations readiness, ethics evaluation, and accountability assignment simultaneously. What moves governance from policy to programme is an oversight function — an AI governance committee with clear RACI assignments, decision rights, and an operating cadence. That committee turns written rules into daily practice. It does not need to be large. It needs to be accountable. The practitioner’s execution playbook for moving from AI policy to AI practice covers every step of this transition in detail.
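As an illustration of what “clear RACI assignments” can look like in practice, here is a minimal sketch of committee decision rights captured as a checkable artefact rather than a slide. The role names and activities are assumptions for illustration, not a prescribed structure.

```python
from dataclasses import dataclass, field

# Minimal sketch: committee RACI assignments as a checkable artefact.
# Role names and activities are illustrative assumptions.

@dataclass
class RaciEntry:
    responsible: list[str]                         # who does the work
    accountable: str                               # exactly one named owner
    consulted: list[str] = field(default_factory=list)
    informed: list[str] = field(default_factory=list)

RACI = {
    "approve_high_risk_use_case": RaciEntry(
        responsible=["AI Governance Lead"],
        accountable="CTO",
        consulted=["Legal Counsel", "CISO"],
        informed=["Executive Team"],
    ),
    "maintain_approved_tool_list": RaciEntry(
        responsible=["IT Operations"],
        accountable="AI Governance Lead",
        informed=["All Staff"],
    ),
}

def validate(raci: dict[str, RaciEntry]) -> None:
    """Fail loudly if any activity lacks a named accountable owner."""
    for activity, entry in raci.items():
        assert entry.accountable, f"{activity}: no accountable owner"

validate(RACI)
```

Keeping assignments in a version-controlled artefact like this makes changes to decision rights reviewable like any other change.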
For the practitioner’s execution playbook, read From AI Policy to AI Practice — How to Build Governance That Actually Executes.
What does a mature AI operating model look like?
A mature AI operating model embeds governance into the daily mechanics of AI development and deployment. Governance leaders are 2.5x more likely to embed AI as a core pillar of business strategy. Their operating models include executive ownership proximity, a structured oversight function, risk-tiered use-case management, and continuous monitoring rather than an annual audit. Laggards, by contrast, have written policies with diffuse or absent responsibility; at 72% of organisations, the CEO has no direct oversight of AI governance.
Only 7% of organisations have fully embedded AI governance. Most sit at the Ad Hoc or Developing stages of a five-level maturity progression (Ad Hoc, Developing, Defined, Managed, Optimising). The pattern that produces the largest governance gaps is high adoption maturity with low governance maturity — you are deploying AI widely but governing it loosely.
Three governance model archetypes exist: centralised (one oversight body sets policy across the enterprise), federated (local governance per business line, coordinating centrally), and hybrid (central policy with federated execution). The hybrid model is the default for scaling organisations because it balances consistency with operational flexibility. How governance leaders differ from laggards comes down to how deliberately they have designed this operating model, not just how much they have documented.
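One way to picture the hybrid archetype is central policy defaults layered with per-business-line execution overrides. The sketch below is purely illustrative; the policy fields and business-line names are assumptions.

```python
# Minimal sketch of the hybrid archetype: central policy defaults merged
# with per-business-line execution overrides. All names are assumptions.

CENTRAL_POLICY = {
    "pii_in_prompts": "prohibited",            # non-negotiable floor
    "approval_required_above_tier": "limited",
    "review_cadence_days": 90,
}

BUSINESS_LINE_OVERRIDES = {
    "customer_support": {"review_cadence_days": 30},  # tighter local cadence
}

def effective_policy(business_line: str) -> dict:
    """Layer a line's execution overrides on top of central defaults."""
    policy = dict(CENTRAL_POLICY)
    policy.update(BUSINESS_LINE_OVERRIDES.get(business_line, {}))
    return policy

print(effective_policy("customer_support")["review_cadence_days"])  # 30
```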
For the strategic architecture of governance leadership, read The AI Operating Model — What Separates Governance Leaders from Laggards.
How do you build AI governance that employees actually follow?
Governance that employees follow is governance that makes the right path the easy path. Start with an AI tool inventory, then publish an approved AI tool list that provides enterprise-grade alternatives to the shadow tools already in use. Role-based access controls create the technical enforcement layer. Distributed enablement — AI champions embedded in teams — creates the cultural adoption layer, bridging the gap between central policy and frontline practice.
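As a sketch of that technical enforcement layer, an approved-tool check scoped by role and data classification might look like the following. The tool names, roles, and data classes are hypothetical.

```python
# Minimal sketch: role-based check against an approved AI tool list.
# Tool names, roles, and data classes are hypothetical.

APPROVED_TOOLS = {
    "chat-tool-enterprise": {"allowed_roles": {"engineering", "marketing"},
                             "max_data_class": "internal"},
    "code-assistant-business": {"allowed_roles": {"engineering"},
                                "max_data_class": "confidential"},
}

DATA_CLASS_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def is_permitted(tool: str, role: str, data_class: str) -> bool:
    """Allow a request only if the tool is sanctioned for this role and
    the data classification does not exceed the tool's ceiling."""
    entry = APPROVED_TOOLS.get(tool)
    if entry is None:
        return False  # unapproved tool: deny, then log for inventory follow-up
    return (role in entry["allowed_roles"]
            and DATA_CLASS_RANK[data_class] <= DATA_CLASS_RANK[entry["max_data_class"]])

assert is_permitted("chat-tool-enterprise", "marketing", "internal")
assert not is_permitted("chat-tool-enterprise", "marketing", "restricted")
```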
The communication gap is wide: 78% of organisations have not communicated a clear plan for AI integration, and 58% of employees have not received formal training on safe AI use. Governance that nobody knows about is governance that nobody follows.
An intake-to-value mechanism — a structured approval process for new AI use cases — keeps governance proportional to organisational capacity. Instead of blanket rules, each proposed use case is assessed against risk tier, data sensitivity, and regulatory scope, then routed to the appropriate approval path. This keeps governance from becoming a bottleneck while maintaining oversight where it counts. How to build AI governance that actually executes — with role-based controls, lightweight approval workflows, and shadow AI detection — is covered step-by-step in the dedicated execution guide.
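A minimal sketch of that routing logic, under assumed tier names, data classes, and approval paths:

```python
# Minimal sketch: route a proposed AI use case to an approval path based on
# risk tier, data sensitivity, and regulatory scope. Names are assumptions.

def route_use_case(risk_tier: str, data_class: str, in_regulated_scope: bool) -> str:
    """Return the approval path a new use case should follow."""
    if risk_tier == "unacceptable":
        return "rejected"
    if risk_tier == "high" or in_regulated_scope:
        return "committee_review"      # full oversight-committee decision
    if data_class in {"confidential", "restricted"}:
        return "security_review"       # security / data-protection sign-off
    return "fast_track"                # low risk: manager approval only

assert route_use_case("minimal", "internal", False) == "fast_track"
assert route_use_case("limited", "confidential", False) == "security_review"
assert route_use_case("limited", "internal", True) == "committee_review"
```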
For the step-by-step execution guide, read From AI Policy to AI Practice — How to Build Governance That Actually Executes.
How do you know whether your AI governance is actually working?
Fewer than 20% of organisations track well-defined GenAI KPIs. Without measurement, governance effort cannot be verified, improved, or demonstrated to regulators or boards. As IBM Distinguished Engineer Jeff Crume notes, “it’s pretty hard to know if you’re succeeding if you’ve never even defined the benchmarks”.
A working governance programme tracks four core indicators: Policy Compliance Rate (what percentage of AI use cases are governed by approved policies), Incident Response Time (how quickly governance failures are contained), Use Case Review Cycle Time (how efficiently new AI deployments are approved), and Model Coverage (what percentage of production AI systems are fully documented).
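To show that each indicator reduces to countable data, here is a sketch of the four KPIs as simple computations over governance records. The record fields are assumptions about how such data might be stored.

```python
# Minimal sketch: the four core indicators as computations over governance
# records. The record fields are assumptions about how the data is stored.

def policy_compliance_rate(use_cases: list[dict]) -> float:
    """Share of AI use cases governed by an approved policy."""
    return sum(1 for uc in use_cases if uc["approved_policy"]) / len(use_cases)

def mean_incident_response_hours(incidents: list[dict]) -> float:
    """Average hours from detection to containment (hour-stamp fields)."""
    return sum(i["contained_hr"] - i["detected_hr"] for i in incidents) / len(incidents)

def mean_review_cycle_days(reviews: list[dict]) -> float:
    """Average days from use-case intake to approval decision."""
    return sum(r["decided_day"] - r["submitted_day"] for r in reviews) / len(reviews)

def model_coverage(models: list[dict]) -> float:
    """Share of production AI systems with complete documentation."""
    return sum(1 for m in models if m["has_model_card"]) / len(models)

print(policy_compliance_rate([{"approved_policy": True}, {"approved_policy": False}]))  # 0.5
```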
The shift in 2025–2026 is from policy to proof. Regulators, enterprise buyers, and insurers are now asking for demonstrated governance, not stated intent. Governance that cannot produce evidence of its own operation — audit trails, model cards, incident logs — will not satisfy a regulatory or due-diligence inquiry. Continuous monitoring is what separates governance from periodic audit compliance. Building a governance measurement framework that tracks both operational effectiveness and regulatory evidence is the next step once execution foundations are in place.
For the full measurement framework, read How to Measure Whether Your AI Governance Is Actually Working.
What do EU AI Act and US state AI laws require from your organisation?
The EU AI Act is legally binding and applies to any organisation placing AI on the EU market — regardless of where that organisation is headquartered. For SaaS companies serving EU customers, any AI-powered feature used by EU-based users is potentially in scope. High-risk provisions are fully in force by August 2026. US states have begun regulating AI in parallel, creating overlapping obligations for multi-state SaaS companies.
The EU AI Act classifies AI systems into four risk tiers: Unacceptable (prohibited), High (conformity assessments, audit trails, human oversight), Limited (disclosure requirements), and Minimal (few requirements). Where your deployments land on that scale depends on use case — most mid-market SaaS features will sit in the Limited or High-risk tiers.
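As a way to anchor an internal classification exercise, the tier-to-controls mapping above could be captured in code. The sketch below paraphrases the tier descriptions and is not legal advice.

```python
# Minimal sketch: the four EU AI Act tiers mapped to the controls named
# above. A classification aid for internal triage, not legal advice.

TIER_CONTROLS = {
    "unacceptable": ["prohibited: do not deploy"],
    "high": ["conformity assessment", "audit trail", "human oversight"],
    "limited": ["user-facing disclosure"],
    "minimal": ["baseline acceptable-use policy"],
}

def controls_for(tier: str) -> list[str]:
    return TIER_CONTROLS[tier]

# Example: a customer-facing chatbot typically lands in the Limited tier.
print(controls_for("limited"))  # ['user-facing disclosure']
```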
In the US, states have begun regulating AI in the absence of federal legislation. Colorado, Texas, Illinois, and California all have laws taking effect in 2026. Governance frameworks like NIST AI RMF and ISO/IEC 42001 serve as the operational backbone for satisfying multiple regulatory requirements simultaneously: NIST AI RMF provides the operational structure, while ISO/IEC 42001 certification provides portable regulatory evidence. What the EU AI Act and US state laws require now, including the enforcement timeline and penalty exposure, is the external forcing function that makes governance investment non-negotiable.
For the full regulatory analysis, read The Regulatory Forcing Function — What EU AI Act and US State Laws Require Now.
What should you do first to close the AI governance gap?
Start with visibility. You cannot govern AI you cannot see. The first step is an AI tool inventory — a complete catalogue of all AI tools in use across the organisation, including tools employees have adopted without approval. From that inventory, classify tools by risk tier, establish an oversight function with defined decision rights, and publish an approved tool list. Governance that skips visibility and goes straight to policy produces enforcement without foundation.
From that baseline, the sequenced path looks like this:
- Conduct an AI tool inventory — establish a visibility baseline across all teams (a minimal record sketch follows this list).
- Classify current AI use by risk tier — prioritise governance controls where exposure is highest.
- Establish an oversight committee with clear RACI assignments and decision rights.
- Publish an approved AI tool list — contain shadow AI through substitution, not prohibition.
- Deploy monitoring before measurement — build operational capability before proof infrastructure.
- Define core governance KPIs — Policy Compliance Rate, Incident Response Time, and Model Coverage.
- Align to a regulatory framework — NIST AI RMF as the US baseline; ISO/IEC 42001 for international reach.
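To ground the first two steps, here is a minimal sketch of an inventory record plus a first-pass risk triage. The field names and tiering rules are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass

# Minimal sketch for steps 1 and 2: an inventory record plus a first-pass
# risk triage. Field names and tiering rules are illustrative assumptions.

@dataclass
class AIToolRecord:
    name: str
    owner_team: str
    sanctioned: bool           # already on the approved list?
    handles_pii: bool
    used_in_decisions: bool    # e.g. hiring, credit, pricing

def first_pass_tier(tool: AIToolRecord) -> str:
    """Rough triage to decide where governance attention goes first."""
    if tool.used_in_decisions:
        return "high"
    if tool.handles_pii or not tool.sanctioned:
        return "elevated"
    return "low"

inventory = [
    AIToolRecord("chat-assistant", "marketing",
                 sanctioned=False, handles_pii=True, used_in_decisions=False),
]
for tool in inventory:
    print(tool.name, "->", first_pass_tier(tool))   # chat-assistant -> elevated
```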
The gap is widening. But governance is not a one-time project — it is an operational capability you build incrementally. The organisations that will be best positioned in 2026 are those that build evidence instead of narratives and normalise assurance instead of treating it as exceptional.
For the execution playbook, start with From AI Policy to AI Practice — How to Build Governance That Actually Executes. For the accountability infrastructure, read How to Measure Whether Your AI Governance Is Actually Working.
AI Governance Gap Library
Understanding the Problem
- Shadow AI vs Shadow IT — What Makes the New Threat Harder to Govern — Why the most visible symptom of the governance gap is harder to contain than its predecessor, with evidence on how far it has already spread.
- Shadow AI in Mid-Market Companies — Why the Exposure Is Disproportionate — How shadow AI risk accumulates differently at 50–500 employee companies and why accountability fragmentation is most acute at this scale.
- The Regulatory Forcing Function — What EU AI Act and US State Laws Require Now — Why regulatory pressure is converting AI governance from aspiration to legal requirement and what the enforcement timeline means for your organisation.
Building Governance That Works
- From AI Policy to AI Practice — How to Build Governance That Actually Executes — The practitioner’s execution playbook: role-based access controls, distributed enablement, lightweight approval workflows, and shadow AI detection.
- The AI Operating Model — What Separates Governance Leaders from Laggards — What the strategic architecture of mature AI governance looks like and the operating model choices that distinguish high-performing organisations.
- How to Measure Whether Your AI Governance Is Actually Working — The measurement framework for governance execution quality: KPIs that matter, audit trail infrastructure, and how to demonstrate governance effectiveness.
Frequently Asked Questions
What is “governance theatre” and how do you avoid it?
Governance theatre is the failure mode where organisations generate documentation — policies, frameworks, reports — without building the operational controls that make governance real. The signals: high self-reported compliance alongside frequent AI-related incidents, and risk reviews completed on paper but never enforced in practice. You avoid it by assigning accountability to named people (not functions), deploying monitoring that runs continuously, and measuring operational outcomes rather than documentation volume.
Does the EU AI Act apply to my company if we are not based in the EU?
Yes. The EU AI Act has extraterritorial scope — it applies to any organisation placing AI systems on the EU market, regardless of headquarters location. For SaaS companies serving EU customers, any AI-powered feature accessible to EU-based users is potentially in scope. High-risk AI provisions are fully in force by August 2026. See The Regulatory Forcing Function for the full analysis.
What is the difference between an AI governance framework and an AI policy?
An AI policy is a written document stating what is permitted and prohibited. An AI governance framework is the operational system that makes the policy enforceable — enforcement mechanisms, accountability structures, monitoring capabilities, and measurement processes. Having a policy without a framework is the most common state: 75% of organisations have written policies, but only 36% have a governance framework.
How do NIST AI RMF and ISO/IEC 42001 relate to each other?
NIST AI RMF is a voluntary US framework organised around four functions (Govern, Map, Measure, Manage) that provides a practical structure for AI risk management. ISO/IEC 42001 is the international standard for AI management systems, where certification provides documented evidence satisfying multiple EU AI Act requirements. They are complementary: NIST for operational structure, ISO/IEC 42001 for portable regulatory evidence.
What is risk-tiered AI governance and why does it matter for smaller organisations?
Risk-tiered governance matches oversight controls to the criticality of each AI use case. Low-risk use cases (content drafting, scheduling) require minimal controls; high-risk use cases (hiring decisions, credit scoring) require conformity assessments, audit trails, and continuous monitoring. For resource-constrained organisations, tiering is what makes governance feasible — it allocates effort where it materially reduces risk rather than applying enterprise-level controls to everything.
How long does it take to build a functioning AI governance programme?
A minimum viable programme — AI tool inventory, risk classification, oversight committee, approved tool list, and basic monitoring — can be operational within 90 days if treated as a structured project. More comprehensive programmes including full KPI tracking, model card documentation, and regulatory alignment typically require six to twelve months. The priority is reaching operational visibility before investing in measurement infrastructure. See From AI Policy to AI Practice for the execution sequence.