Most organisations now have an AI policy. Far fewer have governance that actually does anything.
That gap between a policy document and real enforcement? That’s where shadow AI thrives, compliance theatre takes hold, and risk piles up without anyone noticing. 97% of organisations that reported an AI-related breach lacked proper AI access controls, and 71% of employees already use AI tools without authorisation. The policy exists. The enforcement doesn’t.
This article walks you through the concrete execution mechanisms — role-based access controls, distributed enablement, lightweight approval workflows, and shadow AI detection — that turn a policy document into a functioning governance programme that addresses the AI governance gap. If you own this problem and you don’t have a dedicated GRC team, this is your practical starting point. Not another framework diagram.
What does it actually mean to execute AI governance rather than just have a policy?
It means live, measurable enforcement of your AI usage rules. Not a PDF sitting in a shared drive that nobody has read since the day it was uploaded.
AI policy states intent — acceptable use, data handling rules, ethical guidelines. AI governance operationalises that intent through controls, workflows, roles, and detection. Most organisations stop at the policy layer and call it done. A majority of breached organisations — 63% — either don’t have an AI governance policy or are still developing one. Even among those with policies, less than half have an approval process for AI deployments, and only 34% perform regular audits for unsanctioned AI.
You need four layers working together to make this real: technical controls (role-based access), process controls (approval workflows), cultural controls (AI Champions), and detection controls (shadow AI visibility). IBM’s governance model is a well-documented example of all four operating at scale: their License to Drive certification, AI Fusion Teams, streamlined provisioning, and continuous monitoring map onto these layers one by one. The gap between policy and execution forms when any of them is missing.
As Jeff Crume, IBM Distinguished Engineer, puts it: “Saying no doesn’t stop the behaviour, it just drives it underground.” That one sentence captures why policy without execution is worse than useless.
How do role-based AI access controls work — and why are they the foundation of real governance?
If you’ve got experience with infrastructure access management, this is familiar territory. Role-based AI access controls assign AI tool permissions based on job function, data access level, and use-case risk. It’s RBAC applied to AI tooling — the same principle you already use for infrastructure access, extended to cover which AI tools each role can use and what data they can process. The problem? 80% of AI tools operating within companies are currently unmanaged by IT or security teams. That means most organisations have zero role-based controls on AI.
Differentiated access replaces the binary choice — allow everything or block everything — that drives shadow AI in the first place. People get access appropriate to their role without waiting for manual approval every time.
Here’s a practical starting point:
- Define three access tiers. Tier 1: general productivity AI available to all staff (summarisation, writing assistance). Tier 2: department-specific tools with data access, assigned by role. Tier 3: high-sensitivity use cases requiring explicit approval (anything touching customer PII, financial data, or regulated information).
- Map each tier to data sensitivity boundaries. Which data categories can each tier access? This is the step most organisations skip, and it’s the one that matters most; the sketch after this list shows one way to make the mapping explicit.
- Set tool permissions per tier. Which specific AI tools does each tier grant access to?
- Enforce via your existing infrastructure. Use your current identity provider or a simple AI gateway proxy. You don’t need enterprise-grade tooling to get started.
- Audit regularly. Review who has access to what, whether the tiers still match actual usage, and whether new tools have popped up outside the governed pathway.
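To make the tier mapping concrete, here is a minimal sketch of the tier-to-tool-to-data mapping expressed as code. The tier names, tool names, and data categories are illustrative assumptions rather than a prescribed taxonomy; the point is that access decisions become an explicit, auditable lookup instead of a judgement call buried in a policy PDF.

```python
# Minimal sketch of a tier-to-tool-to-data mapping for AI access control.
# Tier names, tools, and data categories are illustrative assumptions.

ACCESS_TIERS = {
    "tier1_general": {
        "tools": {"summariser", "writing-assistant"},
        "data_categories": {"public", "internal"},
    },
    "tier2_departmental": {
        "tools": {"summariser", "writing-assistant", "code-assistant", "crm-copilot"},
        "data_categories": {"public", "internal", "confidential"},
    },
    "tier3_sensitive": {
        "tools": {"summariser", "writing-assistant", "code-assistant", "crm-copilot", "analytics-llm"},
        "data_categories": {"public", "internal", "confidential", "customer_pii", "financial"},
    },
}

# Role-to-tier assignment would normally come from your identity provider's groups.
ROLE_TO_TIER = {
    "staff": "tier1_general",
    "engineering": "tier2_departmental",
    "finance-approved": "tier3_sensitive",
}


def is_request_allowed(role: str, tool: str, data_category: str) -> bool:
    """Return True if this role may use this AI tool on this data category."""
    tier = ACCESS_TIERS[ROLE_TO_TIER.get(role, "tier1_general")]
    return tool in tier["tools"] and data_category in tier["data_categories"]


if __name__ == "__main__":
    print(is_request_allowed("staff", "writing-assistant", "customer_pii"))        # False
    print(is_request_allowed("finance-approved", "analytics-llm", "customer_pii")) # True
```

In practice this table lives in your identity provider’s group mappings or your AI gateway’s configuration rather than in application code; the shape of the lookup is what matters, not where it runs.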
Can a smaller organisation do this without enterprise tooling? Yes. Many organisations benefit from offering a corporate instance of a preferred AI platform with sensitivity labels and DLP policy enforcement — even at small scale, this gives you centralised access and consistent controls.
As Diana Kelley, CISO at Noma Security, notes: “Clear guardrails not only prevent misuse but also build employee confidence in knowing what’s safe, legal and compliant.” People follow rules they understand and that make their work easier, not harder. This is the operating model context that makes these mechanisms work.
How do IBM’s AI Fusion Teams and OpenAI’s AI Champions model make distributed enablement work?
Technical controls define what each role can access. But someone needs to own and adapt those controls close to where the work happens. That’s distributed enablement — placing AI expertise inside business units rather than centralising every decision in a governance committee that meets quarterly and approves nothing quickly.
IBM’s AI Fusion Teams are the go-to example. These are cross-functional teams that combine people who deeply understand business functions with technologists from the CIO organisation. The procurement expert learns prompt engineering and builds directly on the enterprise AI platform. The IT technologist handles the technical plumbing. As Matt Lyteson, IBM’s CIO of Technology Platform Transformation, explains: “You bring them together and you start to see amazing results.”
IBM’s License to Drive sits alongside this. Just like you need a licence to drive a car, you need certification to build and deploy AI agents on IBM’s infrastructure. It’s a qualification, not a gate. Where you sit on the org chart doesn’t dictate whether you can build with AI — but the certification makes sure everyone builds responsibly.
OpenAI’s AI Champions model, from their State of Enterprise AI 2025 report, takes a different angle. Selected employees within each business unit get trained to drive responsible AI adoption from within their teams. They’re the peer bridge between governance policy and day-to-day practice.
Both models distribute governance ownership rather than centralising it. IBM’s version is infrastructure-heavy. OpenAI’s is culture-heavy. For most mid-size organisations, the AI Champions model is the right-size starting point. Pick one champion per team, train them on the governance framework, give them authority to approve standard use cases. Scale towards the Fusion Teams model as your organisation grows and governance needs become more complex.
What is the simplest AI approval process that won’t create shadow AI by driving people underground?
If your approved pathway takes weeks and ChatGPT takes seconds, governance has already failed.
IBM’s intake-to-value mechanism shows what’s possible. They went from a two-week process of back-and-forth business case reviews to having an entire environment provisioned in about five or six minutes. A structured intake form triggers automated checks, and the whole flow completes in minutes rather than weeks.
The design principle here is enablement-first governance: the approval process exists to make approved AI easier to use than unapproved AI. As David Talby, CTO of John Snow Labs, puts it: “We need to stop treating governance as a gatekeeper. It’s supposed to give teams safe lanes to use AI, rather than forcing them underground.”
For organisations without IBM’s infrastructure, here’s the practical version (a triage sketch follows the list):
- Create a simple intake form. Structured fields: tool name, use case, data involved, risk tier.
- Auto-approve Tier 1 requests. Predefined criteria, no human review needed.
- Route Tier 2 requests to the relevant AI Champion. Same-day review.
- Escalate Tier 3 to the CTO or security lead. These are the high-sensitivity cases that warrant proper review.
- Target sub-24-hour turnaround for standard requests. Any process that consistently exceeds one week is actively driving shadow AI — employees won’t wait when an unapproved tool is one browser tab away.
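Here is a minimal sketch of the triage logic behind that intake form, assuming the three tiers described earlier; the field names, data labels, and reviewer roles are illustrative assumptions, not a fixed schema.

```python
# Minimal triage sketch for an AI tool intake form.
# Field names, tier labels, and reviewer roles are illustrative assumptions.

SENSITIVE_DATA = {"customer_pii", "financial", "regulated"}


def route_request(tool_name: str, use_case: str, data_involved: set[str]) -> dict:
    """Decide the approval path for a new AI tool request."""
    if data_involved & SENSITIVE_DATA:
        # Tier 3: anything touching customer PII, financial, or regulated data.
        return {"tier": 3, "action": "escalate", "reviewer": "cto_or_security_lead"}
    if data_involved - {"public"}:
        # Tier 2: internal or departmental data, same-day review by the AI Champion.
        return {"tier": 2, "action": "review", "reviewer": "ai_champion", "sla_hours": 24}
    # Tier 1: general productivity use on public data, auto-approve.
    return {"tier": 1, "action": "auto_approve", "reviewer": None}


if __name__ == "__main__":
    print(route_request("meeting-summariser", "summarise standups", {"public"}))
    print(route_request("spreadsheet-copilot", "forecasting", {"internal", "financial"}))
```

Whether this runs behind a form, a workflow-tool automation, or a step in your ticketing system matters less than the turnaround: the routing decision itself should take seconds.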
The approval process and RBAC work hand in hand. RBAC defines what each role can access by default. The approval workflow handles exceptions and new tool requests. Together they create a system where the 59% of employees who currently use unapproved tools have a governed alternative that’s genuinely easier to use.
How do you detect which AI tools employees are already using?
Shadow AI detection gives you visibility into what’s actually in use so your governance applies to reality rather than assumptions.
There are three categories of detection tooling:
CASB (Cloud Access Security Broker) monitors cloud traffic and flags access to unapproved AI services. The blind spot: AI features embedded within approved SaaS platforms. Copilot features inside Microsoft 365, for example, often slip through because the platform itself is sanctioned.
SaaS Discovery tools like Reco and Grip Security inventory all SaaS applications in use via OAuth grant monitoring and login patterns. The blind spot: tools accessed without OAuth, through direct browser usage.
DDR (Data Detection and Response) tools like Cyberhaven track data lineage in real time, catching when sensitive data flows to AI tools regardless of how the employee accessed them. Broadest coverage, but the most complex to deploy.
If you’ve got an existing CASB or web proxy, start there — immediate visibility at zero incremental cost. If you’ve got nothing, start with SaaS Discovery via OAuth audit. No agent deployment required, and it surfaces the shadow AI pattern you’ll see most often.
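If you start with an OAuth audit, the first pass can be as simple as exporting the OAuth grants from your identity provider and matching them against a list of known AI service domains. A minimal sketch, assuming a CSV export with app_name, domain, and user columns and an illustrative domain list; your provider’s export format will differ.

```python
# Minimal sketch: flag OAuth grants to known AI services from an IdP export.
# Assumes a CSV with columns app_name, domain, user; adjust to your provider's format.

import csv
from collections import Counter

# Illustrative list only; maintain your own from vendor reviews and threat intel.
KNOWN_AI_DOMAINS = {"openai.com", "anthropic.com", "perplexity.ai", "midjourney.com"}


def find_shadow_ai(grants_csv: str) -> Counter:
    """Count grant rows per AI service found in the OAuth export."""
    hits: Counter = Counter()
    with open(grants_csv, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if any(domain == d or domain.endswith("." + d) for d in KNOWN_AI_DOMAINS):
                hits[row["app_name"]] += 1
    return hits


if __name__ == "__main__":
    for app, count in find_shadow_ai("oauth_grants.csv").most_common():
        print(f"{app}: {count} grants")
```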
The numbers make the case for acting quickly. Small organisations with 11–50 employees show the densest shadow AI usage, averaging 269 unsanctioned AI tools per 1,000 employees. Median usage durations for shadow AI tools frequently exceed 400 days; at that point, they’re not experiments. They’re embedded in how the business actually runs.
The goal of detection isn’t to ban things. Discovered tools become candidates for formal evaluation. Bring shadow AI into the governed pathway rather than pushing it further underground.
What is compliance theatre — and how do you make sure your governance programme isn’t just performing?
Compliance theatre is the appearance of AI governance without the substance. Policy documents exist, approval committees meet, checklists get completed — but employees still use unapproved AI tools daily, sensitive data flows unmonitored, and the organisation has false confidence that risk is managed.
It’s worse than having no governance at all. Leadership believes risk is handled when it isn’t, which delays investment in the execution mechanisms that would actually reduce it.
Here are the diagnostic signs. You might be doing compliance theatre if:
- Your policy exists but no one can describe how it’s enforced.
- Your approval process exists but average approval time exceeds one week.
- Your governance committee meets quarterly but has no real-time visibility into AI tool usage.
- Shadow AI usage across the organisation is unknown or unmeasured.
- Teams have experienced negative consequences from AI use (47% of teams report this) and your governance framework flagged none of them.
The root cause is predictable: policy without corresponding execution mechanisms. This is the governance gap, and the shadow AI it breeds, at its most visible: documented intent with no operational substance behind it. In 2025, regulators moved from guidance to enforcement — AI governance is no longer judged by policy statements but by operational evidence. The fix is the execution stack we’ve been walking through in this article: RBAC, lightweight approval workflows, AI Champions, and detection tooling. As Jeff Crume notes: “It’s pretty hard to know if you’re succeeding if you’ve never even defined the benchmarks.”
Policy-first or enablement-first — which approach actually works?
Neither works alone. The real question is sequencing.
A policy-first approach writes comprehensive policies, establishes review committees, then gradually enables AI use within those constraints. The predictable failure mode: approval friction creates bottlenecks before enablement catches up. 78% of employees bring their own AI tools to work, and 68% use free-tier AI tools via personal accounts. Those numbers are the direct result of policy-first approaches that didn’t provide fast alternatives.
An enablement-first approach makes approved tools available quickly with basic guardrails, then layers governance controls as usage patterns emerge. IBM demonstrates what this looks like in practice. They start with provisioning access — License to Drive plus Fusion Teams — and embed governance into the provisioning flow rather than gating access behind it.
For a mid-size company, here’s a practical sequencing:
- Month 1: Deploy detection (know what’s in use), establish basic RBAC tiers, appoint AI Champions.
- Quarter 1: Build the lightweight approval workflow, formalise policies based on observed usage patterns rather than assumptions.
- Year 1: Mature your measurement, expand role-based controls, and iterate governance based on data. This is where verifying that the execution mechanisms are actually working becomes the priority.
The key insight: policy that emerges from observed practice is more durable than policy imposed before practice begins. When official tools are available and effective, the temptation to use shadow tools declines. Design the system so the approved path is the path of least resistance, then build policy around what you learn.
FAQ
How do you govern AI tools that teams adopted before there was any policy?
Start with discovery, not enforcement. Use SaaS Discovery or CASB tools to inventory what’s in use. Evaluate each tool against your risk tiers. Bring compliant tools into the governed pathway with proper RBAC assignments. For non-compliant tools, provide governed alternatives before removing access — abrupt bans just drive usage further underground.
What is the difference between an AI Champion and an AI Fusion Team?
An AI Champion is a single person embedded in a business unit who advocates for responsible AI use and acts as the local governance contact. An AI Fusion Team is a cross-functional group combining business domain experts with IT and security personnel who jointly manage AI deployment. Champions suit smaller organisations. Fusion Teams suit enterprises with the headcount to staff them.
Can a smaller organisation implement role-based AI access controls without enterprise tooling?
Yes. Start with three access tiers mapped to your existing IAM system and enforce via your identity provider or a simple AI gateway proxy. The RBAC section above walks through the steps.
What does an AI approval workflow look like when there is no dedicated compliance team?
Auto-approve low-risk requests, route mid-tier requests to the relevant AI Champion for same-day review, and escalate high-sensitivity cases to the CTO or security lead. The goal is sub-24-hour turnaround for standard requests. The approval workflow section above has the full step-by-step.
How long should an AI approval process take before it starts creating shadow AI?
IBM’s benchmark is 5–6 minutes for standard provisioning. For a mid-size company, aim for same-day approval for Tier 1 and Tier 2 requests. Any process that consistently exceeds one week is actively creating the problem it was designed to prevent.
What is the difference between a CASB and SaaS Discovery for detecting shadow AI?
A CASB monitors network traffic and flags access to unapproved cloud services. SaaS Discovery monitors OAuth grants and login patterns to inventory all SaaS applications. CASB works at the network layer. SaaS Discovery works at the identity layer. CASB misses tools accessed on personal devices. SaaS Discovery misses tools used without OAuth.
Why do employees ignore AI policies even when they know the rules?
Because the approved pathway is harder to use than the unapproved one. Shadow AI is a governance design failure, not an employee behaviour problem. The fix is reducing friction in the approved pathway, not increasing penalties.
How do you measure whether AI governance is actually working or just performing?
Track operational metrics: percentage of AI tools in use that are governed, mean time to approve new tool requests, volume of shadow AI detected over time, data exposure incidents related to AI tools. If those metrics aren’t available, your governance programme lacks the visibility layer it needs to verify execution.
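As a sketch of what that measurement can look like in practice, assuming you already have a tool inventory from detection and a log of approval timestamps (the data structures are illustrative, not a required schema):

```python
# Minimal sketch of two governance health metrics from detection and approval data.
# The inputs are illustrative assumptions; feed them from your own tooling.

from datetime import timedelta
from statistics import mean


def governed_coverage(tools_in_use: set[str], governed_tools: set[str]) -> float:
    """Percentage of AI tools in active use that are on the governed list."""
    if not tools_in_use:
        return 100.0
    return 100 * len(tools_in_use & governed_tools) / len(tools_in_use)


def mean_time_to_approve(durations: list[timedelta]) -> timedelta:
    """Average time from intake to decision for new AI tool requests."""
    return timedelta(seconds=mean(d.total_seconds() for d in durations))


if __name__ == "__main__":
    print(governed_coverage({"summariser", "chatgpt-personal", "crm-copilot"},
                            {"summariser", "crm-copilot"}))                  # ~66.7
    print(mean_time_to_approve([timedelta(hours=4), timedelta(hours=30)]))   # 17:00:00
```

Trending these numbers over time is the real test: governed coverage should rise and approval time should fall as the execution mechanisms bed in.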
What is governance-as-infrastructure and how does it differ from governance-as-process?
Governance-as-infrastructure embeds controls into platform tooling and provisioning flows — access controls enforced by the AI gateway, approval checks automated in the pipeline, detection built into the network layer. Governance-as-process relies on human reviewers, committee meetings, and manual checks. The infrastructure approach scales. The process approach creates bottlenecks.
Should we ban AI tools first and then create a governance framework, or govern what is already in use?
Govern what’s already in use. Banning tools employees depend on creates immediate productivity loss and drives usage to channels that are harder to detect. Start with detection and visibility, then apply governance to discovered tools. Reserve bans for tools with unacceptable risk profiles after evaluation, and always provide a governed alternative.