Business | SaaS | Technology
Mar 30, 2026

What AI Governance Actually Requires and Why Most Policies Fall Short

AUTHOR

James A. Wondrasek

Most organisations treat AI governance as a documentation problem. They write a policy, circulate it, and consider the work done. The gap between what the policy says and what AI systems are actually doing in production is where risk accumulates. McKinsey’s 2025 State of AI survey found nearly nine in ten organisations are using AI regularly, yet most have not begun scaling it with mature governance in place.

This guide maps the full governance terrain: from diagnosing shadow AI, through building an operating model and assigning accountability, to enforcing rules at runtime and satisfying regulators. Each section links to a detailed article.


Why do most AI governance policies fail to actually control risk?

Most AI policies fail because they describe intent but cannot enforce controls. A policy tells employees what they should do — it cannot stop an agent from accessing data it should not touch, or catch a model drifting toward outputs that no longer match approved behaviour. Applying static policy to dynamic systems is an architectural failure, not a compliance gap that more documentation can close.

Three blind spots compound the gap: visibility (you do not know all the AI tools in your environment), ownership (systems with no assigned human owner cannot be governed), and decision authority (it is unclear who can stop an AI system when something goes wrong). Nearly every organisation (99%) reports financial losses from AI-related risks, and for 64% those losses exceed $1 million (EY, 2025). Agentic AI widens the gap further — a policy reviewed before deployment cannot anticipate what an autonomous agent will do six months later. Once you accept that governance requires infrastructure, the first question is: what do you not know about?

What is shadow AI and why is it the biggest governance gap in most organisations?

Shadow AI is any AI tool, model, or agent operating in your organisation without formal approval, oversight, or a defined owner. Your policy does not apply to systems you do not know about — and AI tools carry more risk than traditional shadow IT because they act on data, produce outputs used in decisions, and take autonomous actions in production systems. The governance consequence is the same regardless of scale.

Reco.ai’s 2025 State of Shadow AI Report found 71% of office workers admit to using AI tools without IT department approval. The “Bring Your Own Agent” pattern makes things worse — as ArmorCode’s Nikhil Gupta puts it, employees “need 20 minutes and a credit card” to deploy an autonomous agent with no owner and no approval record. You need a continuously updated inventory before any governance structure can take effect — and that inventory is the foundation for building a model that governs what you find.

What does an AI operating model actually require?

An AI operating model defines who approves AI systems, who owns them in production, what controls apply at each risk level, and how governance is enforced — not merely documented. It integrates people, processes, and technology so that governance is embedded into how AI is adopted and operated, rather than retrofitted after systems are already running without oversight.

Most organisations have policies that express intent. An operating model translates that intent into repeatable decisions and enforceable controls. McKinsey found fewer than 10% of AI use cases make it out of pilot mode — the operating model gap, not the technology gap, is the primary constraint. The model needs to be proportionate: rigorous enough to catch high-risk AI, lightweight enough not to slow down teams using low-risk tools. With the model in place, the next question is: who owns the decisions these systems make?
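
To make "proportionate" concrete, the sketch below shows one way risk tiers and approval pathways could be encoded. The tier names, reviewer roles, required controls, and review intervals are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # e.g. internal drafting assistants
    MEDIUM = "medium"  # e.g. customer-facing content generation
    HIGH = "high"      # e.g. decisions affecting credit, hiring, or safety


@dataclass
class ApprovalPathway:
    reviewers: list[str]         # roles that must sign off before adoption
    runtime_controls: list[str]  # controls required before production use
    review_interval_days: int    # how often the approval is revisited


# Illustrative mapping: rigorous for high-risk AI, lightweight for low-risk tools.
PATHWAYS = {
    RiskTier.LOW: ApprovalPathway(["team-lead"], ["usage-logging"], 365),
    RiskTier.MEDIUM: ApprovalPathway(
        ["team-lead", "security"],
        ["usage-logging", "output-review-sampling"],
        180,
    ),
    RiskTier.HIGH: ApprovalPathway(
        ["team-lead", "security", "legal", "governance-lead"],
        ["usage-logging", "human-in-the-loop", "egress-controls"],
        90,
    ),
}


def pathway_for(tier: RiskTier) -> ApprovalPathway:
    """Return the approval pathway an AI use case must follow for its risk tier."""
    return PATHWAYS[tier]
```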

Who is accountable when enterprise AI causes a business mistake?

In most organisations, the answer is unclear — and that ambiguity is itself a governance failure. Accountability requires defined ownership at three levels: who owns the AI system, who owns the decision it informed, and who has authority to stop the system when something goes wrong. Without explicit assignment, accountability defaults to no one — which means no one is monitoring, and no one acts when a problem surfaces.

The Air Canada chatbot case showed what this looks like: a tribunal held the airline liable after its chatbot gave a customer incorrect information about bereavement fares. Three structures matter: a governance lead with cross-functional authority, clear business-unit ownership per AI application, and defined stop authority — the right to suspend or roll back a system without multi-team approval. Accountability also determines what happens at runtime, because someone has to own the enforcement layer.

What is runtime AI governance and how is it different from policy governance?

Runtime AI governance means enforcing controls at the moment an AI agent acts — in production, in real time — rather than through policy review before deployment or audit after the fact. It includes prompt firewalling, identity and least-privilege enforcement, behavioural monitoring, egress controls, and continuous audit-trail generation. Policy governance describes what should happen; runtime governance enforces what does happen, against live system behaviour.

The distinction matters most for agentic AI. Agents fail differently from traditional software: a broken API call throws an exception, but an agent reasoning failure produces confident, plausible output that is wrong — no error, no alert, no log entry. In multi-agent workflows, bad output becomes the next agent’s input. Yet only 48% of organisations monitor their production AI systems for accuracy, drift, and misuse (Gradient Flow, 2025). Without a continuous record of what agents are doing, there is no enforcement surface — which raises the question of how you detect what is running outside your governance entirely.
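
As a rough illustration of enforcement at the moment of action, the sketch below wraps an agent tool call in a least-privilege check, a simple egress allow-list, and an audit-trail entry for every decision. The agent name, policy structure, and tool names are assumptions made for the example, not any particular product's API.

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai.runtime.audit")

# Illustrative least-privilege policy: which tools this agent may call,
# and which destinations its outbound data may reach.
AGENT_POLICY = {
    "invoice-triage-agent": {
        "allowed_tools": {"read_invoice", "flag_for_review"},
        "allowed_egress": {"erp.internal.example.com"},
    }
}


def enforce_and_call(agent_id: str, tool: str, destination: str, payload: dict, tool_fn):
    """Allow or block a single agent action at the moment it happens, and record the decision."""
    policy = AGENT_POLICY.get(agent_id, {})
    allowed = (
        tool in policy.get("allowed_tools", set())
        and destination in policy.get("allowed_egress", set())
    )

    # Continuous audit trail: every attempted action is recorded, allowed or blocked.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "destination": destination,
        "decision": "allow" if allowed else "block",
    }))

    if not allowed:
        raise PermissionError(f"{agent_id} is not permitted to call {tool} against {destination}")
    return tool_fn(payload)
```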

How do you detect shadow AI and create sanctioned pathways that employees will actually use?

Shadow AI detection requires two things operating in parallel: technical discovery (scanning network traffic, SaaS usage logs, development environments, and software supply chains for undeclared AI) and a sanctioned pathway that makes the approved route faster and lower friction than going around it. Detection without a viable alternative drives shadow AI further underground instead of bringing it inside your governance surface.

Passive shadow AI — employees using unauthorised apps — is findable through SaaS usage monitoring. Active shadow AI — agents deployed without IT knowledge, MCP servers introduced by individual developers — requires deeper supply chain scanning. Reco.ai found that shadow AI tools become entrenched, with some running for over 400 days before detection. Blanket blocking just drives usage underground. Sanctioned pathways — a fast-track approval process, an approved tool catalogue, self-service provisioning — give employees a governed alternative that does not impede their work. Once that equilibrium exists, you need a way to tell whether the programme is actually reducing risk.
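
A heavily simplified version of the passive-detection side might look like the following: compare SaaS usage log entries against known AI service domains and the sanctioned catalogue, and flag anything unsanctioned together with how long it has been in use. The domain list, log format, and example figures are assumptions for illustration.

```python
from datetime import date

# Illustrative list of AI service domains to look for in SaaS / network usage logs.
KNOWN_AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED_TOOLS = {"api.openai.com"}  # approved via the governed pathway

# Example usage-log rows: (domain, first_seen, last_seen, user_count)
usage_log = [
    ("claude.ai", date(2024, 9, 1), date(2025, 11, 20), 37),
    ("api.openai.com", date(2025, 2, 1), date(2025, 11, 20), 210),
]


def find_shadow_ai(rows):
    """Flag AI services seen in usage logs that are not in the sanctioned catalogue."""
    findings = []
    for domain, first_seen, last_seen, users in rows:
        if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED_TOOLS:
            findings.append({
                "domain": domain,
                "users": users,
                "days_in_use": (last_seen - first_seen).days,  # entrenchment indicator
            })
    return findings


for finding in find_shadow_ai(usage_log):
    print(finding)  # e.g. {'domain': 'claude.ai', 'users': 37, 'days_in_use': 445}
```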

How do you know whether your AI governance programme is actually working?

Most organisations are measuring the wrong things. Usage counts — number of approved AI tools, number of employees trained — describe activity, not outcomes. The metrics that indicate governance health are different: shadow AI coverage rate, policy violation rate in production, time from incident detection to resolution, and the ratio of sanctioned to unsanctioned AI usage over time.

Every ungoverned system accumulates governance debt — risk that surfaces at the worst possible moment. As David Talby of John Snow Labs puts it: “Organisations without auditable oversight across AI systems will face higher costs, whether through fines, forced system withdrawals, reputational damage, or legal fees.” The “say-do ratio” — how often AI systems behave consistently with the policies written for them — is a useful diagnostic. Only 30% of organisations have deployed generative AI to production with documented governance (Gradient Flow, 2025). Proactive measurement provides the evidence base regulators are starting to require.
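
Where the inventory and runtime audit trail exist, these outcome metrics reduce to simple arithmetic. The sketch below computes shadow AI coverage, the sanctioned-to-unsanctioned ratio, and a say-do ratio from hypothetical counts pulled from those sources.

```python
def coverage_rate(inventoried: int, total_discovered: int) -> float:
    """Share of discovered AI systems that are in the governed inventory."""
    return inventoried / total_discovered if total_discovered else 1.0


def sanctioned_ratio(sanctioned_uses: int, unsanctioned_uses: int) -> float:
    """Sanctioned vs unsanctioned AI usage; the trend over time matters more than the point value."""
    return sanctioned_uses / max(unsanctioned_uses, 1)


def say_do_ratio(policy_compliant_actions: int, total_audited_actions: int) -> float:
    """How often observed AI behaviour matched the written policy."""
    return policy_compliant_actions / total_audited_actions if total_audited_actions else 0.0


# Hypothetical quarter: 38 of 52 discovered systems inventoried,
# 1,400 sanctioned vs 600 unsanctioned uses, 9,100 of 10,000 audited actions compliant.
print(f"coverage:   {coverage_rate(38, 52):.0%}")        # ~73%
print(f"sanctioned: {sanctioned_ratio(1400, 600):.1f}x")  # ~2.3x
print(f"say-do:     {say_do_ratio(9100, 10000):.0%}")     # 91%
```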

What do the EU AI Act, NIST AI RMF, and ISO 42001 actually require your organisation to do?

The three frameworks converge on the same core requirements: know what AI systems you are operating, classify them by risk, assign ownership and accountability, implement proportionate controls, generate evidence that governance was applied, and monitor AI behaviour after deployment. The EU AI Act makes these requirements binding for high-risk AI. ISO 42001 makes them certifiable. NIST AI RMF structures them as voluntary operational practice.

The EU AI Act enters general application on August 2, 2026. High-risk systems must comply with conformity assessment, documentation, human oversight, and post-market monitoring. Penalties reach 35 million euros or 7% of global turnover, and the Act applies regardless of where you are incorporated if your AI affects people in the EU. This is not only a European concern — Colorado’s AI Act takes effect June 30, 2026, and California and Texas have passed their own requirements. Cross-framework mapping avoids duplicating effort: an AI inventory satisfies EU AI Act registration, NIST AI RMF’s Map function, and ISO 42001 clause 8.4 simultaneously.
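
The cross-framework point can be made concrete with a single inventory record: capture the fields once, then report them under each framework's vocabulary. The field names and the mapping below are an illustrative assumption of how such a record might be organised, not an official crosswalk.

```python
# One inventory record, maintained once, reported three ways.
inventory_record = {
    "system": "claims-triage-assistant",
    "owner": "head-of-claims-operations",
    "risk_classification": "high",  # drives which controls apply
    "intended_purpose": "prioritise incoming insurance claims",
    "deployment_status": "production",
    "post_market_monitoring": "weekly drift and accuracy review",
}

# Illustrative mapping of that record onto each framework's vocabulary.
framework_views = {
    "EU AI Act": ["intended_purpose", "risk_classification", "post_market_monitoring"],
    "NIST AI RMF (Map)": ["system", "owner", "intended_purpose", "deployment_status"],
    "ISO/IEC 42001": ["system", "owner", "risk_classification", "deployment_status"],
}

for framework, fields in framework_views.items():
    print(framework, {field: inventory_record[field] for field in fields})
```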

Resource Hub: AI Governance Library

Understanding the Governance Gap

Building Governance Infrastructure

Measuring and Reporting Governance Health

Frequently Asked Questions

What is the difference between an AI policy and AI governance that actually works?

An AI policy is a document that describes what your organisation intends. AI governance that works is the infrastructure — operating model, accountability structures, runtime enforcement, and measurement — that makes those intentions enforceable at scale. Most organisations have the former; few have the latter. The gap between them is where the risk accumulates.

Do I need to comply with the EU AI Act if my company is based outside Europe?

If your AI systems affect people in the EU — including SaaS products that make consequential decisions for EU customers — the EU AI Act applies regardless of where you are incorporated. Most companies will need at least a basic compliance assessment before August 2026.

What is compliance theatre in AI governance?

Governance activities that produce the appearance of control without the substance: annual AI policy sign-offs, usage surveys without enforcement, governance committees with no authority to stop a deployment. These are programmes that satisfy auditors on paper but would not survive a real incident inquiry.

How many AI tools is the average enterprise running without IT approval?

More than most organisations expect. The 2025 State of Shadow AI Report found smaller companies are hit hardest — those with 11 to 50 employees averaged 269 unsanctioned AI tools per 1,000 employees, and some tools ran for over a year before detection. If you have not actively inventoried your AI usage, do not assume the answer is close to zero.

What is an AI operating model — do I need one at 200 employees?

An AI operating model is the system that governs how AI is adopted and managed in your organisation. At 200 employees you almost certainly need one — the question is how lightweight it can be while still addressing genuine risks. At minimum: an inventory process, a risk classification, defined approval pathways, and ownership assignment for every AI system in production.

Can AI governance and developer productivity coexist?

Yes — but only when governance is designed as an enablement function rather than a gatekeeping function. IBM compressed its AI project approval process from weeks to five minutes by embedding compliance checks directly into the provisioning platform. Governance that gives developers a safe, fast lane reduces shadow AI without reducing output.

AUTHOR

James A. Wondrasek
