Business | SaaS | Technology
Mar 30, 2026

Shadow AI and the Governance Gap Enterprises Are Not Measuring

AUTHOR

James A. Wondrasek

Here is a number worth sitting with: 82% of executives feel confident their policies protect their organisation from unauthorised AI agent actions. Only 14.4% of those same organisations have full security approval for all AI agents currently deployed.

That 68-point gap is the governance story.

EY's 2026 Technology Pulse Poll surveyed 500 US technology executives and found 52% of department-level AI initiatives operating without formal approval. Three independent surveys converge on the same finding: adoption is running well ahead of governance.

This article is the diagnosis — what shadow AI is, how the governance gap is measured, why it persists, and what it is costing right now. It is part of our complete guide to what AI governance actually requires and why most policies fall short.

For a CTO at a 50–500 person SaaS company, the shadow AI governance gap is a personal liability question.

What is shadow AI — and why is it different from the shadow IT problem you already know?

Shadow AI is the use of AI tools — large language models, automation platforms, AI-powered SaaS applications — by employees without explicit IT or security approval. It is the AI-era evolution of shadow IT, but the analogy breaks down quickly.

Shadow IT meant using Dropbox instead of the approved file share. The risks were bounded: a file in the wrong place. Shadow AI carries all of those risks plus some genuinely new ones.

When employees submit sensitive data as prompts to an external model, that model processes it, may retain it, and in some configurations trains on it. The data has been consumed by infrastructure you do not control, with no audit trail and no retrieval path. That is a categorically different kind of exposure.

Harmonic Security's analysis of 22 million enterprise AI prompts (January–December 2025) makes this concrete: code, legal documents, and financial data comprise 74.5% of what employees expose through unsanctioned AI tools. Legal documents alone account for 35.0% of exposures — M&A materials, settlement content, litigation strategy. And 12.8% of coding tool exposures contain API keys or tokens.
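To make that last figure tangible, here is a minimal sketch of the kind of pre-submission check a sanctioned AI gateway might run on outbound prompts. The patterns and the example prompt are illustrative assumptions, not any vendor's implementation — real secret scanners ship far broader rule sets.

```python
import re

# Illustrative patterns only -- production secret scanners use hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"\b(?:api[_-]?key|token)\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}", re.IGNORECASE),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of secret patterns found in an outbound prompt."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Why does this fail? api_key = 'sk_live_1234567890abcdef1234'"
    findings = scan_prompt(prompt)
    if findings:
        # Block the request and route the user to the sanctioned pathway instead.
        print(f"Blocked: prompt contains {', '.join(findings)}")
```

A check like this does not solve shadow AI on its own — it only works on traffic that already flows through a gateway you control.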

Cloud access security broker (CASB) solutions cannot adequately address this. They manage access to cloud services, but they cannot assess model behaviour, training data exposure, or hallucination risk.

The employee-side framing is BYOAI — Bring Your Own AI, a term popularised by Microsoft WorkLab — and 76% of businesses have active BYOAI use. This is the default state at most organisations that have not built sanctioned pathways for approved AI access. BYOAI matters because it reframes the problem: this is not a failure of compliance; it is a gap between what employees need and what the organisation provides.

How large is the AI governance gap — and what does the data actually show?

Three independent surveys — a Big Four accounting firm's executive poll, a security platform's practitioner survey, and IBM's cost-of-breach research — arrive at the same conclusion using different methodologies and different populations.

EY (February 2026, 500 US technology executives): 52% of department-level AI initiatives operating without formal approval. 78% of leaders say adoption is outpacing their ability to manage risk.

Gravitee (2026, 900+ practitioners): only 14.4% have full security approval for all AI agents going live. Only 47.1% of deployed agents are actively monitored.

IBM Cost of a Data Breach Report 2025: 20% of organisations have staff using unsanctioned AI tools. 97% of AI-related breaches lacked proper access controls.

The governance gap is not simply the absence of policy. HelpNet Security's Larridin survey found 69% of organisations report having AI risk and compliance policies — yet only 38% maintain a comprehensive inventory of AI applications actually in use. The average large enterprise operates 23 AI tools, with 45% of adoption occurring outside formal IT procurement.

50% of employees believe their organisation's AI guidelines are "very clear," yet 58% have not received formal training on safe AI use. That gap between perceived clarity and actual training is where governance breaks down in practice.

Gravitee found 88% of organisations reported confirmed or suspected AI security incidents in the past year. The AI governance gap is not a theoretical risk — it is observed, ongoing harm.

Why does AI adoption keep outpacing governance even when leaders know it’s a problem?

EY Global Technology Sector Leader James Brundage coined the “velocity paradox” to name this structural condition: 85% of technology leaders prioritise speed-to-market; only 15% prioritise exhaustive pre-launch vetting.

Teams that move fast on AI deliver visible short-term gains. Governance requires slowing down in an environment where that speed is perceived as competitive advantage. So governance gets sequenced after adoption rather than alongside it.

Reco's data shows just how far this sequencing goes: two specific tools in their dataset had median usage durations of over 400 days before formal review. After that long, you are not evaluating a tool — it is core business infrastructure.

If you do not have a dedicated compliance team, the velocity paradox lands directly on you: accountability for both delivery speed and risk outcomes with no structural support. The answer is not slower AI adoption. It is building an AI operating model designed to move at adoption speed.

What is the confidence paradox — and why are executives and frontline managers measuring different things?

The confidence paradox is Gravitee’s framing for the disconnect at the heart of enterprise AI governance: 82% of executives feel confident their existing policies protect against unauthorised AI agent actions — yet only 14.4% of organisations have full security approval for all deployed agents. These two numbers cannot both be right.

The explanation is simple. Executives measure governance by proxy: does a policy exist? Is there a review process? These are inputs. Operational leaders measure governance by outputs: did this specific deployment go through the review process? Can someone name who can stop a misbehaving system right now? Different questions, systematically different answers from the same organisation.

HelpNet Security’s Larridin survey found a 16-point confidence gap between C-suite and directors. The closer you are to execution, the less confident you are in AI visibility.

The confidence paradox compounds the velocity paradox: if executives believe governance is in place, there is no organisational pressure to invest in better governance infrastructure. The gap becomes self-concealing.

EY’s data adds another layer: only 50% of organisations report their AI governance leaders have full independent authority to halt high-priority projects that fail safety guardrails. 42% require board or CEO intervention. When accountability for enterprise AI is unclear — and when those accountable cannot act without board escalation — governance becomes ceremonial.

What are the real business risks when AI tools run without oversight?

The risks are already occurring.

EY’s 2026 survey found 45% of technology executives confirmed or suspected sensitive data leaks from employees using unauthorised generative AI tools in the prior 12 months. 39% confirmed or suspected proprietary IP leaks. These are current-year disclosures, not projections.

Reco found organisations with high shadow AI density added $670,000 to their average breach cost. The exposure pattern is not random — employees submit their most sensitive operational data to AI tools because that is where the productivity gains are biggest. And 4% of enterprise prompts in Harmonic’s dataset went to China-headquartered AI tools, creating jurisdictional data risk that most governance frameworks do not track.

The agent-specific risk profile is qualitatively different. AI agents act: they execute tasks, interact with APIs, and take actions in production systems without human approval at each step. Only 21.9% of teams treat AI agents as independent, identity-bearing entities; 45.6% still rely on shared API keys, making accountability chains impossible to audit. Gravitee documented cases of agents gaining unauthorised write access to databases.
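The shared-key problem is easiest to see in code. Below is a minimal sketch of the difference between one credential shared by every agent and per-agent, scoped credentials that keep the accountability chain auditable. The field names, scopes, and agent names are hypothetical, not a specific platform's API.

```python
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentCredential:
    """A per-agent credential: every action can be traced to one named agent."""
    agent_id: str          # e.g. "invoice-reconciler-prod"
    owner: str             # the human accountable for this agent
    scopes: list[str]      # least-privilege scopes, not blanket write access
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def audit_line(cred: AgentCredential, action: str) -> str:
    """Attribution is trivial when each agent carries its own identity."""
    return f"{datetime.now(timezone.utc).isoformat()} agent={cred.agent_id} owner={cred.owner} action={action}"

# Anti-pattern: one key shared by every agent -- the audit trail can only ever say "the key did it".
SHARED_KEY = "sk-shared-by-all-agents"

# Preferred: one scoped credential per agent, owned by a named person.
reconciler = AgentCredential("invoice-reconciler-prod", owner="j.doe", scopes=["invoices:read", "ledger:write"])
print(audit_line(reconciler, "ledger:write"))
```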

The aggregate risk is not one large breach. It is months of accumulating small, invisible exposures. Detecting shadow AI starts with visibility, and visibility starts with knowing what tools are running.
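Visibility can start from data most teams already export. A minimal sketch, assuming proxy or DNS logs in a simple "timestamp user domain" form and a maintained list of known AI tool domains — the domains, log format, and sanctioned list below are illustrative:

```python
from collections import Counter

# Illustrative list only -- real inventories track hundreds of AI tool domains.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"claude.ai"}  # tools with an approved, sanctioned pathway

def shadow_ai_report(log_lines: list[str]) -> Counter:
    """Count requests to AI domains that have no sanctioned pathway.

    Assumes each log line is 'timestamp user domain', as many proxy exports are.
    """
    hits: Counter = Counter()
    for line in log_lines:
        try:
            _, user, domain = line.split()
        except ValueError:
            continue  # skip malformed lines
        if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED:
            hits[(user, domain)] += 1
    return hits

sample = [
    "2026-03-01T09:12:04Z alice chat.openai.com",
    "2026-03-01T09:13:11Z bob claude.ai",
    "2026-03-01T09:14:52Z alice chat.openai.com",
]
for (user, domain), count in shadow_ai_report(sample).items():
    print(f"{user} -> {domain}: {count} requests, no sanctioned pathway")
```

A report like this is a starting point for conversation, not enforcement: the goal is to find the demand and build a pathway for it.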

What closes the governance gap?

Publishing a more detailed policy document does not close the governance gap. The EY and Gravitee data make this clear: organisations with policies already have the gap. What is missing is measurement and operational enforcement.

Four components are identified across the research as necessary to move from policy to practice.

Operating model clarity. Where does AI ownership sit relative to the CEO? Databricks CTO EMEA Dael Williamson found that the first signal of an AI-serious organisation is how close data and AI ownership sits to the CEO. What an AI operating model actually includes is the foundation everything else depends on.

Named accountability structures. Someone must hold the right to approve, change, pause, or stop AI in production. If you cannot name who can stop an AI system in 10 seconds, you do not own it. Accountability for AI decisions is an operational question, not an organisational chart entry.
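As a sketch of what "name who can stop it" looks like as data rather than an org-chart entry — the system names, roles, and stop procedures below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountabilityRecord:
    system: str           # the AI system in production
    owner: str            # who is accountable for its outcomes
    stop_authority: str   # who can pause or stop it without board escalation
    stop_procedure: str   # how, concretely, it gets stopped

REGISTRY = {
    "support-triage-agent": AccountabilityRecord(
        system="support-triage-agent",
        owner="head-of-support",
        stop_authority="on-call-platform-lead",
        stop_procedure="disable feature flag 'triage_agent_enabled'",
    ),
}

def who_can_stop(system: str) -> str:
    """The 10-second question: if this returns nothing, you do not own the system."""
    record = REGISTRY.get(system)
    return record.stop_authority if record else "UNKNOWN -- governance gap"

print(who_can_stop("support-triage-agent"))
print(who_can_stop("unregistered-marketing-bot"))
```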

Sanctioned pathways for employees. Shadow AI thrives where approved alternatives do not exist. Role-based AI enablement — where access, tooling, and training are calibrated to role-specific risk — creates structured access without driving behaviour underground. Detecting shadow AI and creating sanctioned pathways addresses what policy enforcement cannot.
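Role-based enablement can be expressed as configuration rather than a policy document. A minimal sketch, with the roles, tool names, and data classifications as assumptions:

```python
# Illustrative role-based AI access policy -- tool names and tiers are assumptions.
AI_ACCESS_POLICY = {
    "engineering": {"allowed_tools": ["approved-code-assistant"], "data_ceiling": "internal", "requires_training": True},
    "legal":       {"allowed_tools": ["approved-doc-summariser"], "data_ceiling": "confidential", "requires_training": True},
    "marketing":   {"allowed_tools": ["approved-copy-assistant"], "data_ceiling": "public", "requires_training": False},
}

def is_request_allowed(role: str, tool: str, data_class: str) -> bool:
    """Check a usage request against the role's sanctioned pathway."""
    ceiling_order = ["public", "internal", "confidential"]  # least to most sensitive
    policy = AI_ACCESS_POLICY.get(role)
    if policy is None or tool not in policy["allowed_tools"]:
        return False
    return ceiling_order.index(data_class) <= ceiling_order.index(policy["data_ceiling"])

print(is_request_allowed("engineering", "approved-code-assistant", "internal"))  # True
print(is_request_allowed("marketing", "approved-code-assistant", "public"))      # False: not in that role's pathway
```

The design point is that a denied request should route to an approved alternative, not to a dead end.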

Measurement infrastructure. Counting licences is not governance. Knowing whether AI systems are behaving as intended requires an AI asset inventory, observability tooling, and audit trails. Only 38% of organisations maintain a comprehensive inventory. Measuring whether AI governance is working closes the loop.
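What a measurable inventory might look like — the field names and records below are illustrative; the point is that each deployed AI system carries its approval status, monitoring state, and review date, so coverage can be reported as an output rather than assumed:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIAssetRecord:
    name: str
    vendor: str
    security_approved: bool
    monitored: bool            # is there observability on its behaviour?
    audit_trail: bool          # are its actions logged and reviewable?
    last_reviewed: date | None

INVENTORY = [
    AIAssetRecord("approved-code-assistant", "VendorA", True, True, True, date(2026, 1, 15)),
    AIAssetRecord("support-triage-agent", "VendorB", True, False, True, date(2025, 11, 2)),
    AIAssetRecord("marketing-copy-bot", "VendorC", False, False, False, None),
]

def governance_coverage(inventory: list[AIAssetRecord]) -> dict[str, float]:
    """The output-side metrics the confidence paradox hides: approval and monitoring rates."""
    total = len(inventory)
    return {
        "approved_pct": round(100 * sum(a.security_approved for a in inventory) / total, 1),
        "monitored_pct": round(100 * sum(a.monitored for a in inventory) / total, 1),
    }

print(governance_coverage(INVENTORY))  # {'approved_pct': 66.7, 'monitored_pct': 33.3}
```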

These four components are interdependent: operating model clarity assigns ownership; accountability structures define authority; sanctioned pathways address frontline behaviour; measurement confirms whether the other three are functioning.

This article has established the diagnosis. For the full picture of enterprise AI governance — from operating model design through measurement infrastructure — see What AI Governance Actually Requires and Why Most Policies Fall Short.

Frequently Asked Questions

What is the difference between shadow AI and shadow IT?

Shadow IT refers to unauthorised use of software or cloud services without IT approval — the classic example is personal Dropbox for work files. Risks are primarily access control and data residency.

Shadow AI introduces additional risk categories: employees submit sensitive data as prompts to external model infrastructure; the AI processes it, may retain it, and can act autonomously in agentic configurations. The harm mechanism is categorically different. CASB tools, developed to handle shadow IT, cannot assess model behaviour, training data exposure, or hallucination risk.

Is 52% of AI projects running without oversight really that common?

Yes — and the consistency across multiple independent data sources is more credible than any single statistic. EY’s 2026 poll found 52% of department-level AI initiatives operating without formal approval. Gravitee’s 2026 report found only 14.4% have full security approval for all AI agents. IBM’s 2025 report found 20% of organisations have staff using unsanctioned AI tools. Three methodologically different surveys, same finding.

Why do executives think AI governance is in place when operational reality differs?

Executives measure governance by proxy: does a policy exist? Is there a review process? Operational leaders measure governance by practice: did this specific deployment go through the review process? Can someone name who can stop a misbehaving AI system?

Different questions, systematically different answers. The Gravitee confidence paradox — 82% executive confidence versus 14.4% actual approval rate — is the clearest quantification.

What types of sensitive data are most commonly exposed through shadow AI?

Harmonic Security’s analysis of 22 million enterprise prompts (January–December 2025) shows code, legal documents, and financial data comprise 74.5% of what employees expose through unsanctioned AI tools. Legal documents are the largest category at 35.0%, covering M&A materials, settlement content, and litigation strategy. 12.8% of coding tool exposures contain API keys or tokens. 4% of prompts went to China-headquartered AI tools.

What does the velocity paradox mean for a company trying to move fast on AI?

The velocity paradox — EY’s term, coined by James Brundage — names the structural tension: 85% of tech executives prioritise speed-to-market, but governance requires slowing down to assess and approve. Teams bypass governance to deliver, and shadow AI accretes.

The resolution is governance infrastructure designed to move at adoption speed — risk-tiered approval processes that apply lightweight checks to low-risk tools and heavier scrutiny to agentic systems.

What happens when employees use AI tools that IT hasn’t approved?

Short-term: invisible productivity. Medium-term: sensitive data submitted to those tools has already been processed by external model infrastructure with no audit trail and no retrieval path.

EY's 2026 data shows 45% of tech companies confirmed or suspected sensitive data leaks from unauthorised AI tool use in the prior 12 months. Reco's breach cost data shows a $670,000 higher average breach cost at organisations with high shadow AI density, and usage durations of 400+ days before IT identifies tools — by which point they are core business infrastructure.

Can writing an AI policy close the governance gap?

No. Many organisations with governance policies still have the governance gap. The missing capability is measurement — knowing whether what the policy says is happening actually is.

Without measurement infrastructure — AI asset inventory, monitoring, audit trails — there is no way to know whether policy is being followed. Policy documents are a necessary precondition, not a solution.

How many AI tools is the average enterprise running?

Most enterprises do not have a reliable answer — which is the governance problem. HelpNet Security’s Larridin survey found the average large enterprise operates 23 AI tools, with 45% of adoption outside formal IT procurement. Only 38% maintain a comprehensive AI application inventory.

You cannot govern what you have not enumerated. Detection approaches are addressed in the cluster article on sanctioned pathways and shadow AI detection.

What makes AI agents harder to govern than regular AI tools?

AI agents act: they execute tasks, interact with APIs, and take actions in production systems without human approval — unlike a chatbot, which generates text for a human to evaluate. An AI agent with shadow AI characteristics is an action risk, not just an exposure risk.

Only 47.1% of deployed agents are actively monitored; 88% of organisations reported AI security incidents in the past year. Only 21.9% of teams treat AI agents as independent, identity-bearing entities; 45.6% still rely on shared API keys, making accountability chains impossible to audit.

Should companies block shadow AI tools or create approved alternatives?

Blocking alone does not resolve shadow AI — employees route around restrictions when approved alternatives do not deliver equivalent productivity. Anton Chuvakin at Google Cloud put it plainly: “If you ban AI, you will have more shadow AI and it will be harder to control.”

The effective design is sanctioned pathways: a clear route for employees to use AI tools that meet the organisation’s security requirements, rather than forcing a choice between an approved tool that does not work and an unapproved tool that does.
