A quarter of workers use AI at work without telling their manager, according to SurveyMonkey, and that figure includes 43% of senior leaders. That is not a rogue-employee problem. That is a governance design failure.
Shadow AI grows when official AI pathways are absent, slow, or more trouble than they are worth. When IT approval takes two weeks and a personal ChatGPT account takes two minutes, the unofficial route wins. You cannot ban your way out of it, because employees need AI to stay competitive. But unrestricted use creates real exposure: shadow AI incidents account for 20% of all breaches, and IBM's 2025 Cost of a Data Breach Report found shadow AI adds $670,000 to the average breach cost.
The answer is to detect first, then enable: build official pathways that outcompete the shadow alternatives. This is the practical playbook for a scaling technology company, and it sits within a broader AI governance framework that connects policy to actual employee behaviour.
Why Prohibition Doesn’t Work — and What to Do Instead
Cyberhaven's research found that 91% of AI tools used in enterprises are completely unmanaged, even at companies with explicit policies prohibiting them. The tools are still there; IT just cannot see them. When 59% of employees use unapproved tools even when they understand the risks, awareness is not the problem. Governance design is.
Here is the strategic reframe: the goal is not to prohibit unauthorised use. The goal is to make sanctioned use easier than the shadow alternative. One organisation found its engineers using fifteen different coding tools simultaneously. What worked was providing GitHub Copilot Enterprise and deploying credential detection across all tools. Credential exposure dropped to zero. The sanctioned pathway won because it was better, not because the shadow one was blocked.
The correct objective is to reduce shadow AI by increasing sanctioned AI adoption — not by increasing restrictions.
Starting with Visibility: How to Build an AI Asset Inventory
You cannot govern what you cannot see. Before detection, policy work, or access controls, you need to know what AI is already in your organisation. That is the AI Asset Inventory: a catalogue of every AI tool in use, both sanctioned tools and suspected unauthorised ones.
Most companies do not have one. The Reco 2025 State of Shadow AI Report found companies with 11–50 employees averaged 269 unsanctioned AI tools per 1,000 employees.
Building the inventory requires no dedicated security tooling:
1. IT procurement data. What AI subscriptions does the company actually pay for? This is your known baseline.
2. SaaS spend analysis. Review expense reports for AI vendor line items IT did not provision. Employees often expense personal AI subscriptions. They show up in the numbers.
3. SSO logs and OAuth grants. Check your Google Workspace or Microsoft 365 admin panel for OAuth grants to third-party AI services. Every “Sign in with Google” creates a grant. Auditable in minutes.
4. Browser extension review. Scan installed extensions on company-managed devices — frequently missed by network-level monitoring.
5. Employee AI usage survey. Run an anonymous survey before technical discovery — the most reliable method for surfacing AI use in legal, finance, HR, and product, where network monitoring has blind spots.
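Two of these sources, the OAuth grant list and the known-sanctioned baseline, can be cross-referenced programmatically. A minimal sketch in Python, assuming an exported grant report; the field names, domain lists, and sample records are illustrative, not a vendor schema:

```python
# Cross-reference exported OAuth grants against known AI service domains.
# AI_DOMAINS and SANCTIONED are illustrative assumptions; populate them from
# your own inventory and procurement baseline.
AI_DOMAINS = {"openai.com", "anthropic.com", "perplexity.ai", "cursor.sh"}
SANCTIONED = {"openai.com"}  # domains the company has actually provisioned

def flag_shadow_grants(grants):
    """grants: iterable of dicts like {"user": ..., "app_domain": ...}.
    Returns only grants to AI services, tagged sanctioned or shadow."""
    findings = []
    for g in grants:
        domain = g["app_domain"].lower()
        if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
            status = "sanctioned" if domain in SANCTIONED else "shadow"
            findings.append({**g, "status": status})
    return findings

sample = [
    {"user": "a@corp.com", "app_domain": "anthropic.com"},
    {"user": "b@corp.com", "app_domain": "openai.com"},
    {"user": "c@corp.com", "app_domain": "slack.com"},
]
for f in flag_shadow_grants(sample):
    print(f["user"], f["app_domain"], f["status"])
```

The same pattern works against SaaS expense exports: swap the grant records for expense line items and match on vendor name instead of domain.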
The inventory is a living document. The gap between your known AI estate and your detected AI estate is itself a governance metric.
How to Detect Unauthorised AI Tool Use Without Surveillance Overreach
Effective shadow AI detection combines network-level monitoring, endpoint visibility, and employee self-reporting. Start with what you can implement today, then add tooling as the programme matures.
Firewall and DNS log review (lightweight, free). Check outbound connections against known AI service domains: api.openai.com, claude.ai, gemini.google.com, and equivalents. No new tooling required.
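The log review above reduces to a few lines of scripting. A minimal sketch, assuming a simplified `timestamp source_ip domain` log format; adapt the parsing to your firewall or resolver's actual export:

```python
# Scan DNS query logs for lookups of known AI service endpoints.
# The log line format here is a simplified assumption.
AI_ENDPOINTS = ("api.openai.com", "claude.ai", "gemini.google.com")

def scan_dns_log(lines):
    """Return (queried_domain, source_ip) pairs that hit AI endpoints."""
    hits = []
    for line in lines:
        parts = line.split()  # e.g. "2025-06-01T09:14:02 10.0.4.17 claude.ai"
        if len(parts) < 3:
            continue  # skip malformed lines
        _ts, src_ip, domain = parts[0], parts[1], parts[2].lower()
        if any(domain == ep or domain.endswith("." + ep) for ep in AI_ENDPOINTS):
            hits.append((domain, src_ip))
    return hits

log = [
    "2025-06-01T09:14:02 10.0.4.17 claude.ai",
    "2025-06-01T09:14:05 10.0.4.17 example.com",
    "2025-06-01T09:15:40 10.0.6.2 api.openai.com",
]
print(scan_dns_log(log))
```

This tells you which machines talk to AI services, but not what data they send; that is the gap DLP and DDR tooling fills.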
SaaS spend and OAuth audit (lightweight, accessible). Review expense reports for AI vendor line items and OAuth grant lists for services IT did not provision. These two reviews surface most AI tool use quickly.
DLP and DDR monitoring (intermediate, requires tooling). Data Detection and Response tools, with Cyberhaven the category leader, monitor what data is being sent to AI endpoints. Where firewall logs show only that someone connected to api.openai.com, DDR shows whether they pasted production API keys into the prompt. Harmonic's analysis of 22 million prompts found that 16.9% of all enterprise AI data exposures happen through personal free-tier accounts.
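The kind of check a DDR tool runs before text leaves for an AI endpoint can be illustrated with pattern matching. A sketch only: the patterns below are illustrative, and production tools use far broader detection libraries:

```python
import re

# Illustrative credential patterns; real DDR tooling covers many more
# secret formats and uses contextual analysis, not just regexes.
CREDENTIAL_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "api_secret": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_credentials(prompt: str):
    """Return the names of credential patterns found in a prompt."""
    return [name for name, pat in CREDENTIAL_PATTERNS.items()
            if pat.search(prompt)]

print(find_credentials("summarise this doc"))                     # []
print(find_credentials("debug: key=AKIAABCDEFGHIJKLMNOP fails"))  # ['aws_access_key']
```

A hit would block or flag the prompt before it reaches the AI endpoint, which is exactly the control that drove credential exposure to zero in the engineering example above.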
CASB deployment (comprehensive, requires security team capacity). Cloud Access Security Brokers — Netskope, Microsoft Defender for Cloud Apps, Zscaler — provide full visibility and policy enforcement. Pair with DDR, as CASB often misses AI features embedded within approved SaaS platforms.
Communicate the programme before deployment — cooperation surfaces more risk than covert monitoring. Non-engineering roles — product, legal, finance, and HR — are often the highest-risk AI users and the most underdetected.
For the technical infrastructure details, see our guide to AI monitoring infrastructure.
Writing an Acceptable Use Framework That People Will Actually Follow
An Acceptable Use Framework that employees have never read is documentation, not a behaviour-change mechanism.
Most AUFs are built by lawyers for compliance rather than by operators for adoption. When the policy says “no AI tools without committee approval” and the committee meets monthly, the policy does not prevent shadow AI. It causes it.
Here is what an effective AUF for a mid-market company needs:
Permitted tools list by role category. Name the approved tools for engineering, product, marketing, and operations. Specific tools, specific use cases, specific role groups — not a blanket prohibition.
Data handling rules. What data may be entered into AI tools, and what may not — customer PII, proprietary IP, authentication credentials, regulated data. Researching case law is low risk; uploading client contracts is high risk. The AUF needs to communicate that distinction.
Output review requirements. Where AI outputs require human review — customer-facing content, financial calculations, legal documents — say so. Where they do not — internal drafts, code going through standard review — say that too.
Review cadence and named owner. Review the AUF every six months — the tool landscape changes too fast for annual cycles. One named owner, not a committee. Committees create diffused accountability and slower decisions — both of which generate shadow AI.
This connects to the broader accountability structures in our article on AI governance accountability.
Acceptable Use Framework — Minimum Viable Outline:
- [ ] Permitted tools list by role category (engineering / product / marketing / operations)
- [ ] Approved use cases per tool (what it may be used for)
- [ ] Data classification rules (what data may and may not be entered)
- [ ] Output review requirements (when human review is mandatory)
- [ ] Incident reporting procedure (how to report suspected AI data exposures)
- [ ] Policy exception request process (how to request approval for tools not on the list)
- [ ] Review and update schedule (minimum every 6 months)
- [ ] Named owner and sign-off (not a committee)
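The permitted tools list is more useful when it is machine-readable, because the same document can then drive provisioning and audit checks. A minimal sketch; the role groups and tool names are illustrative:

```python
# The AUF's permitted tools list as data rather than prose, so access
# checks and audits can read it directly. Contents are illustrative.
PERMITTED_TOOLS = {
    "engineering": {"GitHub Copilot Business", "ChatGPT Enterprise"},
    "product": {"ChatGPT Enterprise", "Claude for Teams"},
    "marketing": {"ChatGPT Enterprise"},
    "operations": {"Microsoft Copilot"},
}

def is_permitted(role: str, tool: str) -> bool:
    """True if the tool appears on the role's permitted list."""
    return tool in PERMITTED_TOOLS.get(role, set())

print(is_permitted("engineering", "GitHub Copilot Business"))  # True
print(is_permitted("marketing", "Cursor"))                     # False
```

Keeping one structure like this as the source of truth also makes the six-month review concrete: the diff of this file is the policy change log.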
Role-Based AI Access Controls: A Practical Model for Mid-Market Companies
Role-based AI access controls assign permissions based on job function, seniority, and training completion. The goal is to avoid blanket permissiveness — which guarantees data exposure — and blanket restriction, which guarantees shadow AI.
IBM’s approach is the most concrete implementation model available. Compliance checks are embedded in the workflow; provisioning is automated, not a separate gate. Fast access for trained employees, no manual queue, no bureaucratic drag.
97% of organisations that experienced AI-related security incidents lacked proper AI access controls, per IBM's 2025 Cost of a Data Breach Report. Here is a three-tier model for a 50–500 person company:
Tier 1 — Basic use (all staff). Sanctioned productivity tools: Microsoft Copilot, ChatGPT Enterprise, Claude for Teams. Access requirement: a 15-minute module on data handling rules.
Tier 2 — Advanced use (engineers, product teams). Code generation tools: GitHub Copilot Business, Cursor. Access requirement: Tier 1 plus a module on prompt security and credential handling. Coding tools show a 14x concentration of credential exposure risk.
Tier 3 — Build and deploy (senior engineers, tech leads). Building AI-integrated features and deploying agents. Access requirement: Tier 2 plus named owner sign-off on the specific use case.
The enforcement layer is SSO. Employees without completed training simply do not have credentials for the relevant tools. No manual queues, no governance committee, no additional tooling.
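The SSO gate reduces to a subset check: an employee is entitled to a tool only if their completed training covers that tool's tier. A sketch with illustrative tier, tool, and module names; in practice this maps to group membership in your identity provider:

```python
# Training modules required to unlock each tier; names are illustrative.
TIER_REQUIREMENTS = {
    1: {"data_handling"},
    2: {"data_handling", "prompt_security"},
    3: {"data_handling", "prompt_security", "use_case_signoff"},
}

# Which tier each tool belongs to; names are illustrative.
TOOL_TIERS = {
    "ChatGPT Enterprise": 1,
    "GitHub Copilot Business": 2,
    "Agent Deploy": 3,
}

def entitled(completed_training: set, tool: str) -> bool:
    """True if completed training satisfies the tool's tier requirements."""
    tier = TOOL_TIERS[tool]
    return TIER_REQUIREMENTS[tier] <= completed_training  # subset check

print(entitled({"data_handling"}, "ChatGPT Enterprise"))       # True
print(entitled({"data_handling"}, "GitHub Copilot Business"))  # False
```

Wired into SSO group assignment, completing a training module grants the next tier automatically: access expands the moment training completes, with no queue for anyone to manage.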
For guidance on measuring how well this enablement programme is working, see measuring AI enablement programme effectiveness.
Risk-Tiered Governance: Where to Apply Heavy Controls and Where to Stay Out of the Way
Risk-tiered governance applies controls proportional to the actual risk of each AI use case. It is the mechanism that lets you say “yes” quickly to most AI use while reserving effort for cases that actually matter.
Applying the same approval process to “an engineer using Copilot to complete a function” and “deploying an AI agent that processes customer financial data” creates overhead without proportional risk reduction. Design tools represent 9.5% of total AI usage but only 0.06% of sensitive data exposures. Coding tools show a 14x concentration of credential exposure. Uniform governance misallocates effort — risk-tiered governance puts it where the data says it belongs.
Low risk / Light governance. Drafting, summarisation, research, code completion going through standard review. No approval gate beyond Tier 1 training. Minimal friction — this is the majority of AI use.
Medium risk / Standard governance. AI-generated content for external publication, AI-assisted customer communications, AI tools handling internal structured data. Output review against the AUF; AI tool usage logging; periodic spot audits.
High risk / Strong governance. Customer-facing AI agents, AI processing PII or regulated data, AI integrated into financial or legal workflows. Use case approval, named owner, pre-production testing, continuous monitoring, and human-in-the-loop review before outputs reach customers.
Map each use case to its risk tier in the permitted tools list so employees know immediately which controls apply: less ambiguity, fewer exception requests, and less of the friction that drives AI use underground.
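The three tiers above translate directly into a classification function. A sketch, with illustrative attribute names for the use-case record:

```python
# Classify an AI use case into a risk tier using the rules described above.
# The attribute names on the use_case dict are illustrative assumptions.
def risk_tier(use_case: dict) -> str:
    # High: customer-facing agents, PII/regulated data, financial/legal workflows.
    if (use_case.get("customer_facing")
            or use_case.get("handles_pii")
            or use_case.get("regulated_workflow")):
        return "high"
    # Medium: external publication or internal structured data.
    if (use_case.get("external_publication")
            or use_case.get("internal_structured_data")):
        return "medium"
    # Low: drafting, research, code completion under standard review.
    return "low"

print(risk_tier({"customer_facing": True}))           # high
print(risk_tier({"external_publication": True}))      # medium
print(risk_tier({"description": "code completion"}))  # low
```

Checking the high-risk conditions first means a use case that is both externally published and PII-handling lands in the stricter tier, which is the conservative default you want.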
Frequently Asked Questions
What is shadow AI and how is it different from shadow IT?
Shadow AI is the use of AI tools by employees without formal IT or security approval. The key difference: AI tools can actively process and expose sensitive data in ways traditional unapproved SaaS apps do not. Shadow AI is also harder to detect — AI features are increasingly embedded inside already-approved tools like Notion and Grammarly.
Why do employees use unauthorised AI tools even when they know the risks?
Productivity. AI tools save employees 40–60 minutes per day. When there is no enterprise licence or the approved tools have frustrating limitations, personal accounts fill the gap. Employees are making rational decisions within the constraints their organisation has created.
How much does shadow AI cost in a data breach?
IBM's 2025 Cost of a Data Breach Report found shadow AI adds $670,000 to the average breach cost, a 16% increase. Shadow AI breaches also disproportionately expose customer PII: 65% of them involve it, versus a 53% global average.
How do I know if my team is using AI tools I don’t know about?
Three lightweight signals to start: firewall and DNS log review for outbound connections to known AI endpoints; OAuth grant lists via your SSO admin panel; and SaaS expense line items for AI accounts IT did not provision. Then run an employee AI usage survey — self-reporting surfaces tools in non-engineering roles that network monitoring misses.
What tools can I use to detect shadow AI without a dedicated security team?
Start free: firewall log analysis, an employee survey, and an OAuth grant review. For intermediate coverage: DLP/DDR tools like Cyberhaven for continuous monitoring. For comprehensive coverage: CASB platforms — Netskope, Microsoft Defender for Cloud Apps, Zscaler.
What should an AI acceptable use policy actually say?
At minimum: the permitted tools list by role, data handling rules, output review requirements for high-risk use cases, the incident reporting procedure, and a review schedule. Avoid blanket prohibitions with committee approval requirements. One page, plain language, specific approved tools by name.
How does AI governance differ for regulated industries versus growth-stage SaaS companies?
Regulated industries face mandatory compliance obligations: GDPR, HIPAA, FCA guidance, EU AI Act requirements for high-risk AI systems. Growth-stage SaaS companies have more discretion but the same shadow AI risks. The minimum viable governance approach here is most directly applicable to growth-stage companies. The risk-tiered approach is valid for both.
What is the fastest way to get AI governance in place without creating bureaucracy?
The minimum viable governance stack: an AI Asset Inventory built from an employee survey and OAuth audit; a one-page Acceptable Use Framework with a named permitted tools list; and a three-tier role-based access model provisioned via SSO. Two to four weeks, using existing HR and SSO infrastructure. No new tooling, no committee, no governance hire.
Who should own AI governance in a company without a dedicated governance function?
One person — not a committee, not a vendor. The technical lead owns the policy and permitted tools list; the engineering lead owns Tier 3 approvals; IT or DevOps owns SSO provisioning and detection tooling. Creating an AI governance committee as a workaround for named ownership is a mistake — committees create diffused accountability and slower decisions.
How do I design an AI enablement programme that makes sanctioned tools more attractive than shadow alternatives?
Apply product thinking to internal tooling. The provisioning process should be faster than signing up for a personal account. The approved tool should cover the use cases employees actually need. Training should be short, practical, and immediately applicable. If employees are not using sanctioned tools, the programme needs redesign — not tighter enforcement.
Can AI governance be automated, or does it always require human oversight?
Detection and enforcement can be substantially automated: DLP/DDR tools monitor data flows; CASB platforms enforce tool access policies; SSO provisioning gates access based on training completion. What cannot be automated: use case risk classification, high-risk tier approvals, incident response, and policy updates. These require named human accountability.
What metrics should I track to know if our shadow AI programme is working?
Four leading indicators: (1) Approved tool adoption rate — are employees using sanctioned tools? (2) Shadow AI detection rate — are new unauthorised tools appearing less frequently? (3) Policy exception request volume — a rising trend signals the permitted list is too short or the policy too restrictive. (4) Training completion rate — is certification keeping pace with headcount growth? Avoid using policy existence as a success metric — that measures governance theatre, not governance effectiveness.
For a complete overview of the enterprise AI governance gap — from operating model design to accountability structures to regulatory obligations — see What AI Governance Actually Requires and Why Most Policies Fall Short.