Business | SaaS | Technology
Mar 10, 2026

Shadow AI in Mid-Market Companies — Why the Exposure Is Disproportionate

AUTHOR

James A. Wondrasek

Shadow AI is not spread evenly across organisations. If you’re running a company with 50 to 500 employees, your people are almost certainly using more unsanctioned AI tools per head than their counterparts at large enterprises. Reco AI’s 2025 State of Shadow AI Report found 269 unsanctioned AI tools per 1,000 employees at the smallest companies studied. The reasons are structural — fewer gatekeeping layers, faster adoption, and informal approval processes that let AI tools bed down before anyone notices. Meanwhile, accountability for AI risk is scattered across roles, with no clear mandate for the CTO who ends up holding the bag by default.

This article looks at why mid-market exposure is structurally different, who should own it, and how to sequence governance when resources are thin. If you want the broader context on the AI governance gap affecting most organisations, that’s worth reading first.

Why do smaller companies have more shadow AI per employee than large enterprises?

At the smallest companies studied, Reco AI found roughly 27% of employees actively using shadow AI without IT knowing about it. Mid-market companies sit in the worst spot — they’ve got the adoption speed of small companies without the control layers of large ones.

Large enterprises have centralised procurement, SaaS management platforms, and dedicated security teams that intercept unapproved tools before they take root. A 150-person company has none of that. No procurement gate, no SaaS security posture management, often no security team at all.

Microsoft WorkLab reports that 80% of employees at small and medium-sized companies bring their own AI tools — what gets called BYOAI. Someone on your team can sign up for ChatGPT, Otter.ai, or Perplexity AI and weave it into their daily workflow within days. No approval friction, no security assessment. OpenAI alone accounts for 53% of all shadow AI usage — more than the next nine platforms combined.

And the problem compounds. An employee starts using a new AI app in minutes, but it may take months for anyone to notice. Reco AI found some shadow AI tools had median usage durations exceeding 400 days without formal approval. After 100 days of continuous use, that tool is woven into how your business operates. Removing it is a business disruption, not just an IT task. Understanding what the governance gap is helps frame why this entrenchment matters.

Who actually owns AI risk when there is no dedicated CISO or CIO?

Acuvity’s 2025 State of AI Security Report shows the CIO holds AI security responsibility in 29% of organisations, the CDO in 17%, the CISO in just 14.5%. No single role dominates. And close to 40% have no managed governance structure at all.

At a 200-person company, there’s often no CIO, no CISO, and no CDO. The CTO handles technology decisions, a VP of Engineering manages the dev pipeline, and someone in finance or operations deals with compliance. AI governance falls into the gap between these roles. Nobody has the formal mandate, so nobody acts until something goes wrong.

The capacity gap makes it worse. McKinsey’s 2025 State of AI report found that 52% of large organisations have a dedicated generative AI team, compared to just 23% of small ones. Your CTO is simultaneously the technical leader, the de-facto security owner, and the AI risk accountable executive — without the budget, team, or formal authority for any of it.

You can’t rely on people coming forward, either. 52% of employees won’t voluntarily disclose their AI usage. And only 31% of organisations have formal AI policies at all. The accountability fragmentation isn’t a management failure — it’s a structural consequence of how mid-market companies are designed. This is the wider problem of AI policy without execution.

Once you accept that no single role will own AI governance cleanly, the question becomes how to distribute that responsibility.

Does centralised or federated governance work better at mid-market scale?

Centralised AI governance — one team handling tool approval, monitoring, and policy enforcement — works at enterprise scale where you can staff it. At sub-200 employee scale, centralised typically means one person doing everything. That collapses under its own weight.

The alternative is federated governance, where responsibility is distributed across team leads with lightweight central coordination. Here’s the thing: most direct managers already know about or approve the shadow AI their teams are using. The de facto governance at most mid-market companies is already federated — people just haven’t called it that. Formalising it is more realistic than imposing a centralised structure that doesn’t match how your company actually works.

In practice, engineering leads vet tools for their teams, product leads assess data handling, and the CTO provides the policy framework and escalation path. The trade-off is consistency — federated models are faster to deploy but harder to maintain uniformly.

Shadow AI thrives when governance is too heavy. If requesting a new AI tool means writing a 40-page document with dozens of appendices, teams will skip it. Nearly 60% of employees use unapproved AI tools at work. Your governance model has to be frictionless enough that people actually use it.

Start federated. It matches your headcount and your reality. Companies between 200 and 500 employees can layer in centralised policy and audit as the function matures.

Is building internal AI governance capability worth it, or should you buy?

McKinsey data confirms what you probably already suspect: only 23% of small organisations have a dedicated AI adoption team. Just 13% have hired AI compliance specialists. Everyone else is winging it.

Building internal governance — a dedicated team, custom policies, bespoke tooling — requires sustained investment that most 50 to 200-person companies simply can’t absorb. You probably don’t have a security team yet. Hiring a governance team before a security team doesn’t make sense.

Buying a SaaS governance platform gives you immediate visibility. Tools like Nudge Security and Varonis provide network monitoring, user activity tracking, and data discovery — capabilities that would require significant headcount to replicate internally.

Here’s the practical framework: buy tooling for discovery and monitoring, where speed matters. Build policy and process internally, where organisational context matters. The approval process — what gets approved, what gets rejected, how fast — that has to be yours. No vendor can build that for you.

At 50 to 200 employees, buy-first dominates. At 200 to 500, the balance shifts toward building a dedicated governance function. And doing nothing isn’t neutral — 98% of organisations already have employees using unsanctioned apps. So wherever you land, the next question is where to start.

What should you govern first when you cannot govern everything at once?

Most governance advice falls apart here. Everyone says “govern AI” but nobody tells you what to do first when you’ve got limited people and limited budget. Here’s a sequenced approach that works at mid-market scale.

Discovery comes first. If you skip straight to policy-writing, you’re governing blind — you don’t know which tools are in use, what data flows through them, or which teams are exposed. OAuth authorisation logs are your best starting point because most shadow AI tools authenticate via Google Workspace or Microsoft 365, leaving a visible trail. Browser extension audits and SaaS spend reports for unfamiliar vendors fill in the gaps.
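A first pass over those OAuth logs can be automated in a few lines. The sketch below assumes you have exported the token-activity report to a list of records; the field names and the AI-domain watchlist are illustrative assumptions, not a fixed schema — adapt them to whatever your identity provider actually exports.

```python
# Sketch: scan exported OAuth grant records for known AI tool domains.
# Record fields and AI_DOMAINS are illustrative assumptions.
AI_DOMAINS = {"openai.com", "otter.ai", "perplexity.ai"}

def flag_shadow_ai(grants):
    """Return (user, domain) pairs where an OAuth grant targets an AI domain."""
    flagged = []
    for g in grants:
        domain = g["app_domain"].lower()
        # Match the domain itself or any subdomain (e.g. api.openai.com)
        if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
            flagged.append((g["user"], domain))
    return flagged

sample = [
    {"user": "alice@example.com", "app_domain": "api.openai.com"},
    {"user": "bob@example.com", "app_domain": "slack.com"},
]
print(flag_shadow_ai(sample))  # [('alice@example.com', 'api.openai.com')]
```

Even this crude filter turns an undifferentiated authorisation log into a shortlist of users and tools worth a conversation.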

Risk classification follows discovery. Rank tools by data exposure severity. Tools handling customer PII, financial data, or proprietary code are higher priority than internal productivity tools. Reco AI found three apps with failing security grades — Jivrus, Happytalk, and Stability AI — for lacking encryption, MFA, and audit logging.
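Ranking by exposure severity can be as simple as scoring each discovered tool by the most sensitive data category it touches. The weights and category names below are illustrative assumptions — map them onto your own data classification scheme.

```python
# Sketch: rank discovered AI tools by data-exposure severity.
# SEVERITY weights and category names are illustrative assumptions.
SEVERITY = {"customer_pii": 5, "financial": 4, "source_code": 4, "internal_docs": 2}

def risk_score(tool):
    # A tool is as risky as the most sensitive category it handles
    return max(SEVERITY[c] for c in tool["data_categories"])

tools = [
    {"name": "transcription-app", "data_categories": ["customer_pii"]},
    {"name": "code-assistant", "data_categories": ["source_code", "internal_docs"]},
    {"name": "writing-helper", "data_categories": ["internal_docs"]},
]
ranked = sorted(tools, key=risk_score, reverse=True)
print([t["name"] for t in ranked])  # highest exposure first
```

The point isn’t the scoring model — it’s forcing an explicit ordering so your limited review capacity goes to the PII-handling tool before the grammar checker.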

A pre-approved tool list is the single most effective action. Create a vetted registry of sanctioned AI tools and channel employee adoption toward secure alternatives before risky tools become entrenched. When the official path is easy and meets employee needs, there’s less incentive to go rogue.
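The registry itself doesn’t need tooling to start — a lookup that routes each request to a status, and steers rejected tools toward a sanctioned alternative, is enough. Every entry below is a hypothetical example, not a recommendation.

```python
# Sketch: minimal pre-approved registry with redirects to sanctioned
# alternatives. All tool names are hypothetical examples.
APPROVED = {"chatgpt-enterprise", "copilot"}
ALTERNATIVES = {"free-transcriber": "copilot", "random-summariser": "chatgpt-enterprise"}

def check_tool(name):
    n = name.strip().lower()
    if n in APPROVED:
        return ("approved", n)
    if n in ALTERNATIVES:
        # Rejected, but with an immediate sanctioned substitute
        return ("use-alternative", ALTERNATIVES[n])
    return ("needs-review", n)

print(check_tool("Copilot"))           # ('approved', 'copilot')
print(check_tool("free-transcriber"))  # ('use-alternative', 'copilot')
```

The “use-alternative” branch is what channels adoption: the answer to a request is rarely a bare no, it’s a faster yes to something vetted.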

A lightweight approval process makes the list sustainable. The approval process must be faster than the time it takes an employee to sign up for a free AI tool. If your process is slower than the shadow path, shadow AI wins every time.

Formalised policy and training close the awareness gap. Write the AI acceptable use policy — only 31% of organisations have one. Include data handling boundaries, disclosure expectations, and the approved tool list. 58% of employees haven’t received formal training on safe AI use at work.

Ongoing monitoring catches what the other steps miss. Buy continuous monitoring at mid-market scale — don’t try to build it — and run periodic audits to catch new shadow AI adoption. You can measure governance effectiveness without enterprise tooling once this is in place.

The sequencing matters. Discovery and classification have to come before policy because you can’t write effective policy without knowing what you’re governing. The pre-approved list has to come before the approval process because employees need an immediate alternative.

How do you make the business case for AI governance investment before a crisis?

The financial argument has gotten concrete. IBM’s Cost of a Data Breach Report (2025) documents a $670,000 breach cost premium for organisations with high shadow AI exposure. And 97% of organisations that reported AI-related breaches lacked proper AI access controls. That $670K shadow AI premium is the number you put in front of your board.

Organisations treating governance as a strategic capability see a 30% ROI advantage over those treating it as a compliance afterthought. Frame governance as a cost-avoidance multiplier — tie it to customer trust, sales cycle impact (enterprise buyers will ask about your AI governance posture during SOC 2 and ISO 27001 reviews), and insurance premium reduction.

63% of organisations have no AI governance policies. The first quarter of shadow AI existence is the cheapest quarter to act.

Mid-market companies carry disproportionate shadow AI exposure because they sit between the informality of small companies and the control structures of large enterprises. Ownership is fragmented, tooling is absent, and the governance debt compounds with every quarter of inaction. But the path forward is sequenced, practical, and doesn’t require enterprise-scale resources. Start with discovery, publish a pre-approved list, and make the business case before a breach makes it for you. For context on the AI governance gap affecting most organisations and why mid-market exposure is disproportionate, start with the overview. If you’re ready to move from the business case to implementation, here’s how to actually execute governance at mid-market scale.

FAQ

Is the shadow AI problem the same for a 100-person FinTech as a 500-person SaaS company?

No. FinTech companies face regulatory obligations — SEC, SOC 2 — that make unsanctioned AI usage a compliance violation, not just a governance gap. SaaS companies see higher adoption velocity because engineering culture normalises self-service tooling. HealthTech carries the highest risk profile due to FDA requirements and EU AI Act high-risk classification. The problem varies by vertical, size, and regulatory exposure.

Can a CTO own AI governance without a dedicated AI team?

Yes, but only with a federated model. Set the policy framework, maintain the pre-approved tool list, and define the escalation path. Engineering and product leads handle tool review within their teams. The 77% of small organisations without a dedicated AI team (McKinsey 2025) still need governance — they just can’t centralise it.

What is the minimum viable AI governance programme for a 150-person company?

Three things: a pre-approved AI tool list (what employees can use), a lightweight approval process for new tools (how to request something not on the list), and a quarterly shadow AI discovery scan (what employees are actually using). One person can implement this without dedicated governance staff and produce measurable risk reduction within 30 days.

How many unsanctioned AI tools are employees typically using without IT knowledge?

Reco AI’s 2025 report found 269 unsanctioned AI tools per 1,000 employees at companies with 11 to 50 employees. Varonis reports 98% of organisations have employees using unsanctioned apps including AI tools. OpenAI/ChatGPT alone accounts for 53% of all shadow AI usage.

What is the biggest risk of shadow AI — data leakage, compliance violations, or operational dependency?

All three, but data leakage is the most immediate and measurable. Employees share customer PII, proprietary code, and financial data with external AI systems that have no contractual data handling obligations. IBM reports 97% of AI-related breaches lacked proper AI access controls. Compliance violations create legal exposure. Operational dependency creates removal risk.

How long before a shadow AI tool becomes too embedded to remove easily?

The entrenchment window starts at around 100 days of continuous use. After that, the tool is woven into daily workflows — data pipelines depend on its output, team processes assume its availability. Removing it becomes a migration project, not a quick switch. Early discovery within the first 90 days keeps switching costs manageable.

What does accountability fragmentation actually look like at a 200-person company?

The CTO owns infrastructure, the VP of Engineering owns the development pipeline, and operations or finance owns compliance reporting. AI governance touches all three but sits cleanly in none. Without a formal mandate, governance happens reactively — someone scrambles to write a policy after a client asks about AI data handling, or after an employee feeds customer data into an unsanctioned tool.

How do I discover shadow AI tools if I do not have a security team?

OAuth logs, browser extension audits, and SaaS spend reviews all work without dedicated security staff. If budget allows, a SaaS security posture management tool automates discovery continuously. Don’t rely on employee surveys as your primary method — 52% of employees won’t disclose AI usage voluntarily.

Does buying a SaaS governance tool actually solve the shadow AI problem?

No. A SaaS governance tool solves discovery and monitoring — it tells you which AI tools are in use and what data they access. It doesn’t solve the policy problem (what’s acceptable use), the ownership problem (who decides), or the cultural problem (employees adopt tools because approval processes are too slow). Buy for visibility. Build policy and process internally.

What happens if we just let employees use whatever AI tools they want?

Varonis data shows 98% of organisations already have employees doing exactly this. The consequence isn’t a single event — it’s a gradual accumulation of data exposure, compliance gaps, and operational dependencies on tools the company doesn’t control. IBM’s $670,000 breach cost premium for high-shadow-AI organisations quantifies the financial risk.

