How to Design AI Governance That Enables Speed Instead of Killing It

Business | SaaS | Technology
Apr 27, 2026

AUTHOR

James A. Wondrasek

You invested in GitHub Copilot. Maybe Cursor too. Your developers are generating more code than ever. You expected that to show up in throughput. It hasn’t — not proportionally. And somewhere in the post-mortem, governance got blamed.

But governance design is the real issue, not governance itself. Most AI governance inherited its DNA from enterprise compliance frameworks built for organisations with dedicated governance teams, legal budgets, and change advisory boards. Applied to a 100-person SaaS company, it doesn’t scale down — it just blocks.

This article covers the minimum viable governance model for a 50–500 person tech company: what shadow AI actually costs you, why your change approval process is the real productivity bottleneck, how agentic AI changes the equation, and what a practical framework looks like without a governance team to run it. For context on where governance fits into the broader enterprise AI ROI gap, start with the companion article on proving the enterprise business case beyond the pilot.

Why does AI governance have such a bad reputation — and is any of it deserved?

The reputation is earned — but for the wrong version of governance. Heavyweight committee-approval models, waterfall-style review cycles, enterprise frameworks built for regulated industries — those genuinely slow teams down at SMB scale. That part is deserved.

The undeserved part: governance infrastructure embedded into delivery pipelines creates no friction at all. When policy-as-code runs your security scans and automated gates check quality before merge, no human is waiting in a queue. The governance is happening — it’s just invisible.

That’s the distinction that matters: governance by design versus governance by exception. Governance by exception means humans review things reactively, in batches, after the fact. Governance by design embeds controls into the process — clear policies, fast-track paths for low-risk work, automated controls for the high-risk stuff.
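To make governance by design concrete, here is a minimal sketch of an automated pre-merge policy gate, assuming a CI job where a non-zero exit blocks the merge. The batch-size threshold, secret pattern, and git-based checks are illustrative, not prescribed values.

```python
# A minimal sketch of an automated pre-merge policy gate, assuming a CI job
# where a non-zero exit blocks the merge. Thresholds and patterns are
# illustrative, not prescribed values.
import re
import subprocess
import sys

MAX_CHANGED_LINES = 400  # assumed batch-size limit; tune to your team's baseline
SECRET_PATTERN = re.compile(
    r"(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]+['\"]", re.IGNORECASE
)

def changed_lines(base: str = "origin/main") -> int:
    """Count added plus removed lines in the current branch versus base."""
    numstat = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(
        int(added) + int(removed)
        for added, removed, _ in (line.split("\t") for line in numstat.splitlines())
        if added.isdigit() and removed.isdigit()
    )

def possible_secrets(base: str = "origin/main") -> list[str]:
    """Flag added lines in the diff that look like hardcoded credentials."""
    diff = subprocess.run(
        ["git", "diff", base], capture_output=True, text=True, check=True,
    ).stdout
    return [l for l in diff.splitlines() if l.startswith("+") and SECRET_PATTERN.search(l)]

if __name__ == "__main__":
    failures = []
    if changed_lines() > MAX_CHANGED_LINES:
        failures.append(f"batch too large (>{MAX_CHANGED_LINES} changed lines); split the PR")
    if leaks := possible_secrets():
        failures.append(f"{len(leaks)} possible hardcoded secret(s) in the diff")
    if failures:
        print("\n".join(failures))
        sys.exit(1)  # non-zero exit blocks the merge; no human queue involved
    print("policy gate passed")
```

No committee meets, no ticket waits in a queue. The rules run on every merge, and a human only gets involved when a check fails.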

Get this right early and you earn the governance dividend: the speed advantage that accumulates when your infrastructure is in place before you need it. Competitors who retrofit governance after an incident lose that window. The existing enterprise frameworks — Databricks’ five pillars, IBM’s governance model — are worth understanding. But they’re not built for your scale.

What is shadow AI and why does it turn into a CFO problem faster than you expect?

Shadow AI is unsanctioned use of AI tools outside IT and security controls. It’s the AI-era evolution of shadow IT — but more dangerous, because AI agents reason on data rather than simply storing it.

It spreads faster than shadow IT because the barrier is lower (browser-based, no infrastructure), the benefit is immediate, and the governance surface is wider than most people realise. 77% of employees paste data into GenAI prompts, and 82% of those prompts come from unmanaged accounts. Every one is a potential data leakage event.

The CFO problem arrives in three stages: tool proliferation, then the cloud bill, then the compliance incident. Gartner expects over 40% of organisations to experience compliance or security incidents related to shadow AI by 2030.

Detection without a SIEM is practical. Review cloud billing for unrecognised AI subscriptions, run an OAuth grant audit, and survey developers directly. Ask which tools they find useful and whether the official approval process is actually feasible. Treat what you find as a product signal — if shadow AI is widespread, the approved process is too slow. Fix the process, not just the policy.
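As a starting point for the billing review, a small script like the following can flag AI-looking vendors that are not on your approved registry. The CSV column names, vendor keywords, and registry contents are assumptions; adapt them to whatever your finance export actually contains.

```python
# A small shadow AI scan over an exported billing or expense CSV. The column
# names ("vendor", "amount"), vendor keywords, and registry contents are
# assumptions; adapt them to your actual finance export.
import csv
from collections import defaultdict

AI_VENDOR_KEYWORDS = [
    "openai", "anthropic", "cursor", "perplexity", "midjourney",
    "replicate", "huggingface", "elevenlabs",
]
APPROVED_TOOLS = {"github copilot"}  # whatever your approved registry contains

def unsanctioned_ai_spend(billing_csv: str) -> dict[str, float]:
    """Total spend per vendor that looks like AI and is not on the registry."""
    spend: dict[str, float] = defaultdict(float)
    with open(billing_csv, newline="") as f:
        for row in csv.DictReader(f):
            vendor = row["vendor"].strip().lower()
            if vendor in APPROVED_TOOLS:
                continue
            if any(keyword in vendor for keyword in AI_VENDOR_KEYWORDS):
                spend[vendor] += float(row["amount"])
    return dict(spend)

if __name__ == "__main__":
    for vendor, total in sorted(unsanctioned_ai_spend("billing.csv").items(),
                                key=lambda kv: -kv[1]):
        print(f"{vendor:30s} {total:10.2f}")
```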

IBM’s AI License to Drive is one structural response worth knowing about: it requires employees to demonstrate governance literacy before deploying AI on enterprise infrastructure.

How does the change approval process silently destroy your AI coding productivity gains?

Here is the mechanism most teams miss. AI coding tools increase the volume of code a developer produces. More code means more changes queuing for review. If the approval process is unchanged, batch sizes balloon, committee reviews lengthen, and deployment frequency drops. The tool made the development step faster — the approval process created a larger bottleneck downstream.

The New Stack’s analysis establishes this directly. Faros AI telemetry shows AI increases PR size by 154%. Developers on high-AI-adoption teams complete 21% more tasks and merge 98% more pull requests — but PR review time increases 91%. Individual developers feel faster. The system measures slower.

The fix is structural, not tooling-driven. Adding an AI code review tool without reducing batch size shifts the bottleneck but doesn’t eliminate it. The Theory of Constraints applies: code review and change approval are the constraint. The solution is to reduce batch size and apply automation right there.
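If you want to confirm the constraint with your own data before restructuring anything, a rough sketch like this works against PR records exported from your Git host. The field names are assumptions based on typical API exports.

```python
# A rough check on whether review is your constraint, run against PR records
# exported from your Git host's API. The field names ("additions", "deletions",
# "created_at", "merged_at") are assumptions based on typical exports.
from datetime import datetime
from statistics import median

def _ts(s: str) -> datetime:
    return datetime.fromisoformat(s.replace("Z", "+00:00"))

def constraint_report(prs: list[dict]) -> dict:
    merged = [p for p in prs if p.get("merged_at")]
    sizes = [p["additions"] + p["deletions"] for p in merged]
    review_hours = [
        (_ts(p["merged_at"]) - _ts(p["created_at"])).total_seconds() / 3600
        for p in merged
    ]
    return {
        "prs_merged": len(merged),
        "median_pr_size_lines": median(sizes) if sizes else 0,
        "median_review_hours": median(review_hours) if review_hours else 0,
    }

# Compare the report month over month: if PR size and review hours climb
# together while the merge count stalls, the approval process is the bottleneck.
```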

Octopus Deploy’s lightweight change approval model is the benchmark: sign-off from team and direct manager only, few manual approvals, no cross-team committees, approvals captured in deployment tooling. Continuous Delivery demands approvals become a continuous part of the pipeline, not a periodic gate. For the full argument, see how governance design affects the pilot-to-production transition.

Why do AI agents need a fundamentally different governance model than traditional software?

Traditional software executes deterministic logic on a defined path. You approve it at deployment and the approval stands. AI agents plan, reason, and act autonomously across multiple steps — introducing non-deterministic behaviour that changes with every prompt. Every new capability granted to an agent expands the attack surface between deployments, without a new deployment event to trigger a review.

SailPoint research found 96% of enterprises acknowledge AI agents as a security risk. Only one in five companies has a mature governance model to oversee how AI is actually being used.

Agentic AI requires continuous oversight — live visibility into what each agent is doing and the ability to intervene, not auditing a deployment decision made weeks ago. When agents communicate with each other, the governance perimeter becomes the network, not the individual agent.

For an SMB without a full MLOps team, start with event logging at the agent action level. Then layer in human-in-the-loop review for decisions that always require it: irreversible actions, high-value financial transactions, any action that modifies governance or access controls. Human-on-the-loop oversight — monitoring without approving each step — covers lower-risk, high-volume processes. The governance challenge compounds when agents are operating without grounded operational context — see governing AI agents that lack operational context for why that gap creates outsized risk.
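A minimal sketch of that decision gate might look like the following. The action categories mirror the ones above; the approval hook is a placeholder you would wire to a ticket, chat approval, or pager.

```python
# Hypothetical decision gate for agent actions: irreversible or sensitive
# actions wait for explicit human approval; everything else is logged and
# proceeds under human-on-the-loop monitoring. The categories and the
# approval hook are assumptions to be wired into your own stack.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-governance")

REQUIRES_HUMAN_APPROVAL = {
    "delete_record",         # irreversible action
    "send_payment",          # high-value financial transaction
    "modify_access_policy",  # touches governance or access controls
}

def request_human_approval(agent_id: str, action: str, payload: dict) -> bool:
    """Placeholder: route to a ticket, chat approval, or pager. Never auto-approve."""
    raise NotImplementedError("wire this to your approval channel")

def gate_action(agent_id: str, action: str, payload: dict) -> bool:
    """Log every agent action; block the risky ones until a human approves."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "payload": payload,
    }
    log.info(json.dumps(record))  # action-level audit trail, structured and queryable
    if action in REQUIRES_HUMAN_APPROVAL:
        return request_human_approval(agent_id, action, payload)
    return True  # human-on-the-loop: proceed, but stay visible in the logs
```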

How do you implement AI observability before you have a dedicated monitoring team?

AI observability does not require a dedicated MLOps team. It requires structured logging at the model interaction level, a defined set of metrics, and a documented escalation path. Start with logging before monitoring: capture every model input, output, and decision in a structured format you can query. That creates the audit trail before you have the monitoring infrastructure.
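As an illustration of logging before monitoring, a thin wrapper around your model calls is usually enough to start. The provider call signature and the JSONL destination here are assumptions; the point is that every interaction leaves a queryable record.

```python
# A thin wrapper so model-interaction logging cannot be skipped. The provider
# call signature (returning response text and token count) and the JSONL
# destination are assumptions; swap in your actual client.
import json
import time
import uuid
from datetime import datetime, timezone

LOG_PATH = "model_interactions.jsonl"

def log_interaction(model: str, prompt: str, response: str,
                    latency_ms: float, tokens: int, user: str) -> None:
    """Append one structured, queryable record per model call."""
    entry = {
        "id": str(uuid.uuid4()),
        "ts": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "user": user,
        "prompt": prompt,
        "response": response,
        "latency_ms": round(latency_ms, 1),
        "tokens": tokens,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

def governed_call(call_model, model: str, prompt: str, user: str) -> str:
    """Wrap any provider call so every input, output, and decision is captured."""
    start = time.monotonic()
    response, tokens = call_model(model, prompt)  # assumed signature
    log_interaction(model, prompt, response,
                    (time.monotonic() - start) * 1000, tokens, user)
    return response
```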

There are three layers to build towards. Model-level covers accuracy, drift, and hallucination rate — start here. Data-level covers lineage, access, and sensitivity classification. Infrastructure-level covers cost, latency, and token consumption. Existing APM tooling is sufficient to get the model layer running. Databricks’ Unity Catalog automates data lineage and access tracking at the platform layer.

The staffing constraint resolves through platform-embedded compliance: build observability into the provisioning process so developers cannot bypass logging. Once baselines are established, thresholds encode as policy-as-code — automated policies that trigger alerts or block deployments without manual review. That’s how an SMB achieves governance coverage without a governance team, and how governance connects to the broader AI ROI accountability challenge.
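Here is an illustrative sketch of what those encoded thresholds might look like once baselines exist. The metric names, limits, and actions are placeholders, not recommended values.

```python
# Illustrative policy-as-code thresholds: once baselines exist, encode them as
# data and let a scheduled job or CI step act on them. The metric names,
# limits, and actions are placeholders, not recommended values.
THRESHOLDS = {
    "hallucination_rate": {"max": 0.02,  "action": "block_deploy"},
    "p95_latency_ms":     {"max": 2000,  "action": "alert"},
    "daily_token_spend":  {"max": 500.0, "action": "alert"},
}

def evaluate(metrics: dict[str, float]) -> list[dict]:
    """Compare observed metrics against policy and return any violations."""
    violations = []
    for name, policy in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > policy["max"]:
            violations.append({"metric": name, "value": value, **policy})
    return violations

# Run from CI or a cron job: "block_deploy" violations fail the pipeline,
# "alert" violations go to the on-call channel.
print(evaluate({"hallucination_rate": 0.035, "p95_latency_ms": 1200}))
```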

Centralised vs. distributed AI governance — which works at your scale?

Centralised governance — all AI decisions routed through a single committee — is the default enterprise model and the architecture that creates every bottleneck described above. For companies under 500 people, it is almost always the wrong choice.

Distributed governance is the right model at SMB scale, with one condition: platform-level guardrails must be in place before you delegate authority. Certified team leads hold approval authority within their domain, guided by automated controls. The guardrails replace the committee.

IBM’s AI fusion team model demonstrates this in practice. Their AI License to Drive certification ensures anyone deploying AI agents understands data privacy, security protocols, and integration risks before they build.

Here’s the practical split. New data source integrations, external-facing agents, and governance policy changes stay centralised. Tool selection within approved categories, prompt engineering, and fine-tuning on internal data are appropriate for team-level approval once leads are certified. Start centralised, then delegate as guardrails are proven — see the operating model context for governance decisions.
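Once approval requests come in through a form or chat-ops, that split can be encoded as a simple routing table. The categories below follow the paragraph above; the routing function itself is an assumption.

```python
# The centralised/distributed split as a routing table. Categories follow the
# paragraph above; the routing function itself is an assumption.
CENTRAL_APPROVAL = {
    "new_data_source_integration",
    "external_facing_agent",
    "governance_policy_change",
}
TEAM_LEAD_APPROVAL = {
    "tool_selection_within_approved_category",
    "prompt_engineering",
    "fine_tuning_on_internal_data",
}

def approval_route(category: str, lead_is_certified: bool) -> str:
    """Route a request to the right approver; anything unknown escalates."""
    if category in CENTRAL_APPROVAL:
        return "central"
    if category in TEAM_LEAD_APPROVAL and lead_is_certified:
        return "team_lead"
    return "central"  # safe default: uncertified leads and unknown categories escalate
```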

What is the minimum viable governance framework for a 50–500 person tech company?

Minimum viable governance is not a scaled-down enterprise framework. It is a sequenced set of capabilities that prevents the three highest-probability failure modes — shadow AI proliferation, change approval bottleneck, ungoverned agentic deployments — without requiring a governance team.

Phase One — the governance floor (months one to three)

Start with visibility. A cloud billing audit, an OAuth grant audit, and a direct developer survey give you the baseline picture in under a week. From there: an approved AI tools registry with a fast-track approval path (target under two weeks — a slow approval process is itself a shadow AI generator), lightweight change approval aligned with the Octopus Deploy model, and structured activity logging for all production AI workloads.
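The tools registry does not need a product behind it; a reviewable file in a repo plus an SLA check is enough for Phase One. This sketch uses the sub-two-week fast-track target from above; the entries and field names are illustrative.

```python
# The approved-tools registry as a reviewable file plus an SLA check. The
# sub-two-week fast-track target comes from the text above; entries and field
# names are illustrative.
from datetime import date, timedelta

REGISTRY = [
    {"tool": "GitHub Copilot", "status": "approved", "requested": date(2026, 1, 5)},
    {"tool": "Cursor",         "status": "pending",  "requested": date(2026, 4, 20)},
]

FAST_TRACK_SLA = timedelta(days=14)

def overdue_requests(today: date) -> list[str]:
    """Pending requests older than the SLA are themselves a shadow AI generator."""
    return [
        r["tool"] for r in REGISTRY
        if r["status"] == "pending" and today - r["requested"] > FAST_TRACK_SLA
    ]

# Example: run weekly and post the result where the team can see it.
print(overdue_requests(date.today()))
```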

Phase Two — the governance scaffold (months three to six)

AI competency certification for anyone deploying agents — modelled on IBM’s AI License to Drive. Agentic system decision gates: define which actions require human-in-the-loop review and which operate on human-on-the-loop oversight. Observability dashboards with threshold alerts. Policy-as-code automation for the most common approval decisions.

The governance team question has a direct answer: this model is designed to operate without one. Ownership distributes to certified team leads, embeds in the platform, and stays visible through the observability dashboard.

On the CFO argument: the cost of non-compliance averages $14.82 million versus $5.47 million for compliance. The governance floor is the prevention cost. The governance dividend — scaling AI deployments faster than competitors who are retrofitting governance after incidents — is the revenue-timing argument. For the complete picture, see the five root causes of AI value failure.

Frequently Asked Questions

How do I know if shadow AI is already happening in my organisation?

Check cloud billing for unrecognised AI subscriptions and API charges. Run an OAuth grant audit. Then survey your developers: ask which tools they find useful and whether the official approval process is actually feasible. Treat what you find as a product signal — if shadow AI is widespread, the approved process is too slow.

What is the minimum governance a company our size actually needs?

An approved AI tools registry with a sub-two-week fast-track path, a lightweight change approval model (small batches, no cross-team committees), and a decision gate requiring human review before any agent takes an irreversible action. Everything else is Phase Two.

Do I need a separate AI governance committee or can this be part of engineering?

For companies under 500 people, a separate AI governance committee is almost always the wrong architecture — it creates a single-threaded bottleneck. The better model: engineering team leads hold AI approval authority in their domain, guided by platform-embedded guardrails. A CTO-level owner with a documented policy and quarterly review cadence gives you accountability without adding a new bureaucratic layer.

What is the difference between shadow AI and shadow IT?

Shadow IT stores data outside approved systems — the risk is data sprawl and licence compliance. Shadow AI is more dangerous because AI agents reason on data, generate outputs from it, and take actions based on it. Under the EU AI Act, organisations are responsible for AI processing on their behalf regardless of whether it was authorised.

How does policy-as-code actually reduce governance overhead?

It encodes governance rules as automated checks in the deployment pipeline. Instead of a human checking whether a deployment meets requirements, the pipeline rejects non-compliant deployments automatically. Governance scales with deployment frequency without adding headcount.

What DORA metrics should I track to prove governance is enabling speed?

Track three. Deployment Frequency increases as small-batch reform takes hold. Change Failure Rate decreases as automated quality gates replace manual review. Mean Time to Restore shows whether your observability is working. DORA benchmarks for elite teams: multiple deployments per day, change failure rate 0–15%, MTTR under one hour.
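If your deployment tooling can export a list of deployments, the three metrics reduce to a few lines. The record fields assumed here ("deployed_at", "failed", "restored_at") are illustrative.

```python
# A minimal sketch computing the three DORA metrics from exported deployment
# records. Assumed fields: "deployed_at", "failed" (bool) and, for failed
# deploys that were remediated, "restored_at". Adapt to your tooling's export.
from datetime import datetime
from statistics import mean

def _ts(s: str) -> datetime:
    return datetime.fromisoformat(s.replace("Z", "+00:00"))

def dora_summary(deployments: list[dict], days: int = 30) -> dict:
    failures = [d for d in deployments if d.get("failed")]
    restore_hours = [
        (_ts(d["restored_at"]) - _ts(d["deployed_at"])).total_seconds() / 3600
        for d in failures if d.get("restored_at")
    ]
    return {
        "deploys_per_day": len(deployments) / days,
        "change_failure_rate": len(failures) / len(deployments) if deployments else 0.0,
        "mean_time_to_restore_hours": mean(restore_hours) if restore_hours else None,
    }
```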

At what point does agentic AI require a fundamentally different governance approach?

When an AI system can take sequences of actions affecting external systems, financial accounts, customer data, or other agents without a human reviewing each action. The governance question to ask at deployment: “Can we see what this agent is doing right now and stop it if needed?”

Can I implement these governance changes without disrupting current delivery velocity?

Yes, but sequencing matters. Start with the friction-reducing changes — lightweight change approval reform and the approved tools registry. Phase in logging requirements and competency certification after those are live. Platform-embedded compliance adds governance coverage without adding developer workload.

How do I make the case to the CFO for investing in governance infrastructure?

Frame it as risk management: the cost of a shadow AI incident is the benchmark, and the governance floor is the prevention cost. Add the revenue-timing argument: governance infrastructure enables faster, lower-rollback AI deployments than competitors retrofitting after incidents. Then connect to the AI ROI gap: if AI coding tools aren’t improving throughput proportionally, a change approval bottleneck is the most likely cause.

What is the governance dividend and how do I quantify it?

The governance dividend is the competitive speed advantage from building governance before scaling AI — the ability to sustain deployment velocity when competitors pause for security reviews or shadow AI remediation. Quantify it using DORA metrics before and after governance reform: the dividend shows up as sustained throughput, not decline.

What is pilot purgatory and how does governance cause it?

Pilot purgatory is when AI proofs-of-concept demonstrate value but cannot reach production because change approval or operating model friction blocks deployment. The exit is governance reform: lightweight approval chains, automated quality gates, small-batch deployment practices.

Where do I start if I have no governance infrastructure at all?

Start with visibility: the detection approaches in the shadow AI section give you a baseline picture in under a week. Second, the change approval audit: map your current process, measure batch sizes and cycle times against DORA benchmarks, and identify whether change approval is already a bottleneck. In most organisations it is — fixing it delivers immediate returns before anything more sophisticated is in place.
