The AI Operating Model — What Separates Governance Leaders from Laggards

Business | SaaS | Technology
Mar 10, 2026

AUTHOR

James A. Wondrasek

Most organisations have an AI strategy. Very few have an AI operating model — the actual architecture of governance, data, people, and process that determines whether your AI investment turns into business outcomes or just burns cash.

And the gap between the ones who get it right and the ones who don’t is getting wider. BCG’s Build for the Future 2025 research found a five-fold revenue gap between the top 5% of firms and everyone else, along with three times the cost reductions. Meanwhile, 60% of companies are getting almost nothing back despite spending real money.

This article looks at what separates governance leaders from laggards at the operating model level. We’ll give you a diagnostic framework for assessing your governance maturity and help you work out whether centralised or federated governance fits your situation. For the bigger picture on the AI governance gap this operating model closes, start with the pillar article. For tactical execution mechanics, we’ll point you to the companion piece on operationalising governance.

What does the data actually say separates AI governance leaders from laggards?

Three independent research programmes — BCG, TEKsystems, and Deloitte — all land on the same finding. The difference between AI leaders and laggards is operating model maturity. Not tooling. Not budget. Not talent access.

BCG’s Build for the Future 2025 study is the clearest. Only 5% of companies qualify as “AI future-built.” Another 35% are scaling AI. The remaining 60% are reaping minimal returns despite real investment. The future-built firms do spend more on IT and dedicate more of that budget to AI — but the spending differential is not what separates them. It is how the spending is structured.

TEKsystems backs this up from a workforce angle. Their 2026 State of Digital Transformation report shows digital leaders are 2.5 times more confident their investments will meet ROI expectations. Enterprise-wide AI adoption has doubled year over year — 24% of organisations in 2026, up from 12% in 2025. Among leaders, 38%. Among laggards, 9%.

Deloitte’s 2026 State of AI in the Enterprise report frames the challenge as “ambition to activation.” Only 34% of organisations are genuinely reimagining their businesses with AI. The rest are surface-level. And here is the telling detail: 42% believe their strategy is highly prepared for AI, but feel less prepared on infrastructure, data, risk, and talent. Strategically ready, operationally unsure. Sound familiar?

These are independent data sets from different methodologies arriving at the same conclusion. You should be able to locate your organisation on this curve — and chances are you are further from the top than you think.

What are the signals that distinguish AI-serious organisations from performative ones?

BCG identifies structural markers that separate future-built companies from everyone else. Not just outcomes — observable design choices you can check against your own setup.

Signal one: executive ownership proximity. AI-serious organisations place AI strategy under direct CEO or CTO oversight. Not middle management. Not IT operations. This is a structural commitment that says AI is a strategic priority, not a cost centre experiment. Deloitte backs this up: enterprises where senior leadership actively shapes AI governance achieve significantly greater business value.

Signal two: data treated as a strategic asset. Future-built companies define data policies, maintain inventories, and codify data governance into operating standards. Data governance is the foundation the AI operating model sits on. Without it, everything else is built on sand.

Signal three: broad definition of AI scope. Leaders define AI broadly — automation, ML, analytics, agentic systems — not narrowly as “chatbots” or “generative AI.” This makes sure governance covers the full portfolio, not just the visible tip.

IBM adds a reality check. Nearly 74% of surveyed organisations have only moderate or limited AI governance coverage. Only 23.8% have comprehensive frameworks. PwC confirms that operationalisation — turning principles into repeatable processes — is the hurdle most executives point to.

Here is your quick self-assessment. Can you answer yes to all three? AI strategy owned at C-level? Data treated as a governed strategic asset? AI scope covering the full portfolio? If not, further governance investment is premature; fix these structural gaps first.
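As a toy illustration, that self-assessment can be written down as a short script. The questions paraphrase BCG's three markers; the function name and all-or-nothing scoring are ours, not from the research.

```python
# Toy self-assessment for the three structural signals (paraphrased from
# BCG's markers; the scoring scheme is illustrative, not from the source).
SIGNALS = [
    "AI strategy owned at C-level (CEO/CTO)?",
    "Data treated as a governed strategic asset?",
    "AI scope covers the full portfolio (automation, ML, analytics, agents)?",
]

def is_structurally_serious(answers: list[bool]) -> bool:
    """All three signals must be present; any 'no' means fix that first."""
    return len(answers) == len(SIGNALS) and all(answers)

print(is_structurally_serious([True, True, False]))  # one gap -> False
```

The point of the all-or-nothing check is the article's point: two out of three is not a passing grade, because each signal is a precondition for the others paying off.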

How does a multi-element AI framework apply to governance — not just AI strategy?

The three signals tell you whether you are structurally serious about AI. The next question is whether your operating model integrates the components to actually act on it.

BCG, Deloitte, and TEKsystems all identify the same structural components: strategy, talent, operating model, technology, data, and adoption at scale. These work as an integrated system where governance is the connective tissue — not as independent workstreams you can pick and choose from.

When elements are disconnected, things go wrong in predictable ways. Strategy without governance produces pilot purgatory — that graveyard of proof-of-concepts that never scale. Technology without data governance produces shadow AI. Talent without an operating model produces individual productivity gains that never compound into anything useful.

Walk through each element with a governance lens:

Strategy. Is governance embedded, or bolted on as compliance? If it only appears in your risk register, it is bolted on.

Talent. Do you have governance-literate people, or just AI-literate people? Nine in ten organisations face skills gaps in AI, ML, and cybersecurity. That gap hits governance hardest because governance requires cross-functional understanding — it is not something you can hand to one team and forget about.

Operating model. Is governance a repeatable process, or a one-off policy document gathering dust on SharePoint?

Technology. Does your stack include governance tooling, or just AI tooling? Leaders are building modular, cloud-native platforms with privacy, sovereignty, and security baked in from the start.

Data. Is data governance the foundation, or an afterthought? Organisations that codify data standards are getting more ROI from AI today.

Adoption and scaling. Can governance scale with AI adoption, or does it become a bottleneck?

When all six elements work together, the result is what BCG calls “enterprise as code” — capturing how a business operates as structured, machine-readable code instead of documents, spreadsheets, and tacit knowledge. When processes are explicitly defined, they can be tested, automated, and improved. Governance gets built into operating logic from the start, not bolted on afterwards.

How big is the gap between future-built AI firms and the median enterprise — and what does it mean?

The performance gap is not incremental. It is structural.

Future-built firms expect twice the revenue increase and 40% greater cost reductions by 2028 compared to laggards. They reinvest AI returns into stronger capabilities. The gap accelerates.

Organisations treating governance as a strategic capability see a 30% ROI advantage over those treating it as compliance overhead. That is not a rounding error. That is governance paying for itself versus governance being a cost centre.

And the downside of weak governance is concrete. IBM data shows 20% of AI-related breaches involve shadow AI, at an average cost of $670K per incident. The average data breach cost sits at $4.45 million. That is the kind of number that gets attention in a board meeting.

Agentic AI is widening the gap further. BCG projects AI agents account for 17% of total AI value in 2025, rising to 29% by 2028. A third of future-built companies already use agents, compared with 12% of scaling companies and almost none of the lagging 60%. Without governance frameworks for autonomous agents, you cannot safely deploy them — and that locks you out of a growing share of AI value.

The compounding effect is the part that should worry you. Leaders reinvest returns. Laggards cannot, because there are no returns to reinvest. Every quarter without a functioning operating model widens the gap. It is not a problem that fixes itself.

How do you assess your governance maturity honestly — and what does each level require to move up?

Governance maturity is not binary. It progresses through levels, and where you sit determines what to do next.

Here is a four-level framework synthesised from IBM, PwC, and Agility at Scale research:

Ad hoc. No formal governance. AI use is untracked. Shadow AI everywhere. You cannot answer “how many AI systems are in production?” If this sounds like your situation, you are in the majority — IBM’s data suggests roughly three-quarters of organisations are here or one level up.

Managed. Policies exist. You can audit which tools are in use. But governance is manual, reactive, and dependent on individual effort. You have a policy document, but you cannot prove it actually works.

Measured. Governance effectiveness is quantifiable. You can demonstrate compliance to a board. Risk assessments are tiered and systematic. Companies at this stage are 1.5 to 2 times more likely to describe their responsible AI capabilities as effective.

Optimised. Governance is embedded in the AI lifecycle. Automated monitoring, bias testing, compliance reporting. Enterprise-as-code principles are operational. This is the end state.

The Managed-to-Measured transition is where most organisations stall. PwC’s data maps directly: half of executives cite operationalisation as their biggest hurdle.

So what does crossing that threshold actually require? An AI governance intake system — a centralised mechanism that captures all AI initiatives, categorises returns, and assigns risk profiles. Tiered risk assessment, so low-risk systems get streamlined review while high-risk systems get comprehensive assessment. Lifecycle monitoring. And the ability to report governance ROI to leadership.
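To make the intake-and-tiering mechanism concrete, here is a sketch of an intake record and a routing rule. The tier names, risk factors, and thresholds are illustrative assumptions; the article specifies the mechanism, not the exact rules.

```python
from dataclasses import dataclass

# Sketch of a governance intake record with tiered routing, assuming a
# simple three-tier scheme. Field names and rules are illustrative only.
@dataclass
class AIInitiative:
    name: str
    handles_personal_data: bool
    autonomous_actions: bool   # agentic: acts without human review
    customer_facing: bool

def review_track(init: AIInitiative) -> str:
    """Route high-risk systems to comprehensive review and low-risk ones
    to streamlined review, so governance scales without bottlenecking."""
    if init.autonomous_actions or (init.handles_personal_data and init.customer_facing):
        return "comprehensive"
    if init.handles_personal_data or init.customer_facing:
        return "standard"
    return "streamlined"

bot = AIInitiative("support-chatbot", True, False, True)
print(review_track(bot))  # comprehensive
```

The design choice worth noting: the routing logic is explicit and auditable, which is exactly what lets you demonstrate governance effectiveness to a board — the Managed-to-Measured threshold.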

Here is the good news: governance benefits compound. Year one costs are front-loaded. Years two and three show accelerating returns. If you want to know how to measure whether this operating model is performing, that is the measurement article’s territory.

Centralised or federated governance: which operating model structure works for your context?

Centralised versus federated governance is a structural architecture decision. The right answer depends on your size, regulatory exposure, AI portfolio complexity, and engineering culture.

Covasant identifies three canonical structures:

Centralised. One governance function owns all AI oversight. Best for companies early in AI maturity, regulated industries, and smaller organisations. The trade-off: business units may find the central team slow to respond.

Federated. Each business unit owns governance execution with light-touch central oversight. Best for organisations with high AI maturity and engineering cultures that can self-govern within guardrails. The trade-off: harder to enforce, inconsistent integration.

Hybrid (the governance spine). A central team defines standards, risk frameworks, and audit processes. Business units execute within those guardrails. This is the model most future-built companies converge on.

PwC’s data supports the hybrid direction — 56% of executives say first-line teams now lead responsible AI efforts. That puts responsibility closer to where decisions are made, which is federated execution within centralised standards.

What must be centralised regardless: risk classification, compliance reporting, policy definition. What can be federated: use-case-specific risk assessment, tool selection within approved categories, operational monitoring.

The practical advice is simple. Start centralised. At your scale, one governance function can cover the portfolio without bottleneck risk. But document the governance spine from day one — central standards, risk frameworks, audit processes — so when complexity grows, you can federate without rewriting foundations. For how to operationalise this operating model with specific mechanisms, that is what the execution article covers.

Conclusion

The AI operating model is the structural layer that determines whether AI investment compounds or evaporates in pilot purgatory. The evidence from BCG, IBM, Deloitte, PwC, and TEKsystems points the same way: leaders and laggards are separated by operating model maturity.

Assess your governance maturity honestly. Design the governance spine. Plan for federation as complexity grows. To understand what the AI governance gap looks like at operating-model level, start with the pillar overview. To operationalise this operating model with specific mechanisms, move to the execution companion. And to measure whether this operating model is performing, the measurement article lays out the metrics.

FAQ Section

What is the difference between an AI strategy and an AI operating model?

An AI strategy defines what you want to achieve with AI. An AI operating model defines how the organisation is actually structured to deliver it — governance, data, people, process, and technology architecture that converts strategy into repeatable execution. You can have a strategy without an operating model, but you cannot scale AI without one.

Can a mid-market company (50-500 employees) reach governance leader status?

Yes. BCG’s future-built characteristics are structural, not scale-dependent. Executive ownership proximity, data-as-strategic-asset, and broad AI scope definition do not require enterprise budgets. If anything, smaller scale means faster decision cycles and less organisational inertia working against you.

How do you assess governance maturity without paying for a consulting engagement?

Use the four-level framework — Ad hoc, Managed, Measured, Optimised — and answer three questions. Can you list every AI system in production? Can you demonstrate governance effectiveness to a board? Can governance scale without manual intervention? IBM’s data showing only 23.8% with comprehensive frameworks gives you the calibration point. If you can answer yes to all three, you are ahead of most.

Why do employees use unauthorised AI tools even when company policies exist?

Shadow AI emerges when governance blocks legitimate use without offering sanctioned alternatives. The operating model answer is to provide approved tools with clear acceptable use policies, fast provisioning, and visible guardrails — making the sanctioned path easier than the shadow path. If your people are going around governance, governance is the problem.

What does “enterprise as code” mean in practical terms?

Enterprise as code (BCG, December 2025) means capturing your organisation’s operating logic — processes, decision rules, workflow sequences — as structured, machine-readable code rather than documents or tacit knowledge. When processes are explicitly defined, they can be automated, measured, and continuously improved. Think of it as infrastructure-as-code, but for how your business actually runs.

How much does weak AI governance actually cost?

IBM data shows 20% of AI-related breaches involve shadow AI at an average cost of $670K per incident. The average data breach costs $4.45 million. On the flip side, organisations with mature governance see a 30% ROI advantage over those treating governance as pure compliance overhead. So weak governance costs you coming and going.

What is the most common point where governance maturity stalls?

The Managed-to-Measured transition. Most organisations can write policies and audit tool usage. Far fewer can quantify governance effectiveness or connect governance costs to business outcomes. PwC confirms it — half of executives cite operationalisation as their biggest hurdle.

How does agentic AI change the governance operating model?

Agentic AI — autonomous systems taking multi-step actions without human intervention — requires governance for delegation of authority, action boundaries, and automated oversight. BCG projects agents will account for 29% of AI value by 2028. Without agent-specific governance, you cannot safely deploy them. And if you cannot deploy them, you are locked out of a growing share of AI value.

Should governance be centralised or federated for a 200-person SaaS company?

Start centralised. At 200 employees, one governance function can oversee the portfolio without bottleneck risk. Document a governance spine from the start so you can federate execution later without rewriting foundations. You will know it is time to federate when the central team becomes a bottleneck — not before.

What does a governance intake system look like?

A centralised mechanism that captures all AI initiatives, categorises expected returns, assigns risk profiles, and tracks governance requirements across the portfolio. It is the infrastructure that makes governance repeatable — and the key piece required to move from Managed to Measured maturity. Without it, you are flying blind.

How do future-built companies treat data differently from laggards?

They define data policies, maintain comprehensive inventories, and codify data governance into operating standards. BCG’s enterprise-as-code research makes it explicit: organisations that codify data policies are getting more ROI from AI today. Without data governance as the foundation, the rest of the operating model has nothing solid to stand on.

Does AI governance slow down innovation?

PwC’s data shows the opposite: nearly 60% of executives report that responsible AI practices boost ROI and efficiency. Governance-mature organisations innovate faster because they deploy with confidence, scale without rework, and avoid costly remediation from ungoverned deployments. Done right, governance is an accelerator, not a brake.
