Business | SaaS | Technology
Mar 30, 2026

How to Build an AI Operating Model That Goes Beyond Policy Documents

AUTHOR

James A. Wondrasek

Most organisations trying to get serious about AI governance have a policy document. Very few have an operating model. That distinction matters more than it sounds — it is the difference between governance on paper and governance that actually runs.

Here’s the quickest test: if your team can’t answer “who can stop this AI system right now?” in ten seconds, you don’t have an operating model. You have an experiment with a document attached to it. This guide is part of our comprehensive AI governance gap overview, which covers the full landscape from shadow AI diagnosis through to regulatory compliance. This article focuses on the operating model design choices themselves.

What does an AI operating model actually include — and what is it not?

An AI operating model is the organisational structure that connects your AI strategy, your AI policy, and your technology stack to actual execution. It is not any one of those three things on its own.

Here’s the three-layer distinction worth getting clear on. Your AI strategy answers what and why. Your AI policy answers what is permitted. Your AI operating model answers how it actually runs — who owns it, what the approval process looks like, who has stop authority, and how investment decisions get made. According to Databricks, “Enterprise AI readiness is ultimately an operating model decision.” Strategy and policy are inputs. The operating model is the machinery.

Every operating model has five structural components: named ownership of AI systems and outcomes; approval, deployment, monitoring, and decommission processes; an AI asset inventory as the visibility foundation; a portfolio management discipline for investment decisions; and governance structures calibrated to your current maturity level.

What it is not: a team name, a set of principles, or a project plan. Alation’s framework requires three layers to function simultaneously — the knowledge layer, the process layer, and the ownership layer. Drop any one of them and you end up with governance theatre: activity that generates documentation without providing any real oversight.

McKinsey finds fewer than 10% of AI use cases make it out of pilot mode or materially influence P&L outcomes. IBM puts it plainly: successful implementation and scaling of enterprise AI is fundamentally a people and operating model challenge, not a technology challenge.

Why does operating model design matter more than model selection?

Organisations consistently over-invest in model evaluation and under-invest in ownership design. The governance failures that follow are not technology failures — they are ownership problems.

Without clear ownership, AI projects pile up. They persist past their useful life because nobody has stop authority. They duplicate effort because nobody has a portfolio view. When incidents happen, the response is ad hoc because the accountability structure doesn’t exist.

Bain’s research on enterprise AI transformation found that assigning accountability to general managers rather than IT leadership is one of the distinguishing factors in organisations achieving meaningful EBITDA gains from AI. Governance authority is most effective when it sits where business outcomes are owned.

The policy document trap is common: 54% of IT leaders say ensuring AI solutions comply with governance regulations is a top priority for the next 12 months. A policy tells people what they should do. An operating model determines who does it, who checks it, and who stops it when it fails.

For a 50–500 person SaaS company, the gap is sharper because there is no default organisational structure for AI governance. Unlike large enterprises that inherit governance structures from regulated industries, a mid-market tech company has to build it from scratch. Nobody designs it deliberately, so it does not exist.

What is the AI asset inventory and why is it the first step?

You cannot govern AI you cannot see. The AI asset inventory is a living register of all AI systems, models, datasets, integrations, and shadow AI deployments across your organisation. It is not a one-time audit — it is the foundational visibility layer that everything else depends on.

The shadow AI discovery step is where most organisations get uncomfortable. Nearly 60% of employees use unapproved AI tools at work, feeding sensitive company information to unsanctioned products. Shadow AI incidents account for 20% of all breaches, and 27% of organisations report that more than 30% of their AI-processed data contains private information — customer records, trade secrets, financial data. Building the inventory will surface this. That discomfort is the point.

A minimum viable inventory needs seven fields: AI tool name and vendor; business unit using it; data inputs and outputs; business process affected; approval status (sanctioned / unsanctioned / under review); named owner; last review date.
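
To make those seven fields concrete, here is a minimal sketch of an inventory record as a small Python data structure. The field names, enum values, and the 90-day overdue check are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class ApprovalStatus(Enum):
    SANCTIONED = "sanctioned"
    UNSANCTIONED = "unsanctioned"
    UNDER_REVIEW = "under review"


@dataclass
class AIAssetRecord:
    """One row in the AI asset inventory: the seven minimum fields."""
    tool_name: str                   # AI tool name
    vendor: str                      # and its vendor
    business_unit: str               # business unit using it
    data_inputs: list[str]           # data flowing into the tool
    data_outputs: list[str]          # data the tool produces
    business_process: str            # business process affected
    approval_status: ApprovalStatus  # sanctioned / unsanctioned / under review
    owner: str                       # named individual, not a team
    last_review: date                # last review date

    def review_overdue(self, today: date, cadence_days: int = 90) -> bool:
        """Flag records that have slipped past the quarterly review cadence."""
        return (today - self.last_review).days > cadence_days
```

Even a spreadsheet with these columns works at first; the structure matters more than the tooling.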

Nikhil Gupta at ArmorCode puts it well: “The CISO who claims their organization has responsible AI governance should be able to answer three questions immediately: Where is every AI asset deployed right now? Who is accountable for each one? What governed decisions were made about AI risk in the last 30 days? If any of those questions requires a manual scramble, you do not have governance. You have intent.”

The inventory needs a named owner and a quarterly review cadence. SaaS platforms add AI features in routine product updates. Developers call model APIs that never reach procurement. Assign ownership before you publish the first version. And make approved tools easier to use than unsanctioned alternatives — governance that creates friction just drives employees underground.

What is data–AI proximity and how does it signal whether governance is real?

Data–AI proximity is a governance maturity diagnostic from Dael Williamson at Databricks: how close does ownership of data and AI sit to the CEO? The shorter the distance, the more serious the company’s AI posture.

Williamson is direct: “If data and AI are owned directly by or close to the CEO, that signals a high level of strategic importance. More often, ownership sits several layers down, and in many cases data and AI are owned by entirely different groups.” Fragmented ownership produces fragmented governance — regardless of which models you’ve deployed.

45% of IT leaders point to lack of executive sponsorship as a major blocker to AI orchestration. Executive sponsorship without structural proximity is just enthusiasm.

At 150 employees you probably can’t justify a dedicated Chief AI Officer. But you can map who owns AI decisions today and assess the distance from executive authority. If the honest answer is “nobody owns it clearly,” that is the gap to close first.

Two questions worth sitting with before moving on:

Self-assessment question 1: Who in your organisation has named accountability for AI decisions today — and what is their reporting line to the CEO?

Self-assessment question 2: Is your company’s data strategy and AI strategy owned by the same person or team — or are they separate functions with separate reporting lines?

Centralised vs. federated governance — what to build first and when to evolve?

For most 50–500 person SaaS companies, start with a centralised AI governance model. One small team, or one person in a CTO-adjacent role, owns the AI standards, tool approval process, and asset inventory. Centralised governance is simpler to execute, easier to audit, and appropriate for the volume of AI decisions at this scale.

Dataiku’s five-stage maturity taxonomy gives you a staging guide: Siloed → Centre of Excellence → Hub-and-Spoke → Centre for Acceleration → Embedded. Most mid-market companies sit at Siloed or early CoE. Hub-and-Spoke is the realistic near-term target — companies that have actually scaled AI are three times more likely to use a hub-and-spoke structure.

The trigger to move from centralised to federated is specific: when individual business units have developed their own AI capabilities, have named AI owners, and are running AI in production. Covasant’s analysis is clear on the risk of moving too early — without documented standards, decentralisation produces duplication, shadow AI, and compliance blind spots.

Three things must happen before you decentralise: accountability must be explicitly re-assigned to business units; common standards must be documented; and the central function must redefine its role from executor to enabler. For context on why this structure matters beyond the operating model layer, see the shadow AI governance framework that drives these design decisions. Once governance structure is defined, you also need accountability structures within your operating model — who owns which AI systems, and who can stop them.

Self-assessment question 3: Which business unit in your organisation has the most mature AI usage today — and does it have a named AI owner with explicit accountability?

Why does naming a CoE as “AI owner” create a governance vacuum rather than fill one?

The most common anti-pattern at this stage is naming the AI Centre of Excellence as the accountable owner of enterprise AI outcomes. It sounds sensible. In practice, it creates a governance vacuum rather than filling one.

A CoE typically has no authority over business unit decisions, no budget ownership for business outcomes, and no named accountability when an AI system produces a harmful result. You’ve delegated accountability to a team that structurally cannot hold it. As the RACI principle makes clear: exactly one person must be Accountable per activity. When two people share accountability, nobody is truly accountable.
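
The one-Accountable rule is easy to check mechanically. Here is a minimal sketch, assuming RACI assignments are kept as simple (activity, person, role) tuples; the activities and names are hypothetical:

```python
from collections import defaultdict

# Role is one of "R" (Responsible), "A" (Accountable),
# "C" (Consulted), "I" (Informed).
assignments = [
    ("model deployment approval", "Head of Product", "A"),
    ("model deployment approval", "ML Engineer", "R"),
    ("model deployment approval", "AI CoE", "C"),
    ("incident rollback", "AI CoE", "A"),           # anti-pattern: a team, not a person
    ("incident rollback", "Head of Product", "A"),  # anti-pattern: shared accountability
]


def accountability_gaps(assignments):
    """Return activities that violate the exactly-one-Accountable rule."""
    accountable = defaultdict(list)
    for activity, person, role in assignments:
        if role == "A":
            accountable[activity].append(person)
    activities = {activity for activity, _, _ in assignments}
    return {a: accountable[a] for a in activities if len(accountable[a]) != 1}


print(accountability_gaps(assignments))
# {'incident rollback': ['AI CoE', 'Head of Product']}
```

Any activity that shows up in the output either has no Accountable party or has more than one, and both cases are the vacuum this section describes.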

IBM’s enterprise AI governance model is cleaner: enterprise AI is owned by the business. The CoE enables standards. The Business Owner — a named individual in the affected business unit — holds outcome accountability.

What a CoE should own: AI tool evaluation standards; shared engineering infrastructure; governance templates and training; risk escalation pathways. What it should not own: business outcomes or production AI decisions.

Self-assessment question 4: Can you name the specific individual in each business unit who is accountable for the AI systems that team uses in production?

How do you build an AI portfolio management discipline at mid-market scale?

AI portfolio management is the discipline of treating AI initiatives as a portfolio of bets with explicit invest, pause, and stop decisions — made at a regular cadence, not by default or budget exhaustion.

Databricks is direct: “They manage AI initiatives as a portfolio, not a pipeline, with discipline around where to invest, pause, or stop. Not every project succeeds. Some need to be paused. Others warrant additional investment.” Without that discipline, projects fail quietly rather than being stopped intentionally.

The minimum viable portfolio review: quarterly, 30–60 minutes, led by the CTO or AI governance owner. Each initiative gets three criteria — current status, measurable outcome versus original intent, and an explicit decision (continue / pause / stop / scale). Define those criteria before the review, not during it.
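
As a sketch of what “three criteria, explicit decision” looks like in practice, each initiative could be captured in a record like this; the type names and fields are illustrative, not a standard:

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    CONTINUE = "continue"
    PAUSE = "pause"
    STOP = "stop"
    SCALE = "scale"


@dataclass
class PortfolioReviewEntry:
    """One initiative's entry in the quarterly portfolio review."""
    initiative: str
    current_status: str      # criterion 1: current status
    intended_outcome: str    # criterion 2a: what it was meant to achieve
    measured_outcome: str    # criterion 2b: what it has actually achieved
    decision: Decision       # criterion 3: no initiative leaves without one


def review_complete(entries: list[PortfolioReviewEntry],
                    active_initiatives: set[str]) -> bool:
    """The review only counts if every active initiative received a decision."""
    return {e.initiative for e in entries} >= active_initiatives
```

Making the decision a required field is the point: every initiative leaves the review as continue, pause, stop, or scale, so nothing can fail quietly.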

Portfolio management has three dependencies: the AI asset inventory must exist; named ownership at the business-unit level must be in place; executive sponsorship must be real enough that pause and stop decisions get enacted without a new approval cycle each time. The technical layer of runtime enforcement — runtime AI governance and the observability infrastructure that makes portfolio decisions auditable — depends on the operating model having defined those owners first.

ISO/IEC 42001 requires documented objectives, performance evaluation, and continual improvement — all of which the portfolio review directly addresses.

Self-assessment question 5: When did your organisation last formally review all active AI initiatives and make an explicit investment, pause, or stop decision on each one?

If the honest answer is “never” or “we’re not sure,” that’s where to start. Not with a new policy document. And once you have reviews running, the question of measuring operating model effectiveness — whether the outcomes you intended are actually materialising — becomes the next governance milestone.

Frequently Asked Questions

What is the difference between an AI policy and an AI operating model?

An AI policy describes rules and principles — what is permitted, what is prohibited, what requires approval. An AI operating model is the organisational structure that determines who executes those rules, who enforces them, and who is accountable when they are violated. A policy tells people what to do. An operating model determines how it actually happens. Most organisations have a policy. Far fewer have a functioning operating model behind it.

What does “data–AI proximity” mean in practice?

Data–AI proximity, a concept from Dael Williamson at Databricks, describes how close ownership of data and AI sits to the CEO. When data and AI ownership is held at senior level, governance decisions get executive authority and budget. When it is fragmented across business units, governance becomes performative. Proximity is a maturity signal, not an org-chart preference.

How do I know which AI governance model is right for my company’s size?

Start with your current AI volume and business-unit maturity. If you have one or two AI tools in production and no business unit with a named AI owner, a centralised model is appropriate. When business units develop their own AI capabilities and named owners, begin planning for Hub-and-Spoke. Dataiku’s five-stage maturity taxonomy provides a practical staging guide.

Can a 300-person SaaS company run a Centre of Excellence?

Yes, but scope it carefully. A CoE at this scale is typically two to five people responsible for AI tool standards, shared infrastructure, governance templates, and risk escalation. The critical discipline: the CoE must never be named as the accountable business owner. It sets standards and enables business units to execute — it does not own outcomes.

What should an AI asset inventory actually contain?

At minimum: AI tool name and vendor, business unit using it, data inputs and outputs, business process affected, approval status (sanctioned / unsanctioned / under review), named owner, and last review date. The inventory is often the first governance artefact that surfaces shadow AI usage — tools employees are using outside IT procurement. Treat it as a living document, not a one-time audit output.

Who should own the AI asset inventory?

The AI governance function — the CTO directly at sub-100 employee companies, or a dedicated governance owner as the company scales. The inventory owner needs cross-functional visibility across IT procurement, business unit tool usage, and security monitoring data. Without cross-functional authority, the inventory will systematically miss shadow AI deployments.

How often should AI governance structures be reviewed and updated?

Operating model design should be reviewed annually, or when a significant organisational change occurs — a new business unit acquiring AI capability, a major new AI system going to production, or a governance incident. The AI asset inventory and portfolio review both require a quarterly cadence.

What is the minimum viable governance structure for a company deploying AI into production for the first time?

Five components: a completed AI asset inventory; named accountability for each production AI system (one individual, not a team); an acceptable use framework; a stop authority assignment (who can pause or roll back without escalation); and a quarterly portfolio review on the calendar.

How does ISO/IEC 42001 relate to building an AI operating model?

ISO/IEC 42001 is the international management system standard for AI. It requires documented objectives, defined responsibilities, performance evaluation, and continual improvement — all of which map directly to operating model design choices. Using it as a design reference produces more rigorous governance than building from scratch.

What happens when AI governance is not in place before a governance failure occurs?

Without a functioning operating model, governance failures follow a predictable pattern: the incident occurs, the responsible party is unclear, the resolution path is undefined, the response is ad hoc. The immediate incident cost is followed by remediation, reputational impact, and regulatory exposure — particularly under the EU AI Act for organisations with European operations.

What is the difference between AI governance and AI compliance?

AI governance is the internal operating discipline — ownership structures, accountability assignments, decision rights, and monitoring processes. AI compliance is the external requirement — adherence to regulations such as the EU AI Act, ISO/IEC 42001, or NIST AI RMF. Governance is the infrastructure that enables compliance. Attempting to satisfy compliance requirements without the governance infrastructure produces compliance theatre rather than risk reduction.

Is there a self-assessment tool for AI governance maturity?

No single standardised self-assessment exists for mid-market SaaS companies. The five diagnostic questions in this article provide a starting framework: Who has named accountability for AI decisions? Are data strategy and AI strategy owned by the same team? Which business unit has the most mature AI usage? Can you name the individual accountable for each production AI system? When did you last formally review all active AI initiatives? Answering those questions honestly surfaces the structural gaps to address first.

The requirements don’t change based on company size. What scales is the organisational infrastructure you build to meet them. For a complete overview of the shadow AI governance challenge — what’s driving the gap, how accountability structures fit in, and what regulatory frameworks require — see What AI Governance Actually Requires and Why Most Policies Fall Short.
