MIT research found that nearly 95% of generative AI pilots fail to deliver real financial results — and it’s not because the technology didn’t work. The organisations around it weren’t set up to use it. That’s a structural problem. And it plays out at every scale, including yours.
Dael Williamson, CTO for EMEA at Databricks, puts it plainly: “Enterprise AI readiness is ultimately an operating model decision.” The companies pulling ahead aren’t running better models. They’re running better operating models.
Most of the useful frameworks out there — IBM’s AI Fusion Teams, Databricks’ AI Maturity Model, the portfolio management approach — were built for organisations with dedicated AI labs and thousands of employees. Here we translate each of them into what it actually looks like at the 50–500 person SaaS scale, where the AI ROI accountability crisis plays out very differently. There are five operating model decisions that determine whether your AI investments pay off: executive ownership proximity, team composition, portfolio governance, data infrastructure ownership, and measurement accountability.
Why Does Technology Choice Matter Less Than How Your Team Is Structured Around AI?
The companies outperforming on AI ROI are largely using the same off-the-shelf models as everyone else. OpenAI at 77%, Google at 55%, Anthropic at 51% — model access is commoditised. The ICONIQ “State of AI” report frames 2026 as the “Execution Era.” Building AI features is table stakes, not a differentiator.
The technology-first procurement trap is well documented. Organisations buy AI tools before defining business problems. RAND research shows four out of five AI projects stall out, and S&P Global found organisations abandoning AI initiatives at double the prior year’s rate.
Shadow AI is the clearest symptom of all this. When your team is using ChatGPT with corporate data outside any governance framework, that’s not an employee behaviour problem — it’s an operating model problem. The conventional centralised-IT approach is dissolving as AI’s scope expands, and employees just experiment with whatever tools are available. The operating model provides no governed path. The demand for AI capability is real. The supply of approved tools and process is absent.
What Is an AI Operating Model and What Decisions Does It Actually Govern?
An AI operating model is the set of organisational decisions — ownership structures, team composition, governance mechanisms, portfolio management, and measurement accountability — that determine whether enterprise AI delivers durable value or stays episodic.
Here’s the distinction that matters. Strategy describes what you want and why. The operating model describes who owns what, how decisions get made, and where accountability sits. Most organisations have a strategy document and no operating model. That produces goals without accountability, which is exactly how you get expensive pilots that never reach production.
Williamson identifies five signals when assessing enterprise AI seriousness (turned into a quick self-check after the list):
- Who owns data and AI and how close that ownership sits to the CEO
- Whether the organisation has an inventory of its data assets
- Whether AI initiatives are managed as a portfolio of bets rather than a linear roadmap
- Whether teams are organised around outcomes rather than tools
- Who owns the AI ROI outcomes
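If you want to run the diagnostic on your own organisation, a minimal self-check sketch follows. The signal keys, question phrasing, and verdict thresholds are our own illustration, not part of Williamson’s or Databricks’ formal material:

```python
# Illustrative self-check built on Williamson's five signals.
# The keys, questions, and thresholds below are our own simplification,
# not a formal Databricks assessment.

SIGNALS = {
    "ownership_near_ceo": "Do data and AI report into the same leader, close to the CEO?",
    "data_inventory": "Is there a current inventory of your data assets?",
    "portfolio_of_bets": "Are AI initiatives managed as a portfolio, not a linear roadmap?",
    "outcome_teams": "Are teams organised around outcomes rather than tools?",
    "named_roi_owner": "Does a named person own the AI ROI outcomes?",
}

def readiness_verdict(answers: dict[str, bool]) -> str:
    """Count affirmative signals and return a blunt verdict."""
    score = sum(answers.get(key, False) for key in SIGNALS)
    if score == 5:
        return "Structurally ready: tooling decisions can follow."
    if score >= 3:
        return f"Partially ready ({score}/5): fix the missing structural decisions first."
    return f"Not ready ({score}/5): this is an operating model problem, not a tooling one."

print(readiness_verdict({
    "ownership_near_ceo": True,
    "data_inventory": False,
    "portfolio_of_bets": False,
    "outcome_teams": True,
    "named_roi_owner": True,
}))  # Partially ready (3/5): fix the missing structural decisions first.
```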
Only 14% of enterprises enforce AI governance enterprise-wide. The structural decisions come before the tooling decisions. See governance design as part of the operating model for the full picture.
What Does IBM’s AI Fusion Team Model Look Like at a 200-Person SaaS Company?
“Enterprise AI isn’t primarily a technology problem. It’s a people problem. An operating model problem.” That’s IBM CIO Matt Lyteson. IBM developed AI fusion teams: hybrid groups combining domain experts with technologists. The traditional handoff chain loses context at every step. Fusion teams collapse it — so the procurement expert who understands the workflow builds directly on the enterprise platform, while the IT person focuses on technical plumbing.
At 200 employees, your version of the handoff is usually two steps: the product manager writes requirements, then waits for a sprint slot. The fusion principle at your scale means embedding a technically capable person in the product squad with access to the data pipeline and a mandate to ship — no separate IT approval cycle required.
IBM also developed an AI Licence to Drive: certification covering data privacy, information security, and enterprise system integration. “It’s not about limiting who can build; it’s about ensuring that everyone builds responsibly.” Your equivalent is a lightweight internal sign-off: team members confirm which data they can use in AI prompts, which tools are approved, and what the output quality bar is.
What doesn’t scale down is IBM’s enterprise platform — watsonx Orchestrate, watsonx.data, and watsonx.governance. For most growing tech companies, a governed stack of 3–5 approved SaaS AI tools with clear data handling policies does the same job.
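To make “a governed stack of 3–5 approved tools” concrete, one option is to write the policy down as data rather than as a wiki page, so it can be checked in code review or by a chat-ops bot. A minimal sketch, in which the tool names and data classifications are hypothetical:

```python
# A machine-readable version of a small-company AI tool policy.
# Tool names and data classifications are hypothetical examples.

APPROVED_TOOLS = {
    "chat_assistant":     {"allowed_data": {"public", "internal"}},
    "code_assistant":     {"allowed_data": {"public", "internal", "source_code"}},
    "support_summariser": {"allowed_data": {"public", "internal", "customer_tickets"}},
}

def is_permitted(tool: str, data_class: str) -> bool:
    """Check whether a tool is approved for a given data classification."""
    policy = APPROVED_TOOLS.get(tool)
    return policy is not None and data_class in policy["allowed_data"]

assert is_permitted("code_assistant", "source_code")
assert not is_permitted("chat_assistant", "customer_tickets")  # not cleared for ticket data
assert not is_permitted("unapproved_tool", "public")           # unknown tool: that's shadow AI
```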
How Do You Decide Which AI Use Cases to Back First When Budget Is Limited?
Treat AI initiatives as a portfolio of bets, not a linear roadmap. A roadmap implies predetermined sequence and outcome. A portfolio implies active management — explicit decisions about where to invest, pause, or stop, based on evidence. The diagnostic entry point is workflow mapping: map core workflows step by step, looking at where time actually goes, not where you think it goes.
IBM’s three ROI categories give you a sequencing framework. Everyday productivity tools — code assist, faster summarisation — deliver the fastest ROI with the lowest risk. End-to-end agentic workflows have higher ROI potential but require operating model readiness first. Risk reduction use cases are meaningful but harder to measure directly. Working with a limited budget? Sequence in that order.
On build vs. buy: most teams do both. Pre-built SaaS AI tools deliver faster time to value for common tasks; custom solutions are justified when proprietary data creates real differentiation. Either way, the discipline that determines readiness for the pilot-to-production transition is the same: gate criteria. A pilot that can’t demonstrate a pre-agreed metric doesn’t get scaled. That decision happens at the gate, not after the budget runs out. This connects directly to measurement infrastructure as an operating model decision.
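Here’s one way a gate can be written down so the scale-or-stop call is mechanical rather than political. A minimal sketch, assuming a lower-is-better metric; the metric name, baseline, and threshold are invented for illustration:

```python
from dataclasses import dataclass

# Encode the gate before the pilot starts. Everything below is an
# invented example; agree on your own metric, baseline, and threshold.

@dataclass
class Gate:
    metric: str                # what you measure, defined pre-pilot
    baseline: float            # measured before deployment
    target_improvement: float  # fraction, e.g. 0.20 for a 20% reduction

    def decide(self, measured: float) -> str:
        # Assumes lower is better (e.g. resolution time, error rate).
        improvement = (self.baseline - measured) / self.baseline
        if improvement >= self.target_improvement:
            return f"SCALE: {improvement:.0%} improvement meets the {self.target_improvement:.0%} gate"
        return f"STOP or ADJUST: {improvement:.0%} improvement misses the {self.target_improvement:.0%} gate"

gate = Gate(metric="median ticket resolution time (minutes)",
            baseline=42.0, target_improvement=0.20)
print(gate.decide(measured=31.5))  # SCALE: 25% improvement meets the 20% gate
```

The point is not the code; it’s that the decision rule exists, in writing, before anyone is emotionally invested in the pilot.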
What Does Data-AI Proximity Mean and Why Does It Determine Whether Your Agents Work?
Data-AI proximity is the structural principle that data infrastructure ownership and AI capability ownership should sit in the same reporting chain — as close to the CEO as possible. When they operate on the same foundation, organisations can support more dynamic use cases. When they’re separated, AI relies on slower, more static inputs.
“When AI is structurally distant from data, the result tends to be static use cases and fragmented experiences. But the world that organizations operate in is dynamic.” This matters most for agentic AI systems. An agent that can’t access the current state of a customer account can’t make reliable decisions. Critical context is spread across different systems that don’t talk to each other — and that’s a data architecture problem before it’s an AI problem.
At 50–200 employees, proximity reduces to one question: does the same person own both the data pipeline and the AI feature that depends on it? If there’s a handoff, you have a separation problem at any scale. You should also plan for drift — AI agents don’t behave consistently the way traditional software does. Without proximity, monitoring becomes guesswork. This connects to your data infrastructure decisions and back to the AI ROI accountability crisis.
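As a sketch of what “plan for drift” can mean once proximity gives you live access to a quality metric: compare a recent window against the pilot baseline and alert on divergence. The window size and tolerance below are invented defaults, not recommendations:

```python
from statistics import mean

# Minimal drift check: compare a recent window of a quality metric
# against the baseline established at the pilot gate. Window and
# tolerance are invented defaults; tune them to your metric's variance.

def drift_alert(history: list[float], baseline: float,
                window: int = 20, tolerance: float = 0.15) -> bool:
    """True if the recent average has moved more than `tolerance`
    (as a fraction) away from the baseline in either direction."""
    if len(history) < window:
        return False  # not enough data to judge yet
    recent = mean(history[-window:])
    return abs(recent - baseline) / baseline > tolerance

# e.g. daily agent error rate, with a baseline of 0.04 agreed at the gate
errors = [0.04, 0.05, 0.04] * 6 + [0.07, 0.08]
print(drift_alert(errors, baseline=0.04))  # True: time to look, not guess
```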
How Does the AI Maturity Model Tell You Which Operating Model Changes to Make First?
Databricks publishes an AI Maturity Model as a staged readiness framework. Use it diagnostically, not aspirationally. There are three stages: early (contained pilots, ad hoc ownership), mid (scaling applications, investing in data architecture), and mature (full portfolio management, continuous monitoring). The operating model changes that work for a mature organisation create overhead that kills momentum for an early-stage one.
If your AI programme is in its early stages, the right move is clarifying who owns the AI programme and ensuring that person also owns data. Three decisions you can make this quarter:
- Map current AI initiatives to the maturity stages — identify where you are, not where you want to be
- Assign a single owner, not a committee
- Establish a portfolio review rhythm — quarterly, 30 minutes, explicit decisions for every active initiative (a minimal sketch of that review follows)
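One way to keep that review honest is to force an explicit call, with evidence, on every initiative; nothing stays in the portfolio by default. A minimal sketch, with hypothetical initiative names and notes:

```python
from enum import Enum

# A portfolio review forces an explicit decision on every active
# initiative. Names and evidence notes here are hypothetical.

class Decision(Enum):
    INVEST = "invest more"
    PAUSE = "pause"
    STOP = "stop"

portfolio = {
    "support_ticket_triage": (Decision.INVEST, "hit the week-8 checkpoint, 18% faster resolution"),
    "sales_email_drafts":    (Decision.PAUSE,  "owner left; reassign before resuming"),
    "contract_review_agent": (Decision.STOP,   "missed the gate two quarters running"),
}

for name, (decision, evidence) in portfolio.items():
    # No initiative leaves the review without a decision and a reason.
    print(f"{name}: {decision.value} ({evidence})")
```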
What Does a Phased AI Adoption Roadmap Look Like in Practice?
Months 1–3: Pick one high-frequency workflow — customer support ticket triage, internal knowledge search, or code review assistance. Define the baseline metric before you deploy anything. Run with a two-person team: one technical, one domain expert. Measure at week 4 and week 8. At week 12: hit the gate, scale it; miss the gate, stop or adjust. The proof of concept proved the concept. Production proves the business case.
Months 4–9: Move the successful pilot into the primary workflow. Assign a named owner. Begin lightweight governance — which data can team members use in prompts, which tools are approved. Start mapping two or three additional use cases for the next portfolio cycle. Heavy committee approvals are a proven bottleneck — keep the approval chain tight: team plus direct manager sign-off, no cross-team committees.
Month 10 and beyond: When two or more AI applications are generating measurable ROI and the operating model feels routine rather than experimental, formalise. Portfolio review rhythm. Fusion team principles for new product development. Monitoring infrastructure. The teams that run five pilots simultaneously in Phase 1 are the same ones whose AI programmes stall and never recover. For the full picture on why most AI programmes fail to prove value — and what the five root causes have in common — start with the broader accountability framework. Pick one. Prove it. Then move.
FAQ
What is an AI operating model in simple terms?
The set of decisions about who owns AI, how teams are structured around it, how use cases are prioritised, and how success is measured. Not the AI tools themselves — the organisational context in which those tools operate. A strategy document is not an operating model. An operating model describes real accountability, real ownership, real decisions.
Do I need a dedicated AI team or can I embed AI into existing squads?
In the early stages of an AI programme, a dedicated AI team tends to recreate the handoff problem it was supposed to solve. Embed a technically capable person in each product squad with direct data access and a mandate to ship. When two or more squads are building redundant AI infrastructure, that’s the signal to create a shared platform function.
Should AI ownership sit with the CTO or a new dedicated AI role?
In most growing tech companies, AI ownership should sit with whoever holds the technical leadership role — provided that person also owns data, or has direct access to it. Wherever data ownership sits, AI ownership should sit in the same reporting chain. Creating a dedicated AI role without data authority creates the same fragmentation that derails large enterprises.
How do I handle a data engineer who is also supposed to support AI initiatives?
This is a data-AI proximity problem — the person is serving two masters with different incentive structures. Short-term: establish clear ownership and protect AI initiative time from data maintenance pulls. Medium-term: separate the roles as the AI programme grows.
What is the difference between an AI strategy and an AI operating model?
Strategy is the “what” and “why” — which AI opportunities to pursue, what outcomes are expected. Operating model is the “who”, “how”, and “where” — who owns AI outcomes, how teams are structured, where data and AI ownership sit. Organisations that have strategy documents but no operating model have goals without accountability.
Why do AI pilots so rarely make it into production?
The pilot-to-production gap is a structural failure. Only 14% of organisations have production-ready AI agent solutions; just 11% are running agents in production at scale. Production requires ongoing ownership, data integration, monitoring, and a business unit willing to change its workflow. Assign a named owner before the pilot starts. Establish gate criteria before it ends. Make the scale-or-stop decision at the gate.
What does “centralised vs. federated AI” mean for a small company?
Centralised: one person owns all AI capability and serves the whole organisation — creates bottlenecks. Federated: AI embedded in each squad — scales experimentation but creates governance risk. Most small companies start centralised by default and should move toward embedded when two or more squads need AI support simultaneously.
How does IBM’s AI Licence to Drive translate to a small company?
The IBM version is formal certification covering data privacy, information security, and enterprise system integration. Your version: a lightweight sign-off confirming the team member knows which data they can use in prompts, which tools are approved, and what the output quality bar is. Not bureaucracy — the minimum governance required to avoid data breaches and shadow AI proliferation.
What is the AI maturity model and how do I use it?
Databricks’ staged framework for assessing organisational readiness for AI at scale. Use it diagnostically: map your current AI initiatives to the maturity stages. In Phase 1, focus on ownership and gate criteria — don’t build portfolio governance infrastructure before you have a successful pilot to govern.
How do I treat AI initiatives as a portfolio rather than a roadmap?
A roadmap assumes sequence and predetermined outcomes. A portfolio assumes active management — explicit decisions about where to invest more, where to pause, and where to stop. Set a 30-minute quarterly review where the AI owner assesses each active initiative and makes a call on every one of them. Stopping a pilot that isn’t working is good portfolio management.
When does it make sense to go from proof of concept to scaling AI?
When the pilot demonstrates a pre-agreed measurable improvement — a 20% reduction in resolution time, a 15% improvement in conversion rate — against a pre-established baseline. Before scaling, confirm that data ownership is aligned with AI ownership, that a named person owns the outcome, and that the business unit is committed to changing its workflow.
What is shadow AI and why is it an operating model problem?
Shadow AI proliferates when the operating model provides no governed path for experimentation. The demand is real; the approved supply is absent. The fix: a lightweight governance process with a fast path to approval for new tools and clear data handling rules. IBM reduced provisioning from two weeks to about five minutes by embedding governance into the platform itself — speed of approved access removes the incentive to go around the system.