Picture this. It’s a Tuesday morning and your AI-powered refund approval system has been running overnight. By the time anyone notices, it has auto-approved several hundred refund requests well outside policy bounds — some legitimate, many not. The finance team calls. The CEO calls. Someone asks: “Who owns this?”
Silence.
That silence is not a technical failure. It is a governance failure, and it lands on you. Not because you wrote the code, but because no one could point to a named person who was accountable for that system’s behaviour in production.
EY's February 2026 Technology Pulse Poll of 500 US technology executives found that 52% of department-level AI initiatives operate without formal approval or oversight, and that in 42% of organisations, stopping a production AI system requires board or CEO intervention.
This is a governance failure with direct consequences for your career and your legal exposure. The EU AI Act assigns accountability explicitly to deployers — not to the vendors who built the model. When something goes wrong, the accountability question has a legal dimension that goes well beyond your job description.
Here is the practical framework to close that gap: the Enterprise AI Ownership Stack, the Stop Authority concept, a Decision Rights Matrix you can build in a day, and a Minimum Viable Governance package you can implement without building enterprise bureaucracy.
This article is the third in a cluster on why AI governance execution matters — building on the accountability problem from ART001 and the operating model structure from ART002, and leading into the technical enforcement layer in ART004 and the regulatory stakes in ART007.
Who is personally accountable when an AI system causes harm?
Accountability in enterprise AI means one named individual answers for outcomes — not a team, a platform, or a vendor. The Business Owner is that individual for each high-impact AI use case. Accountability cannot be transferred to the AI vendor: under the EU AI Act, the deployer retains full legal responsibility regardless of contract terms.
Most organisations blur the line between accountability and responsibility. They are not the same thing. Accountability is outcome ownership — one person answers for what the AI system did. Responsibility is execution ownership — many people share the work. As Infosys frames it: accountability is who answers for outcomes; responsibility is who does the work; decision rights are who can approve, change, pause, or stop AI. All three are distinct and must be assigned separately.
The governance trigger moment is the pilot-to-production threshold. When AI advises — drafts, predicts, recommends — a human still decides. When AI acts — approves a refund, changes a credit limit, triggers a workflow — accountability must crystallise into a named person. If it does not, you have deployed a decision-making system with no owner.
The vendor accountability trap is where most organisations come unstuck. The logic is intuitive but wrong: “We’re using Vendor X’s model, so Vendor X is accountable.” The EU AI Act closes this loop explicitly. Deployment is ownership. Vendor contracts can define responsibilities but cannot transfer accountability. A contract clause that attempts to do so will not protect you.
What is the Enterprise AI Ownership Stack — and which roles matter first?
The Enterprise AI Ownership Stack distributes accountability, responsibility, and decision rights across nine named roles rather than assigning everything to one team. The three roles to fill first are the Business Owner (accountable for outcomes), AI Product Owner (owns the use case), and Platform Owner (holds stop authority).
The Infosys Enterprise AI Ownership Framework defines the full stack:
- Business Owner: Outcomes, risk acceptance, escalation authority. The single person who answers when things go wrong.
- AI Product Owner: Acceptance criteria, human-in-the-loop design, escalation rules. Bridges business accountability and technical delivery.
- Platform Owner: Model gateways, logging, monitoring, guardrails. Holds operational stop authority; enforces safety at runtime.
- Model Owner: Model performance, robustness, drift response.
- Data Owner / Data Steward: Data definitions, access approvals, quality SLAs. Most AI failures are data failures in disguise.
- AI Risk Owner: Risk assessments, control testing, bias and harm checking.
- AI Security Owner: Threat modelling, prompt injection risks, access patterns.
- Legal / Compliance / Privacy Owner: Regulatory mapping, privacy and consent, audit readiness.
- AI Ops / SRE Owner: Production reliability, runbooks, on-call, rollback procedures. If the AI fails at 2:00 AM, you need a plan, not a research paper.
Every enterprise AI system in production requires dual ownership — a Business Owner accountable for outcomes, and a System Owner accountable for operability. Business-only ownership creates chaos when things break. IT-only ownership creates irrelevance when risk decisions are being made.
The AI Center of Excellence anti-pattern is the single most common governance mistake here. A CoE cannot formally accept business risk, and becomes a bottleneck as deployments multiply. The CoE defines standards — business units own use cases and outcomes. Governance without ownership becomes documentation; ownership without governance becomes risk.
Fill the Business Owner, AI Product Owner, and Platform Owner roles first. Defer the rest until you have the governance maturity to support them without creating bureaucracy.
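If you track ownership alongside your AI systems (in a registry, a config file, or code), that priority order can be made explicit and checked. A minimal sketch, assuming a Python registry; the class, function, and role titles are illustrative, not part of the Infosys framework:

```python
# A minimal sketch of the ownership stack as a per-system record.
# Role titles are illustrative; the structure mirrors "fill three roles first".
from dataclasses import dataclass
from typing import Optional


@dataclass
class OwnershipStack:
    # The three roles to fill before production launch.
    business_owner: str        # accountable for outcomes and risk acceptance
    ai_product_owner: str      # owns acceptance criteria and escalation rules
    platform_owner: str        # holds operational stop authority

    # Roles that can be deferred until governance maturity supports them.
    model_owner: Optional[str] = None
    data_owner: Optional[str] = None
    ai_risk_owner: Optional[str] = None
    ai_security_owner: Optional[str] = None
    legal_compliance_owner: Optional[str] = None
    ai_ops_owner: Optional[str] = None


def core_roles_filled(stack: OwnershipStack) -> bool:
    """Launch-ready only when the three core roles name real people."""
    return all([stack.business_owner, stack.ai_product_owner, stack.platform_owner])


refund_agent = OwnershipStack(
    business_owner="Head of Customer Operations",   # a named person in practice
    ai_product_owner="Refunds Product Lead",
    platform_owner="AI Platform Engineering Lead",
)
assert core_roles_filled(refund_agent)
```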
Stop authority: the governance test most organisations fail
Stop authority is the pre-assigned right of a named individual to pause, halt, or roll back an AI system in production without requiring board or CEO approval. If you cannot answer “who can stop this?” in ten seconds, you do not own it — you are experimenting with it. EY (2026): only 50% of AI governance leaders have independent halt authority; 42% require board or CEO intervention.
Think about what that data means in practice. A production AI system is causing harm. The person who notices it has no authority to stop it. They must find the right executive — who may be in a meeting or a different time zone. While approval is sought, the system keeps running. When halting a production AI system requires a board meeting, your governance structure cannot protect against real-time harm.
Here is who holds stop authority:
- Platform Owner / AI Ops/SRE Owner: Immediate operational halt — can pause, rate-limit, roll back, or disable without any approval. This authority must be pre-built into the technical infrastructure, not improvised under pressure. Implementing stop authority through observability infrastructure is covered in the companion article on runtime AI governance.
- Business Owner: Post-incident decisions. After the system is stopped, the Business Owner decides whether and how to restart it.
- Legal/Compliance Owner: In high-liability regulated contexts, may hold concurrent stop authority.
The CTO should not be the named stop-authority holder for every AI system; that creates a bottleneck. Pre-delegating that authority to the Platform Owner, before any incident, is the governance act that matters.
Run this test now: “Who can stop our most important AI system, without calling me?” A pause before the answer is a governance gap.
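What "pre-built into the technical infrastructure" can look like in practice: a minimal sketch, assuming a simple flag store the Platform Owner can edit without a code deployment. The file path, system identifier, and function names here are hypothetical.

```python
# A minimal sketch of pre-built stop authority: every automated action checks a
# kill-switch flag that the Platform Owner can flip without a deployment.
# The flag-store path and system identifiers are illustrative.
import json
from pathlib import Path

FLAG_STORE = Path("/etc/ai-governance/kill_switches.json")  # hypothetical location


def is_halted(system_id: str) -> bool:
    """True if the Platform Owner (or AI Ops) has halted this system."""
    try:
        flags = json.loads(FLAG_STORE.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return True  # fail closed: an unreadable flag store means "stop acting"
    return bool(flags.get(system_id, {}).get("halted", False))


def approve_refund(request_id: str, amount: float) -> str:
    """Gate the autonomous path behind the kill switch."""
    if is_halted("refund-approval-agent"):
        return "routed_to_human_review"  # paused means humans decide, not the model
    # ... call the model and act on its decision ...
    return "auto_approved"
```

The design choice that matters is that the halt path exists before any incident and requires no approval chain. Whether an unreachable flag store fails open or closed is itself a risk-acceptance decision, and it belongs to the Business Owner.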
How to build a Decision Rights Matrix for AI in five steps
A Decision Rights Matrix maps five governance decisions — use case approval, production launch, change approval, incident authority, and risk acceptance — to named roles with defined authority levels. It is a document, not a committee: one named decider per decision, with a clear escalation path.
The Decision Rights Matrix is a governance-specific RACI. The critical rule: one Accountable person per decision row, or the matrix does not function.
Step 1: List your current AI systems and planned use cases. Start with systems already in production, not theoretical ones.
Step 2: Define the five decisions:
- Use Case Approval: Should we build and deploy this at all? The Business Owner decides; Risk/Compliance are consulted for high-impact cases.
- Production Launch: Is this system ready to go live? The Business Owner gives final sign-off; Platform Owner, Model Owner, and Risk/Compliance must also sign off.
- Change Approval: Can we modify prompts, models, or tools in production? The AI Product Owner handles minor changes; the Platform Owner plus the Model Owner handle major changes.
- Incident Authority (Stop Authority): Who can halt or pause this system right now? Platform Owner / AI Ops for immediate action; Business Owner for post-incident escalation.
- Risk Acceptance: We know the residual risk; do we accept it? Business Owner only. This cannot be delegated to the platform or engineering team; it is the decision where governance most commonly fails silently.
Step 3: Name the decider and escalation path. Not a role title — a person’s name, or at minimum a role that maps to a single person in your current org chart.
Step 4: Validate with role holders that they accept the authority. A RACI where the Accountable party does not know they hold it is documentation, not governance.
Step 5: Attach the matrix to each AI system’s production launch checklist. A matrix that lives only in a governance document has no operational power. It must be a gate that production launches pass through.
Review the matrix on a regular cadence, aligned to model retraining cycles and organisational changes: a matrix reflecting last year's team structure creates false confidence.
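The matrix itself can live in a spreadsheet, but making it machine-readable lets the launch checklist enforce it. A minimal sketch, with hypothetical names and escalation paths:

```python
# A minimal sketch of a Decision Rights Matrix for one AI system.
# Names are placeholders; the rule being enforced is one accountable
# decider per decision, plus an explicit escalation path.
REQUIRED_DECISIONS = {
    "use_case_approval", "production_launch", "change_approval",
    "incident_authority", "risk_acceptance",
}

DECISION_RIGHTS = {
    "use_case_approval":  {"decider": "A. Nguyen (Business Owner)",  "escalation": "CFO"},
    "production_launch":  {"decider": "A. Nguyen (Business Owner)",  "escalation": "CTO"},
    "change_approval":    {"decider": "R. Patel (AI Product Owner)", "escalation": "Platform Owner"},
    "incident_authority": {"decider": "S. Kim (Platform Owner)",     "escalation": "Business Owner"},
    "risk_acceptance":    {"decider": "A. Nguyen (Business Owner)",  "escalation": "CEO"},
}


def matrix_gaps(matrix: dict) -> list[str]:
    """Return the problems that make the matrix non-functional, if any."""
    gaps = []
    for decision in REQUIRED_DECISIONS:
        entry = matrix.get(decision, {})
        if not entry.get("decider"):
            gaps.append(f"{decision}: no single accountable decider named")
        if not entry.get("escalation"):
            gaps.append(f"{decision}: no escalation path")
    return gaps


assert matrix_gaps(DECISION_RIGHTS) == []  # run as part of the launch checklist
```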
The minimum viable ownership package for a new CTO
Minimum Viable Governance (MVG) is the smallest credible accountability package you can implement immediately: one named Business Owner per high-impact AI use case, an AI Product Owner, a Platform Owner with stop authority, and a documented Decision Rights Matrix. This is enough to govern safely without enterprise bureaucracy.
The Infosys minimum viable ownership package defines six items:
1. One named Business Owner per high-impact AI use case. The person who answers when things go wrong. Not a team, not a department — one name. Must exist before production launch.
2. An AI Product Owner per use case. Owns acceptance criteria, escalation rules, human-in-the-loop design, and feedback loops.
3. A Platform Owner with stop authority. Enforces guardrails, holds operational stop authority, owns runtime infrastructure. Without it, stop authority defaults to “whoever notices the problem and can reach an executive.”
4. A lightweight AI Risk Review for each use case. Intended use, failure modes, harm potential, escalation paths — aligned to NIST AI RMF Govern principles. No enterprise risk management machinery required.
5. Explicit, documented stop authority for Ops/Platform. Written down, communicated, and tested. The Platform Owner must know they hold this authority and have the technical mechanisms pre-built.
6. A Decision Rights Matrix — even a simple spreadsheet. The five decisions, named role holders, escalation paths. Attached to each AI system in production.
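As with the Decision Rights Matrix, the six items above can double as a go-live gate. A minimal sketch, with hypothetical keys and values; anything missing or false blocks launch:

```python
# A minimal sketch of the six MVG items as a pre-launch checklist for one system.
# Keys and values are illustrative placeholders.
MVG_CHECKLIST = {
    "named_business_owner": "A. Nguyen",              # item 1: a person, not a team
    "ai_product_owner": "R. Patel",                   # item 2
    "platform_owner_with_stop_authority": "S. Kim",   # item 3
    "risk_review_completed": True,                    # item 4
    "stop_authority_documented_and_tested": True,     # item 5
    "decision_rights_matrix_attached": True,          # item 6
}


def mvg_gaps(checklist: dict) -> list[str]:
    """Items still missing; an empty list means the MVG package is in place."""
    return [item for item, value in checklist.items() if not value]


assert mvg_gaps(MVG_CHECKLIST) == []
```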
What MVG does not require: a dedicated AI governance team, an AI Center of Excellence, a formal compliance certification, months of policy work, or a separate AI ethics committee.
Apply MVG strictly to high-impact AI systems. For low-risk use cases — internal productivity tools, AI-assisted drafting — lighter-touch governance is appropriate. Not everything needs the same treatment.
To close the gap between AI policy and AI execution, start with MVG and build from there. The enterprise AI governance structures described here provide the named owners and documented decisions that make governance real. Measuring whether your accountability structures are working — not just whether they exist — is the next discipline to build.
What do the EU AI Act and NIST AI RMF require of your accountability structure?
The EU AI Act requires deployers to assign accountability, maintain human oversight, and document governance decisions. The NIST AI RMF “Govern” function requires lifecycle accountability: named owners across the full AI lifecycle. ISO/IEC 42001 requires a management system with defined AI roles and responsibilities — implementation provides real governance value without formal certification.
Governance is how you manage AI risk internally. Compliance is demonstrating that management to regulators. You need governance first.
EU AI Act. Core deployer obligations: risk classification, pre-built human oversight mechanisms, technical documentation, named accountability chains, and incident reporting. The compliance timeline is active — bans on unacceptable-risk systems took effect February 2025. You cannot transfer accountability to the LLM vendor through contract terms. If your AI system causes harm to an EU resident, your organisation is accountable regardless of what your vendor agreement says.
NIST AI RMF. The Govern function requires named roles, documented decision rights, and policies for each lifecycle phase. In practice: documented names, documented decisions, documented escalation paths.
ISO/IEC 42001. The first certifiable AI management system standard. For most companies, the relevant question is not whether to certify but whether to implement its governance blueprint. If you already run an ISO/IEC 27001 security management system, AI governance can fold into existing audit cadences.
The six MVG items map directly: named Business Owner covers EU AI Act deployer accountability and NIST AI RMF Govern named roles; the Decision Rights Matrix covers EU AI Act documentation requirements; the Platform Owner with stop authority covers EU AI Act human oversight mechanisms.
For the detailed regulatory stakes and the full compliance timeline, ART007 covers the regulatory requirements that mandate clear accountability in depth.
Frequently asked questions
Who should be responsible for AI decisions — the CTO, the business unit, or a governance team?
The Business Owner (a named executive in the relevant business unit) is accountable for outcomes; the CTO is responsible for the governance infrastructure that makes accountability work. A governance team or AI CoE sets standards but does not own AI systems. Treating the CTO as the accountable owner of every AI use case is one of the most common governance mistakes.
What is the difference between AI governance and AI compliance?
AI governance is how you manage AI risk internally — accountability structures, decision rights, stop authority. AI compliance is demonstrating that governance to external regulators. You need governance first. The most common failure mode: investing in compliance documentation without real governance in place.
Can you outsource accountability for AI to the vendor who built it?
No. Under the EU AI Act, the deployer retains full accountability regardless of vendor contracts. Contracts can define responsibilities but cannot transfer accountability. Before deploying any vendor-supplied AI system, your own governance roles must be assigned — the vendor’s compliance documentation does not cover your obligations.
What is the pilot-to-production threshold and why does it matter for governance?
It’s the moment AI transitions from advising (drafting, predicting, recommending) to acting (approving, triggering workflows, changing records). At that threshold, accountability must crystallise into named roles — Business Owner, Platform Owner, and Decision Rights Matrix must be in place before go-live, not after.
How do you actually test whether your AI governance is real or governance theatre?
Ask: “Who can stop our most important AI system right now, without calling the board?” If you cannot answer in ten seconds, you have a policy document, not governance. EY (2026): 42% of organisations require board or CEO intervention to halt a high-priority AI project — that’s governance theatre by definition.
What does the NIST AI RMF “Govern” function actually require?
Named roles across the AI lifecycle, documented decision rights, policies for each phase, and feedback loops between governance and operations. It does not require a large team — it requires documented names, documented decisions, and documented escalation paths.
Do I need ISO/IEC 42001 certification to govern AI responsibly?
No. ISO/IEC 42001 provides a governance blueprint — implementing its requirements gives you real value without formal certification. Certification matters when clients or regulators require it. The infrastructure matters more than the certificate.
What is the difference between a Business Owner and a Platform Owner in AI governance?
The Business Owner answers for what the AI does — value, risk acceptance, escalation. The Platform Owner answers for how it runs — guardrails, monitoring, stop authority. Combining them is a governance anti-pattern: stop authority becomes meaningless if the person who benefits from the AI also decides when to halt it.
Why do executives believe AI governance is in place when operational teams say it isn’t?
Executives see policy documents and assume governance. Operational teams see runtime reality and know the distance between the two. Closing the gap means moving from governance as documentation to governance as named people with documented decisions.
How does the EU AI Act affect companies outside the European Union?
Any company deploying AI systems affecting EU residents is subject to EU AI Act requirements regardless of where it’s headquartered. For Australian and Asia-Pacific SaaS companies with EU customers, the deployer obligations apply. Even without EU exposure, the governance structures required for EU compliance are simply good governance practice.