Mar 30, 2026

What the EU AI Act, NIST, and ISO 42001 Actually Require Organisations to Do

AUTHOR

James A. Wondrasek

Boards and legal teams are asking harder questions about AI. The internal risk argument ("something might go wrong") stopped moving budgets a while ago. What cuts through is external obligation: "we are legally required to act, and the penalties are specific."

Three frameworks now give you exactly that external obligation. The EU AI Act (Regulation (EU) 2024/1689) is binding law with extraterritorial reach. The NIST AI Risk Management Framework is a voluntary US standard that has quietly become a global reference point. ISO/IEC 42001 is a certifiable international management system standard you can implement right now.

This article is part of our series on what enterprise AI governance actually requires in practice. It translates what each framework requires into CTO-level action items, shows where they overlap, and demonstrates that one governance investment satisfies all three. The underlying reality, "polyrisk", is that regulatory, reputational, operational, and legal AI risks compound each other. These frameworks exist because that compounding is real.

What Is the Difference Between AI Governance and AI Compliance?

AI governance is your internal management system — how your organisation assigns accountability, makes decisions about AI, and enforces controls day to day. AI compliance is the external demonstration of that management: evidence presented to regulators, auditors, or customers that governance exists and functions.

Both need the same underlying structures: named roles, documented processes, risk registers, and incident response authority. Without genuine governance structures underneath, compliance artefacts cannot be produced — and any evidence you present to regulators won’t hold up under scrutiny.

The EU AI Act, NIST AI RMF, and ISO/IEC 42001 describe the same internal governance structures from different angles — law and enforcement, operational risk management, and certifiable management system respectively. For a board presentation, here’s how to frame it: governance is the operating model investment; compliance is the return. The polyrisk concept makes this concrete — a regulatory breach triggers reputational damage, which triggers customer churn, which triggers legal exposure. One governance programme, including how ISO 42001 blueprints an AI operating model, addresses all of it.

What Does the EU AI Act Actually Require of Companies Deploying AI — Not Just Building It?

Here’s the assumption most SaaS companies make: “We use third-party AI APIs, so we’re not covered.” That assumption doesn’t hold. The EU AI Act distinguishes between providers and deployers. A provider develops and places an AI system on the market under their own name. A deployer uses a third-party AI system under their own authority. A SaaS company embedding OpenAI or Anthropic into its product is likely acting as both simultaneously — deployer of the base model and provider of the combined product.

The enforcement timeline is already partially in effect. AI literacy obligations (Article 4) became applicable in February 2025. GPAI model rules entered into force in August 2025. High-risk AI system obligations become fully enforceable in August 2026.

High-risk AI systems are defined in Annex III across eight sectors — employment and workforce management (hiring and performance tools), biometric identification, essential services, and education among them. If your product touches any of these, the full obligation set applies: risk management system, technical documentation, human oversight, data governance, and conformity assessment.

The penalty numbers frame the board conversation nicely. Fines reach €35M or 7% of global turnover for prohibited AI practices; up to €15M or 3% for high-risk violations. For a 200-person SaaS company with €15M ARR, 3% is €450,000. That’s a material number.
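The arithmetic behind that number is simple enough to sketch. A minimal Python illustration using the penalty tiers quoted in this article, plus the SME rule mentioned later (the lower of the fixed cap or the percentage applies); the function name and structure are our own, not anything the Act prescribes:

```python
# A minimal sketch of fine-exposure arithmetic. The caps and percentages are
# the penalty tiers quoted in this article; per the Act, for SMEs the lower
# of the fixed cap or the percentage of global annual turnover applies.
# Illustration only, not legal advice.

def sme_fine_cap(annual_turnover_eur: int) -> dict:
    """Maximum fine per violation tier for an SME (lower of cap or percentage)."""
    tiers = {
        "prohibited_practices": (35_000_000, 7),    # up to EUR 35M or 7%
        "high_risk_violations": (15_000_000, 3),    # up to EUR 15M or 3%
        "incorrect_information": (7_500_000, 1),    # up to EUR 7.5M or 1%
    }
    return {
        name: min(cap, annual_turnover_eur * pct // 100)
        for name, (cap, pct) in tiers.items()
    }

# A 200-person SaaS company with EUR 15M ARR:
exposure = sme_fine_cap(15_000_000)
# high_risk_violations -> 450_000; prohibited_practices -> 1_050_000
```

For a non-SME the Act applies the higher of the two figures, so `max()` would replace `min()` and the fixed caps become the headline numbers.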

Shadow AI is where many teams carry exposure they don't know about. Engineering teams shipping internal LLM plug-ins to external users may be acting as GPAI providers — and the August 2025 GPAI enforcement deadline has passed. The starting point is clarifying how EU AI Act requirements translate to accountability structures for each role your organisation actually occupies.

What Does the NIST AI Risk Management Framework Say About Accountability and Operating Models?

The NIST AI RMF is a voluntary US framework — no legal force. But it has become a de facto global reference standard adopted across jurisdictions, and the structures it describes are the same structures the EU AI Act requires.

The framework is built around four functions: Govern (establish accountability structures, policies, and AI risk culture); Map (understand your AI systems and their context); Measure (analyse and monitor risks, performance, and bias); Manage (prioritise and respond to risks in your workflows).

The Govern function is where the real accountability work lives: policies, roles with decision authority, risk oversight processes. Stop authority — the formally assigned right to halt an AI system in production — is the operational output of Govern. NIST expects a named, accountable owner for each governance decision. That accountability maps directly to the leadership structures described in how ISO 42001 blueprints an AI operating model.

There’s a practical efficiency worth knowing about here. NIST has published a crosswalk to ISO 42001 clauses. Risk assessments using NIST guidance serve as direct evidence for ISO 42001 audits — one set of work, two frameworks served. The AI governance execution requirements these frameworks share — inventory, accountability assignment, monitoring — map directly to the operational practice the series covers.

What Does ISO/IEC 42001 Actually Require, and Does Your Company Need to Care?

ISO/IEC 42001 is the management system layer that ties the whole programme together. It is the first global AI Management System (AIMS) standard — structured like ISO 27001 for information security, which is familiar territory for most technical leadership teams. Published in December 2023, it defines requirements for establishing, implementing, and continually improving a formal AI management system.

In practical terms: define the scope of your AI activities; appoint responsible roles with defined authorities; conduct risk and impact assessments across the AI lifecycle; maintain an AI system register; establish continual improvement processes.

Certification is voluntary. ISO 42001 is not yet harmonised under the EU AI Act and does not confer automatic presumption of conformity. But what you build when you implement it is the documented governance infrastructure EU AI Act compliance requires. Certification provides third-party verification for auditors and enterprise customers.

For a 50–500 person company without certification intent, ISO 42001 gives you a ready-made blueprint that’s faster to adapt than to build from scratch. The key clause mappings are worth knowing: Clause 6.1 (risk and impact assessment) maps to EU AI Act Article 9; Clause 5 (leadership) maps to the accountability structures required by NIST Govern function; Clause 7.2 (competence) addresses the Article 4 AI literacy obligation.
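Those clause mappings can be kept as a small lookup table, so evidence produced for one framework gets filed against its counterparts in the others. The identifiers below are the ones named in this article; the table shape and function are our own illustrative convention, not part of any standard:

```python
# Crosswalk entries taken from the clause mappings named in this article.
# The dict structure is an illustrative convention, not part of ISO 42001.
CROSSWALK = {
    "ISO 42001 Clause 5":   {"eu_ai_act": None,        "nist_ai_rmf": "Govern",
                             "topic": "leadership and accountability"},
    "ISO 42001 Clause 6.1": {"eu_ai_act": "Article 9", "nist_ai_rmf": None,
                             "topic": "risk and impact assessment"},
    "ISO 42001 Clause 7.2": {"eu_ai_act": "Article 4", "nist_ai_rmf": None,
                             "topic": "competence / AI literacy"},
}

def counterparts(clause: str) -> dict:
    """Return the other frameworks' counterparts for an ISO 42001 clause."""
    return CROSSWALK.get(clause, {})
```

One risk assessment written against Clause 6.1, for example, is then traceable as evidence for EU AI Act Article 9 without a second document.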

How Do You Satisfy Multiple Frameworks Without Duplicating Effort?

The EU AI Act, NIST AI RMF, and ISO 42001 share a common structural core: risk identification, accountability assignment, documented controls, and ongoing monitoring. A single governance programme implemented against one framework simultaneously satisfies the others.

Risk-tiered governance is the synthesis: lightweight checks for low-risk AI systems; rigorous documentation, human oversight, and conformity assessment for Annex III high-risk systems. This mirrors the EU AI Act’s risk tiers, ISO 42001’s risk-based approach, and NIST AI RMF’s Map function all at once.

Here’s how the crosswalk works in practice. One AI system inventory satisfies EU AI Act registration requirements, ISO 42001 Clause 8.4, and NIST AI RMF Map function simultaneously. One risk assessment document serves all three. The internal AI working group — legal, engineering, product, compliance — is the organisational structure that both ISO 42001 Clause 5 and the EU AI Act’s deployer obligations require. Build it once. The sequenced path is straightforward: AI system inventory → provider vs. deployer classification → risk tier classification → accountability role assignment → monitoring.
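The sequenced path becomes concrete as one register entry per system. A sketch assuming nothing beyond the article's own terms; the field names, sector spellings, and coarse two-tier simplification (the Act has more tiers) are ours:

```python
# A sketch of the sequenced path: inventory -> provider/deployer
# classification -> risk tier -> accountable owner. Field names and the
# two-tier simplification are illustrative, not prescribed by any framework.
from dataclasses import dataclass
from typing import Optional

# The Annex III sectors as this article lists them; a starting checklist,
# not legal advice.
ANNEX_III_SECTORS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration", "administration_of_justice",
}

@dataclass
class AISystemRecord:
    name: str
    role: str                           # "provider", "deployer", or "both"
    sector: str
    accountable_owner: Optional[str] = None

    def risk_tier(self) -> str:
        """Coarse tier based on the Act's Annex III sector list."""
        return "high" if self.sector in ANNEX_III_SECTORS else "minimal"

    def gaps(self) -> list:
        """Governance gaps the sequenced path is meant to surface."""
        issues = []
        if self.risk_tier() == "high" and self.accountable_owner is None:
            issues.append(f"{self.name}: high-risk system lacks a named owner")
        return issues
```

A résumé-screening feature built on a third-party LLM would be recorded with role `"both"` and sector `"employment"`, immediately flagging it as high-risk and ownerless until someone is named.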

The stop authority test makes for a sharp board presentation. Ask the room: “Who in this organisation has formal authority to halt an AI system in production right now if it is causing harm?” If no one can answer in ten seconds, your organisation is simultaneously non-compliant with NIST AI RMF Govern, EU AI Act human oversight obligations, and ISO 42001 Clause 6.1. It’s one question that exposes the shadow AI governance gap across all three frameworks at once.
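The ten-second test translates directly into a lookup against an accountability matrix. The matrix shape and names here are assumptions for illustration:

```python
# The "ten-second test" as a check: given an accountability matrix
# (shape assumed for illustration), name who holds stop authority for a
# production system, or expose the gap.
from typing import Optional

def who_can_stop(system: str, matrix: dict) -> Optional[str]:
    """Return the named holder of stop authority for a system, or None."""
    return matrix.get(system, {}).get("stop_authority")

# Hypothetical matrix: one entry per production AI system.
matrix = {"resume-screener": {"stop_authority": "VP Engineering"}}

who_can_stop("resume-screener", matrix)   # a named individual
who_can_stop("support-chatbot", matrix)   # None -> the governance gap
```

A `None` result for any system in production is exactly the failure the board question is designed to surface.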

How Do You Use Regulatory Requirements to Make the Case to Your Board?

Internal risk arguments have a ceiling. External obligation arguments land differently. Regulation provides the urgency that governance investment needs to clear board-level scrutiny.

The penalty exposure is concrete. For a 200-person company with €15M ARR, EU AI Act fines are material — €450K for high-risk violations, over €1M for prohibited AI practices. Add product launch delays for non-compliant products and the risk profile becomes a straightforward board conversation.

Market access is the strategic argument. EU market access increasingly requires demonstrated AI Act compliance. Enterprise procurement and client due diligence questionnaires already reference ISO 42001 and NIST AI RMF. Governance is becoming a commercial gate, not just a regulatory one.

The confidence gap is worth surfacing too. EY data shows 82% of executives believe their existing policies protect against unauthorised AI use; only 14.4% of organisations have full security approval for AI agent deployment (Gravitee). The gap between what executives believe and what’s actually in place is a governance liability your board is carrying without knowing it.

The investment framing is simple: bounded vs. unbounded. The cost of a minimum viable governance programme — AI inventory, role assignment, risk classification, accountability matrix — is bounded. The cost of a regulatory enforcement action or a reputational incident from an ungoverned high-risk AI system is not. For a complete overview of what enterprise AI governance actually requires in practice across all dimensions — operating model, accountability, runtime enforcement, and measurement — see the series overview.

FAQ

Does the EU AI Act apply to my company if we are based outside the EU?

Yes — any organisation serving EU-based users is subject to the Act, regardless of where you’re based. It’s the same extraterritorial logic as GDPR. US-headquartered SaaS companies with EU customers are in scope.

What is the difference between a provider and a deployer under the EU AI Act?

A provider develops and places an AI system on the market under their own name. A deployer uses a third-party system under their own authority. A SaaS company embedding a third-party LLM may simultaneously be both. Providers carry the heavier burden: technical documentation, conformity assessment, CE-marking for high-risk systems.

When do EU AI Act obligations become legally enforceable?

Prohibited AI systems banned and AI literacy (Article 4) applicable: February 2025. GPAI model rules in force: August 2025. High-risk AI system obligations fully enforceable: August 2026. All remaining systems: August 2027.

Is ISO/IEC 42001 certification mandatory for EU AI Act compliance?

No. ISO 42001 is voluntary and not a harmonised standard — it does not confer automatic presumption of conformity. But implementing its management system builds the governance infrastructure EU AI Act compliance requires. Certification provides third-party verification of that infrastructure.

Do I need to implement both NIST AI RMF and ISO 42001, or is one sufficient?

Neither is legally required for most growth-stage SaaS companies. Implementing ISO 42001 satisfies most NIST AI RMF guidance through shared structural requirements. For resource-constrained teams, ISO 42001 plus the NIST crosswalk is the most efficient path to multi-framework coverage.

What counts as a high-risk AI system under the EU AI Act?

Annex III defines eight sectors: biometric identification, critical infrastructure, education, employment/workforce management (including recruitment tools), essential services, law enforcement, migration, and administration of justice. AI systems used in these sectors for the specified purposes must meet the full high-risk obligation set.

What are the EU AI Act penalties for non-compliance?

Prohibited AI practices: up to €35M or 7% of global annual turnover. High-risk system violations: up to €15M or 3%. Incorrect information to authorities: up to €7.5M or 1%. For SMEs and start-ups, the lower of the absolute or percentage figure applies.

What is the minimum I need to do before the August 2026 EU AI Act deadline?

Build an AI system inventory first — identify all AI systems in use and classify each as provider, deployer, or both. Classify against the EU AI Act risk tiers. For Annex III high-risk systems, begin risk management documentation and human oversight design. The Article 4 AI literacy obligation has been in effect since February 2025. Each high-risk system needs a named accountable owner before August 2026.

What does it mean to give someone “stop authority” over an AI system?

Stop authority is the formally assigned right of a named individual to pause, halt, or roll back an AI system in production without escalation. EU AI Act Article 14 requires that high-risk systems be designed for effective human oversight, so that a human can intervene in or interrupt operation; deployers must then assign that oversight to people with the competence and authority to use it. Under NIST AI RMF and ISO 42001, stop authority is the operational test of whether governance is real — if no one has explicit halt authority, governance documents are theoretical.
