Business | SaaS | Technology
Nov 27, 2025

Understanding EU AI Act and Automated Decision-Making Compliance for Tech Products

AUTHOR

James A. Wondrasek

You’re deploying AI in your product. Maybe it’s making hiring recommendations, scoring credit applications, or routing customer support tickets. The EU has rules about this now, and they’re not optional.

This guide is part of our comprehensive tech regulatory compliance overview, focusing specifically on AI-specific requirements that layer onto existing privacy frameworks.

The EU AI Act and GDPR Article 22 create overlapping compliance requirements that can trigger penalties up to €35 million or 7% of global turnover. Get the risk classification wrong and you might be locking yourself out of the EU market entirely.

The EU uses a four-tier risk system that determines everything – what documentation you need, whether you can self-certify, and what happens if you get it wrong. Most importantly, you need to figure out if your AI counts as “solely automated decision-making” under GDPR Article 22, because that’s where the compliance burden kicks in.

This article covers the risk classification system, how to determine where your AI sits, and the DPIA framework you need for high-risk systems. Plus examples – Microsoft Copilot in hiring scenarios, and what happened with Clearview AI’s biometric violations.

How Does GDPR Article 22 Apply to Automated Decision-Making?

Article 22 establishes a qualified prohibition. Data subjects shall not be subject to a decision based solely on automated processing, including profiling, which produces legal effects or similarly significant effects. Understanding Article 22 is fundamental to GDPR compliance for any company deploying AI systems.

“Solely automated” means the entire decision-making process happens without meaningful human intervention. Rubber-stamping an automated output doesn’t qualify. Neither does a human merely implementing the decision. The European Data Protection Board clarifies that human involvement must be substantive and capable of influencing the outcome.

Legal effects are straightforward – automatic contract refusal, denial of government benefits, immigration application rejections, tax assessments. Similarly significant effects are trickier: automatic refusal of online credit applications, AI-driven recruitment screening that excludes candidates, insurance claim denials without human review.

There are three valid exceptions. The decision is necessary for contract performance, authorised by EU or member state law with safeguards, or based on explicit consent with protections. Legitimate interests, implied consent, or standard contractual necessity don’t suffice for Article 22 processing.

For meaningful human involvement, the reviewer needs authority to change the automated decision, access to all relevant data, understanding of the decision logic, and ability to consider additional context. If an AI tool is making hiring recommendations and decisions happen without substantive human review, you’re in Article 22 territory.

Article 35 of GDPR mandates a DPIA for automated decision-making processes covered by Article 22. That’s your baseline compliance requirement before EU AI Act obligations.

What are the Four Risk Categories in the EU AI Act?

The EU categorises AI into four buckets based on potential harm. Your risk level determines regulatory burden, documentation requirements, conformity assessment procedures, and maximum penalties.

The classification hinges on potential impact to fundamental rights, safety, and the legal significance of automated decisions. If your AI makes decisions with legal or similarly significant effects, you’re probably looking at high-risk classification.

Here’s how classification works in practice. When a single AI system serves both high-risk and minimal-risk functions, apply the highest applicable risk classification. You don’t get to cherry-pick the easy category.

You need to classify each AI system separately. A single product might contain multiple AI systems at different risk levels. That SaaS platform you’re building? The productivity scoring module might be high-risk while the email sorting feature is minimal risk.

AI systems get classified as high risk if they’re part of regulated products or listed in Annex III, which covers biometric identification, critical infrastructure, credit scoring, border control, and more. If you’re uncertain about classification, treat it as high-risk. Better to over-comply than face enforcement.

Unacceptable risk systems are prohibited. Deploying prohibited AI can result in penalties up to €35 million or 7% of global turnover – the Act’s highest penalty tier.

Prohibited practices include cognitive behavioural manipulation, social scoring systems, and biometric categorisation inferring sensitive characteristics like sexual orientation or political opinions. Real-time remote biometric identification in public spaces falls here too, with narrow law enforcement exceptions.

High-risk systems include safety components in regulated products and applications listed in Annex III. These need extensive conformity assessments before market entry, plus technical documentation, EU database registration, risk assessments, human oversight protocols, and ongoing monitoring.

The Annex III categories cover biometric identification, critical infrastructure management, education and vocational training access, employment decisions, access to essential services, law enforcement, migration and asylum, and administration of justice. If your AI makes decisions in these areas, you’re high-risk.

Limited risk applies to most generative AI. Systems like large language models must inform users they’re interacting with AI. Unlike high-risk applications, you don’t need conformity assessment or EU database registration – just user awareness and responsible deployment.

General-purpose AI models exceeding systemic risk thresholds (10^25 FLOPs) face extra obligations, including model evaluation and incident reporting. If you’re training foundation models at that scale, you know who you are.

Minimal risk includes AI-enabled video games, spam filters, and basic recommendation systems. No specific regulatory obligations beyond general product safety laws.

The penalty structure scales with risk. Non-compliance with high-risk obligations carries fines up to €15 million or 3% of annual worldwide turnover. Providing incorrect information to authorities triggers penalties up to €7.5 million or 1%.
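To make the classification logic above concrete, here’s a minimal Python sketch. It assumes a hypothetical internal inventory of the AI systems in your product and applies the rules described in this section: Annex III use cases are treated as high-risk, uncertain cases default to high-risk, a system serving several functions gets the highest applicable tier, and each system in a product is classified separately. The category labels and function names are illustrative, not taken from the Act’s legal text.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """Four-tier EU AI Act system; ordered so max() picks the strictest tier."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Illustrative subset of Annex III areas mentioned in this article.
ANNEX_III_AREAS = {
    "biometric_identification", "critical_infrastructure", "education_access",
    "employment_decisions", "essential_services", "credit_scoring",
    "law_enforcement", "migration_asylum", "administration_of_justice",
}

# Illustrative subset of prohibited practices.
PROHIBITED_PRACTICES = {
    "social_scoring", "cognitive_behavioural_manipulation",
    "realtime_remote_biometric_id_public_spaces",
}

def classify_system(purpose_areas: set[str], uncertain: bool = False) -> RiskTier:
    """A single system serving several functions gets the highest applicable tier."""
    if purpose_areas & PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if purpose_areas & ANNEX_III_AREAS or uncertain:
        return RiskTier.HIGH          # when in doubt, over-comply
    if "user_facing_generative_ai" in purpose_areas:
        return RiskTier.LIMITED       # transparency obligations only
    return RiskTier.MINIMAL

# Classify each system in a product separately: the productivity-scoring module
# lands in high-risk while the email-sorting feature stays minimal risk.
inventory = {
    "productivity_scoring": {"employment_decisions"},
    "email_sorting": {"internal_routing"},
}
for name, areas in inventory.items():
    print(name, classify_system(areas).name)
# productivity_scoring HIGH
# email_sorting MINIMAL
```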

What is a Data Protection Impact Assessment (DPIA) for AI Systems?

A DPIA is a systematic evaluation required under GDPR Article 35 for processing likely to result in high risk to individual rights and freedoms. If you’re doing automated decision-making with legal or significant effects, you need one.

The template includes four required elements. Describe the processing operations and purposes. Assess necessity and proportionality. Evaluate risks to rights and freedoms. Document mitigation measures.

For AI systems specifically, you need to document data minimisation strategy, bias detection procedures, fairness testing methodology, security controls, retention policies, and data subject rights implementation. Key risk factors include accuracy, bias and discrimination, transparency, data quality, statistical procedures, and security.
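To make those documentation requirements concrete, here’s a minimal sketch of how a team might structure a DPIA record internally. The field names mirror the elements listed above; GDPR mandates what the assessment must contain, not this format, so treat the structure as one possible internal convention.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DPIARecord:
    # Article 35's four required elements
    processing_description: str              # operations and purposes
    necessity_and_proportionality: str
    risks_to_rights_and_freedoms: list[str]
    mitigation_measures: list[str]

    # AI-specific documentation called out in this article
    data_minimisation_strategy: str = ""
    bias_detection_procedures: str = ""
    fairness_testing_methodology: str = ""
    security_controls: str = ""
    retention_policy: str = ""
    data_subject_rights_implementation: str = ""

    # Consultation and review tracking
    dpo_consulted: bool = False
    supervisory_authority_consulted: bool = False    # needed if residual risk stays high
    last_reviewed: date = field(default_factory=date.today)
    review_triggers: list[str] = field(default_factory=lambda: [
        "model update", "new data source", "change in processing activities",
    ])

def needs_review(record: DPIARecord, change: str) -> bool:
    """A DPIA is a living document: flag it for review on qualifying changes."""
    return change in record.review_triggers
```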

You must consult your Data Protection Officer if one is appointed, and the supervisory authority if residual risks remain high after mitigation. That supervisory authority consultation isn’t optional when you’re dealing with novel technologies, high residual risk, large-scale sensitive data processing, or vulnerable populations.

This is a living document. DPIAs must be regularly reviewed, particularly when algorithms or data sources undergo changes. Every model update, every new data source integration, every change in processing activities – these trigger DPIA reviews.

Failure to conduct a required DPIA can trigger fines up to €10 million or 2% of global turnover under GDPR. That’s before EU AI Act penalties.

The DPIA focuses on data protection. But high-risk AI systems under the EU AI Act also need a Fundamental Rights Impact Assessment addressing broader concerns – non-discrimination, freedom of expression, due process, human dignity. The scopes overlap but they’re distinct assessments. For detailed implementation guidance on conducting DPIAs, see our comprehensive compliance programme guide.

How Do I Determine if My AI System is High-Risk Under the EU AI Act?

Start with Annex III categories – if your AI system falls within listed use cases, it’s presumptively high-risk requiring conformity assessment.

Employment decisions? That’s CV scanning, interview evaluation, performance monitoring, promotion decisions, task allocation algorithms. Credit scoring and creditworthiness? Automated loan approvals, credit limit determinations, interest rate calculations, payment plan assignments.

Education access includes university admissions, scholarship allocations, exam proctoring with decision-making capability, student performance predictions affecting opportunities. Essential services covers utility provision, social benefits, emergency services dispatch, healthcare resource allocation.

Use a two-step analysis. Does your system fall in an Annex III category? And is it likely to cause fundamental rights harm or safety risks? Both questions need to be yes for high-risk classification.

Purpose matters more than technology. The same AI used for high-risk hiring decisions versus minimal-risk internal task suggestions gets classified differently based on application context.

Here’s what this looks like in practice. Your SaaS collaboration platform with productivity scoring that affects employment decisions? Potentially high-risk. Project management tools making suggestions? Likely minimal.

FinTech automated underwriting? High-risk. Fraud detection with human review? Depends on decision authority. Budgeting recommendations? Minimal.

HealthTech diagnostic decision support influencing treatment? High-risk. Symptom checkers requiring doctor consultation? Limited risk. Appointment scheduling? Minimal.

EdTech automated grading affecting student advancement? High-risk. Personalised learning paths? Limited or minimal depending on implementation. Administrative scheduling? Minimal.
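These examples boil down to a purpose-driven lookup, as sketched below: the same underlying model gets classified differently depending on the use case it’s deployed for. The mapping is this article’s reading of Annex III, not an official taxonomy, and the use-case labels are invented for illustration.

```python
# Illustrative mapping from deployment purpose to risk tier, based on the
# examples above. Purpose matters more than the underlying technology.
USE_CASE_RISK = {
    # SaaS
    "productivity_scoring_affecting_employment": "high",
    "project_management_suggestions": "minimal",
    # FinTech
    "automated_underwriting": "high",
    "fraud_detection_with_human_review": "depends_on_decision_authority",
    "budgeting_recommendations": "minimal",
    # HealthTech
    "diagnostic_decision_support": "high",
    "symptom_checker_requiring_doctor": "limited",
    "appointment_scheduling": "minimal",
    # EdTech
    "automated_grading_affecting_advancement": "high",
    "personalised_learning_paths": "limited_or_minimal",
    "administrative_scheduling": "minimal",
}

def classify_use_case(purpose: str) -> str:
    # Unknown or ambiguous purposes default to high-risk: better to over-comply.
    return USE_CASE_RISK.get(purpose, "high")

print(classify_use_case("automated_underwriting"))    # high
print(classify_use_case("novel_unlisted_use_case"))   # high (default)
```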

If your system is offered in the EU market, you must classify regardless of company location due to extraterritorial application. US-based companies don’t get a pass.

What Technical Documentation is Required for EU AI Act Compliance?

A technical file is required for high-risk AI systems before market placement, and it must be maintained for 10 years after the last product unit is placed on the market.

General description covers intended purpose, AI system design and architecture, development process and methodology, versions and updates. Risk management documentation includes risk assessment procedures, known limitations, foreseeable misuse scenarios, mitigation measures implemented.

Data governance requires detailed documentation. You need training, validation, and testing dataset descriptions, data sources and collection methods, bias detection and correction procedures, data quality metrics.

Model documentation includes algorithms and techniques used, key design choices and assumptions, performance metrics across demographic groups, validation and testing results. That “across demographic groups” bit is non-negotiable – you can’t just report aggregate performance.
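As one way to satisfy the “across demographic groups” requirement, the sketch below computes accuracy and selection rate per group plus a simple disparity ratio. The group labels are placeholders, and any disparity threshold you apply (the common four-fifths rule of thumb, for instance) is your own testing choice, not a figure set by the AI Act.

```python
from collections import defaultdict

def per_group_metrics(records):
    """records: iterable of (group, predicted_positive: bool, actually_positive: bool)."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "selected": 0})
    for group, predicted, actual in records:
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(predicted == actual)
        s["selected"] += int(predicted)
    return {
        g: {"accuracy": s["correct"] / s["n"], "selection_rate": s["selected"] / s["n"]}
        for g, s in stats.items()
    }

def selection_rate_disparity(metrics):
    """Ratio of lowest to highest selection rate across groups (1.0 = parity)."""
    rates = [m["selection_rate"] for m in metrics.values()]
    return min(rates) / max(rates) if max(rates) else 1.0

records = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, False),
    ("group_b", False, True), ("group_b", False, False), ("group_b", True, True),
]
metrics = per_group_metrics(records)
print(metrics)
print("disparity ratio:", round(selection_rate_disparity(metrics), 2))
# Document these figures per group in the technical file, not just the aggregate.
```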

Human oversight design documentation needs capabilities and limitations disclosed to users, technical measures for human intervention, escalation procedures. If you’re claiming human-in-the-loop compliance, the technical implementation needs to prove it.

Conformity assessment records include assessment body identification, certificates issued, test reports, compliance declarations. Quality management system documentation covers compliance monitoring procedures, incident response protocols, post-market monitoring plans, corrective action processes.

You need to demonstrate compliance with transparency requirements, accuracy benchmarks, cybersecurity measures, and logging capabilities. Supervisory authorities will audit these claims, so your documentation must demonstrate actual compliance, not aspirational goals.

Version control matters. When you update models, you need to document whether changes trigger new conformity assessment requirements. Managing deployed system variations across customers requires systematic tracking.
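A minimal sketch of how a team might flag releases for reassessment follows. The triggering conditions are assumptions for illustration (a changed intended purpose, new training data categories, a material performance shift); the actual legal test for what counts as a substantial modification is for your compliance counsel to define.

```python
from dataclasses import dataclass

@dataclass
class ModelRelease:
    version: str
    intended_purpose: str
    training_data_categories: frozenset
    accuracy: float

def may_need_new_conformity_assessment(old: ModelRelease, new: ModelRelease,
                                        accuracy_tolerance: float = 0.02) -> bool:
    """Illustrative triggers only; the legal definition of 'substantial
    modification' should come from counsel, not this function."""
    return (
        new.intended_purpose != old.intended_purpose
        or new.training_data_categories != old.training_data_categories
        or abs(new.accuracy - old.accuracy) > accuracy_tolerance
    )

v1 = ModelRelease("1.0", "cv_screening", frozenset({"cv_text"}), 0.91)
v2 = ModelRelease("1.1", "cv_screening", frozenset({"cv_text", "video_interview"}), 0.93)
print(may_need_new_conformity_assessment(v1, v2))   # True: new data category
```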

There’s a balance between transparency obligations and trade secret protection. You can legitimately withhold some information from public disclosure, but supervisory authorities get broader access during investigations.

When you’re using vendor AI services, documentation responsibilities split between provider and deployer. Make sure your contracts specify who documents what, or you’ll be scrambling during audits.

What Human Oversight Measures are Required for High-Risk AI?

The EU AI Act mandates human oversight capability for all high-risk systems to prevent or minimise risks to health, safety, and fundamental rights.

Meaningful human involvement requires the same standard explained in the Article 22 section: the reviewer must have authority to change the decision, access to all relevant data, understanding of the decision logic, and the ability to consider additional context. Rubber-stamping doesn’t count. Automatic approval doesn’t count. Review without authority to override doesn’t count.

There are three implementation models. Human-in-the-loop requires intervention in each decision cycle before implementation – the most stringent option for high-stakes decisions. Human-on-the-loop means humans monitor and can intervene during operation with capacity to override in real-time. Human-in-command provides oversight of overall system activity with ability to interrupt or shut down.

Technical implementation includes override mechanisms, decision explanation interfaces, confidence threshold alerts, escalation workflows, audit logging of human interventions. Your UI needs to support informed human review, not just binary approve/reject buttons.
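Here’s a minimal sketch of what an override and audit-logging mechanism could look like in code. It assumes a hypothetical hiring-recommendation flow; the point is that a consequential decision can’t be finalised without a logged human review recording who reviewed it, when, which factors they considered, and whether they overrode the AI.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIRecommendation:
    subject_id: str
    outcome: str            # e.g. "reject_candidate"
    confidence: float

@dataclass
class HumanReview:
    reviewer_id: str
    has_override_authority: bool
    factors_considered: list[str]
    final_outcome: str
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

AUDIT_LOG: list[dict] = []
CONFIDENCE_ALERT_THRESHOLD = 0.6   # illustrative: low-confidence outputs get flagged

def finalise_decision(rec: AIRecommendation, review: HumanReview | None) -> str:
    # Gate: no consequential outcome without a reviewer who can change it.
    if review is None or not review.has_override_authority:
        raise PermissionError("Consequential decision requires review by someone "
                              "with authority to change the outcome")
    AUDIT_LOG.append({
        "subject": rec.subject_id,
        "ai_outcome": rec.outcome,
        "ai_confidence": rec.confidence,
        "low_confidence_alert": rec.confidence < CONFIDENCE_ALERT_THRESHOLD,
        "reviewer": review.reviewer_id,
        "factors": review.factors_considered,
        "final_outcome": review.final_outcome,
        "overridden": review.final_outcome != rec.outcome,
        "timestamp": review.reviewed_at.isoformat(),
    })
    return review.final_outcome

rec = AIRecommendation("candidate-42", "reject_candidate", confidence=0.55)
review = HumanReview("reviewer-7", True,
                     ["portfolio quality", "referee feedback"], "progress_candidate")
print(finalise_decision(rec, review))   # progress_candidate, override logged
```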

Organisational measures matter as much as technical ones. Human reviewers must understand algorithmic logic, have clear decision authority, adequate review time, protection from override pressure.

That last bit about override pressure means if your system design or operational metrics incentivise rubber-stamping AI outputs, you’re not compliant. Performance reviews can’t penalise legitimate overrides of automated decisions.

Article 22(3) guarantees data subjects the right to obtain human intervention, express views regarding the decision, and contest automated decisions. You need clear, accessible procedures for requesting human intervention with timely responses.

When deploying AI systems for workplace decisions, the technical architecture needs to enforce human review for consequential outcomes. Design override mechanisms that log who reviewed what, when, and what factors they considered beyond the AI recommendation.

Track override rates and analyse intervention patterns. If humans override the AI 2% of the time or 98% of the time, something’s wrong with either the AI or the oversight process.
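Building on the audit-log shape sketched above, a rough way to monitor intervention patterns is to compute the override rate over a review window and flag rates near 0% or 100%. The 2% and 98% bounds mirror the example in this article and are illustrative, not regulatory thresholds.

```python
def override_rate(audit_log: list[dict]) -> float:
    if not audit_log:
        return 0.0
    return sum(entry["overridden"] for entry in audit_log) / len(audit_log)

def oversight_health_check(audit_log: list[dict],
                           low: float = 0.02, high: float = 0.98) -> str:
    rate = override_rate(audit_log)
    if rate <= low:
        return f"override rate {rate:.1%}: possible rubber-stamping, review the oversight process"
    if rate >= high:
        return f"override rate {rate:.1%}: AI output rarely accepted, review the model"
    return f"override rate {rate:.1%}: within expected range"

# Tiny worked example: 1 override in 50 reviews trips the rubber-stamping flag.
sample_log = [{"overridden": False}] * 49 + [{"overridden": True}]
print(oversight_health_check(sample_log))
```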

FAQ Section

What happens if I don’t comply with the EU AI Act?

Maximum penalties reach €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI use, €15 million or 3% for high-risk non-compliance, and €7.5 million or 1% for incorrect information provided to authorities. Supervisory authorities can also impose market access bans, product recalls, and operational restrictions.

Do I need to comply with the EU AI Act if my company is based in the US?

Yes. The EU AI Act has extraterritorial reach similar to GDPR, applying to providers placing AI systems on the EU market and deployers using AI systems that affect persons in the EU. Any provider or deployer in a third country must comply if the output produced by the AI system is intended to be used in the EU.

Can I use Microsoft Copilot without violating GDPR Article 22?

Depends on deployment context. If any AI tool makes solely automated decisions with legal or similarly significant effects like hiring, termination, or performance ratings affecting compensation, Article 22 applies. This requires a valid exception and safeguards including meaningful human review, right to explanation, and right to contest. The key question is whether the AI is making decisions without substantive human involvement. For more context on the Microsoft ACCC lawsuit and AI product enforcement, see our analysis of Australian regulatory actions.

What’s the difference between a DPIA and a Fundamental Rights Impact Assessment?

DPIA (GDPR Article 35) focuses on data protection risks, privacy impacts, and security safeguards. FRIA (EU AI Act) addresses broader fundamental rights including non-discrimination, freedom of expression, due process, and human dignity. High-risk AI systems often require both assessments with overlapping but distinct scopes.

When do I need to complete my AI Act conformity assessment?

Depends on Annex III category – high-risk AI systems must complete conformity assessment before market placement. Enforcement is phased: the Act entered into force in August 2024, prohibitions on unacceptable-risk AI have applied since February 2025, high-risk systems must comply by August 2026 or 2027 depending on category, and limited-risk transparency obligations apply from August 2026.

Is my SaaS chatbot considered high-risk under the EU AI Act?

Likely not high-risk unless it makes decisions in Annex III categories like hiring, credit, education access, or essential services. Most chatbots fall under limited risk requiring transparency disclosures that users are interacting with AI, or minimal risk with no specific obligations if purely informational.

How do I implement the right to explanation for my AI system?

Provide clear, accessible information about automated decision-making logic, significance, and consequences. Balance trade secret protection with transparency by explaining general methodology, factors considered, weighting approaches, and decision criteria without disclosing proprietary algorithms. Use plain language avoiding technical jargon.
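One hedged sketch of how such an explanation might be assembled: list the main factors and their relative influence in plain language without exposing model weights or code. The factor names, weights, and wording below are invented for illustration; nothing here is a prescribed format.

```python
def explain_decision(factors: dict[str, float], outcome: str) -> str:
    """factors: factor name -> relative influence (roughly summing to 1.0)."""
    ranked = sorted(factors.items(), key=lambda kv: kv[1], reverse=True)
    lines = [f"Outcome: {outcome}.",
             "The main factors considered, from most to least influential:"]
    for name, weight in ranked:
        lines.append(f"  - {name} (about {weight:.0%} of the assessment)")
    lines.append("You can request human review of this decision or contest it.")
    return "\n".join(lines)

print(explain_decision(
    {"repayment history": 0.45, "current income": 0.35, "existing debt": 0.20},
    outcome="credit application declined",
))
```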

Can I self-certify my AI system or do I need third-party assessment?

Depends on Annex III category and conformity assessment procedure specified. Some high-risk systems permit internal conformity assessment based on internal testing and quality management, others require notified body involvement for third-party verification. Biometric identification always requires notified body assessment.

What documentation do I need to prove EU AI Act compliance?

Technical file including system description, risk management documentation, training data governance, model documentation, human oversight design, conformity assessment records, and quality management system procedures. Must be maintained for 10 years after last system unit placed on market.

Should I do a DPIA for my AI hiring tool?

Yes. Automated hiring decisions fall under both GDPR Article 35 (high-risk processing requiring DPIA) and EU AI Act Annex III (high-risk employment category requiring risk assessment). DPIA documents data protection measures, bias prevention, fairness testing, and safeguards required for compliance. Our guide on building a compliance programme includes detailed DPIA templates and implementation processes.

How long does it take to conduct a DPIA for an AI system?

Simple systems with good existing documentation: 2-4 weeks. Complex systems with novel processing, bias testing requirements, or supervisory authority consultation: 2-4 months. Ongoing maintenance required as system evolves.

Is facial recognition AI prohibited under the EU AI Act?

Real-time remote biometric identification in publicly accessible spaces is prohibited except for narrow law enforcement exceptions like missing children, imminent threats, or serious crime suspects. Post-event biometric identification and workplace or private property facial recognition fall under high-risk category requiring strict compliance. The Clearview AI case demonstrates criminal GDPR enforcement for facial recognition violations, showing how serious these breaches can become.


AI-specific compliance sits within a broader regulatory landscape. For a complete overview of how the EU AI Act fits alongside GDPR, CCPA, and Australian Privacy Act requirements, consult our comprehensive regulatory compliance guide that addresses the full compliance journey from framework selection through audit preparation.
