The EU AI Act's enforcement phase kicks in August 2026, and you're looking at fines ranging from EUR 7.5M to EUR 35M depending on what you did wrong. If you're managing AI deployments, you need to know what actually gets you fined versus what remains a theoretical obligation on paper.
Here’s the thing – there’s a big gap between the maximum fines in the statute and what you’ll actually cop. Understanding enforcement reality helps you figure out where to spend your compliance budget versus where the actual penalty risk sits. This guide is part of our comprehensive EU AI Act implementation landscape, where we explore the broader regulatory compliance challenges CTOs face.
In this article we’re going to cover AI Office versus national authority jurisdiction, how they calculate penalties, and the mitigation strategies you can use. You’ll understand the enforcement discretion factors – AI Pact participation, cooperation, self-reporting, and serious incident response. And you’ll know which authority to contact so you don’t waste time as we approach August 2026.
What are the penalties for non-compliance with the EU AI Act?
The EU AI Act sets up three penalty tiers under Article 99. Tier 1 hits you with EUR 35M or 7% of global turnover for prohibited AI practices. Tier 2 brings EUR 15M or 3% for high-risk system non-conformity, with matching caps for GPAI providers under Article 101. Tier 3 lands at EUR 7.5M or 1% for providing incorrect information to authorities. These penalties apply whether enforcement comes from the AI Office (GPAI jurisdiction) or from national market surveillance authorities (high-risk systems). If you're deploying employment AI or other Annex III systems, understanding the highest penalties for high-risk violations is critical to calibrating your compliance investment.
The fines are calibrated to the severity of the violation and the size of your organisation – the applicable cap is whichever is higher: the fixed amount or the revenue percentage. SMEs get relief here: for them the cap flips to whichever of the two amounts is lower, subject to member state penalty rules.
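To see how the "whichever is higher" rule plays out, here's a minimal Python sketch using the tier caps from Article 99 as summarised above. The turnover figure is hypothetical, and the SME flip to "whichever is lower" reflects our reading of Article 99(6), not official guidance:

```python
# Illustrative only: statutory *maximums*, not what authorities
# actually impose once the discretion factors are applied.

TIERS = {
    "prohibited_practice": (35_000_000, 0.07),  # Tier 1: Article 5 violations
    "high_risk_or_gpai":   (15_000_000, 0.03),  # Tier 2: obligation non-conformity
    "incorrect_info":      (7_500_000,  0.01),  # Tier 3: false info to authorities
}

def max_fine(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Statutory cap: the higher of the fixed amount or the turnover
    percentage -- flipped to the *lower* of the two for SMEs, per our
    reading of Article 99(6)."""
    fixed_cap, pct = TIERS[tier]
    turnover_cap = pct * global_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Hypothetical company with EUR 2B global turnover deploying a prohibited system:
print(f"EUR {max_fine('prohibited_practice', 2_000_000_000):,.0f}")  # EUR 140,000,000
```

Note how the percentage dominates for large companies: at EUR 2B turnover, the 7% cap is four times the fixed amount.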
The penalties are meant to scare you. If you look at GDPR enforcement patterns, actual fines typically land well below statutory maximums. The enforcement discretion methodology in Article 99 creates a range between the theoretical maximum and what you'll realistically face.
Member States had to lay down penalty rules by August 2, 2025. For GPAI model providers, penalties are postponed until August 2, 2026, lining up with enforcement powers for GPAI models.
What is the penalty calculation methodology under the EU AI Act?
Article 99 requires authorities to consider ten factors when they’re working out your fine. Nature, gravity, and duration of the infringement form the baseline. Number of affected people and damage level tell them how much harm was done. Cooperation with authorities, self-reporting, and prompt remedial action are the things that can reduce your penalty. AI Pact participation explicitly reduces penalties as documented good-faith effort. Previous violations increase what you’ll cop next time.
The methodology creates incentives for getting ahead of compliance. Any actions you take to mitigate effects reduce your exposure. Your size, annual turnover, and market share all influence the calculation. Any financial benefit gained, or loss avoided, through the infringement factors in too.
Here’s what matters: cooperation rewards transparency during investigations rather than obstruction. Self-reporting lets you disclose violations before they discover them, reducing fines. How quickly you take remedial action demonstrates you’re taking incident response seriously. AI Pact participation creates a documented evidence trail of compliance intent before enforcement deadlines hit.
For GPAI model providers, the Commission imposes fines. For Union bodies, the European Data Protection Supervisor handles it. For everyone else, penalty amounts depend on national legal systems.
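Article 99 doesn't attach weights to these factors, so nothing here can predict a fine. What you can do is keep an auditable record of the mitigating factors you control. A rough sketch – the field names and summary logic are entirely our own illustrative invention, not statutory terminology:

```python
from dataclasses import dataclass, fields

@dataclass
class MitigationPosture:
    """Article 99 discretion factors a provider or deployer can influence.
    Illustrative structure only -- not an official assessment format."""
    ai_pact_participant: bool = False   # documented good-faith effort
    self_reported: bool = False         # disclosed before authority discovery
    cooperated_fully: bool = False      # prompt, transparent responses
    prompt_remediation: bool = False    # corrective action taken quickly
    prior_violations: bool = False      # aggravating, not mitigating

    def summary(self) -> str:
        mitigating = [f.name for f in fields(self)
                      if f.name != "prior_violations" and getattr(self, f.name)]
        note = "AGGRAVATED by prior violations. " if self.prior_violations else ""
        return note + f"Mitigating factors documented: {mitigating or 'none'}"

posture = MitigationPosture(ai_pact_participant=True, self_reported=True)
print(posture.summary())
```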
What triggers penalties under the EU AI Act?
Penalties get triggered by deploying prohibited AI practices, high-risk system non-conformity, GPAI obligation violations, or providing false information to authorities. The common triggers you need to watch for include unregistered high-risk systems in the EU database, missing conformity assessments, late serious incident reports, and transparency requirement failures. Enforcement investigations start from authority audits, individual complaints, serious incident reports, or cross-border coordination.
Prohibited practices under Article 5 trigger the highest penalty tier regardless of what mitigating factors you have. These include cognitive behavioural manipulation designed to exploit vulnerabilities, social scoring systems evaluating social behaviour, and biometric categorisation inferring sensitive characteristics like sexual orientation or political opinions. Deploy any of these and you’re looking at penalties up to EUR 35M or 7% of global turnover.
If you put a high-risk system on the market without CE marking, that’s an automatic violation. Authorities can detect your missing EU database registration through cross-referencing and marketplace monitoring.
Serious incident reporting failures often get discovered when the harm becomes public before you’ve notified authorities. GPAI transparency violations get identified through Code of Practice adherence audits by the AI Office.
Any person with grounds can file infringement reports with MSAs. When your high-risk system isn’t in conformity, you must immediately inform relevant actors and take corrective actions.
What is the difference between AI Office and national market surveillance authority enforcement?
The European AI Office holds exclusive jurisdiction over general-purpose AI models – that includes foundation models and systemic risk systems. National market surveillance authorities enforce the rules for high-risk AI systems, prohibited practices, and all the non-GPAI obligations within their Member State. The AI Office coordinates cross-border GPAI enforcement while MSAs handle localised high-risk system compliance. Understanding this jurisdictional split is critical to navigating the regulatory compliance overview effectively. If you’re using foundation models, understanding the AI Office exclusive GPAI jurisdiction is essential for determining your provider or deployer obligations. For GPAI questions, contact the AI Act Service Desk at [email protected]. For high-risk system registration and compliance, you need to contact your national MSA Single Point of Contact.
The jurisdiction boundaries are there to prevent regulatory forum shopping and make sure the right authority is overseeing things. GPAI enforcement is centralised at EU level because of the cross-border nature of models and their systemic impact. High-risk system enforcement is delegated to MSAs who know local market conditions and sector-specific requirements.
For systems combining both a GPAI model and a high-risk application, both authorities may have jurisdiction over their respective bits. Coordination happens through the European AI Board. Understanding how coordinated enforcement between AI Office and DPAs works can help you integrate your compliance efforts and reduce enforcement risk through unified programs.
The AI Act Service Desk is an accessible information hub offering clear guidance on how the AI Act applies. The Single Information Platform provides online interactive tools to help you work out your legal obligations. Sending your inquiry to the wrong place wastes compliance preparation time as we approach the August 2026 deadlines.
What are serious incident reporting obligations under the EU AI Act?
High-risk AI providers must report serious incidents to national MSAs under Article 73 when systems cause death, health impacts, fundamental rights violations, property damage, or environmental harm. GPAI providers with systemic risk designation report incidents to the AI Office under Article 55. Reports are due immediately after you establish a causal link, and in any case within 15 days of becoming aware (shorter windows apply for deaths and critical-infrastructure incidents), using Commission-provided templates. Late reporting increases your penalty exposure. Timely reporting demonstrates you're taking incident response seriously and reduces enforcement discretion risk.
The serious incident definition covers technical failures causing real-world harm, not just system malfunctions. Reporting triggers enforcement investigations, but it also shows provider vigilance and good-faith compliance.
MSAs share incident reports with Fundamental Rights Protection Authorities when rights violations are involved. Non-reporting gets discovered when harm becomes public through media coverage, lawsuits, or regulatory inquiries.
The Code of Practice defines minimum standards for information to be provided in reporting, with staggered timelines for varying severity.
Article 20 corrective action requirements mean providers must immediately inform actors and take remedial measures. Documentation of your incident response actions provides evidence for penalty calculation discretion factors.
Reports must include incident description, affected persons, damage assessment, corrective actions taken, and timeline. Report immediately when you discover it. Delayed reporting reduces enforcement discretion benefits and cooperation factor benefits in penalty calculation.
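If you want to standardise those fields internally, something like the following works as a starting point. This is a hypothetical structure mirroring the fields listed above, not the Commission's official template:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SeriousIncidentReport:
    """Hypothetical internal record mirroring the report fields named
    above -- NOT the Commission's official reporting template."""
    incident_description: str
    affected_persons: int
    damage_assessment: str           # death, health, rights, property, environment
    corrective_actions: list[str]
    incident_discovered_at: datetime
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def reporting_lag_days(self) -> float:
        """Days between discovery and report -- the number authorities
        look at when weighing the cooperation factor."""
        return (self.reported_at - self.incident_discovered_at).total_seconds() / 86400
```

Whatever shape your internal record takes, the point is an auditable trail of when you discovered the incident versus when you reported it.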
How does the AI Pact affect enforcement and penalties?
The AI Pact is a voluntary compliance initiative that lets you commit to AI Act obligations before legal deadlines hit. Participation is explicitly considered as a mitigating factor in Article 99 penalty calculations. It demonstrates documented good-faith compliance efforts to authorities. The Pact works as a soft safe harbour during the transition period by signalling a proactive compliance posture. The AI Office will assume signatories are acting in good faith when they're assessing violations.
The pledges aren’t legally binding and don’t impose legal obligations on participants. Companies can sign pledges at any moment until the AI Act fully applies.
The AI Office will account for commitments made when they’re working out fine amounts, though compliance with the Code of Practice doesn’t give you complete immunity from fines.
Participation doesn’t provide immunity from fines but it does reduce penalty amounts when violations occur. Committing to obligations before the August 2026 deadlines shows compliance intent rather than a reactive scramble.
The Pact allows front-runners to test and share solutions with the wider community. It’s strategic positioning for organisations who aren’t sure about their classification or want the enforcement discretion benefits.
How do you prepare for the August 2026 compliance deadlines?
Classify your AI systems using Annex III criteria to work out which ones have high-risk obligations. Register high-risk systems in the EU database before you put them on the market. Complete conformity assessments and get CE marking. Implement serious incident reporting procedures. Consider AI Pact participation to demonstrate good faith. Identify the correct authority – AI Office for GPAI, MSA for high-risk systems – for compliance inquiries.
System classification drives all the compliance obligations and authority jurisdiction that come after. EU database registration is a prerequisite for lawful high-risk system market placement. Non-registration is detectable and penalised through authority cross-referencing.
Conformity assessment timelines vary depending on system complexity and notified body availability. Third-party audits are required for biometric and law enforcement applications. Quality management system requirements are ISO-aligned and form the foundation for ongoing compliance maintenance.
Technical documentation preparation requires cross-functional teams including legal, engineering, product, and compliance. Contact the AI Act Service Desk early for guidance rather than waiting until deadline pressure limits your preparation time.
High-risk AI system obligations under Annex III begin August 2, 2026, along with the Commission's fining powers over GPAI providers; the GPAI obligations themselves have applied since August 2, 2025. The prohibited practices ban started February 2, 2025.
Document your reasoning thoroughly to demonstrate good faith compliance efforts during regulatory review. For GPAI systems, calculate training compute requirements against the 10^25 FLOPs threshold to work out systemic risk status. CE marking must be affixed in a visible, legible, and indelible manner before market placement.
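For the 10^25 FLOPs check, a common back-of-envelope estimate for dense transformer training compute is roughly 6 x parameters x training tokens. That heuristic comes from the scaling-law literature, not from the Act, so treat it as a first-pass screen only:

```python
def training_flops_estimate(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per
    parameter per token. A community heuristic, not the Act's
    official counting rule."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51 presumption threshold

# Hypothetical model: 70B parameters trained on 15T tokens
flops = training_flops_estimate(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {flops >= SYSTEMIC_RISK_THRESHOLD}")
# 6.30e+24 FLOPs -> systemic risk presumed: False
```

A hypothetical 70B-parameter model trained on 15T tokens lands at about 6.3 x 10^24 FLOPs – just under the presumption threshold, which is exactly the regime where you'd want a more careful count and documented reasoning.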
Understanding enforcement priorities is critical, but it’s just one piece of the compliance puzzle. For a complete overview of all EU AI Act implementation challenges and decision points, see our EU AI Act enforcement guide.
FAQ Section
Can participating in the AI Pact reduce my fines?
Yes. AI Pact participation is explicitly considered as a mitigating factor in penalty calculations. While it doesn’t give you immunity, it demonstrates a proactive compliance posture that can reduce penalty amounts. See the “How does the AI Pact affect enforcement and penalties?” section above for the full details.
What happens if I report a serious incident late?
Late serious incident reporting increases your penalty exposure through two mechanisms: it demonstrates failure to meet Article 73/55 obligations, potentially triggering separate penalties, and it reduces the enforcement discretion benefits of timely reporting in penalty calculation methodology. Authorities view late reporting as reactive rather than responsible incident response, which reduces cooperation factor benefits.
Do I contact the AI Office or my national authority for high-risk systems?
Contact your national market surveillance authority’s Single Point of Contact for high-risk system registration, conformity assessments, and compliance questions. The AI Office handles only GPAI model enforcement. The European Commission will publish a list of Member State Single Points of Contact. Until then, contact the AI Act Service Desk at [email protected] for referral to the appropriate MSA.
Will cooperating with authorities reduce my penalty?
Yes, cooperation is an explicit factor in Article 99 penalty calculation methodology. Authorities consider whether you provided requested information promptly, facilitated investigations, and demonstrated transparency during enforcement proceedings. Obstruction or non-cooperation increases penalty amounts, while cooperation can reduce fines from theoretical maximums.
What’s the maximum fine I could face for AI Act violations?
Maximum penalties depend on the type of violation (see the penalty section above for full details). However, the penalty calculation methodology considers multiple mitigating factors that typically reduce actual fines well below these maximums.
When exactly must I be compliant with the EU AI Act?
High-risk AI system obligations under Annex III begin August 2, 2026 (24 months after the Act entered into force), together with the Commission's fining powers over GPAI providers; the GPAI obligations themselves have applied since August 2, 2025. The prohibited practices ban started February 2, 2025 (six months after entry). Different provisions have staggered timelines. Consult the AI Act Article 113 implementation schedule for specific obligation deadlines.
Where to register high-risk AI systems?
High-risk AI systems must be registered in the EU database for high-risk AI systems before you put them on the market. Under Article 49, providers enter the registration information into the database themselves; the database is managed by the European Commission and is publicly accessible for transparency. Contact your MSA's Single Point of Contact for registration procedures and technical requirements.
Official AI Office contact information?
The AI Act Service Desk is the central contact point: [email protected]. Use this for GPAI compliance questions, general AI Act inquiries, and referral to the appropriate authority. For high-risk system questions, the Service Desk will direct you to the relevant national MSA Single Point of Contact. The AI Office operates within the European Commission’s Directorate-General for Communications Networks, Content and Technology.
How to report a serious AI incident?
High-risk AI providers report serious incidents to their national MSA using a Commission-provided template (see the “What are serious incident reporting obligations” section above for full details). GPAI providers with systemic risk designation report to the AI Office. Reports must include incident description, affected persons, damage assessment, corrective actions taken, and timeline. Report immediately when you discover it. Delayed reporting increases penalty exposure and reduces enforcement discretion benefits.
AI Office jurisdiction vs national market surveillance authority?
The AI Office holds exclusive jurisdiction over general-purpose AI models – that includes foundation models and systemic risk systems. National MSAs enforce high-risk AI system rules, prohibited practices, and all non-GPAI obligations. For systems combining both – a GPAI model deployed as a high-risk application – both authorities may have jurisdiction over their respective aspects. Coordination occurs through the European AI Board. See the “What is the difference between AI Office and national market surveillance authority enforcement?” section above for the comprehensive explanation.
Self-reporting violations vs waiting for enforcement discovery?
Self-report. Disclosing violations before authorities discover them reduces penalty exposure (see the penalty calculation methodology section above). Authorities view self-disclosure as evidence of good-faith compliance efforts and a responsible organisational culture. Waiting for enforcement discovery eliminates this mitigating factor and may suggest you were trying to conceal violations, which increases penalty amounts.
Steps to take if my AI system triggers an enforcement investigation?
Immediately: notify legal counsel with AI Act expertise, preserve all documentation related to system development and compliance efforts, designate a single point of contact for authority communications, assess whether self-reporting violations could mitigate penalties, document cooperation efforts, implement corrective actions per Article 20, and avoid obstruction or providing incorrect information (which is a separate penalty tier). Cooperation and transparency reduce enforcement discretion risk.