INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7

EU AI Act Fines and Penalties: Complete Enforcement Guide

The definitive guide to EU AI Act enforcement. Penalty tiers, enforcement timelines, responsible authorities, first enforcement actions, GDPR comparison, compliance costs, SME exceptions, and how to report violations. Essential reading for any organization deploying AI in Europe.

The EU AI Act (Regulation 2024/1689) is the most consequential piece of AI legislation in the world. It is also the most complex. This guide focuses on the aspect that will ultimately determine whether the law succeeds or fails: enforcement.

Laws without enforcement are suggestions. The EU learned this lesson with GDPR, which spent its first years as a paper tiger before major enforcement actions against Meta, Amazon, and Google, culminating in Meta's record EUR 1.2 billion fine in 2023, demonstrated that the regulation had real teeth. The AI Act faces the same challenge, compressed into a shorter timeline and applied to a technology that is evolving faster than any regulatory framework in history.

This guide covers every dimension of EU AI Act enforcement: the penalty structure, the enforcement timeline, the responsible authorities, the first enforcement actions, what compliance costs in practice, and how to report violations. It is written for compliance officers, legal teams, AI developers, and anyone who needs to understand what the law actually requires and what happens when those requirements are not met.

For the broader global regulatory picture, see our AI Regulation Tracker. For AI terminology used in this guide, see our AI Glossary.


Penalty Structure

The EU AI Act establishes three tiers of administrative fines, scaled by the severity of the violation. As with GDPR, fines are calculated based on the higher of a fixed amount or a percentage of the company’s total worldwide annual turnover.

Tier 1: Prohibited Practices — EUR 35 Million / 7% of Global Turnover

Dimension | Detail
Violations Covered | Deploying prohibited AI practices
Fixed Maximum | EUR 35,000,000
Turnover Maximum | 7% of worldwide annual turnover (preceding financial year)
Applicable Fine | Whichever is higher
Effective Date | February 2, 2025 (prohibition); penalty provisions enforceable from August 2, 2025

Prohibited practices include:

  • Social scoring by public authorities (or on their behalf)
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with limited exceptions)
  • AI systems that deploy subliminal techniques to materially distort behavior
  • AI systems that exploit vulnerabilities of specific groups (age, disability, social/economic situation)
  • Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
  • Emotion recognition in workplaces and educational institutions (with exceptions for medical/safety purposes)
  • Biometric categorization to infer sensitive attributes (race, political opinions, trade union membership, religious beliefs, sex life, sexual orientation)

What 7% means in practice:

Company | 2024 Revenue (approx.) | 7% Maximum Fine
Apple | $383 billion | $26.8 billion
Microsoft | $245 billion | $17.2 billion
Alphabet/Google | $340 billion | $23.8 billion
Meta | $161 billion | $11.3 billion
Amazon | $638 billion | $44.7 billion
NVIDIA | $130 billion | $9.1 billion
Samsung | $215 billion | $15.1 billion

These figures are theoretical maximums. Actual fines will be far lower. But the mere existence of 7%-of-revenue penalties creates a compliance incentive that no AI company can ignore.
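The "whichever is higher" rule is simple arithmetic, but it is worth seeing why it bites large and small companies differently. A minimal sketch (the function name and figures are illustrative, not from the regulation's text):

```python
def max_fine_tier1(annual_turnover_eur: float) -> float:
    """Tier 1 cap: the higher of EUR 35M or 7% of worldwide annual turnover."""
    FIXED_CAP = 35_000_000
    turnover_cap = 0.07 * annual_turnover_eur
    return max(FIXED_CAP, turnover_cap)

# A company with EUR 300 billion in turnover is bound by the 7% prong (~EUR 21 billion)
print(max_fine_tier1(300e9))

# A firm with EUR 100 million in turnover is bound by the fixed EUR 35M prong,
# since 7% of its turnover is only EUR 7 million
print(max_fine_tier1(100e6))
```

For any company with worldwide turnover above EUR 500 million, the percentage prong dominates, which is why the table above matters mainly for the largest providers.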

Tier 2: High-Risk and GPAI Violations — EUR 15 Million / 3% of Global Turnover

Dimension | Detail
Violations Covered | Non-compliance with high-risk AI system requirements; GPAI model obligations
Fixed Maximum | EUR 15,000,000
Turnover Maximum | 3% of worldwide annual turnover
Applicable Fine | Whichever is higher
Effective Date | August 2, 2025 (GPAI); August 2, 2026 (high-risk)

This tier covers violations of:

  • Risk management system requirements (Article 9)
  • Data governance and management obligations (Article 10)
  • Technical documentation requirements (Article 11)
  • Record-keeping and logging (Article 12)
  • Transparency and provision of information to deployers (Article 13)
  • Human oversight design requirements (Article 14)
  • Accuracy, robustness, and cybersecurity (Article 15)
  • Quality management system requirements (Article 17)
  • Conformity assessment procedures (Article 43)
  • Registration obligations (Article 49)
  • GPAI model transparency requirements (Article 53)
  • GPAI systemic risk obligations (Article 55)

Tier 3: Misinformation to Authorities — EUR 7.5 Million / 1.5% of Global Turnover

Dimension | Detail
Violations Covered | Supplying incorrect, incomplete, or misleading information to regulatory authorities
Fixed Maximum | EUR 7,500,000
Turnover Maximum | 1.5% of worldwide annual turnover
Applicable Fine | Whichever is higher
Effective Date | August 2, 2025

This tier targets organizations that lie to regulators, provide incomplete documentation during investigations, or misrepresent their AI systems’ capabilities and risk profiles. It is the EU’s answer to the problem of “safety washing” — making safety claims that do not withstand scrutiny.

Special Provisions for SMEs and Startups

The AI Act includes reduced penalty caps for small and medium-sized enterprises:

Entity Type | Tier 1 Cap | Tier 2 Cap | Tier 3 Cap
Large enterprise | Higher of EUR 35M / 7% | Higher of EUR 15M / 3% | Higher of EUR 7.5M / 1.5%
SME (< 250 employees) | Lower of EUR 35M / 7% | Lower of EUR 15M / 3% | Lower of EUR 7.5M / 1.5%
Startup (< 50 employees, < EUR 10M turnover) | Lower of EUR 35M / 7% | Lower of EUR 15M / 3% | Lower of EUR 7.5M / 1.5%

The critical difference: for SMEs and startups, the fine is the lower of the fixed amount or the percentage of turnover, rather than the higher. This means a startup with EUR 5 million in revenue would face a maximum Tier 1 fine of EUR 350,000 (7% of EUR 5M) rather than EUR 35 million.
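The higher-of versus lower-of distinction can be expressed in one line. A hypothetical sketch using the startup example from the text (function and parameter names are illustrative):

```python
def tier1_cap(annual_turnover_eur: float, is_sme: bool) -> float:
    """Tier 1 cap: higher of EUR 35M / 7% for large enterprises,
    lower of the two for SMEs and startups."""
    fixed = 35_000_000
    pct = 0.07 * annual_turnover_eur
    return min(fixed, pct) if is_sme else max(fixed, pct)

# The startup from the text: EUR 5 million in revenue
print(tier1_cap(5_000_000, is_sme=True))    # ~EUR 350,000 (7% of 5M)
print(tier1_cap(5_000_000, is_sme=False))   # EUR 35,000,000 (fixed cap applies)
```

The same flip, at the same revenue level, moves the exposure by two orders of magnitude, which is why correctly classifying an entity as an SME matters so much in practice.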

Additionally, the AI Act mandates that member states establish regulatory sandboxes where SMEs and startups can test AI innovations in a controlled environment with reduced compliance burdens. As of February 2026, at least 12 member states have announced or established sandbox programs.


Enforcement Timeline

The AI Act’s enforcement provisions are being phased in over a three-year period from the regulation’s entry into force on August 1, 2024.

Date | Milestone | Status
August 1, 2024 | Regulation enters into force | Complete
February 2, 2025 | Prohibited practices ban takes effect | Active
February 2, 2025 | AI literacy obligations take effect | Active
August 2, 2025 | GPAI model obligations take effect | Active
August 2, 2025 | Governance provisions fully applicable | Active
August 2, 2025 | Codes of practice for GPAI should be finalized | Active
August 2, 2025 | Penalty provisions enforceable for prohibited practices and GPAI rules (Tier 1 and GPAI Tier 2 fines active) | Active
February 2, 2026 | Member states designate national competent authorities | In progress
August 2, 2026 | Full application of high-risk system requirements (Tier 2 fines fully active) | Approaching
August 2, 2026 | Conformity assessment requirements for high-risk systems | Approaching
August 2, 2027 | Obligations for high-risk AI systems that are safety components of products | Upcoming

Where We Are Now (February 2026)

As of publication, the enforcement landscape looks like this:

  • Prohibited practices (Tier 1): The bans have applied since February 2025, with the penalty provisions enforceable since August 2025. The EU AI Office and national authorities can investigate and fine organizations deploying prohibited AI systems.
  • GPAI obligations (Tier 2 partial): Fully enforceable since August 2025. Providers of general-purpose AI models must comply with transparency, documentation, and (for systemic risk models) safety evaluation requirements.
  • High-risk systems (Tier 2 full): Coming August 2026. The most complex and commercially significant provisions are six months away from enforcement.
  • National authorities: Most member states have designated or are in the process of designating their national competent authorities for AI Act enforcement.

Enforcement Bodies

EU AI Office

The European AI Office, established within the European Commission’s Directorate-General for Communications Networks, Content and Technology (DG CNECT), has direct enforcement authority over:

  • General-purpose AI models, particularly those with systemic risk
  • Cross-border AI Act violations
  • Coordination of national enforcement activities
  • Development of codes of practice and technical standards

The AI Office has a staff of approximately 140 as of early 2026, with plans to expand. It can conduct investigations, request information from providers, and impose fines for GPAI violations. For high-risk AI system violations, enforcement is primarily the responsibility of national authorities, with the AI Office playing a coordinating role.

National Competent Authorities

Each EU member state must designate one or more national authorities responsible for:

  • Market surveillance of AI systems in their jurisdiction
  • Investigation of complaints and reported violations
  • Conformity assessment oversight
  • Enforcement actions including fines
  • Regulatory sandbox operation

Member State | Designated Authority | Status
France | CNIL + new AI authority | Designated
Germany | BfDI + sector regulators | In progress
Italy | AgID + Garante | Designated
Spain | AESIA (Spanish AI Agency) | Operational
Netherlands | Autoriteit Persoonsgegevens | Designated
Ireland | AI Act coordination office (under DETE) | In progress
Belgium | FPS Economy | Designated
Sweden | AI Sweden regulatory unit | In progress
Poland | Ministry of Digital Affairs | In progress
Others | Various stages of designation | In progress

The quality and aggressiveness of enforcement will vary significantly across member states — just as it has with GDPR. Ireland’s historically light-touch approach to tech regulation, compared to France’s more assertive stance, is likely to be replicated in AI Act enforcement.


First Enforcement Actions

Although the AI Act’s high-risk provisions are not yet fully enforceable, early enforcement activity has already begun under the prohibited practices provisions (effective February 2025) and pre-existing laws applied to AI contexts.

Prohibited Practices Investigations (2025)

Date | Target | Allegation | Status | Jurisdiction
Mar 2025 | Undisclosed employer | Emotion recognition in workplace performance evaluations | Investigation opened | Netherlands
Apr 2025 | Facial recognition vendor | Untargeted scraping of facial images from social media | Formal complaint; investigation ongoing | France
Jun 2025 | Municipal government | Social scoring system for public housing allocation | Investigation opened; system suspended | Italy
Aug 2025 | EdTech company | Emotion recognition of students during online exams | Warning issued; system modified | Spain

GPAI Compliance (2025-2026)

Date | Target | Issue | Status | Authority
Sep 2025 | Major GPAI provider | Incomplete technical documentation submitted | Information request issued | EU AI Office
Nov 2025 | Open-weight model provider | Failure to publish training data summary | Investigation opened | EU AI Office
Jan 2026 | Frontier lab | Systemic risk evaluation questioned as insufficient | Under review | EU AI Office

These early actions are modest — investigations and information requests rather than blockbuster fines. This mirrors the early trajectory of GDPR enforcement.


GDPR Enforcement: A Predictive Model

The GDPR enforcement trajectory offers the most relevant precedent for what to expect from the AI Act. The pattern is instructive.

GDPR Fine Trajectory

Year | Total Fines (EUR) | Largest Fine | Notable Trend
2018 | ~1 million | EUR 400K (Portuguese hospital) | Enforcement barely beginning
2019 | ~450 million | EUR 204M (British Airways; penalty notice, later reduced) | First major fines announced
2020 | ~350 million | EUR 100M (Google, France) | COVID slowed enforcement
2021 | ~1.3 billion | EUR 746M (Amazon, Luxembourg) | Record fines, cross-border cases
2022 | ~2.0 billion | EUR 405M (Meta, Ireland) | Consistent large-scale enforcement
2023 | ~2.1 billion | EUR 1.2B (Meta, Ireland) | Largest GDPR fine ever issued
2024 | ~1.8 billion | EUR 310M (LinkedIn, Ireland) | Mature enforcement
2025 | ~2.5 billion (est.) | Multiple EUR 100M+ fines | Steady state

What This Tells Us About AI Act Enforcement

If the AI Act follows a similar trajectory:

  • 2025-2026: Small investigations, information requests, warnings, and a handful of fines in the single-digit millions. Enforcement focuses on clear-cut prohibited practices violations.
  • 2027-2028: First significant fines in the tens of millions. High-risk system enforcement begins in earnest. Cross-border coordination challenges become apparent.
  • 2029-2030: Mature enforcement with fines potentially in the hundreds of millions. Regulatory capacity has been built. Case law provides clarity on ambiguous provisions.

The critical question is whether the AI Act can compress this timeline. The technology is moving faster than GDPR’s subject matter (data processing), and the risks of delayed enforcement are arguably greater.


Compliance Costs

Organizations deploying AI systems in the EU market should budget for substantial compliance costs, particularly for high-risk systems.

Estimated Compliance Costs by Company Size

Company Size | One-Time Setup | Annual Ongoing | Primary Cost Drivers
Large enterprise (10,000+ employees) | EUR 1-5 million | EUR 500K-2M | Conformity assessment, documentation, governance infrastructure
Mid-size company (250-10,000) | EUR 200K-1M | EUR 100K-500K | Risk management systems, documentation, training
SME (50-250 employees) | EUR 50K-200K | EUR 30K-100K | Simplified assessment, documentation
Startup (< 50 employees) | EUR 10K-50K | EUR 5K-30K | Sandbox participation, minimal documentation
GPAI model provider (non-systemic) | EUR 500K-2M | EUR 200K-1M | Technical documentation, transparency, copyright compliance
GPAI model provider (systemic risk) | EUR 2-10M | EUR 1-5M | Red-teaming, adversarial testing, incident reporting, cybersecurity

Cost Breakdown for High-Risk System Compliance

Requirement | Typical Cost | Frequency
Risk management system | EUR 50K-200K | One-time setup + annual review
Data governance documentation | EUR 30K-100K | One-time + updates
Technical documentation | EUR 20K-80K | Per system version
Conformity assessment (self) | EUR 10K-50K | Per system
Conformity assessment (third-party, if required) | EUR 50K-200K | Per system
Human oversight design | EUR 30K-150K | Per system
Post-market monitoring system | EUR 20K-100K | One-time + ongoing
AI literacy training | EUR 5K-30K | Annual
Legal counsel | EUR 30K-200K | Ongoing
Quality management system | EUR 50K-300K | One-time + ongoing

These figures are estimates based on early compliance activity and analogies to GDPR compliance costs. Actual costs will vary significantly based on the complexity of the AI system, the organization’s existing compliance infrastructure, and the evolving interpretation of requirements by enforcement authorities.


How to Report Violations

If you believe an organization is deploying an AI system that violates the EU AI Act, several reporting channels are available.

National Authorities

Most member states are establishing or have established complaint mechanisms through their designated national competent authorities. Check with the relevant national authority in the member state where the violation is occurring.

EU AI Office

For GPAI model violations or cross-border issues, complaints can be submitted to the EU AI Office through the European Commission’s complaint system. The AI Office has committed to acknowledging complaints within 15 business days.

INHUMAIN.AI

We maintain an independent record of AI Act compliance issues and document reported violations as part of our watchdog coverage.

Key Information to Include in a Report

  1. The AI system: Name, provider, deployer, and description of what it does
  2. The violation: Which specific provision of the AI Act you believe is being violated
  3. The evidence: Screenshots, documentation, experiences, or other evidence supporting the complaint
  4. The harm: Who is being affected and how
  5. The jurisdiction: Where the AI system is being deployed
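
The five elements above can be captured as a structured record before filing. A hypothetical sketch; the class, field names, and example values are illustrative, not an official submission format:

```python
from dataclasses import dataclass, field

@dataclass
class ViolationReport:
    """Illustrative structure for an AI Act violation report (not an official format)."""
    system_name: str                                   # name of the AI system
    provider: str                                      # who built it
    deployer: str                                      # who operates it
    description: str                                   # what the system does
    provision: str                                     # e.g. an Article 5 prohibition
    evidence: list[str] = field(default_factory=list)  # screenshots, docs, testimony
    harm: str = ""                                     # who is affected and how
    jurisdiction: str = ""                             # member state(s) of deployment

    def is_complete(self) -> bool:
        # A usable report identifies at least the system, provider,
        # the provision allegedly violated, and the jurisdiction
        return all([self.system_name, self.provider,
                    self.provision, self.jurisdiction])

report = ViolationReport(
    system_name="ExamWatch",            # hypothetical system
    provider="ExampleCorp",
    deployer="ExampleUni",
    description="Emotion recognition during online exams",
    provision="Prohibited practice (emotion recognition in education)",
    jurisdiction="Spain",
)
print(report.is_complete())
```

Keeping the evidence list and jurisdiction explicit from the start matters because national authorities route complaints by member state, and incomplete reports are the most common reason investigations stall at intake.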

Extraterritorial Application

The AI Act applies to organizations outside the EU in several critical circumstances:

  • Providers of AI systems that are placed on the EU market or put into service in the EU, regardless of where the provider is established
  • Deployers of AI systems that are established or located in the EU
  • Providers and deployers established in a third country where the output produced by the AI system is used in the EU

This means that a US-based AI company whose model generates outputs used by EU-based customers is subject to the AI Act’s requirements and penalties. A HUMAIN-developed AI system used by European businesses would similarly fall within scope.

The extraterritorial reach is broader than GDPR’s and will be tested early by enforcement actions against non-EU AI providers. The practical enforceability of fines against companies with no EU presence remains an open question — but any company with customers, partners, or assets in the EU has strong incentives to comply.


Comparison to Other AI Penalty Frameworks

Jurisdiction | Maximum Fine | Scope | Enforcement Status
EU AI Act | EUR 35M / 7% global turnover | Comprehensive, risk-based | Phased enforcement (2025-2027)
GDPR (AI-related) | EUR 20M / 4% global turnover | Data protection aspects of AI | Mature enforcement
China (AI regulations) | Varies; up to 5% revenue | Content, deepfakes, algorithms | Active enforcement
South Korea (AI Act) | KRW 300M (~EUR 200K) + damages | Framework law | Early implementation
Brazil (AI Bill) | Up to 2% of revenue | Rights-based approach | Under legislative consideration
Canada (AIDA) | CAD 25M / 5% global revenue | Proposed, not enacted | Legislative process
US (FTC, sectoral) | Varies by statute | Sector-specific enforcement | Active but fragmented

For comprehensive jurisdiction-by-jurisdiction analysis, see our AI Regulation Tracker.


What Comes Next

The next twelve months will determine whether the EU AI Act becomes a meaningful enforcement framework or an elaborate paperwork exercise. Three developments will be decisive:

August 2026: High-risk system enforcement begins. This is the most commercially significant deadline. Thousands of AI systems deployed in healthcare, education, employment, law enforcement, and critical infrastructure will need to demonstrate compliance with detailed requirements. Organizations that have not prepared will face enforcement risk.

First significant fine. The first fine exceeding EUR 10 million will send a signal about whether the EU is serious. GDPR’s turning point was the 2019 British Airways penalty notice (GBP 183 million, later reduced to GBP 20 million), which demonstrated that the regulation was not merely theoretical. The AI Act needs its equivalent moment. See our AI Prediction Scorecard — we predict this will happen before the end of 2026.

Enforcement capacity. The AI Office has approximately 140 staff. Twenty-seven member states need trained, resourced national authorities. The EU’s ability to enforce the AI Act depends on whether these institutions are adequately funded and staffed — and whether they develop the technical expertise to evaluate increasingly complex AI systems.

The stakes are high. If enforcement is weak, the AI Act becomes a compliance burden that responsible companies bear while irresponsible ones ignore. If enforcement is strong, it establishes a global standard that raises the floor for AI safety and accountability worldwide.


This guide is maintained by the INHUMAIN.AI regulatory analysis team and updated as enforcement actions, guidance, and interpretations develop. It is not legal advice. Organizations subject to the EU AI Act should consult qualified legal counsel. For corrections or updates, contact us through our contact page.