INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7 |

EU AI Act: The Definitive Compliance Guide for 2026

The complete guide to the EU AI Act — risk tiers, prohibited practices, high-risk classification, transparency requirements, penalties up to EUR 35M or 7% of global turnover, phased timeline, and compliance roadmap.

The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive binding legislation governing the development, deployment, and use of artificial intelligence. Adopted by the European Parliament in March 2024 and published in the Official Journal of the European Union on July 12, 2024, the Act entered into force on August 1, 2024. It becomes fully applicable on August 2, 2026, with certain provisions taking effect earlier and others later.

This guide is designed for compliance officers, legal teams, AI engineers, and executives who need to understand what the Act requires, who it applies to, and what happens if you fail to comply. It is not legal advice; it is the most comprehensive independent analysis available.


Why the EU AI Act Matters Globally

The EU AI Act has extraterritorial reach. Article 2 makes clear that the Act applies to:

  1. Providers placing AI systems on the EU market or putting them into service in the EU, regardless of where they are established
  2. Deployers of AI systems who are located within the EU
  3. Providers and deployers located outside the EU, where the output produced by the AI system is used in the EU
  4. Importers and distributors of AI systems in the EU
  5. Product manufacturers placing products on the EU market with an integrated AI system

This means a company headquartered in San Francisco, Riyadh, or Beijing whose AI system produces output consumed by EU residents is subject to the Act. The extraterritorial principle mirrors the approach taken by GDPR, which has become the de facto global data protection standard. The AI Act is designed to achieve the same regulatory gravity for artificial intelligence.
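The Article 2 scope test above can be sketched as a simple decision function. This is an illustrative simplification, not a legal test: the role names and boolean inputs are placeholders for criteria that in practice require legal analysis.

```python
# Minimal sketch of the Article 2 applicability categories listed above.
# Inputs are illustrative simplifications of the legal criteria.

def act_applies(role: str, established_in_eu: bool,
                on_eu_market: bool, output_used_in_eu: bool) -> bool:
    """Rough extraterritorial-scope check mirroring the five categories above."""
    if role == "provider" and on_eu_market:
        return True  # (1) placing on the EU market, wherever established
    if role == "deployer" and established_in_eu:
        return True  # (2) deployers located within the EU
    if role in ("provider", "deployer") and output_used_in_eu:
        return True  # (3) output produced by the system is used in the EU
    if role in ("importer", "distributor", "product_manufacturer") and on_eu_market:
        return True  # (4)-(5) importers, distributors, product manufacturers
    return False

# A San Francisco provider whose model output is consumed by EU residents:
print(act_applies("provider", established_in_eu=False,
                  on_eu_market=False, output_used_in_eu=True))  # True
```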


The Four Risk Tiers

The AI Act establishes a risk-based classification system. Every AI system falls into one of four tiers, each with different obligations.

Tier 1: Unacceptable Risk (Prohibited)

Certain AI practices are outright banned because they pose an unacceptable risk to fundamental rights. These prohibitions took effect on February 2, 2025 — the first provisions of the Act to become enforceable.

Prohibited practices include:

  • Social scoring by public authorities: AI systems that evaluate or classify natural persons based on their social behavior or personal characteristics, leading to detrimental treatment disproportionate to context
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions for serious crime, missing persons, and imminent threats)
  • Subliminal manipulation: AI systems deploying subliminal techniques beyond a person’s consciousness to materially distort behavior in a manner likely to cause harm
  • Exploitation of vulnerabilities: AI systems exploiting vulnerabilities related to age, disability, or social/economic situation
  • Biometric categorization based on sensitive attributes (race, political opinions, sexual orientation, religious beliefs)
  • Untargeted scraping of facial images from the internet or CCTV for facial recognition databases
  • Emotion recognition in workplaces and educational institutions (with limited exceptions)
  • Predictive policing based solely on profiling or personality traits

For full analysis of each prohibition, see our Prohibited Practices guide.

Tier 2: High Risk

High-risk AI systems are the core regulatory target of the Act. These systems are subject to extensive compliance requirements including conformity assessments, documentation, human oversight, data governance, accuracy requirements, and post-market monitoring.

Two pathways to high-risk classification:

Annex I — Safety Components of Regulated Products: AI systems that are safety components of products already subject to EU harmonization legislation (medical devices, machinery, toys, aviation, automotive, marine equipment, rail, elevators, pressure equipment, radio equipment, personal protective equipment, cableway installations, civil explosives, pyrotechnic articles).

Annex III — Standalone High-Risk Systems: AI systems in eight specific domains:

  1. Biometric identification and categorization
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, workers management, and access to self-employment
  5. Access to essential private and public services
  6. Law enforcement
  7. Migration, asylum, and border control
  8. Administration of justice and democratic processes

For complete Annex III classification analysis, see our High-Risk Systems guide.

High-risk system requirements (Articles 8-15):

  • Risk management system (Art. 9): Continuous, iterative process throughout the AI system lifecycle
  • Data governance (Art. 10): Training, validation, and testing datasets must meet quality criteria
  • Technical documentation (Art. 11): Comprehensive documentation prior to market placement
  • Record-keeping (Art. 12): Automatic logging of system operations
  • Transparency (Art. 13): Clear information for deployers on system capabilities and limitations
  • Human oversight (Art. 14): Design enabling effective oversight by natural persons
  • Accuracy, robustness, and cybersecurity (Art. 15): Appropriate levels throughout the lifecycle

Conformity assessment (Article 43): Before placing a high-risk AI system on the market, providers must undergo a conformity assessment. For most Annex III systems, this can be done through internal control (self-assessment). For remote biometric identification systems (Annex III, point 1), a third-party conformity assessment by a notified body is required where harmonised standards have not been applied in full.

Tier 3: Limited Risk (Transparency Obligations)

AI systems posing limited risk are subject to specific transparency requirements under Article 50:

  • Chatbots: Users must be informed they are interacting with an AI system (unless obvious from context)
  • Emotion recognition and biometric categorization: Individuals must be informed when such systems are applied to them
  • Deepfakes / AI-generated content: AI-generated or manipulated images, audio, or video must be labeled as artificially generated or manipulated
  • AI-generated text: Text generated by AI on matters of public interest must be labeled as AI-generated (with exceptions for editorially reviewed or human-authored content)

Tier 4: Minimal Risk

AI systems that do not fall into the above categories are subject to no requirements under the Act. Examples include spam filters, AI-enabled video games, and inventory management systems. Providers of minimal-risk systems are encouraged — but not required — to voluntarily adopt codes of conduct.


General-Purpose AI Models (GPAI)

Chapter V of the Act establishes requirements for general-purpose AI models, including large language models. These provisions apply to the model itself, not the application layer, and took effect on August 2, 2025.

All GPAI Models

All GPAI models must comply with:

  • Technical documentation: Detailed information about the model, training process, and evaluations
  • Information for downstream providers: Sufficient information to enable downstream compliance
  • Copyright compliance: Policy to comply with EU copyright law, including the text and data mining opt-out mechanism
  • Training data summary: A sufficiently detailed summary of the content used for training (template developed by the AI Office)

GPAI Models with Systemic Risk

GPAI models classified as posing systemic risk face additional obligations. A model is presumed to pose systemic risk if the cumulative amount of computation used for its training exceeds 10^25 FLOPs, or if the European Commission designates it based on other criteria (capabilities, reach, impact).

Additional requirements for systemic risk models:

  • Model evaluation in accordance with standardized protocols
  • Assessment and mitigation of systemic risks
  • Tracking, documenting, and reporting serious incidents to the AI Office and national authorities
  • Adequate cybersecurity protections
  • Reporting energy consumption

Who Must Comply

The Act defines several roles, each with specific obligations:

Provider (Article 3(3)): Any natural or legal person that develops an AI system or GPAI model, or has one developed, and places it on the market or puts it into service under its own name or trademark. Providers bear the heaviest compliance burden.

Deployer (Article 3(4)): Any natural or legal person using an AI system under their authority, except for personal non-professional activity. Deployers of high-risk systems have obligations around human oversight, monitoring, and reporting.

Importer (Article 3(6)): Any natural or legal person located in the EU that places an AI system on the market that bears the name of a non-EU provider.

Distributor (Article 3(7)): Any natural or legal person in the supply chain, other than the provider or importer, that makes an AI system available on the EU market.

Authorized Representative (Article 3(5)): Any natural or legal person in the EU who has received a written mandate from a provider outside the EU to act on its behalf.

Critical note on role shifts: Under Article 25, a deployer, distributor, or importer becomes a provider — and assumes provider obligations — if they:

  • Put their name or trademark on a high-risk AI system
  • Make a substantial modification to a high-risk AI system
  • Modify the intended purpose of an AI system in a way that makes it high-risk

The Penalty Structure

The AI Act establishes a tiered penalty structure among the most severe in EU regulatory history:

  • Prohibited AI practices: EUR 35 million or 7% of worldwide annual turnover (whichever is higher)
  • High-risk system non-compliance: EUR 15 million or 3% of worldwide annual turnover (whichever is higher)
  • Supply of incorrect information to authorities: EUR 7.5 million or 1% of worldwide annual turnover (whichever is higher)

For SMEs and startups: The Act provides that penalties for small and medium-sized enterprises, including startups, shall be the lower of the absolute amount and the percentage. Additionally, the AI Office and national authorities must consider the specific interests of SMEs when designing enforcement approaches.
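The ceiling arithmetic above reduces to a max/min rule. The sketch below illustrates it with hypothetical turnover figures; it computes statutory maximums only, not actual fines, which authorities set case by case.

```python
# Hypothetical sketch of the Act's penalty ceilings: the maximum fine is the
# HIGHER of the fixed amount and the turnover percentage, except for SMEs and
# startups, where it is the LOWER of the two.

PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # EUR 35M or 7% of turnover
    "high_risk_noncompliance": (15_000_000, 0.03), # EUR 15M or 3%
    "incorrect_information": (7_500_000, 0.01),    # EUR 7.5M or 1%
}

def max_fine(category: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    fixed, pct = PENALTY_TIERS[category]
    pct_amount = pct * worldwide_turnover_eur
    # SMEs/startups: whichever is lower; everyone else: whichever is higher.
    return min(fixed, pct_amount) if is_sme else max(fixed, pct_amount)

# A large provider with EUR 1bn turnover violating a prohibition:
# 7% of turnover (EUR 70M) exceeds the EUR 35M floor, so the percentage governs.
large = max_fine("prohibited_practices", 1_000_000_000)

# An SME with EUR 20M turnover for the same violation:
# 7% of turnover (EUR 1.4M) is below EUR 35M, so the lower figure applies.
small = max_fine("prohibited_practices", 20_000_000, is_sme=True)
```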

Aggravating and mitigating factors (Article 99(3)): When determining penalty amounts, authorities must consider:

  • The nature, gravity, and duration of the infringement
  • Whether the provider is an SME or startup
  • Whether the infringement was intentional or negligent
  • Actions taken to mitigate harm
  • Degree of cooperation with authorities
  • Previous infringements
  • Financial impact on the entity

Governance and Enforcement

EU-Level

European AI Office: Established within the European Commission, the AI Office is responsible for:

  • Supervising and enforcing rules for GPAI models
  • Developing guidelines and codes of practice
  • Coordinating with national authorities
  • Managing the AI Pact (voluntary early compliance)
  • Maintaining the EU database for high-risk AI systems

European Artificial Intelligence Board: Advisory body composed of representatives from each member state. Provides guidance, assists with coordination, and contributes to uniform application of the Act.

Advisory Forum: Stakeholder consultation body including industry, SMEs, civil society, academia, and standardization bodies.

Scientific Panel of Independent Experts: Provides technical expertise on GPAI models, including assessment of systemic risk.

National Level

Each EU member state must designate at least one notifying authority and at least one market surveillance authority as national competent authorities to supervise the application and enforcement of the Act.


Documentation and Compliance Requirements

Technical Documentation (Annex IV)

Providers of high-risk AI systems must prepare comprehensive technical documentation before placing the system on the market. Required documentation includes:

  1. General description of the AI system
  2. Detailed description of system elements and development process
  3. Monitoring, functioning, and control of the AI system
  4. Risk management system documentation
  5. Description of data governance practices
  6. Changes made throughout the system lifecycle
  7. Performance metrics and evaluation results
  8. Declaration of conformity
  9. Post-market monitoring plan

Quality Management System (Article 17)

Providers of high-risk AI systems must implement a quality management system covering:

  • Strategy for regulatory compliance
  • Techniques and procedures for design, development, and testing
  • Examination, test, and validation procedures
  • Technical specifications
  • Systems and procedures for data management
  • Risk management process
  • Post-market monitoring
  • Incident reporting procedures
  • Communication with authorities

EU Database Registration (Article 71)

High-risk AI systems listed in Annex III must be registered in the EU database before being placed on the market. Registration information is publicly accessible and includes the system name, provider identity, intended purpose, and conformity status.


Phased Implementation Timeline

The Act’s requirements phase in over a three-year period:

  • August 1, 2024: Entry into force
  • February 2, 2025: Prohibited practices take effect; AI literacy obligations begin
  • August 2, 2025: GPAI model obligations apply; governance structures (AI Office, Board) operational
  • August 2, 2026: Full application: high-risk AI system requirements, conformity assessments, deployer obligations, transparency requirements, penalties
  • August 2, 2027: Extended compliance deadline for high-risk AI embedded in products under Annex I sectoral legislation

For detailed milestone analysis, see our EU AI Act Timeline.


Codes of Practice and Standards

GPAI Code of Practice

The AI Office is developing a Code of Practice for GPAI model providers, with input from industry, civil society, and academia. Providers who adhere to the Code are presumed to comply with the Act’s GPAI requirements. The Code is expected to cover:

  • Training data documentation
  • Copyright compliance mechanisms
  • Systemic risk assessment methodologies
  • Safety evaluation protocols
  • Incident reporting frameworks

Harmonised Standards

The European Commission has issued standardization requests to CEN and CENELEC to develop harmonised standards supporting the AI Act. These standards will provide detailed technical specifications for compliance with the Act’s requirements. Until harmonised standards are adopted, providers may rely on common specifications published by the Commission.


Practical Compliance Roadmap

Step 1: AI System Inventory

Map all AI systems within your organization. Identify which systems fall within the scope of the Act and their risk classification.

Step 2: Risk Classification

For each AI system, determine whether it is prohibited, high-risk (Annex I or Annex III), limited-risk, or minimal-risk. Pay particular attention to the criteria in Article 6(3), which provides an exception for AI systems that do not pose a significant risk despite falling within Annex III categories.
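The Step 2 decision logic can be sketched as an ordered sequence of tests. The predicate names below are illustrative placeholders for the legal criteria; real classification requires legal analysis of the system's intended purpose and context.

```python
# Hypothetical sketch of the risk-classification order of operations described
# above, including the Article 6(3) significant-risk exception.

def classify(prohibited_practice: bool, annex_i_safety_component: bool,
             annex_iii_domain: bool, significant_risk: bool,
             transparency_triggered: bool) -> str:
    if prohibited_practice:
        return "unacceptable (prohibited)"
    if annex_i_safety_component:
        return "high-risk (Annex I)"
    if annex_iii_domain:
        # Art. 6(3): an Annex III system posing no significant risk may escape
        # the high-risk tier (the assessment itself must be documented).
        if significant_risk:
            return "high-risk (Annex III)"
        return "minimal risk (Art. 6(3) exception)"
    if transparency_triggered:
        return "limited risk (Art. 50 transparency)"
    return "minimal risk"

# A CV-screening system (Annex III, employment) posing significant risk:
print(classify(False, False, True, True, False))  # high-risk (Annex III)
```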

Step 3: Gap Analysis

For high-risk systems, assess current practices against the Act’s requirements for risk management, data governance, documentation, transparency, human oversight, accuracy, and robustness.

Step 4: Compliance Implementation

Develop and implement the required quality management system, technical documentation, risk management processes, and human oversight mechanisms.

Step 5: Conformity Assessment

Complete the applicable conformity assessment procedure (internal control or third-party assessment) and prepare the EU declaration of conformity.

Step 6: Registration and Ongoing Compliance

Register high-risk AI systems in the EU database. Implement post-market monitoring systems and establish incident reporting procedures.


Relationship to Other EU Legislation

The AI Act does not exist in isolation. It interacts with and complements a substantial body of existing EU law:

  • GDPR: AI systems processing personal data must comply with both the AI Act and GDPR. The AI Act does not replace or reduce GDPR obligations.
  • Product Liability Directive (revised): AI products are subject to strict liability for defects.
  • AI Liability Directive (proposed): Would establish civil liability rules specific to AI-caused damage.
  • Digital Services Act: Platforms using AI for content recommendation, content moderation, or advertising are subject to both DSA and AI Act obligations.
  • Cybersecurity Act: AI systems must comply with applicable cybersecurity certification schemes.
  • Sectoral Legislation: AI systems that are safety components of regulated products (Annex I) must comply with both the AI Act and the applicable sectoral legislation.

What Happens Next

The period between now and August 2, 2026, is the critical compliance window. Organizations must:

  1. Complete their AI system inventories and risk classifications
  2. Engage with the evolving codes of practice and standardization efforts
  3. Build or adapt quality management systems
  4. Train staff on AI literacy requirements (already in effect since February 2, 2025)
  5. Prepare technical documentation for high-risk systems
  6. Plan for conformity assessments

The AI Office is actively developing implementation guidance, and the European Artificial Intelligence Board is coordinating national approaches to enforcement. Organizations participating in the AI Pact — a voluntary early compliance initiative — gain access to guidance and demonstrate good faith commitment to compliance.

The EU AI Act will reshape how AI systems are built, deployed, and governed. The question is no longer whether to comply, but how quickly organizations can adapt their development and deployment practices to meet the most comprehensive AI regulation in history.


This guide is maintained by INHUMAIN.AI as an independent analysis of the EU AI Act. It is not legal advice. Organizations should consult qualified legal counsel for specific compliance guidance. For related analysis, see our Global AI Regulation Tracker, High-Risk Systems Guide, Prohibited Practices, and Implementation Timeline.