EU AI Act Implementation Timeline: Every Deadline Through 2027
Complete phased enforcement timeline for the EU AI Act — from prohibited practices (February 2025) through full application (August 2026) to extended deadlines (August 2027). Every milestone, every obligation, every date.
The EU AI Act does not arrive as a single compliance event. Its requirements phase in over a 36-month period from entry into force on August 1, 2024, to the final extended deadline on August 2, 2027. This staggered approach is designed to give organizations time to adapt while ensuring the most urgent prohibitions take effect immediately.
This guide maps every milestone in the implementation timeline, what obligations activate at each phase, and what organizations must have in place by each deadline.
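The phased schedule that follows can be captured as a simple lookup keyed on the application dates fixed in Article 113. A minimal sketch (the dates are from the Act; the phase labels and function name are this guide's own):

```python
from datetime import date

# Application dates fixed by Article 113 of the AI Act.
MILESTONES = [
    (date(2024, 8, 1), "Phase 0: entry into force"),
    (date(2025, 2, 2), "Phase 1: prohibited practices and AI literacy"),
    (date(2025, 8, 2), "Phase 2: GPAI obligations and governance"),
    (date(2026, 8, 2), "Phase 3: full application (high-risk regime)"),
    (date(2027, 8, 2), "Phase 4: Annex I embedded AI systems"),
]

def active_phases(on: date) -> list[str]:
    """Return every phase whose application date has passed on a given day."""
    return [label for start, label in MILESTONES if on >= start]
```

For example, `active_phases(date(2025, 9, 1))` returns the first three phases: by that date the prohibitions, AI literacy duties, and GPAI obligations all apply, while the high-risk regime does not yet.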
Phase 0: Entry Into Force
August 1, 2024
The EU AI Act enters into force twenty days after its publication in the Official Journal of the European Union on July 12, 2024. No obligations are immediately enforceable, but the compliance clock starts.
What happens:
- The legal text becomes binding (though most provisions have delayed application dates)
- The EU AI Office begins operationalizing its mandate
- Member states begin designating national competent authorities
- The European Artificial Intelligence Board is formally established
- Standardization requests to CEN/CENELEC become active
What organizations should do:
- Begin mapping all AI systems in use and development
- Initiate risk classification analysis for all AI systems
- Establish a cross-functional AI compliance team (legal, technical, business)
- Begin monitoring guidance from the AI Office and national authorities
- Assess supply chain dependencies on AI providers subject to the Act
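The mapping and risk-classification steps above lend themselves to a structured inventory from day one. A minimal sketch, assuming a four-tier classification; the record fields and helper are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"      # Article 5 practices
    HIGH_RISK = "high-risk"        # Annex I / Annex III systems
    LIMITED_RISK = "limited-risk"  # Article 50 transparency duties
    MINIMAL_RISK = "minimal-risk"  # no specific obligations

@dataclass
class AISystemRecord:
    name: str
    owner_team: str
    provider: str                  # upstream vendor, or "internal"
    category: RiskCategory
    notes: list[str] = field(default_factory=list)

def prohibited(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Flag systems that must be ceased before February 2, 2025."""
    return [s for s in inventory if s.category is RiskCategory.PROHIBITED]
```

An inventory in this shape makes the Phase 1 audit a query rather than a project: the prohibited list must be empty before the first deadline.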
Phase 1: Prohibited Practices and AI Literacy
February 2, 2025 (6 months after entry into force)
The first enforceable obligations take effect. Article 5 prohibitions are now binding, and AI literacy requirements begin.
What becomes enforceable:
Prohibited Practices (Article 5):
- Social scoring (by public and private actors)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
- Subliminal manipulation causing significant harm
- Exploitation of vulnerabilities (age, disability, social/economic situation)
- Untargeted scraping of facial images for facial recognition databases
- Emotion recognition in workplaces and educational institutions (with medical/safety exceptions)
- Biometric categorization inferring sensitive attributes
- Individual-level predictive policing based solely on profiling
AI Literacy (Article 4): Providers and deployers must take measures to ensure that their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. This is not a one-time training requirement but an ongoing obligation scaled to the context.
Maximum penalties for prohibited practices violations: EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
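For an undertaking, the ceiling is the higher of the fixed amount and the turnover percentage. A worked sketch of the Article 99 arithmetic:

```python
def prohibited_practice_cap(worldwide_turnover_eur: float) -> float:
    """Article 5 violation ceiling: EUR 35M or 7% of worldwide annual
    turnover, whichever is higher (Article 99)."""
    return max(35_000_000.0, 0.07 * worldwide_turnover_eur)

# A firm with EUR 2 billion turnover: 7% (EUR 140M) exceeds the fixed amount.
assert prohibited_practice_cap(2_000_000_000) == 140_000_000.0
# A firm with EUR 100 million turnover: the EUR 35M floor applies.
assert prohibited_practice_cap(100_000_000) == 35_000_000.0
```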
What organizations must have done by this date:
- Completed an audit of all AI systems against Article 5 prohibitions
- Ceased any AI practices that fall within the prohibited categories
- Documented the audit findings and any remediation actions taken
- Initiated AI literacy programs for relevant personnel
- Established monitoring processes to prevent drift into prohibited practices
Phase 2: GPAI and Governance
August 2, 2025 (12 months after entry into force)
General-purpose AI model obligations take effect, and key governance structures become fully operational.
What becomes enforceable:
GPAI Model Obligations (Articles 51-56):
All GPAI models:
- Technical documentation requirements
- Information provision to downstream providers
- Copyright compliance policy (including text and data mining opt-out)
- Training data summary (using AI Office template)
GPAI models with systemic risk (>10^25 FLOPs or Commission designation):
- Model evaluation per standardized protocols
- Systemic risk assessment and mitigation
- Serious incident tracking, documentation, and reporting
- Adequate cybersecurity protections
- Energy consumption reporting
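Whether a model crosses the systemic-risk compute threshold can be roughly estimated with the common 6·N·D approximation for training FLOPs (about 6 floating-point operations per parameter per training token). This is an estimation heuristic, not a method prescribed by the Act, and the function names are illustrative:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2)

def estimated_training_flops(params: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * params * training_tokens

def presumed_systemic_risk(params: float, training_tokens: float) -> bool:
    return estimated_training_flops(params, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# e.g. a 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs -> below the 1e25 threshold
```

Note that the Commission may also designate a model as posing systemic risk regardless of compute, so this estimate is only a first screen.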
Governance Structures:
- EU AI Office fully operational with enforcement powers
- European Artificial Intelligence Board issuing guidance
- Advisory Forum active
- Scientific Panel of Independent Experts operational
- Codes of Practice for GPAI models expected to be finalized
Maximum penalties for GPAI non-compliance: EUR 15 million or 3% of worldwide annual turnover, whichever is higher.
What organizations must have done by this date:
- GPAI model providers: Complete technical documentation and training data summaries
- GPAI providers with systemic risk models: Implement model evaluation, systemic risk assessment, and incident reporting
- All providers: Monitor Codes of Practice and assess relevance to their systems
- Downstream AI system providers: Ensure adequate information from upstream GPAI providers
Phase 3: Full Application
August 2, 2026 (24 months after entry into force)
The primary application date. The majority of the AI Act’s requirements become enforceable, including the entire high-risk AI system regime.
What becomes enforceable:
High-Risk AI System Requirements (Articles 8-15):
- Risk management systems (Article 9)
- Data and data governance (Article 10)
- Technical documentation (Article 11)
- Record-keeping and logging (Article 12)
- Transparency and information to deployers (Article 13)
- Human oversight design (Article 14)
- Accuracy, robustness, and cybersecurity (Article 15)
Conformity Assessment (Articles 40-49):
- Internal control procedures for most Annex III systems
- Third-party assessment for biometric identification systems
- EU declaration of conformity
- CE marking requirements
Provider Obligations (Articles 16-22):
- Quality management systems
- Post-market monitoring
- Incident reporting (serious incidents reported within 15 days of awareness; 10 days in the event of death; 2 days for a serious and widespread infringement)
- EU database registration
- Retention of documentation (10 years)
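The reporting clocks in Article 73 differ by incident type. A sketch of the maximum windows as this guide reads them (the dictionary keys are illustrative labels):

```python
from datetime import date, timedelta

# Maximum reporting windows under Article 73 (days from awareness).
REPORTING_WINDOWS = {
    "serious_incident": 15,        # general case, Art. 73(2)
    "widespread_infringement": 2,  # Art. 73(3)
    "death": 10,                   # Art. 73(4)
}

def report_due(incident_type: str, awareness: date) -> date:
    """Latest date a report may be filed; reporting earlier is always permitted."""
    return awareness + timedelta(days=REPORTING_WINDOWS[incident_type])
```

In all cases the Act expects reporting without undue delay; the windows above are outer limits, not targets.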
Deployer Obligations (Articles 26-27):
- Use in accordance with instructions
- Human oversight implementation
- Input data relevance monitoring
- Logging retention (minimum 6 months)
- Fundamental rights impact assessment (for public bodies and certain private deployers)
- Information to affected individuals
- Data Protection Impact Assessment under GDPR where applicable
Transparency Requirements (Article 50):
- AI interaction disclosure (chatbots)
- Deepfake and AI-generated content labeling
- Emotion recognition and biometric categorization notification
- AI-generated text labeling for public interest content
National Governance:
- National competent authorities and market surveillance authorities fully operational
- National regulatory sandboxes available
- Enforcement mechanisms active
Penalties:
- Prohibited practices: EUR 35 million or 7% of worldwide annual turnover, whichever is higher
- High-risk system non-compliance: EUR 15 million or 3% of worldwide annual turnover, whichever is higher
- Incorrect information to authorities: EUR 7.5 million or 1% of worldwide annual turnover, whichever is higher
- For SMEs and startups, each cap is the lower of the two amounts
What organizations must have done by this date:
- All high-risk AI systems fully compliant with Articles 8-15
- Conformity assessments completed
- EU declarations of conformity issued
- Systems registered in the EU database
- Quality management systems operational
- Post-market monitoring active
- Deployer obligations implemented (human oversight, monitoring, logging)
- Transparency measures in place for limited-risk AI systems
- Fundamental rights impact assessments completed where required
- Staff AI literacy training updated and documented
Phase 4: Extended Deadline for Regulated Products
August 2, 2027 (36 months after entry into force)
Extended compliance deadline for high-risk AI systems that are safety components of products covered by EU harmonisation legislation listed in Annex I.
What becomes enforceable:
High-risk AI system requirements for AI embedded in:
- Medical devices (Regulation (EU) 2017/745)
- In vitro diagnostic medical devices (Regulation (EU) 2017/746)
- Machinery (Regulation 2023/1230)
- Toys (Directive 2009/48/EC)
- Recreational craft and personal watercraft
- Lifts (Directive 2014/33/EU)
- Equipment for explosive atmospheres (Directive 2014/34/EU)
- Radio equipment (Directive 2014/53/EU)
- Pressure equipment (Directive 2014/68/EU)
- Cableway installations (Regulation 2016/424)
- Personal protective equipment (Regulation 2016/425)
- Gas appliances (Regulation 2016/426)
- Aviation safety (Regulation 2018/1139)
- Motor vehicles (Regulation 2019/2144)
- Agricultural vehicles (Regulation 167/2013)
- Marine equipment (Directive 2014/90/EU)
- Rail interoperability (Directive (EU) 2016/797)
- Civil drones
- Pyrotechnic articles
Why the extension: These AI systems are already subject to sectoral conformity assessment processes. The extended timeline allows coordination between AI Act requirements and existing sectoral compliance frameworks. Notified bodies under sectoral legislation need time to develop competence in AI-specific assessment.
What organizations must have done by this date:
- AI systems embedded in Annex I products fully compliant with the AI Act
- Dual conformity assessments completed (AI Act + sectoral legislation)
- Documentation integrated across both regulatory frameworks
- Post-market monitoring systems covering both AI and product safety requirements
Parallel Timelines
Standards Development
The European Commission has issued standardization requests to CEN and CENELEC for harmonised standards supporting the AI Act. The timeline:
| Milestone | Expected Date |
|---|---|
| Standardization request issued | 2024 |
| First draft harmonised standards | 2025 |
| Published harmonised standards | Mid-2026 |
| Presumption of conformity available | Upon publication in the Official Journal |
Until harmonised standards are published, the Commission may adopt common specifications through implementing acts. Compliance with common specifications also provides a presumption of conformity.
Codes of Practice for GPAI
| Milestone | Expected Date |
|---|---|
| Drafting process initiated | Late 2024 |
| Stakeholder consultation | 2025 |
| Final Code of Practice | August 2025 |
| Adherence provides a recognized means of demonstrating compliance | Upon publication |
Delegated and Implementing Acts
The Commission is empowered to adopt delegated acts to:
- Update Annex III (high-risk system categories)
- Amend the list of prohibited practices
- Specify systemic risk thresholds for GPAI models
- Establish benchmarks and measurement methodologies
These acts may modify compliance requirements at any point, making ongoing monitoring essential.
Regulatory Sandbox Timeline
Member states must establish at least one AI regulatory sandbox by August 2, 2026. Sandboxes provide controlled environments where AI systems can be developed, tested, and validated under regulatory supervision before market placement.
Sandbox benefits:
- Guidance from competent authorities on compliance
- Controlled testing of innovative AI systems
- Priority access for SMEs and startups
- Reduced compliance costs during sandbox participation
Critical Preparation Windows
Now Through August 2026: The Compliance Sprint
Organizations that have not yet begun compliance preparation face a compressed timeline. Priority actions:
Immediate (Complete Now):
- Verify no prohibited practices are in use (already enforceable)
- Ensure AI literacy programs are operational (already enforceable)
- For GPAI providers: Verify GPAI obligations are met (already enforceable since August 2025)
Near-Term (Complete by Q1 2026):
- Finalize high-risk system classification for all AI systems
- Begin conformity assessment processes
- Develop or update quality management systems
- Draft technical documentation for high-risk systems
- Implement data governance frameworks
Pre-Deadline (Complete by August 2, 2026):
- Complete conformity assessments
- Issue EU declarations of conformity
- Register high-risk systems in the EU database
- Activate post-market monitoring systems
- Deploy transparency measures for limited-risk systems
- Finalize deployer compliance (human oversight, logging, impact assessments)
What Happens If You Miss a Deadline
Missing a compliance deadline does not trigger an automatic penalty. However, from the applicable date forward, non-compliant AI systems are in violation of the Act, and enforcement action can be initiated by national market surveillance authorities. The consequences include:
- Financial penalties up to the applicable maximum
- Market withdrawal orders requiring removal of non-compliant AI systems from the EU market
- Corrective action orders requiring modifications within a specified timeframe
- Reputational damage from public enforcement actions
- Supply chain disruption as EU-based deployers may be prohibited from using non-compliant AI systems
The Act does not provide a grace period after the application date. Organizations must be compliant on the date the relevant provision takes effect.
This timeline is maintained by INHUMAIN.AI and updated as implementation guidance, standards, and delegated acts are published. See also: EU AI Act Complete Guide, High-Risk Systems, Prohibited Practices, and Global AI Regulation Tracker.