EU AI Act High-Risk AI Systems: Complete Classification Guide
Deep dive into Annex III of the EU AI Act — the eight categories of high-risk AI systems, compliance requirements for each, and how to determine if your AI system qualifies.
The high-risk classification is the regulatory core of the EU AI Act. AI systems classified as high-risk face the most demanding compliance requirements in the Act: mandatory risk management systems, data governance obligations, comprehensive technical documentation, human oversight mechanisms, accuracy and robustness standards, and conformity assessments before market placement.
This guide provides a complete analysis of Annex III — the eight categories of standalone high-risk AI systems — along with the compliance requirements that apply to each.
Two Pathways to High-Risk Classification
Under Article 6 of the AI Act, an AI system is classified as high-risk through one of two pathways:
Pathway 1: Safety Components (Annex I)
An AI system is high-risk if it is a safety component of a product that falls under EU harmonization legislation listed in Annex I, or if the AI system is itself such a product, and the product is required to undergo a third-party conformity assessment under that legislation.
Annex I sectors include medical devices, machinery, toys, lifts, equipment for explosive atmospheres, radio equipment, pressure equipment, cableway installations, personal protective equipment, gas appliances, civil aviation, motor vehicles, agricultural vehicles, marine equipment, rail interoperability, and civil drones.
These AI systems must comply with both the AI Act and the applicable sectoral legislation. The extended compliance deadline for these systems is August 2, 2027.
Pathway 2: Standalone High-Risk Systems (Annex III)
AI systems listed in Annex III are classified as high-risk based on their area of deployment, regardless of whether they are embedded in a physical product. These are the focus of this guide.
Important exception — Article 6(3): An AI system in an Annex III area is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. Specifically, an Annex III system is not high-risk if it:
- Performs a narrow procedural task
- Is intended to improve the result of a previously completed human activity
- Detects decision-making patterns without replacing or influencing human assessment
- Performs a preparatory task to an assessment relevant to a use case listed in Annex III

Regardless of these criteria, an Annex III system that performs profiling of natural persons is always considered high-risk.
Providers who believe their Annex III system is not high-risk under this exception must document their assessment and register it with the EU database before placing the system on the market.
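To make the exception logic concrete, here is a hypothetical self-assessment sketch. The field names are our own shorthand for the four conditions, not terms from the Act, and the profiling override reflects Article 6(3)'s rule that systems performing profiling of natural persons remain high-risk. This is an illustrative aid, not legal advice.

```python
# Illustrative Article 6(3) exception screen. Field names are our own
# shorthand for the Act's four conditions, not official terminology.
from dataclasses import dataclass

@dataclass
class ExceptionAssessment:
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_replacing_judgment: bool
    preparatory_task_only: bool
    performs_profiling: bool  # profiling of natural persons

def may_be_exempt_from_high_risk(a: ExceptionAssessment) -> bool:
    """True if the Annex III system may fall outside the high-risk class."""
    if a.performs_profiling:
        # Article 6(3): profiling systems are always high-risk.
        return False
    # Meeting any one of the four conditions can support an exemption claim,
    # which must still be documented and registered.
    return any([
        a.narrow_procedural_task,
        a.improves_completed_human_activity,
        a.detects_patterns_without_replacing_judgment,
        a.preparatory_task_only,
    ])
```

Note that a `True` result only opens the door to the exemption; the documented assessment and EU database registration described above are still required.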
Annex III: The Eight Categories
Category 1: Biometrics
AI systems intended to be used for:
- Remote biometric identification: Systems that identify natural persons at a distance by comparing biometric data against reference databases, excluding systems used solely to verify that a person is who they claim to be. This includes facial recognition systems used for retrospective identification. Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is separately restricted under the Act's prohibited practices (Article 5), subject to narrow exceptions.
- Biometric categorization: Systems that assign natural persons to specific categories based on biometric data, such as categorizing individuals by age, hair color, eye color, tattoos, or other physical characteristics. Categorization that deduces sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation from biometric data is prohibited outright under Article 5(1)(g).
- Emotion recognition: Systems that identify or infer the emotions or intentions of natural persons based on their biometric data, including systems analyzing facial expressions, voice patterns, body posture, or physiological signals to determine emotional states.
Compliance considerations: Biometric systems are the only Annex III category for which a third-party conformity assessment by a notified body can be required (Article 43(1)), specifically where the provider has not applied harmonised standards or common specifications in full. All other Annex III categories use internal conformity assessment (self-assessment). This reflects the heightened risk biometrics pose to fundamental rights, particularly privacy and non-discrimination.
Category 2: Critical Infrastructure
AI systems intended to be used as safety components in the management and operation of:
- Road traffic and supply of water, gas, heating, and electricity: This includes AI systems managing traffic signals, autonomous vehicle decision-making components, power grid management systems, and water treatment process control.
- Digital infrastructure: AI systems managing internet traffic routing, DNS systems, cloud infrastructure operations, and data center management where failure could affect large populations.
Compliance considerations: These systems often operate in environments where failure has immediate physical consequences. Risk management systems must address cascading failure scenarios, and robustness requirements are particularly stringent. Human oversight must enable rapid intervention. Post-market monitoring must be continuous and real-time where technically feasible.
Category 3: Education and Vocational Training
AI systems intended to be used for:
- Determining access to or admission to educational and vocational training institutions at all levels: This covers AI systems used in admissions decisions, from primary school placement to university selection and professional certification programs.
- Evaluating learning outcomes: AI-driven grading and assessment systems, including automated essay scoring, examination proctoring, and competency evaluation.
- Assessing the appropriate level of education for an individual: Adaptive learning systems that determine educational placement or track assignment.
- Monitoring and detecting prohibited behavior during tests: AI-powered proctoring systems that monitor students during examinations.
Compliance considerations: Educational AI systems directly affect individuals’ life opportunities and are subject to particular scrutiny around bias and discrimination. Data governance requirements demand attention to training data representativeness across demographic groups. Transparency obligations require that students and parents receive meaningful information about how AI systems affect educational outcomes. The prohibition on emotion recognition in educational institutions (Article 5(1)(f)) must be distinguished from legitimate proctoring systems that detect specific behaviors.
Category 4: Employment, Workers Management, and Access to Self-Employment
AI systems intended to be used for:
- Recruitment and selection: AI systems used in CV screening, candidate ranking, interview analysis, and hiring recommendations.
- Decisions affecting terms of work: AI systems making or influencing decisions about promotion, task allocation, performance evaluation, termination, or contract conditions.
- Advertising job vacancies and evaluating candidates: Systems that target job advertisements or filter applications.
- Monitoring and evaluation of workers: Systems that monitor employee performance, productivity, behavior, or engagement through surveillance or data analysis.
Compliance considerations: Employment AI is among the most litigated areas of AI regulation globally. NYC Local Law 144 already requires bias audits for automated employment decision tools. Under the AI Act, providers and deployers of employment AI must implement particularly robust anti-discrimination measures. Human oversight requirements in employment contexts are demanding: Article 14 requires that human overseers have the competence, training, and authority to override AI system outputs. Deployers (typically employers) bear significant obligations, including conducting Data Protection Impact Assessments under GDPR and informing works councils or employee representatives.
Category 5: Access to and Enjoyment of Essential Private and Public Services and Benefits
AI systems intended to be used for:
- Evaluating eligibility for public assistance benefits and services: Systems used by government agencies to determine entitlement to social security, housing assistance, unemployment benefits, or other welfare programs.
- Credit scoring and creditworthiness assessment: AI systems that evaluate an individual's creditworthiness or determine credit scores. This is one of the most commercially significant Annex III categories, affecting banks, fintech companies, and credit bureaus.
- Risk assessment and pricing in life and health insurance: AI systems used to assess risk and determine premiums for life and health insurance products. Motor vehicle and property insurance are not covered.
- Emergency response dispatch and prioritization: AI systems that evaluate or classify emergency calls, determine dispatch priority, or triage requests for first responders.
Compliance considerations: Credit scoring AI is already subject to extensive financial regulation. The AI Act adds a layer of requirements on top of existing obligations under the Consumer Credit Directive, GDPR, and national financial regulation. Insurance pricing AI must address the risk of indirect discrimination through proxy variables. Public benefits AI raises particular concerns about error rates disproportionately affecting vulnerable populations. Documentation must demonstrate that systems have been tested for adverse impact on protected groups.
Category 6: Law Enforcement
AI systems intended to be used by law enforcement authorities for:
- Individual risk assessment: Systems assessing the risk that a natural person will commit or recommit a criminal offense, or the risk of victimization.
- Polygraph and similar tools: AI systems used as polygraphs or to detect deception during investigations.
- Evaluation of evidence reliability: AI systems assessing the reliability or weight of evidence in criminal investigations.
- Profiling during investigation: AI systems used for profiling in the course of detection, investigation, or prosecution of criminal offenses.
- Crime analytics on large datasets: AI systems analyzing large datasets to identify unknown patterns and relationships relevant to criminal investigation.
Compliance considerations: Law enforcement AI operates at the intersection of public safety and fundamental rights. The AI Act’s requirements must be read alongside the Law Enforcement Directive (Directive 2016/680) and national criminal procedure laws. Documentation must be particularly thorough given the potential for judicial review. Human oversight in law enforcement contexts requires that officers understand the limitations of AI outputs and retain decision-making authority. Bias testing must address the well-documented risks of discriminatory policing outcomes.
Category 7: Migration, Asylum, and Border Control Management
AI systems intended to be used by competent public authorities for:
- Polygraph and similar tools: AI systems used in the assessment of asylum claims or migration proceedings.
- Risk assessment: AI systems assessing security, irregular migration, or health risks posed by individuals.
- Examination of applications: AI systems assisting in the examination of applications for asylum, visa, and residence permits, including assessment of evidence reliability.
- Detection and identification: AI systems used for detection, recognition, or identification of individuals in the context of migration and border control, except for travel document verification.
Compliance considerations: Migration AI involves some of the most vulnerable populations subject to AI-driven decisions. International human rights obligations, including the principle of non-refoulement, must be integrated into risk management systems. The UNHCR has raised concerns about AI in asylum processing, and compliance documentation should demonstrate awareness of and response to these concerns. Language barriers, cultural differences, and the high-stakes nature of asylum decisions demand robust human oversight mechanisms.
Category 8: Administration of Justice and Democratic Processes
AI systems intended to be used for:
- Researching and interpreting facts and the law: AI systems used by judicial authorities, or on their behalf, to analyze legal questions, evaluate evidence, or identify relevant precedent.
- Applying the law to facts: AI systems that assist judges in determining how legal rules apply to specific cases.
- Alternative dispute resolution: AI systems used in a similar way in mediation, arbitration, or other dispute resolution processes whose outcomes produce legal effects for the parties.
- Influencing the outcome of elections or referendums: AI systems used to influence voters or voting behavior, including through targeted political advertising, personalized political content, or information manipulation during electoral periods. This does not cover AI used for organizational, accessibility, or logistical aspects of elections.
Compliance considerations: The use of AI in judicial proceedings raises fundamental questions about the right to a fair trial and judicial independence. Several EU member states have placed limitations on predictive justice systems. Article 6 of the European Convention on Human Rights guarantees fair trial rights, and any AI system used in judicial settings must be compatible with these guarantees. Democratic process protections intersect with the Digital Services Act’s requirements for platform transparency during elections.
Compliance Requirements for All High-Risk Systems
Every high-risk AI system, regardless of Annex III category, must meet the following requirements:
Risk Management System (Article 9)
A continuous, iterative risk management process throughout the AI system lifecycle. Must include:
- Identification and analysis of foreseeable risks
- Estimation and evaluation of risks from intended use and reasonably foreseeable misuse
- Adoption of risk management measures
- Testing to ensure measures are adequate
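The iterative process above lends itself to a structured risk register. Below is a minimal sketch of one register entry; the field names and the severity-times-likelihood scoring are our own assumptions, since Article 9 prescribes the process, not a data model.

```python
# Hypothetical risk register entry for an Article 9 risk management
# process. Field names and scoring scales are illustrative assumptions,
# not prescribed by the AI Act.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    source: str                  # e.g. "intended use" or "foreseeable misuse"
    severity: int                # 1 (negligible) to 5 (critical)
    likelihood: int              # 1 (rare) to 5 (frequent)
    mitigations: list = field(default_factory=list)
    residual_risk_accepted: bool = False

    def priority(self) -> int:
        # Simple severity x likelihood score; real-world methodologies vary
        # and should be justified in the technical documentation.
        return self.severity * self.likelihood
```

Each entry then anchors the later steps: mitigations are adopted against it, and testing verifies they reduce the residual risk to an acceptable level.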
Data and Data Governance (Article 10)
Training, validation, and testing datasets must be:
- Relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose
- Subject to appropriate data governance and management practices
- Assessed for potential biases that may lead to discrimination
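As one concrete angle on the representativeness requirement, a provider might compare group proportions in a training set against a reference population. The helper below is a hypothetical sketch; the tolerance threshold and group labels are our own assumptions, not values from the Act.

```python
# Illustrative representativeness check for Article 10 data governance.
# The 5% tolerance and the group labels are illustrative assumptions.
from collections import Counter

def representation_gaps(labels, reference: dict, tolerance: float = 0.05):
    """Return groups whose share in `labels` deviates from the
    `reference` population share by more than `tolerance`."""
    n = len(labels)
    shares = {group: count / n for group, count in Counter(labels).items()}
    gaps = {}
    for group, expected in reference.items():
        actual = shares.get(group, 0.0)
        if abs(actual - expected) > tolerance:
            gaps[group] = (actual, expected)
    return gaps
```

A non-empty result would not itself prove discrimination, but it flags datasets needing investigation and documentation under the data governance practices above.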
Technical Documentation (Article 11)
Comprehensive documentation drawn up before market placement and kept up to date. Must cover system design, development methodology, performance metrics, risk management, and data governance.
Record-Keeping (Article 12)
High-risk AI systems must be designed to automatically log events throughout their operational lifetime. Logs must be sufficient to trace the system’s operation and enable post-market monitoring.
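A structured, append-only event log is one common way to meet this traceability goal. The sketch below is illustrative: the Act does not prescribe a log schema (it specifies minimum logged situations only for remote biometric identification systems), so the field names here are our own assumptions.

```python
# Minimal structured-logging sketch for Article 12 traceability.
# The event schema (field names, event types) is an illustrative
# assumption; the AI Act does not prescribe a log format.
import datetime
import json
import logging

logger = logging.getLogger("high_risk_ai_audit")

def log_event(system_id: str, event_type: str, payload: dict) -> str:
    """Emit one timestamped, machine-readable audit record."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "inference", "override", "anomaly"
        "payload": payload,
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)  # route to durable, tamper-evident storage in practice
    return line
```

In practice such records would be retained for the period required by Article 19 and made available to market surveillance authorities on request.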
Transparency and Information to Deployers (Article 13)
Providers must supply deployers with clear, comprehensive instructions for use, including system capabilities, limitations, intended purpose, performance characteristics, and human oversight provisions.
Human Oversight (Article 14)
Systems must be designed to enable effective oversight by natural persons during the period of use. Human overseers must be able to:
- Fully understand the system’s capacities and limitations
- Properly monitor the system’s operation
- Decide not to use the system or disregard its output
- Intervene or interrupt the system’s operation
Accuracy, Robustness, and Cybersecurity (Article 15)
High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle, as declared in accompanying documentation.
Conformity Assessment Procedures
Internal Control (Annex VI)
Most Annex III high-risk AI systems are assessed through internal control, where the provider verifies compliance against the Act’s requirements. The provider must:
- Verify the quality management system
- Examine the technical documentation
- Verify that the system conforms to the technical documentation
- Issue the EU declaration of conformity
- Affix the CE marking
Third-Party Assessment (Annex VII)
Biometric systems (Category 1) require assessment by an independent notified body where the provider has not applied harmonised standards or common specifications in full; where such standards have been applied, the provider may opt for internal control instead. The notified body examines the technical documentation and quality management system and performs testing or inspection as necessary before issuing a certificate.
Practical Classification Decisions
Determining whether a specific AI system is high-risk requires careful analysis. Consider these factors:
- Does the system fall within an Annex III category? Map the system's intended purpose against each of the eight categories.
- Does the Article 6(3) exception apply? Assess whether the system performs a narrow procedural task, improves the result of a previously completed human activity, detects patterns without replacing human judgment, or performs only a preparatory task.
- What is the system's actual impact on individuals? Even if an exception appears to apply, consider whether the system's output materially affects individuals' rights or life opportunities.
- Document the classification decision. Whether the system is classified as high-risk or not, the reasoning must be documented and defensible.
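The decision sequence above can be sketched as a simple helper. This is a hypothetical aid that assumes the three yes/no inputs have already been assessed by competent reviewers; it is a mnemonic for the analysis, not a substitute for it.

```python
# Hypothetical classification walkthrough mirroring the steps above.
# The three boolean inputs are assumed to come from a documented
# human assessment; this function only encodes the decision order.
def classify(annex_iii_match: bool,
             exception_applies: bool,
             materially_affects_rights: bool) -> str:
    if not annex_iii_match:
        return "not high-risk (outside Annex III)"
    if exception_applies and not materially_affects_rights:
        # Exemption still requires documentation and EU database registration.
        return "not high-risk (Article 6(3) exception; document and register)"
    # Material impact overrides an apparent exception in borderline cases.
    return "high-risk (full compliance required)"
```

Whatever the output, the final step stands: the reasoning behind each input must itself be documented and defensible.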
Organizations should err on the side of caution in borderline cases. Misclassifying a high-risk system as non-high-risk exposes the provider to penalties of up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher.
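As a quick arithmetic sketch of that ceiling (assuming the general regime rather than the reduced caps the Act provides for SMEs), the maximum is the higher of the two amounts:

```python
# Penalty ceiling sketch: the higher of EUR 15 million or 3% of
# worldwide annual turnover (general regime; SMEs face lower caps).
def max_penalty_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)
```

So the fixed EUR 15 million floor dominates until worldwide annual turnover exceeds EUR 500 million, after which the 3% figure takes over.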
This guide is part of INHUMAIN.AI’s comprehensive EU AI Act coverage. See also: EU AI Act Complete Guide, Prohibited Practices, Implementation Timeline, and Global AI Regulation Tracker.