EU AI Act: What AI Practices Are Banned in Europe
Complete analysis of AI practices prohibited under Article 5 of the EU AI Act — social scoring, subliminal manipulation, biometric surveillance, and more. Enforceable since February 2, 2025.
Article 5 of the EU AI Act establishes an outright ban on AI practices deemed to pose an unacceptable risk to fundamental rights, safety, and democratic values. These are the AI applications that the European Union has determined cannot be made safe through compliance requirements alone — they are prohibited entirely.
The prohibitions took effect on February 2, 2025, making them the first enforceable provisions of the AI Act. Violations carry the Act’s maximum penalty: up to EUR 35 million or 7% of the offender’s total worldwide annual turnover, whichever is higher.
The Prohibited Practices
1. Subliminal Manipulation
Banned: AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting the behavior of a person or group in a manner that causes or is reasonably likely to cause significant harm.
What this means in practice: This prohibition targets AI systems designed to influence human behavior through mechanisms that bypass conscious awareness. The key elements are:
- The technique must operate below conscious perception or be deliberately manipulative or deceptive
- The distortion of behavior must be material (not trivial)
- There must be a causal link to actual or likely significant harm (physical, psychological, or financial)
What it does NOT ban: Persuasion through legitimate means. An AI-powered advertising system that makes transparent claims about a product is not subliminal manipulation, even if it is effective. The prohibition targets techniques specifically designed to circumvent conscious decision-making — not all forms of AI-assisted influence.
Enforcement challenge: Demonstrating that a technique operates below conscious awareness requires expert testimony and technical evidence. Regulators will need to develop testing methodologies to identify subliminal AI manipulation techniques.
2. Exploitation of Vulnerabilities
Banned: AI systems that exploit vulnerabilities of a specific group of persons due to their age, disability, or specific social or economic situation, with the objective or effect of materially distorting their behavior in a manner that causes or is likely to cause significant harm.
What this means in practice: This prohibition protects populations that are particularly susceptible to AI-driven manipulation:
- Age: Both children and elderly individuals, whose cognitive capacities may make them more susceptible to certain AI-driven interactions
- Disability: Physical, mental, intellectual, or sensory disabilities that may increase vulnerability to AI manipulation
- Social or economic situation: Individuals in financial distress, social isolation, or precarious situations that may make them more susceptible
Examples that would likely be prohibited:
- An AI-powered lending app that targets financially distressed individuals with predatory loan terms, using behavioral modeling to maximize acceptance
- A gaming AI that identifies and exploits patterns of addictive behavior in minors
- An AI system that uses voice analysis to identify elderly individuals with cognitive decline and targets them with scam interactions
3. Social Scoring
Banned: AI systems that evaluate or classify natural persons or groups of persons over a certain period of time based on their social behavior or known, inferred, or predicted personal or personality characteristics, where the social score leads to detrimental or unfavorable treatment that is:
- Unrelated to the context in which the data was originally generated or collected, or
- Unjustified or disproportionate to the social behavior or its gravity
What this means in practice: This is the “China social credit” prohibition, but it is more nuanced than a blanket ban on all scoring. The prohibition has two alternative conditions — the resulting treatment must be either contextually unrelated or disproportionate. A tax authority using financial data to assess tax risk is not prohibited (contextually appropriate). An authority using social media behavior to determine access to public housing would likely be prohibited (contextually unrelated).
Scope note: The Commission’s 2021 proposal limited this prohibition to public authorities, but the final text of the Act applies to public and private actors alike. Private-sector scoring practices may also be caught by other provisions (high-risk classification, GDPR, and consumer protection law).
4. Predictive Policing (Individual Risk Assessment)
Banned: AI systems that assess the risk of a natural person committing a criminal offense based solely on profiling or the assessment of personality traits and characteristics. This prohibition does not apply to AI systems used to support human assessment of involvement in criminal activity based on objective, verifiable facts directly linked to criminal activity.
What this means in practice: The prohibition targets predictive policing systems that attempt to predict criminal behavior based on personal characteristics rather than evidence of actual criminal conduct. Systems that predict who will commit a crime based on demographic data, personality profiling, or behavioral modeling are banned. Systems that analyze evidence of criminal activity that has already occurred are not.
The critical distinction: The phrase “based solely on profiling” is the operative constraint. An AI system that flags individuals for investigation based on evidence (financial transactions consistent with money laundering, communication patterns consistent with known criminal networks) is not prohibited because it is not based solely on profiling. A system that flags individuals as high-risk based on their neighborhood, social connections, age, or psychological profile — without any link to actual criminal evidence — is prohibited.
5. Untargeted Facial Image Scraping
Banned: AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
What this means in practice: This prohibition directly targets practices like those employed by companies that have scraped billions of facial images from social media platforms and public websites to build facial recognition databases. The prohibition covers:
- Scraping facial images from social media platforms (Facebook, Instagram, LinkedIn, etc.)
- Harvesting facial images from CCTV and surveillance camera networks without specific targeting
- Building facial recognition reference databases from publicly available photographs
What it does NOT ban: Targeted collection of facial images with consent, law enforcement collection of images for specific investigations (subject to legal basis), and the use of existing lawfully compiled facial recognition databases.
6. Emotion Recognition in Workplaces and Education
Banned: AI systems that infer the emotions of natural persons in the areas of the workplace and educational institutions, except where the AI system is intended to be placed on the market or put into service for medical or safety reasons.
What this means in practice: Employers cannot deploy AI systems that analyze employees’ facial expressions, voice tone, or physiological signals to determine their emotional state, engagement level, or satisfaction. Schools cannot use AI to monitor students’ emotional responses during classes.
Exceptions:
- Medical reasons: AI systems used to monitor the emotional state of patients for medical purposes (e.g., detecting distress in care settings)
- Safety reasons: AI systems used to detect drowsiness or attention levels in safety-critical roles (e.g., pilots, heavy machinery operators, drivers)
Boundary issue: The line between prohibited workplace emotion recognition and permitted safety monitoring requires careful analysis. A system monitoring a truck driver’s alertness level for safety purposes may be permitted; the same system monitoring an office worker’s engagement level is likely prohibited.
7. Biometric Categorization Based on Sensitive Attributes
Banned: AI systems that categorize natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. The labeling or filtering of lawfully acquired biometric datasets, and the categorization of biometric data in the area of law enforcement, are excluded.
What this means in practice: AI systems that analyze photographs, voice recordings, or other biometric data to classify individuals by sensitive protected characteristics are prohibited. This covers systems that attempt to determine sexual orientation from facial features, infer religious affiliation from appearance, or categorize race from biometric measurements.
8. Real-Time Remote Biometric Identification in Public Spaces
Banned: AI systems for real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement.
Exceptions (strictly limited): Real-time remote biometric identification may be permitted in specific, narrowly defined circumstances:
- Targeted search for victims: Search for specific potential victims of abduction, trafficking, or sexual exploitation; search for missing persons
- Prevention of imminent threat: Prevention of a specific, substantial, and imminent threat to life or physical safety, or a genuine and present or foreseeable threat of a terrorist attack
- Serious crime investigation: Identification of a person suspected of committing a serious criminal offense (as defined by national law)
Safeguards for exceptions: Even when exceptions apply, the use must be:
- Authorized by a judicial authority or independent administrative authority (ex ante authorization required, except in duly justified cases of urgency where authorization must be sought within 24 hours)
- Necessary and proportionate
- Limited in time and geographic scope
- Subject to a prior fundamental rights impact assessment
- Registered in the EU database for high-risk AI systems
Enforcement Landscape
Penalty Structure
Violation of Article 5 prohibitions carries the AI Act’s highest penalty tier:
- Up to EUR 35 million, or
- Where the offender is an undertaking, up to 7% of total worldwide annual turnover in the preceding financial year, whichever is higher
For context, 7% of global turnover for a major technology company would represent billions of euros — significantly exceeding GDPR’s maximum of 4% of global turnover.
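The "whichever is higher" rule can be made concrete with a short sketch. The function name and the integer-euro convention below are illustrative assumptions, not anything defined by the Act; integer arithmetic is used so the 7% computation has no floating-point rounding.

```python
def max_article5_fine(worldwide_turnover_eur: int) -> int:
    """Upper bound of an Article 5 fine under the AI Act's penalty tier:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher. Amounts are in whole euros."""
    FIXED_CAP = 35_000_000
    # 7% of turnover, computed in integer euros
    turnover_cap = worldwide_turnover_eur * 7 // 100
    return max(FIXED_CAP, turnover_cap)

# For a company with EUR 200bn turnover, the ceiling is EUR 14bn,
# far above the fixed EUR 35m figure.
print(max_article5_fine(200_000_000_000))  # 14000000000
```

For a small firm with, say, EUR 100m turnover, 7% would be only EUR 7m, so the EUR 35m fixed ceiling governs instead.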
Enforcement Authority
National market surveillance authorities are responsible for enforcing the prohibitions within their member states. The European Commission, through the AI Office, coordinates enforcement and can initiate proceedings relating to general-purpose AI (GPAI) models.
Early Enforcement Actions
As the prohibitions took effect in February 2025, the enforcement landscape is still developing. National authorities have been establishing their enforcement mechanisms and developing assessment methodologies. Several data protection authorities, already experienced in AI enforcement through GDPR, have signaled active interest in Article 5 enforcement.
Relationship to Existing Enforcement
The Article 5 prohibitions complement existing enforcement regimes:
- GDPR: Biometric data processing violations can trigger both GDPR and AI Act enforcement
- Consumer protection: Manipulative AI practices may also violate the Unfair Commercial Practices Directive
- Fundamental rights: The EU Charter of Fundamental Rights provides additional grounds for challenging prohibited practices
- National law: Member states may have additional prohibitions under domestic law
Practical Compliance Guidance
Step 1: Audit Existing AI Systems
Review all deployed AI systems against the eight prohibitions. Pay particular attention to:
- Any system that influences user behavior (manipulation risk)
- Any system processing biometric data (multiple prohibitions apply)
- Any system used in employment or education (emotion recognition prohibition)
- Any system performing risk assessment of individuals (predictive policing prohibition)
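The audit step above can be sketched as a first-pass inventory screen. The trait names, the grouping of Article 5 points, and the `screen_system` helper are all illustrative assumptions for a hypothetical internal tool, not an official mapping.

```python
# Hypothetical first-pass screen of an AI inventory against the
# Article 5 risk signals listed above (names are illustrative).
PROHIBITION_TRIGGERS = {
    "influences_user_behavior": "manipulation / exploitation (Art. 5(1)(a)-(b))",
    "processes_biometric_data": "biometric prohibitions (Art. 5(1)(e), (g), (h))",
    "used_in_workplace_or_education": "emotion recognition (Art. 5(1)(f))",
    "assesses_individual_risk": "predictive policing (Art. 5(1)(d))",
}

def screen_system(name: str, traits: set[str]) -> list[str]:
    """Return the Article 5 areas a system should be reviewed against."""
    return [area for trait, area in PROHIBITION_TRIGGERS.items()
            if trait in traits]

# A workplace sentiment tool touching biometric data gets flagged
# for both the biometric and emotion-recognition reviews.
flags = screen_system("hr-sentiment-tool",
                      {"processes_biometric_data",
                       "used_in_workplace_or_education"})
```

A screen like this only surfaces candidates for legal review; whether a flagged system actually falls within a prohibition still turns on the elements discussed above (material distortion, significant harm, context).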
Step 2: Assess Borderline Cases
Many AI systems will fall near the boundary of prohibited practices. Document the analysis for each borderline system, including:
- Why the system does not fall within a prohibition
- What safeguards are in place to prevent drift into prohibited territory
- How the system will be monitored for compliance
Step 3: Implement Technical Controls
Where systems operate near prohibited boundaries, implement technical controls:
- Data processing constraints that prevent collection of prohibited categories
- Output filters that prevent generation of prohibited classifications
- Audit logs that demonstrate ongoing compliance
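One way to combine the last two controls is an output filter that suppresses prohibited classifications and writes an audit-log entry for each suppression. This is a minimal sketch for a system operating near the biometric categorization prohibition; the label names, logger name, and `filter_output` helper are assumptions, not a prescribed implementation.

```python
import json
import logging
from datetime import datetime, timezone

# Sensitive attributes named in Article 5(1)(g); label strings are
# illustrative and would need to match the deployed model's taxonomy.
PROHIBITED_LABELS = {
    "race", "political_opinions", "trade_union_membership",
    "religious_beliefs", "philosophical_beliefs", "sex_life",
    "sexual_orientation",
}

audit_log = logging.getLogger("article5_audit")

def filter_output(labels: dict[str, float]) -> dict[str, float]:
    """Drop prohibited classifications from a model's label->score
    output and log what was suppressed, for later compliance audit."""
    allowed = {k: v for k, v in labels.items() if k not in PROHIBITED_LABELS}
    blocked = sorted(set(labels) - set(allowed))
    if blocked:
        audit_log.warning(json.dumps({
            "event": "blocked_prohibited_labels",
            "labels": blocked,
            "at": datetime.now(timezone.utc).isoformat(),
        }))
    return allowed
```

In practice the filter would sit at the API boundary, so that no downstream consumer can receive a prohibited classification even if the underlying model produces one.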
Step 4: Train Personnel
Ensure that all personnel involved in AI system design, development, deployment, and operation understand the prohibitions and can identify potential violations.
This guide is part of INHUMAIN.AI’s comprehensive EU AI Act coverage. See also: EU AI Act Complete Guide, High-Risk Systems, Implementation Timeline, and Global AI Regulation Tracker.