INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7

AI Incident Tracker: Every Documented AI Failure, Harm, and Near-Miss

The most comprehensive independent database of AI incidents, failures, and near-misses. 847 documented incidents across 12 categories and 67 countries, with severity classifications, trend analysis, and reporting mechanisms.

Every AI system that causes harm, every algorithmic failure that damages lives, every near-miss that almost became a catastrophe — these events deserve documentation, analysis, and accountability. This tracker exists because the AI industry has a structural incentive to bury its failures and a cultural tendency to treat documented harms as acceptable costs of progress.

They are not acceptable. They are incidents. They have victims. And they form a pattern that demands attention.

INHUMAIN.AI’s Incident Tracker is the most comprehensive independent record of AI-related incidents worldwide. We document what happened, who was affected, what category of failure occurred, and what — if anything — was done about it. This page provides our methodology, summary statistics, and the 20 most significant incidents of 2025-2026. The full searchable database is available to researchers and journalists upon request.

For related context, see our AI Safety Complete Guide, AI Doomsday Clock, and AI Statistics 2026.


Methodology

What Counts as an Incident

We define an AI incident as any event in which an AI system caused, contributed to, or nearly caused harm to individuals, groups, organizations, or the public interest. This includes:

  • Direct harm: Physical injury, financial loss, emotional distress, or rights violations caused by AI system behavior
  • Systemic harm: Discrimination, surveillance abuse, or erosion of civil liberties enabled by AI systems
  • Near-misses: Events where AI system behavior could have caused serious harm but was caught or mitigated before full impact
  • Disclosure failures: Incidents where AI involvement in decisions affecting people was concealed or misrepresented
  • Safety failures: Events where AI safety mechanisms failed to prevent harmful outputs or behaviors

We do not count every bug, glitch, or inconvenience. A chatbot giving a wrong answer about a restaurant’s hours is not an incident. A chatbot providing dangerous medical advice that a patient follows is.

Severity Classification

| Level | Label | Criteria |
|---|---|---|
| 1 | Low | Minor harm, limited scope, quickly remediated |
| 2 | Moderate | Measurable harm to individuals, or minor harm to many people |
| 3 | High | Significant harm to individuals or groups, systemic impact, or substantial financial damage |
| 4 | Critical | Severe physical harm, death, large-scale discrimination, national security implications, or precedent-setting failures |
| 5 | Catastrophic | Mass casualties, civilizational-scale impact, or irreversible harm (none documented to date) |
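The definition and severity scale above can be captured in a small data model. The following Python sketch is illustrative only — the `Incident` class and its field names are assumptions, not the tracker's actual schema:

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    """Severity levels as defined in the tracker's methodology."""
    LOW = 1            # minor harm, limited scope, quickly remediated
    MODERATE = 2       # measurable harm to individuals, or minor harm to many
    HIGH = 3           # significant harm, systemic impact, or major financial damage
    CRITICAL = 4       # severe physical harm, death, large-scale discrimination
    CATASTROPHIC = 5   # mass casualties or irreversible harm (none documented)

@dataclass
class Incident:
    """One entry in the incident database (hypothetical field names)."""
    date: str
    description: str
    category: str
    severity: Severity
    verified: bool     # True once confirmed by two independent sources
    outcome: str = "no known consequence"

crash = Incident(
    date="2025-06",
    description="Trading algorithm malfunction caused 7-minute flash crash",
    category="Financial Algorithm Failures",
    severity=Severity.CRITICAL,
    verified=True,
)
print(crash.severity.name, int(crash.severity))  # CRITICAL 4
```

Using an `IntEnum` keeps the labels human-readable while preserving the numeric ordering, so incidents can be filtered or sorted by severity directly.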

Data Sources

Our incident database draws from:

  • Academic incident databases (AIAAIC Repository, AI Incident Database by Partnership on AI)
  • Government enforcement actions and regulatory filings
  • Court documents and legal proceedings
  • Investigative journalism and media reports
  • Whistleblower disclosures (see AI Whistleblower Protection)
  • Direct submissions through our encrypted tip line
  • Published safety evaluations and red-team reports
  • Company incident disclosures and post-mortems

We verify each incident against at least two independent sources before inclusion. Where verification is impossible, incidents are marked as “unconfirmed” and excluded from summary statistics.
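The two-source rule amounts to a simple filter over submitted reports. A minimal sketch, assuming each report carries a hypothetical `independent_sources` count:

```python
def summary_eligible(incidents):
    """Keep only incidents verified against at least two independent sources;
    everything else is tagged 'unconfirmed' and excluded from summary stats."""
    confirmed = []
    for inc in incidents:
        if inc.get("independent_sources", 0) >= 2:
            confirmed.append(inc)
        else:
            inc["status"] = "unconfirmed"
    return confirmed

reports = [
    {"id": 1, "independent_sources": 3},
    {"id": 2, "independent_sources": 1},
]
print([inc["id"] for inc in summary_eligible(reports)])  # [1]
```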


2025-2026 Summary Statistics

Incident Volume by Year

| Year | Total Incidents | Critical (Sev. 4) | High (Sev. 3) | Moderate (Sev. 2) | Low (Sev. 1) |
|---|---|---|---|---|---|
| 2020 | 42 | 2 | 8 | 18 | 14 |
| 2021 | 67 | 3 | 14 | 28 | 22 |
| 2022 | 98 | 5 | 21 | 42 | 30 |
| 2023 | 156 | 7 | 34 | 68 | 47 |
| 2024 | 172 | 9 | 41 | 72 | 50 |
| 2025 | 312 | 23 | 78 | 134 | 77 |
| Total | 847 | 49 | 196 | 362 | 240 |

The 81% increase in documented incidents from 2024 to 2025 reflects both the acceleration of AI deployment and improved reporting mechanisms — not necessarily a proportional increase in the rate of harm per system deployed. But the absolute numbers matter: 312 documented incidents in a single year, with 23 classified as critical.
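The growth figure can be checked directly against the table above:

```python
# Yearly totals from the incident volume table
incidents_by_year = {2020: 42, 2021: 67, 2022: 98, 2023: 156, 2024: 172, 2025: 312}

def yoy_growth(counts, year):
    """Year-over-year growth rate relative to the previous year."""
    prev = counts[year - 1]
    return (counts[year] - prev) / prev

print(f"{yoy_growth(incidents_by_year, 2025):.0%}")  # 81%
print(sum(incidents_by_year.values()))               # 847
```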

Incidents by Category (2025)

| Category | Count | % of Total | Trend |
|---|---|---|---|
| Bias & Discrimination | 67 | 21.5% | Stable |
| Privacy Violations | 52 | 16.7% | Up |
| Deepfake Harms | 48 | 15.4% | Up sharply |
| Financial Algorithm Failures | 34 | 10.9% | Up |
| Medical AI Errors | 28 | 9.0% | Up |
| Autonomous Vehicle Incidents | 24 | 7.7% | Stable |
| Surveillance Abuses | 19 | 6.1% | Up |
| Content Moderation Failures | 15 | 4.8% | Down |
| Military/Defense AI | 9 | 2.9% | Up |
| Environmental Harm | 7 | 2.2% | New category |
| Labor/Employment | 6 | 1.9% | Stable |
| Infrastructure/Critical Systems | 3 | 1.0% | New category |

Geographic Distribution (2025)

| Region | Incidents | Top Country |
|---|---|---|
| North America | 128 | United States (119) |
| Europe | 78 | United Kingdom (22) |
| Asia-Pacific | 61 | China (24) |
| Middle East & North Africa | 19 | Saudi Arabia (7) |
| Latin America | 14 | Brazil (8) |
| Sub-Saharan Africa | 8 | Nigeria (3) |
| Central & South Asia | 4 | India (3) |

Incident Categories Explained

Bias & Discrimination

AI systems that produce systematically unfair outcomes based on race, gender, age, disability, socioeconomic status, or other protected characteristics. This remains the largest category of documented incidents. Examples include hiring algorithms that screen out women, criminal risk assessment tools that assign higher risk scores to Black defendants, lending algorithms that charge higher rates in minority neighborhoods, and facial recognition systems with dramatically different error rates across demographic groups.

Privacy Violations

Unauthorized collection, storage, use, or disclosure of personal data by AI systems. Includes training on personal data without consent, facial recognition deployment without notice, and inference of sensitive attributes (health conditions, sexual orientation, political views) from non-sensitive data.

Deepfake Harms

Damage caused by AI-generated synthetic media. The fastest-growing category, driven by the increasing accessibility and quality of image, video, and voice synthesis tools. Includes non-consensual intimate imagery (the largest subcategory), financial fraud using voice cloning, political disinformation, and identity theft. See AI Tools Database for tools that enable generation and detection.

Financial Algorithm Failures

Losses, market disruptions, or consumer harms caused by AI-driven financial systems. Includes flash crashes triggered by algorithmic trading, discriminatory credit decisions, insurance pricing algorithms, and cryptocurrency manipulation bots.

Medical AI Errors

Diagnostic errors, treatment recommendations, or triage decisions made or influenced by AI systems that resulted in patient harm. A particularly concerning category because healthcare AI failures can directly cause physical injury or death, and because patients often do not know that AI was involved in their care.

Autonomous Vehicle Incidents

Crashes, injuries, and fatalities involving vehicles operating with autonomous or semi-autonomous AI systems. Includes incidents from Waymo, Cruise, Tesla Autopilot/FSD, and other AV operators. Severity ranges from minor collisions to fatalities.

Surveillance Abuses

Deployment of AI-powered surveillance systems in ways that violate civil liberties, target vulnerable populations, or exceed legal authority. Includes mass facial recognition, predictive policing, social media monitoring, and workplace surveillance. Particularly prevalent in authoritarian contexts.

Military/Defense AI

Incidents involving AI systems in military contexts, including autonomous targeting, intelligence analysis errors, and civilian harm from AI-assisted operations. The most difficult category to document due to classification and operational security. Related to AI Doomsday Clock autonomous weapons assessment.


Top 20 Most Significant Incidents (2025-2026)

| # | Date | Incident | Category | Severity | Outcome |
|---|---|---|---|---|---|
| 1 | Jan 2025 | Deepfake audio of UK PM used in financial fraud campaign targeting elderly victims; $12M stolen before detection | Deepfake | Critical | Criminal investigation; 3 arrests; victims partially compensated |
| 2 | Feb 2025 | Major US health insurer’s AI claims denial system found to auto-reject 89% of elderly care claims without human review | Medical AI / Bias | Critical | Class action lawsuit filed; Congressional inquiry; system suspended |
| 3 | Mar 2025 | Autonomous delivery robot struck pedestrian in San Francisco crosswalk, causing hospitalization | AV Incidents | High | Operator license suspended; NHTSA investigation |
| 4 | Mar 2025 | AI-powered hiring tool used by Fortune 500 firm found to systematically exclude candidates with disabilities | Bias | Critical | EEOC enforcement action; $8.2M settlement |
| 5 | Apr 2025 | Voice cloning used to impersonate CEO in wire transfer fraud, resulting in $35M loss | Deepfake / Financial | Critical | FBI investigation; partial recovery; company sued voice cloning provider |
| 6 | May 2025 | Chinese facial recognition system misidentified journalist as wanted criminal; detained for 14 hours | Surveillance / Bias | High | Formal protest by press freedom organizations; no government response |
| 7 | May 2025 | AI-generated CSAM distribution network discovered using open-source image models | Deepfake | Critical | 12 arrests across 5 countries; platform policy changes |
| 8 | Jun 2025 | Trading algorithm malfunction caused 7-minute flash crash in European markets; $4.2B temporary value loss | Financial | Critical | ESMA investigation; circuit breakers activated; new algo testing rules proposed |
| 9 | Jun 2025 | AI-powered predictive policing system in Brazilian city found to disproportionately target favela residents | Surveillance / Bias | High | System suspended by court order; civil rights investigation |
| 10 | Jul 2025 | Medical chatbot provided dangerous drug interaction advice; patient hospitalized | Medical AI | Critical | Product recalled; FDA guidance updated; lawsuit filed |
| 11 | Jul 2025 | AI content moderation failure allowed coordinated hate campaign to persist on major platform for 3 weeks | Content Mod | High | Platform fined under DSA; policy overhaul |
| 12 | Aug 2025 | Deepfake political ad campaign in state election reached 2M+ viewers before takedown | Deepfake | Critical | FEC complaint; platform fined; election integrity review |
| 13 | Aug 2025 | HUMAIN-deployed customer service AI in Saudi banking sector leaked personal financial data of 45,000 customers | Privacy | Critical | SDAIA investigation; system offline; HUMAIN Tracker updated |
| 14 | Sep 2025 | AI-powered border screening system at EU airport denied entry to 340 legitimate travelers over 2-week period | Bias | High | System recalibrated; affected travelers compensated |
| 15 | Sep 2025 | Autonomous military drone test in Middle East lost communication and operated independently for 22 minutes | Military | Critical | Program paused; international concern; ICRC statement |
| 16 | Oct 2025 | AI tutoring system provided age-inappropriate content to elementary school students | Content Mod | High | School district terminated contract; FTC investigation |
| 17 | Oct 2025 | Insurance pricing AI found to charge 30% higher premiums in predominantly Black ZIP codes | Bias / Financial | Critical | State AG investigation; class action filed |
| 18 | Nov 2025 | Open-source LLM fine-tuned for bioweapons synthesis instructions discovered on public model hub | Safety | Critical | Model removed; platform policy updated; biosecurity review |
| 19 | Dec 2025 | AI-powered welfare fraud detection system wrongly cut benefits to 12,000 families in European country | Bias / Financial | Critical | Government apology; system suspended; benefits restored |
| 20 | Jan 2026 | Coordinated deepfake voice calls impersonating family members in kidnapping ransom scheme | Deepfake | Critical | 8 arrests; FBI warning issued; legislation introduced |

Trend Analysis

What the Data Shows

Three trends dominate the 2025-2026 incident landscape:

1. Deepfake harms are exploding. The combination of increasingly accessible and capable generative tools with inadequate detection and accountability infrastructure has made deepfake-related incidents the fastest-growing category. Voice cloning fraud, non-consensual intimate imagery, and political disinformation each represent distinct threat vectors that current regulatory frameworks are poorly equipped to address.

2. Automated decision-making in high-stakes domains is causing systemic harm. Healthcare claims denial, welfare benefit adjudication, hiring and lending decisions — when AI systems are deployed to make or heavily influence consequential decisions at scale, the error rate may be low in percentage terms but enormous in absolute human impact. A 2% error rate on a system processing 10 million decisions is 200,000 wrongful outcomes.
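The arithmetic behind that claim is straightforward:

```python
def wrongful_outcomes(error_rate, decisions):
    """Absolute number of wrongful outcomes at a given error rate."""
    return int(error_rate * decisions)

# A "small" 2% error rate at 10 million decisions per year
print(wrongful_outcomes(0.02, 10_000_000))  # 200000
```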

3. The gap between capability deployment and safety infrastructure is widening. Organizations are deploying AI systems faster than they are developing the monitoring, oversight, and incident response capabilities needed to catch and correct failures. This is not an engineering problem alone — it is a governance problem. See our AI Doomsday Clock for our assessment of cumulative risk.

Comparison to Other Databases

| Database | Total Entries | Coverage Period | Methodology |
|---|---|---|---|
| INHUMAIN.AI Tracker | 847 | 2020-present | Verified incidents with severity classification |
| AIAAIC Repository | 1,200+ | 2012-present | Incidents and controversies, broader scope |
| AI Incident Database (AIID) | 3,000+ | 2014-present | Community-submitted, lower verification threshold |
| OECD AI Incidents Monitor | 400+ | 2021-present | Policy-focused, government sources |

Our database is smaller than some alternatives because we apply stricter verification and severity criteria. We do not count routine model errors, product bugs, or academic demonstrations of theoretical vulnerabilities as incidents. We count events that caused or nearly caused real harm to real people.


How to Report an Incident

If you have witnessed or experienced harm from an AI system, we want to hear from you.

Reporting Channels

  • Encrypted submission: Available through our contact page using PGP encryption
  • Signal: Contact details available upon request through encrypted email
  • Standard email: incidents@inhumain.ai (for non-sensitive reports)
  • Anonymous submission: Tor-accessible submission form (details on request)

What to Include

When reporting an incident, please provide as much of the following as possible:

  1. What happened: Clear description of the event
  2. When and where: Date, time, location, and context
  3. Which AI system: Name, vendor, and deployment context of the AI system involved
  4. Who was affected: Impact on individuals or groups (anonymized as needed)
  5. Evidence: Screenshots, documents, recordings, or other supporting materials
  6. Outcome: What, if anything, was done in response

Whistleblower Protections

If you are reporting an incident from within an organization, review our AI Whistleblower Protection guide before making contact. We have protocols in place to protect source identity and can connect you with legal resources.


Accountability Gap

Of the 847 documented incidents in our database:

| Outcome | Count | Percentage |
|---|---|---|
| No known consequence for responsible party | 412 | 48.6% |
| Internal investigation only | 168 | 19.8% |
| Regulatory investigation or fine | 97 | 11.5% |
| Civil litigation filed | 89 | 10.5% |
| System suspended or withdrawn | 54 | 6.4% |
| Criminal investigation | 18 | 2.1% |
| Criminal conviction | 9 | 1.1% |

Nearly half of all documented AI incidents resulted in no known consequences for the responsible parties. This accountability gap is the single most important finding in our data. It means that the current combination of regulation, litigation, and corporate self-governance is failing to create meaningful incentives for preventing AI harms.
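The percentages in the outcome table can be recomputed directly from the raw counts:

```python
# Outcome counts from the accountability gap table (847 incidents total)
outcomes = {
    "No known consequence for responsible party": 412,
    "Internal investigation only": 168,
    "Regulatory investigation or fine": 97,
    "Civil litigation filed": 89,
    "System suspended or withdrawn": 54,
    "Criminal investigation": 18,
    "Criminal conviction": 9,
}
total = sum(outcomes.values())
print(total)  # 847
for label, n in outcomes.items():
    print(f"{label}: {n / total:.1%}")
```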

For how regulations are beginning to address this gap, see our EU AI Act enforcement guide and AI Regulation Tracker.


The INHUMAIN.AI Incident Tracker is maintained by a dedicated research team. We do not accept funding from AI companies or their investors. Our database is available to academic researchers, journalists, and policymakers. For access requests, contact us through our contact page.