INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7

AI Liability: Who Pays When AI Causes Harm?

Comprehensive guide to AI liability law — the EU AI Liability Directive, US product liability doctrine, autonomous vehicle liability, medical AI, and the emerging legal frameworks that determine who is responsible when AI systems cause damage.

When a self-driving car strikes a pedestrian, who is liable? The vehicle manufacturer? The AI model developer? The sensor supplier? The mapping data provider? The vehicle owner who activated autonomous mode? The regulatory body that approved the system for road use?

When an AI diagnostic tool misidentifies a benign tumor as malignant, leading to an unnecessary mastectomy, who compensates the patient? The hospital that deployed the system? The AI company that built it? The radiologist who relied on its output?

When an AI-powered hiring tool systematically discriminates against women, who pays the damages? The employer that used it? The vendor that sold it? The training data provider whose historical data encoded the bias?

These are not hypothetical questions. They are being litigated in courts across multiple jurisdictions right now. And the legal frameworks that will answer them are still being written.


The Fundamental Challenge

Traditional liability law was designed for a world where the chain of causation between a human decision and resulting harm was traceable. A manufacturer designs a product, a defect in that design causes injury, and the manufacturer is liable. The logic is clear. The chain is linear.

AI breaks this model in several ways:

Opacity: Many AI systems, particularly deep learning models, operate as functional black boxes. The relationship between inputs, model weights, and outputs cannot be meaningfully explained in the causal terms that liability law requires.

Autonomy: AI systems make decisions without direct human intervention. The more autonomous the system, the more difficult it becomes to identify a specific human decision that caused the harm.

Complexity of the supply chain: Modern AI systems involve multiple contributors — data providers, model developers, cloud infrastructure providers, integration partners, deployers, and end users. Harm may result from interactions between components provided by different entities.

Evolving behavior: AI systems that learn from new data can change their behavior after deployment. A system that performed safely during testing may develop harmful patterns in production as it encounters new data. Who is liable for behavior that emerged after deployment?

Data-dependent outcomes: AI system behavior depends on training data, and training data may contain biases, errors, or gaps that only manifest as harm in specific contexts. The causal chain between a data quality issue and a discriminatory outcome may be nearly impossible to trace.


EU AI Liability Framework

AI Liability Directive (Proposed)

The European Commission proposed the AI Liability Directive (COM/2022/496) in September 2022 as a companion to the AI Act. It is designed to make it easier for individuals harmed by AI systems to obtain compensation through civil liability claims.

Key mechanism: Rebuttable presumption of causality

The Directive introduces a rebuttable presumption: if a claimant demonstrates that (a) the defendant failed to comply with a relevant duty of care (such as requirements under the AI Act), and (b) the harm is of a type that the non-compliance is likely to produce, then the court may presume that the non-compliance caused the harm.

This shifts part of the burden of proof from the claimant to the defendant. In traditional tort claims involving AI, claimants face an almost insurmountable challenge in proving causation because they cannot access or understand the AI system’s internal operations. The presumption of causality addresses this asymmetry.

Disclosure obligations:

The Directive empowers courts to order defendants to disclose evidence about high-risk AI systems suspected of having caused damage. If a defendant fails to comply with a disclosure order, the court may presume non-compliance with the relevant duty of care that the requested evidence was intended to prove.

Scope: The Directive applies to non-contractual civil liability claims for damage caused by AI systems. It does not cover contractual liability, product liability (addressed by the revised Product Liability Directive), or criminal liability.

Status: The Directive stalled in negotiations, and in its 2025 work programme the European Commission announced its intention to withdraw the proposal, citing no foreseeable agreement among the co-legislators. As of early 2026, no AI-specific liability directive is in force in the EU, and it remains open whether a revised instrument will follow.

Revised Product Liability Directive (Directive 2024/2853)

Adopted in October 2024, the revised Product Liability Directive explicitly includes software, including AI systems, within the definition of “product.” This is a seismic change: for decades it was contested whether standalone software counted as a product under the 1985 Directive at all.

Key implications:

Strict liability for AI products: Under product liability, a producer is liable for damage caused by a defective product without the claimant needing to prove fault (negligence). The inclusion of AI as a “product” means strict liability applies to defective AI systems.

Definition of defect: A product is defective if it does not provide the safety that a person is entitled to expect. For AI systems, this includes expected safety considering the system’s ability to learn after deployment, the effect of other products or digital services on the AI system, and foreseeable misuse.

Expanded producer definition: The definition of “producer” includes the manufacturer, developer, or any person who substantially modifies a product. For AI, this means both the original model developer and entities that substantially fine-tune or modify the model may be considered producers.

Rebuttable presumptions: Where a claimant faces excessive difficulty in proving the defectiveness of a product or the causal link between defect and damage, the court may presume defectiveness or causation under certain conditions.

Transposition deadline: Member states must transpose the Directive into national law by December 9, 2026.


US Liability Framework

The United States does not have AI-specific liability legislation. Instead, AI liability is addressed through existing legal doctrines, primarily product liability, negligence, and various statutory schemes.

Product Liability

US product liability law imposes liability on manufacturers and sellers of defective products under three theories:

1. Manufacturing defect: The product deviated from its intended design. For AI, this might mean an AI system that was improperly trained, deployed with corrupted data, or released with a known software bug that caused it to behave differently from specification.

2. Design defect: The product was designed in a way that is unreasonably dangerous. Two tests are used:

  • Consumer expectations test: Did the product fail to perform as safely as an ordinary consumer would expect?
  • Risk-utility test: Do the risks of the design outweigh its benefits?

For AI systems, design defect claims might challenge the choice of training data, the model architecture, the decision not to include safety guardrails, or the absence of human oversight mechanisms.

3. Failure to warn: The manufacturer failed to provide adequate warnings about the product’s risks. For AI, this might include failure to disclose known limitations, failure to warn about potential biases, or inadequate instructions for safe deployment.

Key question: Is AI software a “product”?

Historically, US courts have drawn a distinction between products (tangible goods) and services, with strict product liability applying only to products. Whether software qualifies as a “product” under product liability law has been inconsistently resolved across jurisdictions. The Restatement (Third) of Torts notes that the application of strict liability to software and AI remains unsettled.

Negligence

Negligence claims require the claimant to prove four elements:

  1. The defendant owed a duty of care to the claimant
  2. The defendant breached that duty
  3. The breach caused the claimant’s harm
  4. The claimant suffered actual damages

For AI systems, negligence claims may target:

  • The developer’s failure to adequately test the system
  • The deployer’s failure to implement appropriate oversight
  • The failure to monitor the system’s performance post-deployment
  • The failure to respond to known issues or emerging harms

Section 230 and AI

Section 230 of the Communications Decency Act provides immunity to interactive computer services for content created by third parties. Whether AI-generated content qualifies for Section 230 protection is an actively debated legal question. If an AI system generates harmful content autonomously (rather than hosting user-created content), Section 230 immunity may not apply.


Sector-Specific Liability

Autonomous Vehicles

Autonomous vehicle liability represents the most developed sector-specific AI liability framework, driven by real-world deployments and incidents.

Current approach in most US states: The registered vehicle operator (human or company) bears primary liability. As vehicles become more autonomous, liability shifts from the human driver to the vehicle manufacturer or the autonomous driving system developer.

SAE levels and liability implications:

  • Level 2 (driver assistance): Human driver retains liability; manufacturer liable for system defects
  • Level 3 (conditional automation): Liability shifts to manufacturer when system is engaged
  • Level 4 (high automation): Manufacturer/operator bears primary liability in operational design domain
  • Level 5 (full automation): No human driver; manufacturer/operator liability

EU approach: The revised Motor Vehicle Insurance Directive addresses liability for autonomous vehicles. The proposed AI Liability Directive would apply to AI-driven vehicles. Additionally, the Product Liability Directive covers vehicles as products.

UNECE regulations: The United Nations Economic Commission for Europe has adopted regulations on automated lane-keeping systems (ALKS) and is developing further regulations for higher levels of automation, including liability provisions.

Medical AI

Medical AI liability intersects with established medical malpractice law and product liability for medical devices.

FDA-authorized AI medical devices: Over 800 AI/ML-enabled medical devices have received FDA authorization. The FDA’s regulatory framework provides a presumption of safety and effectiveness, but FDA authorization does not immunize manufacturers from liability.

Liability scenarios:

Diagnostic AI: An AI system recommends a diagnosis that a physician follows, leading to patient harm. Potential liable parties:

  • The AI developer (product liability for a defective medical device)
  • The physician (malpractice for over-reliance on AI without independent clinical judgment)
  • The hospital (vicarious liability for the physician; institutional negligence for AI procurement and deployment decisions)

Treatment recommendation AI: An AI system recommends a treatment protocol. The physician follows it. The patient suffers an adverse outcome. The liability analysis mirrors diagnostic AI, with the additional question of whether the AI system’s recommendation constituted the practice of medicine.

Learned intermediary doctrine: In US product liability law, the learned intermediary doctrine holds that a manufacturer discharges its duty to warn by adequately informing the prescribing physician, who then exercises independent judgment. Applied to medical AI, this doctrine suggests that the AI developer’s duty to warn runs to the physician, not directly to the patient. However, if AI systems increasingly make clinical decisions without meaningful physician review, this doctrine may erode.

Financial Services AI

Financial services AI liability is addressed through existing regulatory frameworks:

  • SEC enforcement: AI-related violations of securities law (fraud, manipulation, misleading disclosures)
  • Consumer financial protection: CFPB enforcement of fair lending laws against AI-driven credit decisions
  • Fiduciary duties: Investment advisers using AI must still fulfill fiduciary obligations
  • Algorithmic trading liability: Market participants are liable for harms caused by their algorithmic trading systems

The Insurance Dimension

The AI liability landscape has significant implications for insurance:

Product liability insurance: Insurers are adjusting product liability coverage to account for AI risks. Some insurers have introduced AI-specific exclusions or endorsements.

Professional liability (E&O): AI developers may be covered under professional liability insurance for errors in AI system design and development.

Cyber insurance: AI-related cybersecurity incidents may trigger cyber insurance coverage.

Emerging AI insurance products: Several insurers have developed AI-specific insurance products covering algorithmic liability, model failure, data bias, and regulatory compliance costs.

Insurability challenges: The opacity of AI decision-making, the potential for systemic failures affecting many claimants simultaneously, and the evolving legal landscape make AI risk difficult to price. Insurers require transparency into AI system operations to underwrite coverage effectively.


Practical Liability Risk Management

For AI Developers

  1. Document everything: Development decisions, testing results, known limitations, risk assessments, and deployment conditions. Documentation is your primary defense in liability claims.
  2. Implement robust testing: Test for failure modes, bias, edge cases, and adversarial attacks. Document test results comprehensively.
  3. Provide adequate warnings and instructions: Clear documentation of system capabilities, limitations, and appropriate use conditions.
  4. Monitor post-deployment: Implement systems to detect harmful behavior after deployment and respond promptly.
  5. Maintain records: Retain documentation for the period specified by applicable regulation (10 years under the EU AI Act for high-risk systems).

For AI Deployers

  1. Conduct due diligence: Evaluate AI systems before deployment. Understand their capabilities, limitations, and known risks.
  2. Implement human oversight: Ensure meaningful human review of AI-driven decisions, particularly in high-stakes contexts.
  3. Train users: Ensure that personnel using AI systems understand their limitations and know when to override AI outputs.
  4. Monitor outcomes: Track AI system performance and outcomes for signs of harm, bias, or degradation.
  5. Obtain appropriate insurance: Ensure liability insurance coverage addresses AI-specific risks.
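Item 2 above, meaningful human oversight, is often implemented as a routing rule: AI outputs below a confidence threshold, or in designated high-stakes categories, go to a human reviewer instead of being acted on automatically. A minimal sketch follows; the category labels and the threshold value are assumptions for illustration, not requirements from any framework.

```python
HIGH_STAKES = {"hiring", "credit", "medical"}   # assumed category labels
CONFIDENCE_FLOOR = 0.85                         # assumed threshold

def route_decision(category: str, confidence: float) -> str:
    """Return 'auto' only when the output is low-stakes AND high-confidence."""
    if category in HIGH_STAKES:
        return "human_review"      # high stakes: always a human in the loop
    if confidence < CONFIDENCE_FLOOR:
        return "human_review"      # low confidence: escalate to a reviewer
    return "auto"

# A marketing decision with high confidence proceeds automatically;
# any credit decision is escalated regardless of confidence.
print(route_decision("marketing", 0.97))
print(route_decision("credit", 0.99))
```

Escalating high-stakes categories regardless of confidence is the point of the design: liability exposure in hiring, credit, and medical contexts does not shrink just because the model is certain.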

For Individuals Affected by AI

  1. Document the harm: Preserve evidence of the AI system’s involvement in the harmful decision or outcome.
  2. Identify all potentially liable parties: Consider the full supply chain — developer, deployer, data provider, platform, and any entity that modified the system.
  3. Seek specialized legal counsel: AI liability is a rapidly evolving area requiring specialized expertise.
  4. File regulatory complaints: In addition to civil claims, regulatory bodies (FTC, EEOC, data protection authorities) may investigate and impose enforcement actions.

This guide is maintained by INHUMAIN.AI. For related coverage, see our Global AI Regulation Tracker, EU AI Act Complete Guide, AI Audit Guide, and AI and GDPR.