INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7

AI Regulation Tracker: Every Country, Every Law, Every Deadline

The definitive global tracker of AI regulation across 40+ jurisdictions. Country-by-country analysis of laws, frameworks, compliance deadlines, and penalty structures. Updated continuously.

Artificial intelligence regulation is no longer a theoretical exercise. Governments across every continent have moved from publishing advisory principles to enacting binding legislation with enforcement mechanisms, financial penalties, and criminal liability. This tracker is the most comprehensive independent record of every AI law, framework, executive order, and compliance deadline worldwide.

The regulatory landscape is accelerating. In 2023, fewer than five countries had binding AI-specific legislation. By the end of 2025, that number exceeded twelve. By mid-2026, multiple jurisdictions will begin active enforcement of detailed compliance requirements that affect any organization deploying AI systems within their borders — regardless of where the organization is headquartered.

This page is organized by region and updated continuously. Bookmark it.


European Union

EU AI Act (Regulation 2024/1689)

The EU AI Act is the most comprehensive binding AI legislation in the world. Adopted in final form in June 2024 and published in the Official Journal in August 2024, it establishes a risk-based classification framework that applies to any AI system placed on the EU market or whose output is used within the EU.

Dimension | Detail
Legal Instrument | Regulation (directly applicable, no transposition required)
Adopted | June 13, 2024 (European Parliament)
Entry into Force | August 1, 2024
Full Application | August 2, 2026 (most provisions)
Scope | Providers, deployers, importers, distributors of AI systems in the EU market
Extraterritorial | Yes: applies to non-EU providers if output is used in the EU
Penalty Range | Up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher
Enforcement | National market surveillance authorities + EU AI Office
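The penalty ceiling is the greater of the fixed amount and the turnover-based amount, so exposure scales with company size. A minimal illustrative sketch (the function name and structure are ours, not from any official tooling):

```python
# Illustrative only: the EU AI Act's top penalty tier for prohibited-practice
# violations is EUR 35 million or 7% of worldwide annual turnover, whichever
# is higher. Real fines are set by enforcement authorities case by case.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    FIXED_CAP = 35_000_000   # EUR 35 million
    TURNOVER_RATE = 0.07     # 7% of worldwide annual turnover
    return max(FIXED_CAP, TURNOVER_RATE * worldwide_annual_turnover_eur)

# A provider with EUR 2 billion turnover faces a turnover-based ceiling of
# EUR 140 million; a small provider is still exposed to the fixed cap.
print(f"{max_fine_eur(2_000_000_000):,.0f}")
print(f"{max_fine_eur(100_000_000):,.0f}")
```

The same pattern applies to the GDPR's EUR 20M / 4% tier and several of the other penalty structures compared later on this page.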

Risk Tiers:

  • Unacceptable Risk (Prohibited): Social scoring, real-time remote biometric identification in public spaces (with exceptions), subliminal manipulation, exploitation of vulnerabilities. Effective February 2, 2025.
  • High Risk: AI systems in critical infrastructure, education, employment, essential services, law enforcement, migration, justice. Subject to conformity assessments, human oversight, data governance, documentation.
  • Limited Risk: Chatbots, deepfakes, emotion recognition. Transparency obligations.
  • Minimal Risk: Spam filters, AI-enabled video games. No requirements.
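The four tiers amount to a mapping from system type to obligation level. A hypothetical sketch (tier names and example systems mirror the list above; the mapping itself is our own illustration, not the Act's annexes):

```python
# Hypothetical mapping of example systems to EU AI Act risk tiers, for
# illustration only; actual classification follows the Act's annexes and
# requires legal analysis, not a string lookup.
RISK_TIERS = {
    "unacceptable": {"social scoring", "subliminal manipulation"},
    "high": {"hiring tool", "credit scoring", "border-control system"},
    "limited": {"chatbot", "deepfake generator", "emotion recognition"},
    "minimal": {"spam filter", "video-game AI"},
}

def tier_of(system: str) -> str:
    """Return the risk tier for an example system, or 'unclassified'."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    return "unclassified"

print(tier_of("chatbot"))      # limited
print(tier_of("spam filter"))  # minimal
```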

Key Deadlines:

  • February 2, 2025: Prohibited practices ban takes effect
  • August 2, 2025: GPAI model obligations apply
  • August 2, 2026: Full application of high-risk system requirements
  • August 2, 2027: Extended deadline for high-risk AI embedded in regulated products (Annex I)

For comprehensive analysis, see our EU AI Act Complete Guide, High-Risk Systems Classification, Prohibited Practices, and Implementation Timeline.

EU AI Liability Directive (Proposed)

The proposed AI Liability Directive complements the AI Act by establishing civil liability rules for AI-caused damage. It introduces a presumption of causality: if a claimant demonstrates that an AI provider failed to comply with a duty of care (such as requirements under the AI Act), and the damage is of a kind that the non-compliance would typically produce, the court may presume causation.

Dimension | Detail
Status | Proposed (COM/2022/496); withdrawal announced in the European Commission's 2025 work programme
Mechanism | Rebuttable presumption of causality
Scope | Civil liability for damage caused by AI systems
Disclosure | Courts can order disclosure of evidence from AI providers

EU Product Liability Directive (Revised)

Directive 2024/2853, adopted in October 2024, explicitly includes software and AI systems within the definition of “product.” This means strict liability (liability without fault) applies to defective AI products.

Dimension | Detail
Status | Adopted October 2024, transposition by December 9, 2026
Key Change | Software and AI explicitly classified as “products”
Liability | Strict liability for defective AI products
Burden | Rebuttable presumptions to assist claimants

GDPR and AI

The General Data Protection Regulation remains the primary constraint on AI systems processing personal data in the EU. Article 22 provides the right not to be subject to solely automated decision-making with legal or significant effects. The European Data Protection Board has issued multiple guidelines on AI and data protection, including guidance on the use of personal data for AI model training.


United States

The United States does not have a single comprehensive federal AI law. Instead, AI governance operates through a layered system of executive orders, agency-specific guidance, sector-specific regulation, and an increasingly active state-level legislative landscape.

Federal Level

Executive Order 14110 (October 2023) — Biden Administration

Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” was the most comprehensive federal AI policy action until its revocation in January 2025. Key provisions:

  • Dual-use foundation model developers required to report safety test results to the federal government
  • NIST directed to develop AI safety standards and red-teaming guidelines
  • Agencies required to appoint Chief AI Officers
  • Procurement guidelines for government AI use
  • Watermarking and content authentication standards
  • Civil rights and equity impact assessments

Trump Administration Modifications (2025)

The incoming Trump administration revoked EO 14110 in January 2025 and issued its own executive actions, emphasizing reduced regulatory burden, promotion of American AI competitiveness, and removal of reporting requirements viewed as overly burdensome for industry. Key changes:

  • Revocation of mandatory safety reporting for foundation model developers
  • Elimination of algorithmic discrimination directives
  • Retention of national security AI provisions
  • New emphasis on energy infrastructure for AI data centers
  • Continued support for NIST AI safety standards (voluntary framework)

CHIPS and Science Act (2022)

Provides $52 billion in semiconductor manufacturing subsidies and authorizes roughly $200 billion in scientific research funding. While not AI-specific, the Act directly shapes the AI hardware supply chain by incentivizing domestic chip fabrication.

National AI Initiative Act (2020)

Established the National AI Initiative Office, created the National AI Research Resource (NAIRR) Task Force, and required a national AI strategy. Reauthorized through subsequent legislation.

Agency-Specific Actions:

Agency | Action | Status
SEC | AI-related enforcement actions, AI-washing guidance | Active enforcement
FTC | AI fairness enforcement, Operation AI Comply | Active enforcement
EEOC | AI in employment discrimination guidance | Guidance issued
FDA | AI/ML-based Software as a Medical Device (SaMD) framework | Active framework
DOD | Responsible AI Strategy, CDAO established | Implementation
NIST | AI Risk Management Framework 1.0, Generative AI Profile | Published

State-Level Legislation

State legislatures have become the most active arena for binding AI legislation in the United States.

Colorado AI Act (SB 24-205)

Signed into law in May 2024, effective February 1, 2026. The first comprehensive state-level AI legislation.

Dimension | Detail
Effective | February 1, 2026
Scope | “High-risk AI systems” that make consequential decisions
Requirements | Risk management, impact assessments, consumer notification, opt-out rights
Penalties | Enforcement under the Colorado Consumer Protection Act
Notable | Applies to both developers and deployers

California SB 1047 (Vetoed)

Governor Newsom vetoed SB 1047 in September 2024, calling its threshold-based approach potentially harmful to innovation. The bill would have required safety evaluations for large AI models exceeding certain computational thresholds. Despite the veto, the legislative effort signaled California’s direction and influenced subsequent proposals.

New York City Local Law 144

Effective July 5, 2023, NYC LL 144 requires employers using automated employment decision tools (AEDTs) to conduct annual bias audits by independent auditors and provide notice to candidates.

Dimension | Detail
Effective | July 5, 2023
Scope | AEDTs used in hiring and promotion in NYC
Requirements | Annual independent bias audit, candidate notice, results publication
Enforcement | NYC Department of Consumer and Worker Protection
Penalties | $500 for a first violation; $500-$1,500 for each subsequent violation, per day

Other Notable State Actions:

State | Legislation | Focus | Status
Illinois | AI Video Interview Act | Consent for AI in video interviews | In effect
Texas | HB 2060 | AI transparency in government | In effect
Connecticut | SB 2 | AI inventory and impact assessments for state agencies | In effect
Vermont | AI Task Force | Advisory recommendations for AI legislation | Reported
Utah | AI Policy Act | AI disclosure, regulatory sandbox | In effect
Tennessee | ELVIS Act | Protection against AI-generated voice clones | In effect
Virginia | High-Risk AI Developer Duty of Care Act | Developer obligations | Introduced 2025

United Kingdom

The UK has deliberately chosen a sector-specific, principles-based approach to AI regulation rather than a single comprehensive statute. This was articulated in the 2023 White Paper “A Pro-Innovation Approach to AI Regulation” and reinforced by subsequent policy developments.

Framework

Dimension | Detail
Approach | Sector-specific, principles-based (no single AI Act)
Central Body | Department for Science, Innovation and Technology (DSIT)
Regulators | Existing sector regulators (FCA, Ofcom, CMA, ICO, etc.)
Principles | Safety, transparency, fairness, accountability, contestability
Legislation | No single AI-specific binding law (as of Feb 2026)
AI Safety Institute | Established 2023, conducting frontier model evaluations

Five Core Principles:

  1. Safety, security, and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

AI Safety Institute: Established in November 2023 (originally the Frontier AI Taskforce) and renamed the AI Security Institute in February 2025, the institute conducts pre-deployment safety evaluations of frontier AI models. It has agreements with major AI labs for pre-release access and has published evaluation methodologies.

Digital Markets, Competition and Consumers Act (2024): While not AI-specific, this Act empowers the Competition and Markets Authority (CMA) to regulate digital markets, including AI-driven platforms, through Strategic Market Status designations.

AI Bill (2025): Introduced into Parliament in late 2025, the AI (Regulation) Bill proposes establishing a statutory footing for AI regulatory oversight, creating an AI Authority, and granting sector regulators explicit AI oversight powers. Still under consideration as of February 2026.


China

China has enacted the most extensive suite of binding AI-specific regulations of any single country. Its approach is characterized by rapid legislative action, technology-specific rules, and deep integration with state security priorities.

Key Legislation

Algorithm Recommendation Regulations (March 2022)

Dimension | Detail
Full Name | Provisions on the Management of Algorithmic Recommendations
Effective | March 1, 2022
Scope | Algorithmic recommendation services
Requirements | Algorithm transparency, user opt-out, anti-addiction measures for minors
Enforcer | Cyberspace Administration of China (CAC)

Deep Synthesis Regulations (January 2023)

Dimension | Detail
Full Name | Provisions on the Management of Deep Synthesis
Effective | January 10, 2023
Scope | Deepfakes, AI-generated content
Requirements | Watermarking, content labeling, user identity verification
Enforcer | CAC + Ministry of Public Security

Generative AI Measures (August 2023)

Dimension | Detail
Full Name | Interim Measures for the Management of Generative AI Services
Effective | August 15, 2023
Scope | Generative AI services offered to the public in China
Requirements | Content moderation, training data compliance, security assessments
Notable | Must adhere to “core socialist values”
Enforcer | CAC

AI Safety Governance Framework (2024)

Published in September 2024 by the National Technical Committee 260 on Cybersecurity Standardization (TC260). Non-binding but highly influential. Covers model development, data governance, content safety, and cross-border data flows.

Draft AI Law (2024-2025)

China began circulating a comprehensive AI law in 2024, intended to consolidate its technology-specific regulations into a single overarching framework. As of early 2026, the draft has undergone multiple consultation rounds. Expected to include provisions on foundation model governance, cross-border AI services, and AI safety assessments.


Japan

Approach

Japan has adopted a soft-law, principles-based approach emphasizing “agile governance.” The government has explicitly avoided binding AI-specific legislation, preferring voluntary guidelines and industry self-regulation.

Dimension | Detail
Approach | Non-binding guidelines, voluntary compliance
Central Body | Ministry of Economy, Trade and Industry (METI)
Key Document | AI Guidelines for Business (2024)
Hiroshima AI Process | Led G7 AI governance framework
Copyright | Permissive stance on AI training data

AI Guidelines for Business (2024): Published by METI and the Ministry of Internal Affairs and Communications (MIC), these guidelines cover governance, risk management, and responsible AI use. Voluntary compliance.

Hiroshima AI Process: Japan led the G7’s Hiroshima AI Process in 2023, resulting in the International Guiding Principles for Organizations Developing Advanced AI Systems and a voluntary Code of Conduct.

Copyright Stance: Japan’s copyright framework is notably permissive toward AI training. Article 30-4 of Japan’s Copyright Act allows the use of copyrighted works for “information analysis” purposes without permission, making Japan an attractive jurisdiction for AI model training.


Singapore

Model AI Governance Framework

Singapore has positioned itself as a global leader in AI governance through non-binding frameworks and voluntary certification.

Dimension | Detail
Approach | Voluntary frameworks with strong industry adoption
Central Body | Infocomm Media Development Authority (IMDA) + PDPC
Key Framework | Model AI Governance Framework (2nd Edition, 2020)
Testing | AI Verify (open-source AI governance testing toolkit)
Generative AI | Model AI Governance Framework for Generative AI (2024)

AI Verify: Launched in 2022, AI Verify is an open-source testing framework and software toolkit that enables organizations to validate their AI systems against governance principles through standardized testing. It has been adopted by organizations in over 30 countries.

Generative AI Framework (2024): Addresses foundation model providers, application developers, and deployers. Covers accountability, incident reporting, content provenance, and safety evaluation.


United Arab Emirates

AI Strategy and Governance

The UAE was the first country to appoint a Minister of State for AI (2017) and has pursued aggressive AI adoption alongside a developing governance framework.

Dimension | Detail
Central Body | AI, Digital Economy, and Remote Work Applications Office
Strategy | UAE National AI Strategy 2031
Regulation | Sector-specific approach through existing regulators
Free Zones | ADGM and DIFC have AI-specific regulatory frameworks
Ethics | National AI Ethics Guidelines (2023)

Abu Dhabi Global Market (ADGM): Issued a comprehensive regulatory framework for AI and digital activities, including governance, data protection, and disclosure requirements for AI-driven financial services.


Saudi Arabia

SDAIA and the Regulatory Landscape

Saudi Arabia’s AI governance is centralized under the Saudi Data and Artificial Intelligence Authority (SDAIA), established by royal decree in 2019. SDAIA serves as both the national data regulator (through its subsidiary, the National Data Management Office) and the AI policy authority.

Dimension | Detail
Central Body | SDAIA (Saudi Data & AI Authority)
Chairman | Crown Prince Mohammed bin Salman
Key Law | Personal Data Protection Law (PDPL), fully effective September 2024
AI Ethics | AI Ethics Principles (2023)
AI Company | HUMAIN (PIF-owned national AI company)
Conflict | SDAIA regulates AI; HUMAIN (PIF-owned) is the largest AI deployer; both under Crown Prince authority

Critical Structural Issue: SDAIA reports to the government led by the Crown Prince. HUMAIN is owned by the Public Investment Fund, chaired by the Crown Prince. This creates a regulatory structure where the regulator and the entity it regulates share the same ultimate authority. For detailed analysis, see our Saudi Arabia AI Regulation deep dive.

Personal Data Protection Law (PDPL): Saudi Arabia’s first comprehensive data protection law became fully effective in September 2024 after a compliance transition period. It governs personal data processing including AI-driven processing, with extraterritorial application to data processing of Saudi residents.


Brazil

AI Bill (PL 2338/2023)

Brazil’s AI regulatory framework is being developed through PL 2338/2023, one of the most advanced AI bills in Latin America.

Dimension | Detail
Status | Approved by Senate (December 2024), under Chamber review
Approach | Risk-based framework (influenced by EU AI Act)
Scope | AI systems developed, deployed, or used in Brazil
Authority | Proposed independent regulatory body (ANPD as interim)
Penalties | Up to 2% of Brazilian revenue (max R$50M per violation)
Rights | Right to explanation, right to human review

Key Features:

  • Risk classification system (unacceptable, high, general risk)
  • Mandatory algorithmic impact assessments for high-risk systems
  • Transparency requirements including disclosure of AI interaction
  • Data protection integration with Brazil’s LGPD
  • Provisions for generative AI and foundation models

Canada

Artificial Intelligence and Data Act (AIDA)

AIDA was proposed as Part 3 of Bill C-27, the Digital Charter Implementation Act. After repeated delays and committee challenges, the bill died on the Order Paper when Parliament was prorogued in January 2025.

Dimension | Detail
Status | Lapsed: Bill C-27 died on the Order Paper when Parliament was prorogued (January 2025)
Approach | Risk-based, focused on “high-impact AI systems”
Key Requirements | Risk assessments, mitigation measures, transparency
Penalties | Up to CAD $25M or 5% of global revenue
Criticism | Broad delegation to regulations, insufficient detail

Voluntary Code of Conduct: In the absence of binding legislation, Canada has promoted a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (2023), with signatories including major AI companies operating in Canada.


Australia

AI Regulation Approach

Australia has pursued a voluntary, principles-based approach while increasingly signaling movement toward binding legislation.

Dimension | Detail
Approach | Voluntary guidelines transitioning toward binding rules
Central Body | Department of Industry, Science and Resources
Key Document | AI Ethics Framework (2024 update)
Mandatory | Proposed mandatory guardrails for high-risk AI (consultation 2024)
Timeline | Binding legislation expected 2026-2027

Mandatory Guardrails Consultation (2024): The Australian government published a consultation paper on mandatory guardrails for AI in high-risk settings. Proposed requirements include transparency, accountability, testing, human oversight, and contestability for AI systems used in consequential decisions.

AI Ethics Framework: Originally published in 2019 and updated in 2024, the framework articulates eight AI ethics principles: human, societal, and environmental wellbeing; human-centered values; fairness; privacy and security; reliability and safety; transparency and explainability; contestability; accountability.


South Korea

AI Framework Act

South Korea passed the Act on the Promotion of AI Industry and Establishment of a Framework for Trustworthy AI in December 2024, making it one of the first Asian countries to enact comprehensive binding AI legislation.

Dimension | Detail
Status | Enacted December 2024, effective January 2026
Approach | Risk-based with emphasis on high-impact AI
Key Requirements | Impact assessments, transparency, human oversight
Enforcement | AI Safety Institute (to be established)
Notable | Balances promotion and regulation in a single statute

Key Features:

  • Classification of high-impact AI systems requiring enhanced governance
  • Mandatory AI impact assessments
  • Establishment of an AI Safety Institute for testing and evaluation
  • Government support for AI industry development alongside regulatory requirements
  • Provisions for generative AI transparency and content labeling

India

AI Governance Approach

India has explicitly stated it will not regulate AI in the near term, preferring a pro-innovation stance while developing non-binding frameworks.

Dimension | Detail
Approach | No binding AI legislation; advisory principles only
Central Body | Ministry of Electronics and Information Technology (MeitY)
Key Action | Advisory issued to AI platforms (March 2024, later withdrawn)
IndiaAI Mission | $1.2B initiative for AI infrastructure and development
Digital India Act | Proposed comprehensive digital legislation (includes AI provisions)

Advisory Reversal: In March 2024, MeitY issued an advisory requiring government approval before deploying AI models in India. Following industry pushback, the advisory was significantly modified and the pre-deployment approval requirement was withdrawn.

Digital India Act: The proposed Digital India Act, intended to replace the Information Technology Act of 2000, includes provisions on AI governance, algorithmic transparency, and platform accountability. The bill remains under development.


Other Notable Jurisdictions

Israel

Israel has adopted a soft-regulation approach with non-binding principles published by the Israel Innovation Authority. Focus areas include AI in healthcare, defense, and cybersecurity. No comprehensive AI legislation as of early 2026.

Turkey

Turkey published a National AI Strategy (2021-2025) and has begun developing AI-specific governance frameworks through the Digital Transformation Office. Binding legislation is in early stages.

Mexico

Mexico introduced AI-related bills in 2024 focusing on transparency, non-discrimination, and accountability. The legislative process remains in early stages.

Nigeria

Nigeria published a National AI Strategy in 2024, becoming one of the first African countries to articulate a comprehensive AI policy. The National Information Technology Development Agency (NITDA) has issued non-binding ethical AI guidelines.

Kenya

The Kenya Data Protection Act (2019) applies to AI-driven data processing. The government has signaled interest in AI-specific governance frameworks but no legislation has been introduced.


International Frameworks

OECD AI Principles

Adopted in May 2019 and updated in 2024, the OECD AI Principles are the most widely endorsed international AI governance framework. Endorsed by over 45 countries including all G7 members.

Five Principles:

  1. Inclusive growth, sustainable development, and well-being
  2. Human-centered values and fairness
  3. Transparency and explainability
  4. Robustness, security, and safety
  5. Accountability

G7 Hiroshima AI Process

Launched under Japan’s 2023 G7 presidency, the Hiroshima AI Process produced International Guiding Principles for Organizations Developing Advanced AI Systems and a voluntary Code of Conduct. Extended in 2024 under Italy’s presidency with a focus on implementation.

Council of Europe Framework Convention on AI (2024)

The first legally binding international treaty on AI. Opened for signature in September 2024. Covers AI systems used by public authorities and private actors acting on their behalf. Establishes principles of human dignity, autonomy, equality, and democratic process.

Dimension | Detail
Status | Open for signature (September 2024)
Scope | Public-sector AI use and private actors working for governments
Requirements | Human rights impact assessments, transparency, oversight
Signatories | EU, US, UK, and others (as of Feb 2026)

UNESCO Recommendation on the Ethics of AI (2021)

Adopted by all 193 UNESCO member states, making it the most universally endorsed AI ethics framework. Non-binding but establishes global norms around fairness, transparency, and human oversight.

Global Partnership on AI (GPAI)

Multilateral initiative launched in 2020 with 15 founding members, since grown to 29 member countries. Supports responsible AI development through working groups on responsible AI, data governance, future of work, and innovation and commercialization.


Penalty Comparison

Jurisdiction | Maximum Financial Penalty | Criminal Liability
EU AI Act | EUR 35M or 7% of global turnover | No (but member states may add)
EU GDPR | EUR 20M or 4% of global turnover | Varies by member state
Brazil (proposed) | R$50M or 2% of Brazilian revenue | Under discussion
Canada AIDA | CAD $25M or 5% of global revenue | Yes (for reckless harm)
China | Varies by regulation; license revocation | Yes (for serious violations)
South Korea | KRW 300M+ (tiered) | Under development
NYC LL 144 | $500-$1,500 per violation per day | No
Colorado AI Act | Colorado Consumer Protection Act enforcement (varies) | No

Key Compliance Deadlines (2025-2027)

Date | Jurisdiction | Milestone
Feb 2, 2025 | EU | AI Act prohibited practices ban takes effect
Aug 2, 2025 | EU | GPAI model obligations apply
Sep 2025 | Saudi Arabia | PDPL full enforcement
Jan 2026 | South Korea | AI Framework Act takes effect
Feb 1, 2026 | Colorado | AI Act (SB 24-205) takes effect
Aug 2, 2026 | EU | Full AI Act application (high-risk systems)
Dec 9, 2026 | EU | Revised Product Liability Directive transposition deadline
Aug 2, 2027 | EU | Extended deadline for high-risk AI in regulated products
2026-2027 | Australia | Expected mandatory guardrails legislation
2026-2027 | Brazil | Expected final AI law enactment
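A deadline table like this lends itself to a simple countdown check. A minimal sketch using the dated rows above (month-only entries are approximated to the first of the month; the structure and function names are our own illustration, not an official tool):

```python
from datetime import date

# Illustrative tracker for the compliance deadlines listed above. Dates come
# from the table; month-only entries ("Jan 2026") are approximated to the 1st.
DEADLINES = [
    (date(2025, 2, 2), "EU", "AI Act prohibited practices ban"),
    (date(2025, 8, 2), "EU", "GPAI model obligations"),
    (date(2026, 1, 1), "South Korea", "AI Framework Act takes effect"),
    (date(2026, 2, 1), "Colorado", "AI Act (SB 24-205) takes effect"),
    (date(2026, 8, 2), "EU", "Full AI Act application (high-risk systems)"),
    (date(2026, 12, 9), "EU", "Product Liability Directive transposition"),
    (date(2027, 8, 2), "EU", "High-risk AI in regulated products (Annex I)"),
]

def upcoming(as_of: date):
    """Return (days_remaining, jurisdiction, milestone) for future deadlines."""
    return sorted(
        ((d - as_of).days, j, m) for d, j, m in DEADLINES if d >= as_of
    )

# As of mid-February 2026, only the later EU milestones remain in the future.
for days, jurisdiction, milestone in upcoming(date(2026, 2, 15)):
    print(f"{days:4d} days  {jurisdiction}: {milestone}")
```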

Methodology

This tracker is compiled from primary legal sources, official government publications, regulatory body announcements, and verified reporting. We track:

  • Binding legislation: Laws, regulations, and directives with legal force
  • Executive actions: Executive orders, presidential decrees, royal decrees
  • Regulatory guidance: Agency-specific rules, enforcement actions, and guidance documents
  • Voluntary frameworks: Non-binding guidelines, principles, and codes of conduct
  • International instruments: Treaties, conventions, recommendations, and multilateral commitments

Status designations reflect the most recent available information as of the date noted. Legislative processes are inherently dynamic; readers should verify current status through official government sources before making compliance decisions.

For jurisdiction-specific deep dives, see our dedicated guides on the EU AI Act, US AI Regulation, Saudi Arabia AI Regulation, and AI Governance Frameworks Comparison.


This tracker is maintained by INHUMAIN.AI and updated continuously as new legislation, enforcement actions, and compliance deadlines emerge. It is not legal advice. Organizations should consult qualified legal counsel for jurisdiction-specific compliance guidance.