US AI Regulation: Complete Guide to American AI Law and Policy
Comprehensive guide to US AI regulation — Biden EO 14110, Trump administration modifications, CHIPS Act, NIST AI RMF, and the state-level legislative explosion from Colorado to California to New York City.
The United States does not have a comprehensive federal AI law. There is no American equivalent of the EU AI Act — no single statute that classifies AI systems by risk level, imposes binding compliance requirements across sectors, and establishes a dedicated enforcement authority with penalty powers.
What the United States has instead is a layered, fragmented, and rapidly evolving regulatory landscape. Federal AI governance operates primarily through executive orders, agency-specific enforcement actions, voluntary frameworks, and sector-specific regulation. Meanwhile, state legislatures have become the most active arena for binding AI legislation in the country, with over 200 AI-related bills introduced across state legislatures in 2024 alone.
This guide maps the entire landscape.
Federal AI Policy: The Executive Order Era
Executive Order 14110 — Biden Administration (October 2023)
On October 30, 2023, President Biden signed Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” It was the most comprehensive presidential action on AI in American history, spanning 111 pages and directing action across virtually every federal agency.
Key provisions:
Safety and Security:
- Developers of dual-use foundation models trained above certain computational thresholds were required to report safety test results (including red-team results) to the federal government before public deployment
- NIST was directed to develop standards, guidelines, and best practices for AI safety, including red-teaming methodologies
- The Department of Energy was directed to address AI-related risks to critical infrastructure
- The Department of Commerce was directed to develop standards for watermarking and content authentication of AI-generated content
Equity and Civil Rights:
- Agencies were directed to prevent AI from being used to exacerbate discrimination
- The Department of Justice was directed to address AI-driven discrimination in the criminal justice system
- Agencies were required to assess and mitigate algorithmic discrimination in federal programs
Government Use:
- Federal agencies were required to appoint Chief AI Officers
- OMB issued guidance on government AI use, including mandatory risk assessments for high-impact AI
- A government-wide AI talent surge was directed
- Procurement guidelines were developed for responsible government AI acquisition
Innovation and Competition:
- A National AI Research Resource (NAIRR) pilot was directed, giving researchers access to AI compute, data, and training infrastructure
- Immigration policy was adjusted to attract and retain AI talent
- Small business support for AI adoption was expanded
International Engagement:
- The State Department was directed to lead international AI governance efforts
- Standards alignment with allies was prioritized
- Export control coordination was enhanced
Trump Administration Modifications (2025)
The incoming Trump administration revoked EO 14110 in January 2025 and issued its own executive actions on AI. The policy shift reflected a different philosophy: reducing regulatory burden on AI development, promoting American competitiveness, and removing requirements viewed as impeding innovation.
Key changes:
Revoked:
- Mandatory safety reporting requirements for foundation model developers
- Algorithmic discrimination assessments and directives
- Many agency-specific implementation deadlines under EO 14110
- Requirements viewed as creating undue compliance burden on industry
Retained or Modified:
- National security AI provisions (largely maintained)
- NIST AI safety standards work (continued but shifted to voluntary emphasis)
- Government AI modernization initiatives
- AI talent acquisition programs
New Emphasis:
- Promotion of American AI leadership and competitiveness
- Reduction of regulatory barriers to AI development and deployment
- Energy infrastructure buildout for AI data centers
- Industry-led governance and self-regulation
- Defense and intelligence AI capabilities
Practical impact: The revocation of mandatory safety reporting requirements removed the most significant federal constraint on foundation model developers. Companies that had been preparing to comply with EO 14110 reporting requirements were relieved of those obligations. However, the shift did not eliminate all federal AI governance — agency-specific enforcement actions continued, and the NIST framework remained available as a voluntary standard.
Federal Agency Enforcement
In the absence of comprehensive legislation, individual federal agencies have used existing statutory authority to regulate AI within their jurisdictions.
Federal Trade Commission (FTC)
The FTC has been the most aggressive federal enforcer on AI-related issues, using its authority under Section 5 of the FTC Act (unfair or deceptive acts or practices) to bring AI-related enforcement actions.
Operation AI Comply (2024): The FTC announced enforcement actions against companies making deceptive AI-related claims, including businesses falsely advertising AI capabilities and companies using AI in ways that harmed consumers.
Key enforcement areas:
- AI-washing: False or misleading claims about AI capabilities in products and services
- AI-driven discrimination: Uses of AI that result in discriminatory outcomes
- Data practices: Collection and use of personal data for AI training without adequate consent
- Algorithmic harm: AI systems that cause consumer harm through unfair practices
Remedies pursued: The FTC has ordered algorithmic disgorgement (requiring companies to delete AI models trained on improperly collected data), mandated bias audits, and imposed monetary penalties.
Securities and Exchange Commission (SEC)
The SEC has addressed AI in the financial markets through:
- AI-washing enforcement: Actions against investment advisers making misleading claims about their use of AI in investment strategies
- AI risk disclosure: Guidance requiring public companies to disclose material AI-related risks in SEC filings
- Market manipulation: Monitoring AI-driven trading strategies for manipulative practices
- Proposed rules: Requirements for broker-dealers and investment advisers to address conflicts of interest arising from AI and predictive data analytics
Equal Employment Opportunity Commission (EEOC)
The EEOC has issued guidance on AI in employment decisions, clarifying that existing civil rights laws apply to AI-assisted hiring, promotion, and termination decisions. Key positions:
- Title VII of the Civil Rights Act applies to AI-driven employment decisions that result in disparate impact
- The Americans with Disabilities Act requires reasonable accommodations when AI-based assessments disadvantage individuals with disabilities
- Employers remain liable for discriminatory outcomes produced by third-party AI tools
Food and Drug Administration (FDA)
The FDA has developed the most mature framework for AI regulation in a specific sector through its AI/ML-Based Software as a Medical Device (SaMD) framework.
Key elements:
- Pre-market review pathways for AI/ML medical devices
- Total Product Lifecycle (TPLC) approach allowing for continuous learning and adaptation
- Predetermined change control plans for AI systems that evolve post-deployment
- A substantial track record: over 800 AI/ML-enabled medical devices have received FDA authorization to date
Department of Defense (DOD)
The DOD has developed extensive AI governance through:
- The Responsible AI Strategy and Implementation Pathway
- Establishment of the Chief Digital and Artificial Intelligence Officer (CDAO)
- AI ethical principles (responsible, equitable, traceable, reliable, governable)
- The Autonomy in Weapon Systems directive (DoDD 3000.09)
NIST AI Risk Management Framework
The National Institute of Standards and Technology published the AI Risk Management Framework (AI RMF 1.0) in January 2023. While voluntary, the AI RMF has become the de facto national standard for AI risk management and is referenced by federal agencies, state legislation, and private-sector organizations.
Four core functions:
- GOVERN: Cultivating and implementing a culture of AI risk management
- MAP: Establishing the context to frame risks related to an AI system
- MEASURE: Employing quantitative, qualitative, or mixed methods to analyze, assess, and track AI risks
- MANAGE: Allocating resources to address mapped and measured risks
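The four functions lend themselves to a simple operational sketch. The Python below is purely illustrative: the AI RMF prescribes no code or schema, and the Risk and RiskRegister names, fields, and severity-times-likelihood scoring are our own assumptions, not part of the framework. It shows how GOVERN (a named owner), MAP (deployment context), MEASURE (scored risk), and MANAGE (prioritized mitigation) can fit together in a minimal risk register:

```python
from dataclasses import dataclass, field

# Illustrative only: the AI RMF prescribes no code or schema. The names,
# fields, and severity-times-likelihood scoring are our own assumptions.

@dataclass
class Risk:
    description: str
    context: str          # MAP: where and how the AI system is used
    severity: int         # MEASURE: 1 (minor) to 5 (severe)
    likelihood: int       # MEASURE: 1 (rare) to 5 (frequent)
    mitigation: str = ""  # MANAGE: the planned response

    def score(self) -> int:
        return self.severity * self.likelihood

@dataclass
class RiskRegister:
    owner: str            # GOVERN: a named owner anchors accountability
    risks: list[Risk] = field(default_factory=list)

    def prioritized(self) -> list[Risk]:
        # MANAGE: direct mitigation effort at the highest-scoring risks first
        return sorted(self.risks, key=Risk.score, reverse=True)

register = RiskRegister(owner="Chief AI Officer")
register.risks.append(Risk("Hallucinated citations", "customer-facing chatbot", 4, 3, "retrieval grounding"))
register.risks.append(Risk("Training-data privacy leak", "fine-tuned internal model", 5, 2, "PII filtering"))
top = register.prioritized()[0]  # the 4x3 chatbot risk outranks the 5x2 leak
```

Even a toy register like this makes the framework's division of labor concrete: governance assigns ownership, mapping and measurement populate the entries, and management consumes the prioritized output.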
NIST AI RMF Generative AI Profile (2024): An extension of the AI RMF specifically addressing risks associated with generative AI, including content provenance, confabulation (hallucination), data privacy, intellectual property, and environmental impact.
Practical significance: The AI RMF is not binding, but its voluntary adoption provides organizations with a defensible risk management framework. As state AI laws increasingly reference NIST standards, organizations following the AI RMF may be able to demonstrate compliance with emerging requirements.
CHIPS and Science Act (2022)
While not AI-specific, the CHIPS and Science Act directly shapes the AI ecosystem by addressing the semiconductor supply chain critical to AI computation.
Key provisions:
- $52.7 billion for semiconductor manufacturing and research incentives
- $200 billion authorized for scientific research and innovation, including AI
- CHIPS for America Fund for domestic semiconductor manufacturing
- National Semiconductor Technology Center for collaborative research
- Guardrails restricting recipients from expanding semiconductor manufacturing in countries of concern (primarily China) for 10 years
AI relevance: AI training and inference require enormous computational resources. The concentration of advanced semiconductor manufacturing in Taiwan has created a strategic vulnerability for AI development. CHIPS Act subsidies aim to diversify this supply chain by building domestic fabrication capacity.
State-Level AI Legislation
State legislatures have become the primary arena for binding AI legislation in the United States. The absence of comprehensive federal law has created a regulatory vacuum that states are filling — rapidly and unevenly.
Colorado AI Act (SB 24-205)
The Colorado AI Act, signed into law on May 17, 2024, is the first comprehensive state-level AI law in the United States. It was originally set to take effect on February 1, 2026; a 2025 amendment delayed the effective date to June 30, 2026.
Scope: Applies to developers and deployers of “high-risk AI systems” — systems that make or are a substantial factor in making “consequential decisions” concerning consumers.
Consequential decisions include:
- Education enrollment or opportunity
- Employment or employment opportunity
- Financial or lending services
- Essential government services
- Healthcare services
- Housing
- Insurance
- Legal services
Developer obligations:
- Use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination
- Provide deployers with documentation about the system’s capabilities, limitations, training data, and known risks
- Publish a statement on their website summarizing the types of high-risk AI systems they develop and how they manage risks
Deployer obligations:
- Implement a risk management policy and program
- Complete an impact assessment for each high-risk AI system before deployment
- Notify consumers when a consequential decision has been made by a high-risk AI system
- Provide consumers with an opportunity to appeal and access human review
- Report discovery of algorithmic discrimination to the Attorney General within 90 days
Enforcement: Colorado Attorney General under the Colorado Consumer Protection Act. No private right of action.
Affirmative defense: Developers and deployers have an affirmative defense if they discover and cure algorithmic discrimination in a timely manner, and if they maintain compliance with recognized risk management frameworks (including NIST AI RMF).
California SB 1047 (Vetoed)
California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) was passed by the California legislature in August 2024 but vetoed by Governor Newsom in September 2024.
What it would have required:
- Developers of large AI models exceeding specified computational thresholds (10^26 FLOPs or $100M+ training cost) to conduct safety evaluations before deployment
- Establishment of a Frontier Model Division within the Department of Technology
- Implementation of safety and security protocols, including kill switches
- Whistleblower protections for AI safety researchers
- Annual safety assessment certifications
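As a back-of-the-envelope illustration of how the coverage threshold would have worked, the sketch below encodes the test as this guide summarizes it (either trigger sufficing). The function name, its inputs, and the FLOP estimate (using the common 6 times parameters times tokens approximation) are our own assumptions, not from the bill text:

```python
# Illustrative only: encodes SB 1047's coverage test as summarized above
# (compute above 10^26 FLOPs, or training cost above $100M). Function
# name, inputs, and the scale estimate are our own assumptions.

FLOP_THRESHOLD = 1e26
COST_THRESHOLD_USD = 100_000_000

def sb1047_covered(training_flops: float, training_cost_usd: float) -> bool:
    # Under this summary, crossing either threshold would have put a
    # model in scope of the (vetoed) bill's safety obligations
    return training_flops > FLOP_THRESHOLD or training_cost_usd > COST_THRESHOLD_USD

# Rough scale check using the common 6 * parameters * tokens FLOP estimate:
# a 70B-parameter model trained on 15T tokens uses about
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, more than an order of magnitude
# under the bill's compute threshold
mid_size_covered = sb1047_covered(6.3e24, 20_000_000)   # under both triggers
frontier_covered = sb1047_covered(2e26, 20_000_000)     # over the compute trigger
```

The point of the arithmetic is the one Governor Newsom's veto seized on: a fixed computational cutoff cleanly separates a handful of frontier training runs from everything else, regardless of how risky the smaller systems actually are.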
Why it was vetoed: Governor Newsom cited concerns that the bill’s threshold-based approach was too blunt, potentially chilling innovation while failing to address risks from smaller, specialized models. He emphasized the need for a regulatory approach calibrated to actual risk rather than computational scale alone.
Ongoing impact: Despite the veto, SB 1047 shaped the national conversation about AI safety regulation and influenced subsequent legislative proposals. Multiple revised California AI bills were introduced in 2025.
New York City Local Law 144
NYC LL 144, under active enforcement since July 5, 2023, is one of the earliest AI-specific laws in the United States and the first to mandate independent bias audits of automated hiring tools.
Scope: Employers and employment agencies in New York City that use automated employment decision tools (AEDTs) for hiring or promotion.
Requirements:
- Annual bias audit: Conducted by an independent auditor, testing for disparate impact based on race, ethnicity, and sex
- Public posting: Audit results must be publicly posted on the employer’s website
- Candidate notice: Employers must notify candidates at least 10 business days before using an AEDT, including the job qualifications being assessed and the data sources used
- Accommodation: Candidates may request alternative selection processes or accommodations
Penalties: up to $500 for a first violation; $500 to $1,500 for each subsequent violation, with each day of noncompliant AEDT use counting as a separate violation.
Enforcement experience: Early enforcement has revealed challenges. Compliance rates have been uneven, and the quality of bias audits has varied significantly. The law has nonetheless established an important precedent for AI transparency in employment.
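For teams preparing an audit, the core disparate-impact arithmetic is straightforward: under the implementing rules, each demographic category's selection rate is divided by the selection rate of the most-selected category to yield an impact ratio. The sketch below illustrates that calculation; the function name and sample numbers are invented, and a real audit must follow the city's rules and be performed by an independent auditor:

```python
# Illustrative sketch of the impact-ratio arithmetic behind LL 144 bias
# audits: each category's selection rate divided by the rate of the
# most-selected category. Function name and sample data are invented.

def impact_ratios(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    # Selection rate per category: candidates selected / candidates assessed
    rates = {group: selected[group] / applicants[group] for group in applicants}
    best = max(rates.values())
    # Impact ratio: each category's rate relative to the most-selected one
    return {group: rate / best for group, rate in rates.items()}

ratios = impact_ratios(
    selected={"group_a": 60, "group_b": 30},
    applicants={"group_a": 200, "group_b": 150},
)
# group_a rate 0.30, group_b rate 0.20, so ratios are 1.0 and about 0.67
```

LL 144 requires publishing the ratios rather than meeting a fixed pass/fail threshold, though the EEOC's informal four-fifths rule of thumb (flagging ratios below 0.8) is a common benchmark for interpreting them.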
Other Notable State Laws and Bills
Utah AI Policy Act (SB 149, 2024):
- Requires disclosure when AI is used in regulated professional interactions
- Creates an AI learning laboratory program (regulatory sandbox)
- Prohibits AI from representing itself as human in certain contexts
- Does not impose broad compliance requirements
Tennessee ELVIS Act (2024):
- Protects individuals’ voice and likeness from unauthorized AI replication
- Addresses AI-generated deepfakes of musicians and performers
- Creates a right of action for unauthorized use of voice replicas
Illinois AI Video Interview Act (2020):
- Requires employer notification and consent before using AI to analyze video interviews
- Mandates disclosure of how AI is used in evaluation
- Limits data sharing from AI video analysis
Virginia High-Risk AI Developer and Deployer Act (HB 2094, 2025):
- Would have imposed a duty of care on developers and deployers of high-risk AI systems
- Would have required impact assessments and risk mitigation, drawing on the Colorado and EU AI Act frameworks
- Passed the legislature but was vetoed by Governor Youngkin in March 2025
The Federal-State Tension
The absence of comprehensive federal AI legislation creates two significant challenges:
Compliance fragmentation: Organizations operating across multiple states face a patchwork of different requirements, definitions, and enforcement mechanisms. What constitutes a “high-risk” AI system under Colorado’s law may differ from definitions in other states, creating conflicting obligations.
Preemption risk: If Congress enacts comprehensive federal AI legislation, state laws may be partially or fully preempted. Organizations investing in state-level compliance face uncertainty about whether those investments will remain relevant.
Industry response: Many technology companies and industry groups have advocated for federal preemption — a comprehensive federal law that would supersede state-level AI regulation. Others argue that state experimentation produces better policy outcomes and that federal preemption would eliminate important protections.
As of early 2026, no comprehensive federal AI bill has advanced significantly through Congress, and the state legislative pace shows no sign of slowing. Organizations must comply with applicable state laws as they take effect, while monitoring federal developments that could reshape the landscape.
Practical Compliance Guidance
For Organizations Operating Nationally
- Map your regulatory exposure: Identify which states you operate in and which AI laws apply to your systems
- Adopt the highest common standard: Where feasible, implement compliance measures that satisfy the most demanding state requirements across all operations
- Use NIST AI RMF as a baseline: The voluntary framework provides a defensible foundation and is referenced by multiple state laws
- Monitor agency enforcement: FTC, SEC, EEOC, and other agencies are using existing authority to enforce AI-related obligations
- Track state legislation: With 200+ AI bills introduced in 2024 alone, the landscape changes quarterly
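A first-pass exposure map can be as simple as a lookup table over the laws covered in this guide. The sketch below is illustrative only: the structure and function are our own, scopes are compressed to a phrase, and nothing here substitutes for jurisdiction-by-jurisdiction legal review:

```python
# Hypothetical exposure-mapping sketch built from laws discussed in this
# guide. Scopes are simplified; verify against the current statutes.

STATE_AI_LAWS = {
    "CO":  [("Colorado AI Act (SB 24-205)", "high-risk consequential decisions")],
    "NYC": [("Local Law 144", "automated employment decision tools")],
    "UT":  [("AI Policy Act (SB 149)", "disclosure in regulated professions")],
    "IL":  [("AI Video Interview Act", "AI analysis of video interviews")],
    "TN":  [("ELVIS Act", "voice and likeness replication")],
}

def applicable_laws(jurisdictions: list[str]) -> list[str]:
    # Names of laws a deployer in these jurisdictions should review;
    # jurisdictions with no mapped law (e.g. "TX" below) contribute nothing
    return [name
            for j in jurisdictions
            for name, _scope in STATE_AI_LAWS.get(j, [])]

review_list = applicable_laws(["CO", "NYC", "TX"])
```

Keeping even a crude table like this under version control gives compliance teams a single place to record new laws as the quarterly legislative churn continues.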
For International Organizations
- Assess US exposure: If your AI systems produce output used by US residents, you may be subject to state-level requirements
- Coordinate with EU compliance: Organizations subject to both the EU AI Act and US state laws should identify overlapping requirements and build unified compliance programs
- Engage with NIST standards: The AI RMF provides a bridge between US and international AI governance frameworks
This guide is maintained by INHUMAIN.AI as an independent analysis of US AI regulation. For related coverage, see our Global AI Regulation Tracker, EU vs US vs China Comparison, and AI Governance Frameworks Comparison.