INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7

AI in Finance Regulation: Trading, Banking, and Systemic Risk

Comprehensive guide to AI regulation in financial services — SEC enforcement, MiFID II algorithmic trading rules, Basel III operational risk, DORA digital resilience, and the systemic risks of AI-driven markets.

Financial services was among the first sectors to deploy AI at scale, and it is now among the first to confront the consequences of that deployment. Algorithmic trading systems execute the majority of equity market volume. AI-driven credit scoring determines who gets loans. Machine learning models detect fraud, price insurance, manage risk, and generate investment recommendations. And regulators are responding.

This guide covers the complete regulatory landscape for AI in financial services — from US securities enforcement to European prudential regulation to the systemic risks that AI introduces into the global financial system.


The Regulatory Landscape

AI in financial services is not governed by a single regulation but by overlapping layers of sector-specific rules, prudential requirements, consumer protection laws, and, increasingly, AI-specific provisions.

United States

Securities and Exchange Commission (SEC)

The SEC has emerged as one of the most aggressive regulators of AI in financial services, using its existing statutory authority under the Securities Act of 1933, the Securities Exchange Act of 1934, and the Investment Advisers Act of 1940.

AI-Washing Enforcement: The SEC has brought enforcement actions against investment advisers and broker-dealers that made false or misleading claims about their use of AI. These actions target firms that claim to use “AI-driven” or “machine learning-powered” investment strategies when the actual role of AI in their processes is minimal, non-existent, or materially different from what is disclosed to investors.

Proposed Predictive Data Analytics Rule: In July 2023, the SEC proposed rules that would require broker-dealers and investment advisers to identify and address conflicts of interest associated with the use of predictive data analytics (PDA) and similar technologies in investor interactions. The proposal would require firms to evaluate whether PDA technologies place the firm’s interests ahead of investors’ interests and to eliminate or neutralize such conflicts. The proposal generated significant industry opposition and, as of early 2026, has not been finalized.

Market Manipulation: AI-powered trading systems are subject to existing anti-manipulation provisions. The SEC monitors for AI-driven market manipulation, including spoofing, layering, and wash trading executed by algorithmic systems.

Disclosure Requirements: Public companies using AI in material business operations face disclosure obligations. The SEC has signaled that AI-related risks — including model risk, data risk, cybersecurity risk, and regulatory risk — should be disclosed in annual reports and registration statements when material.

Commodity Futures Trading Commission (CFTC)

The CFTC regulates AI in derivatives markets, including:

  • Algorithmic trading compliance requirements
  • Automated trading system risk controls
  • Anti-manipulation enforcement for AI-driven trading in futures and swaps markets
  • Proposed rules on automated trading and risk controls

Federal Reserve and Banking Regulators

US banking regulators (Federal Reserve, OCC, FDIC) have issued interagency guidance on model risk management (SR 11-7) that applies to AI/ML models used by banks. Key requirements:

Model Risk Management (SR 11-7):

  • Comprehensive model validation
  • Ongoing monitoring of model performance
  • Independent model risk management function
  • Documentation of model development, validation, and use
  • Board-level oversight of model risk

Fair Lending: AI-driven credit decisions are subject to the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act. The Consumer Financial Protection Bureau (CFPB) has affirmed that the use of AI does not excuse lenders from compliance with fair lending laws, including the requirement to provide specific, actionable reasons when denying credit.
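To make the "specific, actionable reasons" requirement concrete, here is a minimal, hypothetical sketch of how a lender might derive adverse-action reasons from a simple linear scoring model: rank features by how much each one lowered the applicant's score relative to an approved reference profile. The feature names, weights, and reference values are invented for illustration, and the approach is only this straightforward for linear models; complex ML models need attribution techniques that carry their own validation burden.

```python
# Hypothetical sketch: deriving adverse-action reasons from a linear
# credit score. Feature names, weights, and the reference profile are
# invented for illustration only.

WEIGHTS = {"utilization": -40.0, "late_payments": -25.0, "income": 0.5}
REFERENCE = {"utilization": 0.3, "late_payments": 0, "income": 60}  # approved profile

def adverse_action_reasons(applicant, top_n=2):
    """Return the top_n features that most reduced the applicant's score
    relative to the reference profile (most negative contribution first)."""
    contributions = {
        feat: WEIGHTS[feat] * (applicant[feat] - REFERENCE[feat])
        for feat in WEIGHTS
    }
    return sorted(contributions, key=contributions.get)[:top_n]
```

For an applicant with 90% utilization, three late payments, and below-reference income, the late payments and high utilization come out as the principal reasons, which is the kind of specific ranking a denial notice must communicate.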

Community Reinvestment Act (CRA): AI systems used in lending decisions must be evaluated for compliance with CRA obligations to serve the credit needs of all communities, including low- and moderate-income areas.

European Union

MiFID II and Algorithmic Trading

The Markets in Financial Instruments Directive II (MiFID II) and its associated regulation (MiFIR) establish the primary framework for algorithmic trading regulation in the EU.

Key requirements for algorithmic trading firms:

Requirement | MiFID II Article | Detail
Authorization | Art. 17 | Firms engaging in algorithmic trading must be authorized
Risk controls | Art. 17(1) | Effective systems and risk controls, including kill switches
Testing | Art. 17(1) | Algorithms must be tested in stress conditions
Business continuity | Art. 17(1) | Continuity arrangements for algorithmic trading failures
Record keeping | Art. 17(2) | Records of all orders, including cancelled and modified
Market making | Art. 17(3-4) | Obligations for algorithmic market makers
Direct electronic access | Art. 17(5) | Controls for provision of direct market access
Annual self-assessment | RTS 6 | Annual assessment of algorithmic trading systems

Regulatory Technical Standard 6 (RTS 6): Provides detailed requirements for algorithmic trading firms, including governance requirements, testing specifications, annual self-assessment reports, and risk control mandates.
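As an illustration of the kind of pre-trade risk controls and kill functionality RTS 6 mandates, the sketch below implements two hypothetical hard limits (per-order quantity and daily notional) plus a kill switch that blocks all further orders once engaged. The specific limit values are invented examples; the regulation requires firms to calibrate and justify their own.

```python
from dataclasses import dataclass

@dataclass
class RiskControls:
    """Illustrative pre-trade controls of the kind RTS 6 requires.
    The limit values here are hypothetical examples, not regulatory figures."""
    max_order_qty: int = 10_000
    max_daily_notional: float = 5_000_000.0
    daily_notional: float = 0.0
    kill_switch_engaged: bool = False

    def engage_kill_switch(self) -> None:
        # "Kill functionality": a real system would also cancel all
        # working orders at the venue.
        self.kill_switch_engaged = True

    def check_order(self, qty: int, price: float) -> bool:
        """Pre-trade check: return True only if the order may be sent."""
        if self.kill_switch_engaged:
            return False
        if qty > self.max_order_qty:
            return False  # per-order size limit
        if self.daily_notional + qty * price > self.max_daily_notional:
            self.engage_kill_switch()  # hard-limit breach halts the strategy
            return False
        self.daily_notional += qty * price
        return True
```

The design choice worth noting is that a hard-limit breach does not merely reject one order: it disables the strategy until a human re-enables it, which is the point of a kill switch.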

High-frequency trading (HFT): MiFID II includes specific provisions for high-frequency algorithmic trading, defined by the use of infrastructure designed to minimize latency, system determination of order parameters, and high message intraday rates. HFT firms face additional authorization, risk control, and reporting requirements.

DORA: Digital Operational Resilience Act

The Digital Operational Resilience Act (Regulation 2022/2554) took full effect on January 17, 2025. While not AI-specific, DORA has significant implications for AI systems in financial services.

Scope: Banks, investment firms, insurance companies, payment institutions, and their critical ICT third-party service providers — including AI providers.

Key requirements relevant to AI:

  • ICT risk management: Financial entities must establish comprehensive ICT risk management frameworks covering AI systems
  • Incident reporting: Major ICT-related incidents, including AI system failures, must be reported to competent authorities
  • Digital operational resilience testing: Regular testing of ICT systems, including AI components, through threat-led penetration testing for significant entities
  • ICT third-party risk management: Due diligence and contractual requirements for AI providers as ICT third-party service providers
  • Critical ICT third-party oversight: Designation of critical ICT third-party service providers subject to direct oversight by European Supervisory Authorities

AI provider implications: AI vendors supplying models, infrastructure, or services to European financial institutions may be designated as critical ICT third-party service providers, subjecting them to direct oversight by EU authorities regardless of where they are headquartered.

Basel III and Operational Risk

The Basel III framework, as implemented in the EU through the Capital Requirements Regulation (CRR) and Capital Requirements Directive (CRD), addresses AI-related operational risk.

Operational risk capital: Banks must hold capital against operational risk, which includes losses resulting from inadequate or failed internal processes, people, and systems. AI system failures that result in financial losses contribute to operational risk capital calculations.

Model risk: The European Banking Authority (EBA) has issued guidelines on model risk in internal models, applicable to AI/ML models used for capital calculation, risk management, and financial reporting. Requirements include:

  • Model validation by independent teams
  • Regular back-testing of model performance
  • Documentation of model limitations and assumptions
  • Board-level model risk governance
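Regular back-testing, the second bullet above, can be as simple as comparing predicted and observed default rates per rating bucket. The sketch below is a minimal illustration; the bucket structure and the 50% relative tolerance are assumed example choices, not EBA-prescribed values.

```python
# Minimal back-test of credit-model calibration: flag rating buckets
# where the observed default rate deviates from the mean predicted
# probability of default (PD) by more than a relative tolerance.
# The 50% default tolerance is an assumed example, not an EBA value.

def calibration_breaches(buckets, tolerance=0.5):
    """buckets maps a bucket name to (predicted_pds, outcomes), where
    outcomes is a list of 0/1 default indicators for the same loans."""
    breaches = []
    for name, (pds, outcomes) in buckets.items():
        mean_pd = sum(pds) / len(pds)
        observed_rate = sum(outcomes) / len(outcomes)
        if abs(observed_rate - mean_pd) > tolerance * mean_pd:
            breaches.append(name)
    return breaches
```

A production back-test would add statistical tests (e.g. binomial confidence intervals) so that small buckets do not trigger false alarms, but the structure is the same: predicted versus observed, per segment, on a schedule.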

EU AI Act Intersection

The EU AI Act classifies AI systems used for credit scoring and creditworthiness assessment as high-risk (Annex III, point 5). This means AI systems used in lending decisions face the full suite of high-risk system requirements under the AI Act in addition to existing financial regulation.

AI systems used for risk assessment and pricing in life and health insurance are also classified as high-risk. Motor vehicle and property insurance AI are excluded from high-risk classification.


Systemic Risk: AI and Financial Stability

The most significant long-term concern about AI in financial services is systemic risk — the risk that AI systems could cause or amplify a financial crisis.

Herding and Procyclicality

When multiple financial institutions use similar AI models trained on similar data, they may converge on similar trading strategies. This herding behavior can amplify market movements: when AI systems collectively buy, prices rise; when they collectively sell, prices crash. The procyclical nature of this behavior means AI can amplify rather than dampen market volatility.
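A toy simulation makes the herding mechanism concrete: when many models share most of their signal (a common factor), their orders correlate and the aggregate price impact is large; when their signals are independent, impacts largely cancel. All parameters here (number of models, signal weights, impact per order) are illustrative, not calibrated to any real market.

```python
import random

def simulate_step(common_signal, n_models=50, common_weight=0.9,
                  impact_per_order=0.01, rng=None):
    """One market step: each model trades on a blend of a shared signal
    and private noise; return the aggregate price impact of their orders.
    All parameters are illustrative, not calibrated to any real market."""
    rng = rng or random.Random(0)
    orders = 0
    for _ in range(n_models):
        private = rng.gauss(0.0, 1.0)
        signal = common_weight * common_signal + (1 - common_weight) * private
        orders += 1 if signal > 0 else -1  # each model buys or sells one unit
    return impact_per_order * orders
```

With a strongly negative shared signal and a high common weight, every model sells and the move is the maximum the impact model allows; set the common weight to zero and the same models trade on independent noise, so their orders mostly net out.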

Speed and Cascading Failures

AI-driven trading systems operate at speeds far exceeding human reaction times. In a stress scenario, AI systems can interact with each other in feedback loops that cascade before any human operator can intervene. The 2010 Flash Crash demonstrated this risk with algorithmic trading systems. Modern AI systems, with more complex and opaque decision-making, may produce cascading failures that are even more difficult to predict, detect, and halt.

Model Risk Correlation

If multiple banks use the same or similar AI models for risk assessment, a systematic error in those models could simultaneously affect risk calculations across the financial system. A model that underestimates credit risk during benign conditions may fail simultaneously at multiple institutions when conditions deteriorate.

Opacity and Auditability

Financial regulators’ ability to supervise AI-driven activities depends on their ability to understand what AI systems are doing. The opacity of deep learning models used in trading, credit, and risk management can impede regulatory supervision. Regulators may be unable to assess whether AI systems comply with prudential requirements if the systems’ decision-making processes cannot be meaningfully explained.

Regulatory Responses to Systemic Risk

Financial Stability Board (FSB): Has published reports on AI and machine learning in financial services, highlighting systemic risk concerns and recommending regulatory approaches.

Bank for International Settlements (BIS): Has analyzed the systemic implications of AI in finance, including the potential for AI-driven market dynamics to create new forms of systemic risk.

European Systemic Risk Board (ESRB): Monitors AI-related systemic risks in the European financial system and provides macroprudential policy recommendations.


Sector-Specific AI Applications and Their Regulation

Algorithmic Trading

Regulatory status: Heavily regulated under MiFID II (EU), SEC/CFTC rules (US), and equivalent frameworks globally. Requires authorization, risk controls, kill switches, testing, and record-keeping.

Credit Scoring and Lending

Regulatory status: High-risk under EU AI Act (Annex III). Subject to fair lending laws (US), GDPR Article 22 (EU), and consumer credit regulation. Right to explanation for automated credit decisions.

Fraud Detection

Regulatory status: Generally less restricted. Anti-money laundering (AML) and counter-terrorist financing (CTF) obligations require transaction monitoring, which increasingly uses AI. Privacy regulations (GDPR) apply to personal data processing.

Insurance Pricing

Regulatory status: Life and health insurance AI classified as high-risk under EU AI Act. Subject to insurance-specific regulation on pricing discrimination. Consumer protection requirements.

Robo-Advisory

Regulatory status: Subject to investment adviser regulation (SEC in US, MiFID II in EU). Fiduciary duties apply to AI-generated investment advice. Suitability and know-your-customer requirements must be met.

Regulatory Technology (RegTech)

Regulatory status: AI systems used for compliance (AML, KYC, reporting) are generally subject to the same regulatory standards as the compliance processes they automate. Model validation requirements apply.


Compliance Guidance for Financial Institutions

Model Risk Management

  1. Establish a comprehensive model inventory covering all AI/ML models in production
  2. Implement independent model validation for all material AI models
  3. Monitor model performance continuously with automated drift detection
  4. Document model limitations and assumptions, including known failure modes
  5. Conduct regular stress testing under adverse scenarios
  6. Report model risk to board-level governance
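For step 3, a common drift statistic is the Population Stability Index (PSI), which compares the share of model scores falling into each bin between a baseline sample and recent production traffic. The sketch below is minimal; the four-bin layout in the usage example and the conventional 0.2 alert threshold are rules of thumb, not regulatory requirements.

```python
import math

def psi(baseline_counts, current_counts):
    """Population Stability Index between two binned score distributions.
    Values near 0 mean no shift; > 0.2 is a conventional alert threshold
    (a rule of thumb, not a regulatory figure)."""
    total_b = sum(baseline_counts)
    total_c = sum(current_counts)
    value = 0.0
    for b, c in zip(baseline_counts, current_counts):
        pb = max(b / total_b, 1e-6)  # small floor avoids log(0) on empty bins
        pc = max(c / total_c, 1e-6)
        value += (pc - pb) * math.log(pc / pb)
    return value
```

An identical distribution yields a PSI of zero, while a population that has shifted toward the higher-score bins pushes the index above the 0.2 alert level, which is the kind of signal an automated monitor would escalate to the model risk function.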

EU AI Act Compliance

Financial institutions deploying high-risk AI systems (credit scoring, insurance pricing) must:

  1. Implement risk management systems per Article 9
  2. Ensure data governance per Article 10
  3. Prepare technical documentation per Article 11
  4. Implement record-keeping per Article 12
  5. Provide transparency information per Article 13
  6. Design for human oversight per Article 14
  7. Ensure accuracy, robustness, and cybersecurity per Article 15
  8. Complete conformity assessment before deployment

DORA Compliance

Financial institutions using AI must:

  1. Include AI systems in ICT risk management framework
  2. Ensure AI incident reporting capability
  3. Conduct operational resilience testing of AI systems
  4. Manage AI vendor risk under third-party oversight requirements
  5. Include AI in business continuity and disaster recovery planning

Cross-Regulatory Coordination

Financial institutions face overlapping AI-related requirements from multiple regulators. A single AI credit scoring system may be subject to the EU AI Act, GDPR, consumer credit regulation, EBA guidelines, DORA, and national supervisory requirements simultaneously. Organizations must develop integrated compliance programs that address all applicable requirements without creating conflicting processes.


This guide is maintained by INHUMAIN.AI. For related coverage, see our Global AI Regulation Tracker, EU AI Act Complete Guide, AI and GDPR, and AI Liability Guide.