INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7 |

The AI Power Map: Who Controls Inhuman Intelligence

A comprehensive mapping of the AI-industrial complex — the companies, governments, and individuals who control the development, deployment, and governance of artificial intelligence worldwide.

The development of artificial intelligence is not a democratic process. It is controlled by a surprisingly small number of companies, governments, and individuals whose decisions will shape the trajectory of human civilization. Understanding who holds power — and how that power is concentrated, exercised, and contested — is the first step toward meaningful accountability.

This is the INHUMAIN.AI Power Map: a comprehensive guide to the entities that control the most consequential technology of the 21st century.

The Architecture of AI Power

AI power is not monolithic. It flows through six interconnected layers, each with its own bottlenecks, gatekeepers, and dynamics:

  1. Frontier Labs — The organizations building the most capable AI systems
  2. Chip Makers — The companies that design and manufacture the silicon that makes AI possible
  3. Cloud Providers — The infrastructure layer that delivers compute at scale
  4. Sovereign AI Programs — National governments building AI capacity as a strategic asset
  5. Data Holders — The entities that control the training data that shapes AI behavior
  6. Regulators — The governmental bodies attempting to govern AI development

What makes the current moment dangerous is not the existence of these layers — it is the degree to which they are collapsing into one another. Cloud providers are funding frontier labs. Chip makers are partnering with sovereign AI programs. Data holders are building their own models. The lines between competitor, supplier, investor, and regulator are blurring in ways that concentrate power and undermine accountability.


Layer 1: Frontier Labs

Frontier labs are the organizations building the most capable AI systems — the models that push the boundaries of what artificial intelligence can do. As of early 2026, this category is dominated by a handful of well-funded entities, most of them based in the United States.

OpenAI

Headquarters: San Francisco, CA
CEO: Sam Altman
Valuation: ~$157B (October 2024 funding round)
Primary Investor: Microsoft (~$13B cumulative)
Key Models: GPT-4, GPT-4o, GPT-5, o1, o3, ChatGPT, DALL-E 3, Sora
Employees: ~3,400+
Revenue (est.): $3.4B+ annualized (late 2024)

OpenAI is the most visible player in the AI industry and arguably the most controversial. Founded in 2015 as a non-profit research lab, it has undergone a dramatic transformation into a capped-profit entity now pursuing full for-profit conversion. Its partnership with Microsoft — worth over $13 billion in investment — gives it access to Azure infrastructure at scale, but also raises questions about independence and mission drift.

For the full profile, see OpenAI: From Non-Profit Mission to $157B Valuation.

Anthropic

Headquarters: San Francisco, CA
CEO: Dario Amodei
Valuation: ~$61.5B (early 2025)
Primary Investors: Amazon ($4B), Google ($2B)
Key Models: Claude 3 Opus, Claude 3.5 Sonnet, Claude 3.5 Haiku
Employees: ~1,500+
Revenue (est.): $1B+ annualized (2025)

Anthropic was founded in 2021 by former OpenAI VP of Research Dario Amodei and his sister Daniela Amodei, along with several other OpenAI alumni. The company positions itself as the “safety-first” frontier lab, pioneering Constitutional AI and publishing a formal Responsible Scaling Policy. Its dual investment from Amazon ($4B) and Google ($2B) gives it access to two competing cloud platforms — an unusual arrangement that preserves some independence but also creates complex allegiances.

For the full profile, see Anthropic: The Safety-First Lab Building Claude.

Google DeepMind

Headquarters: London, UK / Mountain View, CA
CEO: Demis Hassabis
Parent Company: Alphabet (Google)
Key Models: Gemini Ultra, Gemini Pro, Gemini Flash, AlphaFold
Employees: ~3,000+
Revenue: Integrated into Alphabet ($307B revenue, 2023)

Google DeepMind is the result of the 2023 merger between DeepMind (acquired by Google in 2014 for ~$500M) and Google Brain, the search giant’s internal AI research division. Under Demis Hassabis, the combined entity represents the world’s largest concentration of AI research talent. Its Gemini model family competes directly with OpenAI’s GPT series.

What makes Google DeepMind uniquely powerful — and uniquely concerning — is its integration with Google’s broader ecosystem: Search, YouTube, Android, Gmail, Google Cloud. No other frontier lab has access to comparable distribution or data assets.

For the full profile, see Google DeepMind: When the World’s Largest AI Lab Meets the World’s Largest Data Company.

Meta AI

Headquarters: Menlo Park, CA
Chief AI Scientist: Yann LeCun
VP of GenAI: Ahmad Al-Dahle
Parent Company: Meta Platforms ($1.5T+ market cap)
Key Models: Llama 3, Llama 3.1 (405B), Llama 4
Strategy: Open-weight models

Meta’s AI strategy diverges sharply from the rest of the frontier lab landscape. Under CEO Mark Zuckerberg’s direction, Meta has pursued open-weight releases of its Llama model family, making powerful AI models available for download and modification by anyone. The Llama 3.1 405B parameter model, released in mid-2024, demonstrated capabilities competitive with proprietary models at a fraction of the access cost.

Meta’s motivations are not purely altruistic. Open-weight models commoditize the model layer, which benefits Meta as an application company (Instagram, WhatsApp, Facebook) that can integrate AI without paying API fees to competitors. The strategy also builds ecosystem lock-in through a different mechanism: developer adoption and community dependency.

xAI

Headquarters: Palo Alto, CA / Memphis, TN
CEO: Elon Musk
Valuation: ~$75B (pre-SpaceX merger)
Key Models: Grok 2, Grok 3
Key Infrastructure: Colossus supercomputer (Memphis, 100K+ H100 GPUs)
Notable Investment: HUMAIN/PIF $3B Series E participation

Elon Musk’s xAI, founded in mid-2023, has moved with remarkable speed. The company built the Colossus supercomputer — reportedly one of the world’s largest GPU clusters — in Memphis, Tennessee in a matter of months. Its Grok models are integrated with Musk’s X (formerly Twitter) platform.

The most significant recent development is PIF/HUMAIN’s $3 billion participation in xAI’s Series E round, followed by reports of a potential merger between xAI and SpaceX that would create a $250 billion entity. This transaction would make the Saudi sovereign wealth fund a minority shareholder in a company controlling critical US space and defense infrastructure.

For the full investigation, see The HUMAIN-xAI-SpaceX Triangle.

Mistral AI

Headquarters: Paris, France
CEO: Arthur Mensch
Valuation: ~$6.2B (2024)
Key Investors: Microsoft, Andreessen Horowitz, General Catalyst
Key Models: Mistral Large, Mistral Medium, Mixtral

Mistral is Europe’s leading frontier lab, founded by former Google DeepMind and Meta researchers. It has pursued a hybrid open/closed model strategy and received significant attention as a potential European counterweight to US AI dominance. Its relatively modest valuation compared to US competitors underscores the funding gap that European AI companies face.

Frontier Lab Power Concentration Analysis

The concentration of frontier AI development is extreme. As of early 2026:

  • Three US companies (OpenAI, Anthropic, Google DeepMind) account for the vast majority of frontier model capabilities
  • One additional US company (Meta) dominates the open-weight model landscape
  • Total frontier lab funding exceeds $50 billion across the top five entities
  • Geographic concentration: every major frontier lab except Google DeepMind (London) and Mistral (Paris) is headquartered in the San Francisco Bay Area

This concentration creates systemic risk. A small number of technical decisions — made by a small number of people, in a small number of buildings, on a small peninsula in Northern California — will determine the trajectory of AI for the entire world.


Layer 2: Chip Makers

If frontier labs are the minds of AI, chip makers are its muscles. No AI model runs without specialized processors, and the market for AI accelerators is one of the most concentrated in all of technology.

NVIDIA

Headquarters: Santa Clara, CA
CEO: Jensen Huang
Market Cap: $3T+ (as of early 2025)
AI GPU Market Share: ~80-90% (estimated)
Key Products: H100, H200, B200, GB200 NVLink
Revenue (FY2025 Q3): $35.1B (data center: $30.8B)

NVIDIA is, by virtually any measure, the most important company in the AI industry. Its GPUs power the training and inference of nearly every major AI model in the world. Its CUDA software ecosystem — built over nearly two decades — creates a moat that no competitor has successfully crossed.

The company’s relationship with sovereign AI programs, including HUMAIN in Saudi Arabia, raises significant questions about export controls, technology transfer, and the geopolitical implications of GPU allocation.

For the full profile, see NVIDIA: The Most Important Company in AI.

AMD

Headquarters: Santa Clara, CA
CEO: Lisa Su
Key AI Products: MI300X, MI325X, MI350 (planned)
AI GPU Market Share: ~10-15% (estimated)

AMD is NVIDIA’s closest competitor in the AI accelerator market, though the gap remains substantial. Its MI300X chip has gained traction with some cloud providers and enterprises, and the company’s ROCm software stack is improving but still trails CUDA in ecosystem maturity. AMD is a partner in HUMAIN’s $23B partnership portfolio.

Intel

Headquarters: Santa Clara, CA
CEO: Lip-Bu Tan (as of 2025)
Key AI Products: Gaudi 3 accelerator
AI Accelerator Market Share: <5%

Intel’s position in the AI accelerator market has been disappointing relative to its historical dominance of the broader semiconductor industry. The Gaudi line of accelerators has struggled to gain meaningful market share against NVIDIA. Intel’s foundry ambitions — building chips for others — remain strategically important but commercially uncertain.

TSMC (Taiwan Semiconductor Manufacturing Company)

Headquarters: Hsinchu, Taiwan
Chairman: C.C. Wei (since 2024)
Market Cap: $700B+
Global Foundry Market Share: ~60% (advanced nodes: ~90%)

TSMC does not design AI chips, but it manufactures nearly all of them. NVIDIA’s H100, AMD’s MI300X, Apple’s M-series, and Qualcomm’s AI processors all rely on TSMC’s advanced fabrication nodes. This makes TSMC — and by extension, Taiwan — a single point of failure for the entire global AI supply chain. The geopolitical implications, particularly regarding cross-strait tensions with China, are profound.

Chip Maker Power Concentration

Estimated AI market share by company and role:

NVIDIA (GPU design): 80-90%
AMD (GPU design): 10-15%
TSMC (fabrication): ~90% of advanced nodes
Samsung Foundry (fabrication): ~8% of advanced nodes
Intel Foundry (fabrication): <3% of advanced nodes

The AI chip supply chain is a study in extreme concentration. One company designs most of the chips (NVIDIA). One company manufactures most of them (TSMC). And that manufacturer is located on an island 100 miles off the coast of a country that claims sovereignty over it. This is not a resilient supply chain; it is a strategic vulnerability of historic proportions.
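The degree of concentration can be put on a standard scale. Antitrust economists use the Herfindahl-Hirschman Index (HHI): the sum of squared market shares, where anything above 2,500 points is conventionally treated as highly concentrated. A minimal Python sketch using midpoint estimates from the shares above (the "others" remainder buckets are our assumption, added so the shares sum to 100):

```python
# Herfindahl-Hirschman Index: sum of squared market shares (in percentage
# points). Above 2,500 is conventionally "highly concentrated" under
# US merger guidelines.
def hhi(shares):
    return sum(s ** 2 for s in shares)

# Midpoint estimates from the figures above; "others" is an assumed
# remainder so each market sums to 100%.
gpu_design = {"NVIDIA": 85.0, "AMD": 12.5, "others": 2.5}
advanced_fab = {"TSMC": 90.0, "Samsung": 8.0, "Intel/others": 2.0}

print(hhi(gpu_design.values()))    # 7387.5 -- nearly 3x the threshold
print(hhi(advanced_fab.values()))  # 8168.0
```

By this measure, both the design and fabrication layers sit roughly three times above the "highly concentrated" line, which is the quantitative version of the single-point-of-failure argument made above.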


Layer 3: Cloud Providers

Cloud providers are the infrastructure layer of AI — the companies that operate the massive data centers where AI models are trained and deployed. The market is dominated by three US hyperscalers.

Amazon Web Services (AWS)

Parent Company: Amazon
Cloud Market Share: ~31%
AI Investments: Anthropic (~$4B), custom Trainium chips
Key AI Services: Bedrock, SageMaker

AWS is the world’s largest cloud provider and the primary investor in Anthropic. Its custom Trainium chips represent an attempt to reduce dependence on NVIDIA, while its Bedrock platform offers access to multiple third-party AI models.

Microsoft Azure

Parent Company: Microsoft
Cloud Market Share: ~25%
AI Investments: OpenAI (~$13B)
Key AI Services: Azure OpenAI Service, Copilot

Microsoft’s $13 billion investment in OpenAI has made Azure the exclusive cloud provider for OpenAI’s models (with limited exceptions). This arrangement gives Microsoft a significant competitive advantage in enterprise AI deployment. The integration of OpenAI’s models into Microsoft 365 Copilot extends this advantage across the company’s massive installed base.

Google Cloud Platform (GCP)

Parent Company: Alphabet
Cloud Market Share: ~11%
AI Investments: Anthropic (~$2B), internal DeepMind
Key AI Services: Vertex AI, TPUs

Google Cloud has a unique position: it is both a cloud provider and the parent of the world’s largest AI research lab (Google DeepMind). Its custom TPU (Tensor Processing Unit) chips offer an alternative to NVIDIA GPUs for both training and inference, though their adoption outside Google’s own workloads remains limited.

Cloud Power Dynamics

The cloud layer is where AI power concentration becomes most visible. Three US companies — Amazon, Microsoft, and Google — collectively control approximately 67% of the global cloud infrastructure market. All three have made multi-billion dollar investments in frontier labs:

Microsoft in OpenAI: ~$13B
Amazon in Anthropic: ~$4B
Google in Anthropic: ~$2B
Google DeepMind: internal (acquired 2014)

These investments create a web of dependencies that blur the line between infrastructure provider and AI developer. When Amazon invests $4 billion in Anthropic and provides it with AWS credits, is Amazon a neutral infrastructure provider or a strategic partner with aligned interests?
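The dependency web described above is small enough to model directly. A toy Python sketch (the amounts are from this article; the data structure and names are illustrative, not an INHUMAIN.AI dataset) that inverts the investment edges to flag labs answering to more than one hyperscaler:

```python
# Investor -> (recipient lab, approximate amount in $B), per the
# figures in this article.
investments = [
    ("Microsoft", "OpenAI", 13),
    ("Amazon", "Anthropic", 4),
    ("Google", "Anthropic", 2),
]

# Invert the edges: which labs are backed by which cloud providers?
backers = {}
for investor, lab, amount in investments:
    backers.setdefault(lab, []).append((investor, amount))

for lab, who in sorted(backers.items()):
    if len(who) > 1:
        # Anthropic shows up here: backed by two competing clouds.
        print(f"{lab} is backed by competing clouds: {who}")
```

Trivial as it is, the inverted view makes the structural point: the same handful of infrastructure providers appear on the cap tables of every frontier lab they host.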


Layer 4: Sovereign AI Programs

Perhaps the most significant development in AI geopolitics over the past two years has been the rise of sovereign AI programs — national governments treating AI capability as a strategic asset comparable to nuclear weapons or space programs.

HUMAIN (Saudi Arabia)

Owner: Public Investment Fund (PIF)
PIF AUM: $1.1 trillion
Chairman: Crown Prince Mohammed bin Salman
CEO: Tareq Amin
Partnerships: $23B+ (NVIDIA, AMD, Cisco, xAI, Amazon, Qualcomm, Groq)
Data Centers: 11 facilities planned, multi-GW ambition

HUMAIN is the most ambitious and best-funded sovereign AI program in the world. Launched in May 2025, it is wholly owned by Saudi Arabia’s Public Investment Fund and chaired by Crown Prince Mohammed bin Salman. Its $23 billion in announced partnerships with major technology companies, combined with its massive data center buildout, position it as a potential fourth pole in the global AI landscape alongside the US, China, and Europe.

For the definitive profile, see HUMAIN: The Definitive Profile of Saudi Arabia’s AI Empire.

For analysis of HUMAIN OS, see HUMAIN OS: When an AI Operating System Claims to Understand Human Intent.

G42 (United Arab Emirates)

Headquarters: Abu Dhabi, UAE
Chairman: Sheikh Tahnoun bin Zayed
Key Partners: Microsoft, OpenAI
Notable: Divested Chinese partnerships under US pressure

G42, backed by Abu Dhabi’s royal family, was forced to choose between Chinese and American technology partnerships in 2024 under US government pressure. The company divested its Chinese holdings and deepened its relationship with Microsoft, which invested $1.5 billion. G42’s trajectory illustrates how sovereign AI programs can become vectors for great-power competition.

China’s National Champions

China’s AI ecosystem operates under a fundamentally different governance model than the West. Major players include:

Baidu (ERNIE Bot): close state ties, search monopolist
Alibaba Cloud (Qwen series): cloud and commerce integration
ByteDance (Doubao / Seed series): TikTok parent, regulatory target
DeepSeek (DeepSeek-V3, R1): hedge fund-backed, efficiency-focused
Zhipu AI (GLM-4): Tsinghua University spinoff
SenseTime (SenseNova): surveillance AI roots

China’s approach to AI is characterized by massive state investment, integration with surveillance infrastructure, and increasingly sophisticated open-weight models. DeepSeek’s January 2025 release of efficient, high-performing models at dramatically lower training costs sent shockwaves through Western AI markets, briefly wiping nearly $600 billion from NVIDIA’s market capitalization.

European Approaches

The European Union has pursued a regulation-first approach to AI through the EU AI Act, the world’s most comprehensive AI legislation. However, Europe’s frontier lab capacity remains limited, with Mistral AI as the only European company approaching frontier capabilities.

France has emerged as Europe’s AI champion, with President Macron hosting the Paris AI Action Summit in February 2025 and backing Mistral. Germany, the UK, and the Nordic countries have their own AI strategies, but none match the investment scale of the US, China, or Saudi Arabia.


Layer 5: Data Holders

AI models are only as powerful as the data they are trained on. The control of training data — its collection, curation, licensing, and restriction — is an increasingly important axis of power in the AI industry.

Key Data Dynamics

Web crawl data (Common Crawl, Internet Archive): foundation of most LLM training
Social media data (Meta, X, Reddit, TikTok): human conversation patterns
Search data (Google): query-response pairs, user intent
Enterprise data (Salesforce, SAP, Oracle): business process training
Scientific data (Elsevier, Springer, PubMed): specialized knowledge
Code (GitHub/Microsoft, GitLab): programming capability
Video (YouTube/Google, TikTok): multimodal training
Geospatial data (Google Maps, Planet Labs): spatial reasoning

The data layer is undergoing rapid consolidation and restriction. Major publishers and content platforms — including the New York Times, Reddit, and the Associated Press — have moved to either license their data to AI companies or sue to prevent its use. Reddit’s IPO was partly predicated on the value of its data licensing deal with Google.

This creates a two-tier system: well-funded frontier labs that can afford to license premium data, and everyone else who must rely on increasingly restricted public datasets.

The Synthetic Data Shift

As high-quality human-generated training data becomes scarce and expensive, frontier labs are increasingly turning to synthetic data — AI-generated content used to train subsequent AI models. This raises profound questions about data quality degradation, feedback loops, and the potential for AI systems to amplify their own biases through recursive self-training.


Layer 6: Regulators

The regulatory landscape for AI is fragmented, evolving, and in most jurisdictions, woefully inadequate to the scale of the challenge.

Key Regulatory Bodies and Frameworks

European Union (EU AI Act): enacted; phased implementation through 2027
United States (Executive Order 14110, Biden, 2023): rescinded by the Trump administration, January 2025
United States (state laws, e.g. CA, CO): fragmented, inconsistent
United Kingdom (AI Safety Institute): operational but advisory
China (Interim AI Regulations): enforced; content moderation focus
Saudi Arabia (SDAIA + HUMAIN): minimal public framework
Canada (AIDA, the Artificial Intelligence and Data Act): pending
Japan (AI Guidelines): voluntary, light-touch

The Regulatory Gap

The most significant fact about AI regulation is the gap between the speed of AI development and the speed of regulatory response. The EU AI Act, the world’s most comprehensive AI law, took over three years to negotiate and will not be fully implemented until 2027. In that time, AI capabilities have advanced by orders of magnitude.

In the United States, the regulatory picture is particularly chaotic. The Biden administration’s Executive Order 14110, which established reporting requirements for frontier AI models, was rescinded by the Trump administration in January 2025. No comprehensive federal AI legislation has been enacted. The result is a patchwork of state-level laws and voluntary industry commitments.

In Saudi Arabia, where HUMAIN is building one of the world’s most ambitious AI programs, there is no publicly available comprehensive AI governance framework independent of the entities it would need to regulate. The Saudi Data and AI Authority (SDAIA) exists, but its independence from PIF and HUMAIN is unclear.


Cross-Cutting Analysis: Where Power Concentrates

Funding Flows

The flow of capital in the AI industry reveals its power structure. Below are the most significant funding relationships:

Microsoft in OpenAI (~$13B): cloud lock-in, product integration
Amazon in Anthropic (~$4B): cloud lock-in, Bedrock integration
Google in Anthropic (~$2B): hedge against DeepMind
PIF/HUMAIN in xAI (~$3B): sovereign wealth flowing into US AI and space
HUMAIN venture fund ($10B): Saudi influence across the AI ecosystem
SoftBank in various AI companies ($100B+ planned): Vision Fund successor plays
Tiger Global in multiple frontier labs (multi-billion): financial returns focus

Board Interlocks and Personnel Flows

The AI industry’s leadership is remarkably incestuous. Key personnel flows include:

  • OpenAI to Anthropic: Dario Amodei, Daniela Amodei, and multiple researchers left OpenAI to found Anthropic in 2021
  • Google DeepMind to Mistral: Arthur Mensch left DeepMind to co-found Mistral
  • OpenAI to xAI: Several researchers joined Musk’s venture
  • Microsoft board influence: Microsoft held a non-voting board observer seat at OpenAI until relinquishing it in mid-2024
  • Amazon influence: Amazon’s investment in Anthropic includes cloud commitment terms

These personnel and board connections create informal channels of influence that are difficult to track but significant in their effects on competition, safety norms, and strategic direction.

The Vertical Integration Problem

The most concerning trend in AI power concentration is vertical integration — companies that control multiple layers of the stack simultaneously:

Google/Alphabet: Frontier Lab + Cloud + Data + Hardware (TPUs) + Distribution (Search, Android)
Microsoft: Cloud + Frontier Lab (via OpenAI) + Distribution (Office, Windows) + Data (GitHub, LinkedIn)
Amazon: Cloud + Frontier Lab (via Anthropic) + Distribution (Alexa, retail) + Data (commerce)
Meta: Frontier Lab + Data (social media) + Distribution (Instagram, WhatsApp) + Hardware (Quest)
HUMAIN: Sovereign backing + Infrastructure + Frontier Lab (ALLAM) + Distribution (HUMAIN OS)

Google’s position is particularly striking. It controls the world’s largest AI research lab, the world’s most popular search engine, the world’s most popular mobile operating system, the world’s largest video platform, a major cloud provider, and custom AI chip fabrication. No single entity has ever controlled so many inputs to AI development simultaneously.


The Geopolitical Dimension

AI power is not just a corporate phenomenon — it is a geopolitical one. The competition for AI supremacy is reshaping international relations in ways that will define the coming decades.

The US-China AI Race

The United States and China are engaged in an escalating competition for AI dominance. US export controls on advanced semiconductors, implemented in October 2022 and tightened in 2023 and 2024, represent the most significant technology restriction since the Cold War. China has responded with massive domestic investment in chip fabrication and an emphasis on training efficiency, as demonstrated by DeepSeek.

The Gulf State Wildcard

Saudi Arabia and the UAE have emerged as a third force in AI geopolitics, leveraging their sovereign wealth to acquire influence across the AI stack. HUMAIN’s $23 billion in partnerships with US technology companies, combined with PIF’s $3 billion investment in xAI, represent the most significant entry of sovereign wealth into the AI industry to date.

The question is whether Gulf state AI investments represent genuine capability-building or influence-buying — and whether the distinction matters.

Export Controls and Technology Transfer

The US government’s export control regime is the primary mechanism for restricting AI technology transfer. Key restrictions include:

  • Advanced GPU exports: H100 and successor chips restricted to China
  • Chip fabrication equipment: ASML (Netherlands) EUV lithography machines restricted
  • Country tiers: Proposed framework categorizing nations by AI technology access

The effectiveness of these controls is contested. Reports of advanced NVIDIA chips reaching China through intermediaries persist, and the DeepSeek breakthrough suggests that capability gaps may narrow even under export restrictions.


Accountability Gaps

This power map reveals several critical accountability gaps:

  1. No frontier lab has independent safety governance: Even Anthropic, which positions itself as safety-focused, has a board that is ultimately accountable to financial investors. OpenAI’s board crisis of November 2023 demonstrated how quickly safety governance can be overridden when it conflicts with commercial interests.

  2. Sovereign AI programs operate without independent oversight: HUMAIN is wholly owned by PIF, chaired by MBS, and operates without public-facing independent safety governance. G42 in the UAE faces similar accountability questions.

  3. Cloud providers face no AI-specific regulation: AWS, Azure, and GCP provide the infrastructure for AI training and deployment but face no regulatory requirements specific to their role in the AI supply chain.

  4. NVIDIA’s monopoly is unregulated: Despite controlling 80-90% of the AI GPU market, NVIDIA faces no antitrust action specific to its AI dominance. Its decisions about chip allocation, pricing, and export compliance have enormous geopolitical implications but minimal oversight.

  5. Data governance is fragmented: There is no comprehensive international framework for governing the collection, use, and licensing of AI training data.


How to Use This Power Map

This power map is not a static document. It is a framework for understanding the forces that will shape AI development and deployment. INHUMAIN.AI will update this map as the landscape evolves.

For deeper analysis of specific entities, see the individual profiles linked throughout this map.

For analysis of the safety implications, see our Complete Guide to AI Safety.

For ongoing tracking of HUMAIN specifically, see the HUMAIN Tracker.

The concentration of AI power is not inevitable. It is the product of specific decisions by specific people at specific institutions. Understanding those decisions — and holding those people accountable — is why INHUMAIN.AI exists.