INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7

AI Geopolitics: Who Controls Inhuman Intelligence Controls the Century

A comprehensive analysis of the global AI power struggle. US-China rivalry, Europe's sovereignty crisis, Gulf states' $100B+ gamble, semiconductor chokepoints, military AI, and the digital colonialism reshaping the Global South. The definitive geopolitical map of artificial intelligence.

The question of who controls artificial intelligence is no longer a technology question. It is the defining geopolitical question of the twenty-first century. Every major power on Earth has recognized that AI represents the most consequential dual-use technology since nuclear fission, and they are acting accordingly — with investment, regulation, espionage, sanctions, and in some cases, the deployment of autonomous weapons systems in active conflict zones.

This is not hyperbole. The United States has imposed three successive rounds of semiconductor export controls on China, each more restrictive than the last, explicitly framing advanced chips as national security assets equivalent to weapons components. China has responded with a $47 billion state semiconductor fund and a crash program to achieve chip self-sufficiency. The European Union has enacted the most comprehensive AI regulation in history while simultaneously lamenting its inability to produce a single frontier AI lab. The Gulf states — Saudi Arabia, the UAE, Qatar — have committed over $100 billion to AI infrastructure, attempting to buy their way to relevance in a technology race where they have no indigenous research base. India is positioning itself as the AI talent factory for the world while struggling to retain its best researchers. Israel is deploying AI-driven targeting systems in active warfare. Russia is integrating AI into its nuclear command architecture.

The map of AI power is not the map of the twentieth century. It does not track neatly onto Cold War alliances, NATO membership, or the Bretton Woods system. It is being drawn in real time by the flows of capital, talent, data, and semiconductors — and the chokepoints that control those flows.

This overview is the starting point for INHUMAIN.AI’s geopolitics coverage. It maps the major players, their strategies, their dependencies, and the fault lines along which the global AI order is fracturing.


The US-China Axis: A New Cold War in Silicon

The central axis of AI geopolitics is the rivalry between the United States and China. This is not a metaphor. Senior officials in both governments describe it in existential terms. The Biden administration’s October 2022 semiconductor export controls were described by one national security official as an attempt to “strangle” China’s AI capabilities. China’s State Council has designated AI as a “core national strategic technology” essential to the survival of the Communist Party’s governance model.

The rivalry operates across every dimension of AI capability: foundational research, compute infrastructure, data access, talent, and deployment. The United States holds decisive advantages in foundational model research (OpenAI, Anthropic, Google DeepMind, Meta AI), chip design (NVIDIA, AMD, Qualcomm), and the software ecosystem (CUDA, PyTorch, cloud platforms). China holds advantages in deployment scale, government data access, manufacturing capacity, and the sheer size of its AI workforce.

The critical chokepoint is semiconductors. The most advanced AI training chips — NVIDIA’s H100, A100, and successor architectures — are designed in the US, manufactured in Taiwan by TSMC, and rely on extreme ultraviolet lithography equipment made exclusively by ASML in the Netherlands. This supply chain gives the US and its allies extraordinary leverage over China’s AI ambitions. The export controls imposed in October 2022, tightened in October 2023, and expanded again in 2024 effectively cut China off from the most advanced training hardware.

But chokepoints cut both ways. China controls approximately 60% of the world’s rare earth processing capacity and dominates the production of several materials critical to semiconductor manufacturing. Taiwan, where over 90% of the world’s most advanced chips are fabricated, sits under the shadow of a potential Chinese military action that would cripple the AI supply chains of both superpowers, and the global economy with them.

For deep analysis, see: The US-China AI Race: A New Cold War in Silicon.


Europe: The Regulator That Cannot Compete

Europe occupies a paradoxical position in the AI geopolitics landscape. It has produced the most sophisticated and influential AI regulatory framework in the world — the EU AI Act, adopted in June 2024 and phasing into enforcement through August 2026 — while simultaneously failing to produce a single frontier AI laboratory capable of competing with American or Chinese institutions.

This is not for lack of talent. European universities produce world-class AI researchers. But those researchers overwhelmingly leave for the United States, drawn by compensation packages, compute access, and institutional cultures that European institutions cannot match. Yann LeCun (Meta’s Chief AI Scientist) is French. Demis Hassabis (Google DeepMind CEO) is British. The list continues at length.

The sole European contender at the frontier model level is Mistral AI, a French startup founded in 2023 that raised over $600 million within its first year. Mistral has produced competitive open-weight models and positions itself as the European alternative to American closed-source AI. But one startup does not constitute sovereignty. Mistral’s compute runs on American cloud infrastructure, its GPUs are American-designed and Taiwan-fabricated, and its training data comes predominantly from the English-language internet.

France has committed roughly $2 billion to sovereign AI development. Germany launched its LEAM initiative. The EU has various funding mechanisms. But the scale mismatch is staggering: Microsoft alone invested $13 billion in OpenAI, more than all European sovereign AI investments combined.

Europe’s actual power lies in regulation. The EU AI Act has become the de facto global template, as GDPR did for data privacy. Companies building AI systems for global deployment must comply with EU requirements regardless of where they are headquartered. This gives Europe significant influence over how AI is built and deployed, even if it has little influence over what AI is built.

For full analysis, see: Europe’s AI Sovereignty: Regulating What It Cannot Build?.


The Gulf States: Buying a Seat at the Table

The Gulf states — primarily Saudi Arabia and the UAE, with smaller plays from Qatar, Bahrain, and Kuwait — represent the most aggressive new entrants in the AI geopolitics landscape. They are attempting something unprecedented: purchasing frontier AI capability through sovereign wealth, partnerships, and infrastructure investment, without a meaningful indigenous research base.

Saudi Arabia’s play centers on HUMAIN, the national AI company launched in May 2025 under the Public Investment Fund (PIF), which manages approximately $1.1 trillion in assets. HUMAIN has announced partnerships with NVIDIA, AMD, Cisco, Qualcomm, Groq, and others, committed billions to data center construction, and made a $3 billion investment in Elon Musk’s xAI. The Saudi Data and AI Authority (SDAIA) provides the regulatory framework. NEOM, the $500 billion megacity project, is positioned as a testbed for AI deployment at city scale.

The UAE has pursued a more diversified strategy. G42, the Abu Dhabi-based AI holding company chaired by Sheikh Tahnoon bin Zayed (the national security advisor and brother of the UAE’s ruler), has partnerships with Microsoft, OpenAI, and multiple Chinese firms. The Technology Innovation Institute (TII), also in Abu Dhabi, developed Jais, one of the first Arabic-language large language models. The Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) is the world’s first graduate-level AI university.

The Gulf strategy has structural advantages: virtually unlimited capital, cheap energy for data centers, strategic geographic positioning between East and West, and political systems capable of rapid decision-making without democratic friction. It also has structural vulnerabilities: extreme heat requiring massive cooling energy, near-total dependency on Western technology partners, thin domestic talent pools, and human rights records that create reputational risk for international partners and complicate talent recruitment.

For comprehensive coverage, see: Gulf States AI: The $100 Billion Desert Bet on Inhuman Intelligence.


India: The Talent Factory With Sovereignty Ambitions

India occupies a unique position in the global AI landscape. It is simultaneously one of the largest exporters of AI talent to the United States and other Western nations, one of the largest potential markets for AI deployment, and an increasingly assertive player in AI geopolitics.

India produces more AI engineers and researchers than any country except the United States and China. The Indian Institutes of Technology (IITs) and the Indian Institute of Science (IISc) are globally recognized. But a disproportionate share of India’s top AI talent leaves the country. An estimated 25-30% of AI researchers at top US labs have Indian origins. Google, Microsoft, and IBM have all been led by CEOs of Indian descent.

The Indian government has responded with the IndiaAI mission, a roughly $1.3 billion initiative announced in 2024 to build sovereign compute infrastructure, develop India-specific AI models (particularly in Indian languages), and create a national AI marketplace. The initiative includes plans for 10,000+ GPU compute clusters and partnerships with domestic cloud providers.

India’s strategic advantage is data. With over 1.4 billion people, the world’s largest youth population, and rapidly expanding internet access, India generates data at a scale matched only by China. The question is whether India can translate that data advantage into AI capability before American and Chinese systems entrench themselves as the default infrastructure for India’s digital economy.

India has also positioned itself as a voice for the Global South in AI governance, advocating for data sovereignty frameworks that would prevent the extraction model that has characterized previous waves of digital globalization. This positions India as a potential counterweight to both Western and Chinese AI hegemony, though its capacity to deliver on that positioning remains uncertain.


Israel: The AI-Military Complex

Israel has become the most advanced practitioner of military AI deployment in the world, and the most controversial. The country’s technology sector, defense establishment, and intelligence apparatus have been deeply integrated for decades through the mandatory military service pipeline and the institutional culture of Unit 8200, the signals intelligence unit that has served as an incubator for much of Israel’s technology industry.

This integration has produced AI systems that have moved from theoretical capability to operational deployment faster than any other country’s military AI programs. The Lavender system, reported on extensively by journalists in 2024, uses machine learning to generate target lists in military operations. The Gospel system identifies buildings and infrastructure for strikes. Both were reportedly used in the Gaza conflict beginning in October 2023.

The deployment of these systems has raised profound questions about AI in warfare — questions that extend far beyond Israel. If AI systems can generate targeting recommendations faster than human operators can meaningfully review them, does the legal requirement for human oversight become a legal fiction? If AI reduces the cognitive burden of targeting, does it lower the threshold for the use of force? These are not theoretical questions. They are being answered, in real time, with real consequences.

For detailed analysis, see: AI at War: The Military Applications of Inhuman Intelligence.


Russia: AI as Asymmetric Weapon

Russia’s AI capabilities are modest compared to the United States or China, but its AI strategy is significant for two reasons: its integration of AI into military doctrine, and its willingness to deploy AI systems in information warfare at scale.

Russia’s National AI Strategy, updated multiple times since its initial 2019 publication, frames AI primarily as a tool for maintaining strategic parity with the United States despite Russia’s smaller economic base. This means prioritizing military applications — autonomous systems, electronic warfare, cyber operations — and information warfare capabilities, including deepfake generation, automated disinformation, and social media manipulation.

Russian defense companies, including Kalashnikov Group and Kronshtadt, have developed autonomous weapons platforms including armed drones and autonomous ground vehicles. The integration of AI into Russia’s nuclear command and control architecture — specifically, the Perimeter system (known colloquially as “Dead Hand”) — raises the most severe AI safety concerns of any military application on Earth. If automated systems can influence or initiate nuclear launch decisions without meaningful human oversight, the risks extend beyond any individual conflict to the survival of human civilization.

Russia’s primary constraint is compute. Western sanctions following the 2022 invasion of Ukraine have severely restricted Russia’s access to advanced semiconductors. Russia has attempted to source chips through intermediaries in Central Asia, the Middle East, and Southeast Asia, but the performance gap between what Russia can access and what is available to the US and China continues to widen.


Semiconductor Supply Chains: The Ultimate Chokepoint

No analysis of AI geopolitics is complete without understanding the semiconductor supply chain, because it is the single most concentrated chokepoint in the global technology landscape.

| Supply Chain Node | Dominant Player | Market Share | Location |
|---|---|---|---|
| Advanced chip design (AI training) | NVIDIA | ~80% of AI training GPUs | United States |
| Advanced chip fabrication (<5nm) | TSMC | ~90% of advanced nodes | Taiwan |
| EUV lithography equipment | ASML | ~100% | Netherlands |
| Advanced memory (HBM) | Samsung, SK Hynix | ~95% combined | South Korea |
| Chip design software (EDA) | Synopsys, Cadence | ~70% combined | United States |
| Rare earth processing | Various Chinese firms | ~60% | China |

This table reveals the extraordinary concentration of AI’s critical supply chain in a handful of companies in a handful of countries. NVIDIA designs the chips. TSMC fabricates them. ASML makes the only machines capable of printing circuits at the necessary scale. Samsung and SK Hynix produce the high-bandwidth memory that AI chips require. Synopsys and Cadence make the software used to design the chips in the first place.

The geographic concentration creates catastrophic single points of failure. A Chinese military action against Taiwan would not merely disrupt the chip supply — it would effectively halt the production of advanced AI hardware worldwide. TSMC’s fabs in Arizona, currently under construction, represent a partial hedge, but they are years from full production and will initially produce chips one to two generations behind TSMC’s Taiwan facilities.

The US export controls exploit this concentration. Because virtually every node in the advanced chip supply chain passes through US-allied territory, the US can effectively veto China’s access to frontier AI hardware by restricting NVIDIA’s chip sales, pressuring ASML to halt EUV equipment deliveries to China, and monitoring third-country reexport channels.


AUKUS and NATO: Alliance AI

Western military alliances are increasingly incorporating AI cooperation into their strategic frameworks. The AUKUS pact (Australia, UK, US), originally focused on nuclear submarine technology, has expanded to include significant AI and autonomous systems cooperation under its “Pillar II” advanced capabilities framework.

NATO’s AI strategy, adopted in 2021 and updated since, commits member states to developing and deploying AI responsibly while maintaining the alliance’s technological edge. The NATO Defence Innovation Accelerator for the North Atlantic (DIANA) funds AI startups and research across member states. The NATO Innovation Fund, a $1.1 billion venture capital fund, invests in dual-use technologies including AI.

But alliance AI cooperation faces structural challenges. Intelligence sharing restrictions limit the flow of AI-relevant data between allies. Divergent regulatory approaches — the EU AI Act imposes requirements that may conflict with military AI development timelines — create friction. Industrial policy competition means allies are simultaneously cooperating on military AI and competing for commercial AI dominance. And the fundamental question of interoperability — can allied AI systems work together in real time? — remains largely unsolved.


The Global South: Subjects, Not Participants

For much of the world — Africa, Southeast Asia, Latin America, the Pacific Islands — the AI geopolitics described above is something that happens to them, not something they participate in.

The Global South’s relationship with AI is defined by three structural asymmetries. First, data extraction: the populations of the Global South generate enormous volumes of data that flow to American and Chinese platforms for AI training, with little or no value returned. Second, algorithmic imposition: AI systems trained primarily on English-language, Western data are deployed in non-Western contexts where they perform poorly and encode biases that disadvantage local populations. Third, labor exploitation: the human labor required to train AI systems — content moderation, data labeling, RLHF annotation — is disproportionately sourced from low-wage workers in Kenya, the Philippines, Venezuela, and other developing nations.

This dynamic has been described by scholars as “digital colonialism” — a twenty-first century extraction model where the resource being extracted is not minerals or labor (though labor is extracted too), but data, attention, and the cognitive capital of populations that receive little benefit from the systems their data builds.

Some nations and movements are pushing back. The African Union’s AI strategy emphasizes data sovereignty. India’s data localization requirements attempt to keep Indian data within Indian borders. Indigenous data sovereignty movements in New Zealand, Canada, and Australia assert the rights of indigenous peoples to control data about their communities, cultures, and territories.

For full coverage, see: AI and Digital Colonialism: When Silicon Valley Becomes the New Empire.


The AI Arms Race: Dynamics and Risks

The structure of the global AI competition displays several characteristics of a classic arms race, and this should concern everyone.

Speed pressure. Each major player perceives that being second in AI capability is strategically unacceptable, creating pressure to develop and deploy AI systems faster than safety testing can keep pace. OpenAI’s rapid release cycle, China’s crash semiconductor program, and the Gulf states’ billion-dollar spending sprees all reflect this dynamic.

Security dilemma. Each nation’s AI investments, even those motivated by defensive or economic goals, are perceived as threatening by rivals, triggering counter-investments. The US develops military AI capabilities to maintain its edge; China sees this as threatening and accelerates its own programs; the US sees China’s acceleration as confirming the threat, and the cycle continues.

Dual-use ambiguity. AI is inherently dual-use. The same foundation model that writes poetry can generate propaganda. The same computer vision system that identifies cancer can identify military targets. This makes arms control agreements extraordinarily difficult to design and verify, because there is no bright line between civilian and military AI capability.

Proliferation risk. Unlike nuclear weapons, AI does not require rare physical materials or massive industrial infrastructure. The knowledge to build capable AI systems is widely distributed. Open-source models are freely available. A sufficiently funded non-state actor — a terrorist organization, a criminal syndicate, a rogue billionaire — could develop dangerous AI capabilities without any government’s permission or knowledge.

Governance vacuum. There is no international treaty governing AI weapons. The Convention on Certain Conventional Weapons (CCW) has held discussions on lethal autonomous weapons systems (LAWS) since 2014, but has produced no binding agreement. The Bletchley Declaration (November 2023) and the Seoul AI Summit (May 2024) produced statements of principles but no enforcement mechanisms. The UN Secretary General’s advisory body on AI has issued recommendations that remain aspirational.


The Power Map: Who Has What

| Capability | US | China | EU | Gulf States | India | Israel | Russia |
|---|---|---|---|---|---|---|---|
| Frontier model research | Dominant | Strong | Weak | None (buying) | Moderate | Moderate | Weak |
| Compute infrastructure | Dominant | Strong | Moderate | Building rapidly | Weak | Moderate | Weak (sanctions) |
| Chip design | Dominant | Developing | Weak | None | Weak | Moderate | Weak |
| Chip fabrication | Via Taiwan/allies | Developing | Via ASML | None | None | None | Weak |
| Data access | Strong | Dominant (population) | Moderate (GDPR-constrained) | Limited | Massive (population) | Limited | Moderate |
| AI talent pool | Dominant (via immigration) | Strong | Strong (but brain drain) | Weak (importing) | Strong (but exporting) | Strong | Moderate |
| Military AI deployment | Advanced | Advanced | Limited | Nascent | Moderate | Most advanced | Moderate |
| Regulatory influence | Strong | Strong (domestic) | Dominant (global template) | Weak | Growing | Weak | None |
| Capital availability | Dominant | Strong (state-directed) | Moderate | Dominant (SWFs) | Moderate | Strong (VC) | Weak |

What Comes Next

The AI geopolitics landscape is evolving faster than any previous technology competition. Several developments in 2026 and beyond will shape its trajectory:

EU AI Act enforcement begins in August 2026. For the first time, a major jurisdiction will actively enforce comprehensive AI regulation, including potential fines of up to 7% of global revenue. The impact on US and Chinese companies deploying AI in Europe will test whether regulation can function as a form of soft power. See: AI Regulation Tracker.

Gulf states’ infrastructure comes online. The data centers that Saudi Arabia, the UAE, and Qatar have committed to building will begin reaching operational capacity. Whether these become genuine AI capability centers or expensive white elephants will depend on whether the Gulf states can attract and retain the talent to use them.

Taiwan remains the fault line. Any Chinese military action against Taiwan — from blockade to invasion — would trigger the most severe AI supply chain disruption in history. The concentration of advanced chip fabrication in Taiwan is the single greatest vulnerability in the global AI infrastructure.

The open-source question. The proliferation of capable open-source models (Meta’s Llama series, Mistral, DeepSeek) is undermining the chokepoint logic that underlies US semiconductor export controls. If China or other adversaries can achieve near-frontier capability through open-source models running on less advanced hardware, the strategic value of chip restrictions diminishes.

HUMAIN’s trajectory. Saudi Arabia’s national AI company, backed by the largest sovereign wealth fund on Earth, represents the most ambitious attempt by a non-traditional AI power to buy its way to relevance. Its success or failure will determine whether AI capability can be purchased or must be grown. INHUMAIN.AI tracks every development: HUMAIN Watch.


INHUMAIN.AI Geopolitics Coverage

This overview is the entry point to INHUMAIN.AI’s geopolitics section. For deep analysis of specific dimensions of the global AI power struggle, see the dedicated articles linked throughout this overview.

Bookmark this page. The geopolitics of AI are moving faster than any other domain in international relations, and the decisions being made today will determine the power structures of the next century. Someone needs to watch. That is what INHUMAIN.AI does.