About INHUMAIN.AI
The independent watchdog for inhuman intelligence. We document, question, and hold accountable the entities building AI beyond human oversight or control.
Mission
As the world’s most powerful institutions race to build inhuman intelligence, someone must document what they build, question why they build it, and hold them accountable for the consequences.
That is what INHUMAIN.AI does.
We are the independent editorial platform tracking artificial intelligence safety, regulation, corporate power, and the erosion of human agency in the age of machine intelligence. Our primary subject is HUMAIN — Saudi Arabia’s national AI company, backed by the Public Investment Fund ($1.1 trillion AUM), chaired by Crown Prince Mohammed bin Salman, and deploying $23 billion in technology partnerships with NVIDIA, AMD, Cisco, xAI, Amazon, and Qualcomm.
But our scope extends far beyond one company. We cover every frontier AI lab, every regulatory framework, every geopolitical power play, and every ethical question that arises when intelligence stops being human.
Why We Exist
The name says it all. Inhumain is the exact French lexical opposite of humain. It means inhuman, inhumane. It is a dictionary word that cannot be suppressed, cannot be bought, and cannot be redirected. It is the permanent, indelible, linguistically perfect watchdog domain for the most powerful AI entity on Earth.
We exist because:
- $1.1 trillion in sovereign wealth is being deployed to build AI infrastructure with minimal public oversight
- Gigawatt-scale data centers are being constructed in desert environments by governments facing documented human rights concerns
- Agentic AI operating systems are being launched that claim to “understand human intent” and act autonomously
- Zero independent oversight exists for the most concentrated AI buildout in human history
- No one else is asking the questions that need to be asked
Editorial Principles
Independence. We accept no corporate sponsors. We take no sovereign wealth fund backing. We have no advertising relationships with AI companies we cover. Our analysis reflects our editorial judgment alone.
Accuracy. Every claim is sourced. Every data point is verified. Every quote is attributed. When we speculate, we say so. When we don’t know, we say that too.
Fairness. We are watchdogs, not attack dogs. We present evidence and let readers draw conclusions. When companies respond to our reporting, we publish their responses in full.
Transparency. Our funding model, our editorial team, and our methodology are public. We practice the transparency we demand from others.
Bilingualism. We publish in English and French, serving the francophone audience for AI safety and ethics coverage: roughly 300 million French speakers across the 29 countries where French is an official language.
What We Cover
AI Safety & Existential Risk
The alignment problem, interpretability, autonomous weapons, bioweapon risks, cybersecurity threats, and the catastrophe scenarios that keep AI safety researchers awake at night.
Global AI Regulation
Every country, every law, every deadline. The EU AI Act. The US federal and state patchwork. China’s layered approach. The UK’s pro-innovation framework. GCC self-governance experiments. We track compliance requirements so you don’t have to.
AI Industry Power Map
Who controls inhuman intelligence? Frontier labs, funders, chip makers, cloud providers, data holders, regulators. The complete map of the AI-industrial complex, with HUMAIN as the newest and best-funded entrant.
Sector-by-Sector AI Impact
From Wall Street to the factory floor. Which industries face disruption first, which jobs are disappearing, and what the inhuman economy looks like for the humans who live in it.
AI Ethics & Philosophy
The trolley problem goes digital. Can machines be conscious? Whose values should AI follow? The deepest philosophical questions of our era, made urgent by the pace of AI development.
Geopolitics of AI
Whoever controls inhuman intelligence controls the century. The US-China AI cold war. Europe’s sovereignty question. The Gulf states’ $100 billion bet. And the developing world’s struggle to avoid digital colonialism.
Advisory Board
We are building an advisory board of independent AI safety researchers, ethicists, legal scholars, and journalists. If you work in AI safety and want to contribute to independent oversight, contact us.
Contact
- General inquiries: contact@inhumain.ai
- Press and media: press@inhumain.ai
- Encrypted tip line: See our secure contact page for Signal contact details and PGP keys
- Partnership proposals: partnerships@inhumain.ai
INHUMAIN.AI is a publication of The Vanderbilt Portfolio AG.