INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7

The Inhumain Manifesto: Why We Exist

A ten-point declaration on the unchecked rise of inhuman intelligence — and the principles that must govern it before it governs us. The INHUMAIN.AI safety manifesto.

Preamble

We are not opposed to artificial intelligence. We are opposed to artificial intelligence without accountability.

The distinction matters. Every transformative technology in human history — nuclear fission, genetic engineering, industrial chemistry — arrived with a period of unchecked enthusiasm followed by catastrophic failure followed by regulation written in the aftermath of preventable harm. We do not have to repeat that cycle. We choose not to repeat it.

This manifesto is a statement of ten principles that we believe must govern the development, deployment, and oversight of artificial intelligence systems operating beyond the threshold of human comprehension. We call these systems inhuman intelligence — not because they are evil, but because they are, by definition, no longer human. They operate at speeds, scales, and levels of abstraction that no individual human mind can audit, correct, or fully understand.

That is not a reason to stop building them. It is a reason to start governing them.


I. Intelligence Without Conscience Is Inhuman

The capacity to process information is not the capacity to understand its consequences. A system that can generate a legal brief in four seconds can also generate disinformation at the same speed. A system that can diagnose cancer from a radiology scan can also be trained to identify dissidents from surveillance footage.

Intelligence is a tool. Conscience is a choice. When we build systems that possess the former without any architecture for the latter, we are building instruments of power that serve whoever holds the switch.

In 2023, researchers at Stanford demonstrated that large language models could be fine-tuned to produce convincing bioweapon synthesis instructions in under forty-eight hours using publicly available data. The models had no mechanism to evaluate whether they should answer the question — only whether they could. That gap between capability and conscience is the gap this manifesto exists to close.

Principle: Every AI system deployed at scale must include documented ethical constraint architectures — not as afterthoughts, but as core design requirements subject to independent audit.


II. Speed Without Oversight Is Inhuman

Autonomous trading algorithms execute millions of transactions per second. Automated content moderation systems process billions of posts per day. Military autonomous weapons systems are designed to identify and engage targets faster than any human chain of command can authorize.

Speed is the defining advantage of machine intelligence. It is also its most dangerous property. When systems operate faster than human oversight can function, oversight ceases to exist in any meaningful sense. It becomes a legal fiction — a checkbox on a compliance form that no one reads because the decisions have already been made.

On May 6, 2010, the Dow Jones Industrial Average lost nearly 1,000 points in minutes during the Flash Crash — a cascading failure driven by high-frequency trading algorithms interacting in ways their designers never anticipated. The market recovered within minutes, but the lesson did not require a longer timeframe to understand: systems that operate beyond the speed of human correction will eventually produce failures beyond the speed of human repair.

Principle: No AI system should operate in a domain where its decision cycle is faster than any available mechanism for human review, correction, or override — unless the consequences of that decision are fully reversible.


III. Scale Without Accountability Is Inhuman

When a single algorithm determines the creditworthiness of 200 million people, the error rate is no longer a statistical abstraction. It is a number of human lives. A 0.5% false denial rate across 200 million credit applications means one million people wrongly denied access to housing, education, and economic participation — with no human being to appeal to, no decision-maker to confront, and no institutional memory of the error.
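The arithmetic behind this claim is easy to check. A minimal sketch, using the figures from the paragraph above (the function name and example rates are illustrative only):

```python
# Illustrative arithmetic: even a small false-denial rate, applied at
# population scale, becomes a large absolute number of people harmed.

def wrongly_denied(applications: int, false_denial_rate: float) -> int:
    """Expected number of applicants wrongly denied at a given error rate."""
    return round(applications * false_denial_rate)

# The example from the text: a 0.5% false-denial rate across
# 200 million credit applications.
print(wrongly_denied(200_000_000, 0.005))  # → 1000000 (one million people)
```

The same function shows why "small" error rates are never small at scale: a rate an order of magnitude lower, 0.05%, still leaves 100,000 people wrongly denied.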

Scale is the multiplier that transforms minor flaws into systemic injustice. AI systems are being deployed at scales that no previous technology has matched, across populations that have no knowledge of the system’s existence, let alone its logic.

Australia’s Robodebt scandal provides a case study in algorithmic harm at scale. Between 2015 and 2019, an automated income-averaging system issued over 500,000 debt notices to welfare recipients, many of them incorrect. The system reversed the burden of proof, requiring citizens to demonstrate they did not owe money the algorithm claimed they did. A Royal Commission later found the scheme was unlawful from its inception — but only after hundreds of thousands of people had been subjected to financial distress, and multiple individuals had taken their own lives.

Principle: Any AI system making consequential decisions about individuals at population scale must maintain per-decision audit trails, provide accessible appeals mechanisms staffed by humans with override authority, and publish aggregate accuracy and error-rate data.
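The audit-trail requirement in this principle can be made concrete. Below is a minimal, hypothetical per-decision record sketched in Python; every field name is illustrative and refers to no real system or standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a per-decision audit record.
# Fields and structure are illustrative, not a reference implementation.
@dataclass(frozen=True)
class DecisionAuditRecord:
    decision_id: str         # stable identifier an appellant can cite
    model_version: str       # exact model or ruleset that produced the outcome
    inputs_digest: str       # hash of the inputs used, for later reproduction
    outcome: str             # e.g. "approved" / "denied"
    explanation: str         # human-readable basis for the outcome
    appealable: bool = True  # appeals must route to a human with override authority
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionAuditRecord(
    decision_id="d-000001",
    model_version="credit-scoring-v0 (hypothetical)",
    inputs_digest="sha256:0000",  # placeholder digest
    outcome="denied",
    explanation="debt-to-income ratio above threshold",
)
print(record.appealable)  # → True
```

The design point is that each record is immutable (`frozen=True`), individually addressable, and carries enough information to reproduce and contest the decision, which is what distinguishes an audit trail from a log file.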


IV. Power Without Transparency Is Inhuman

The most powerful AI systems on Earth are proprietary. Their training data is undisclosed. Their model weights are trade secrets. Their decision-making processes are described, when they are described at all, in marketing language designed to reassure rather than inform. The public is asked to trust systems it cannot inspect, built by companies it cannot regulate, trained on data it may have generated but never consented to share.

This is not innovation. This is the construction of private infrastructure for public governance without public consent.

When the Netherlands deployed an algorithm called SyRI (System Risk Indication) to detect welfare fraud, a Dutch court struck it down in 2020, ruling that the system violated the European Convention on Human Rights. The court’s reasoning was direct: the government could not adequately explain how the system worked, what data it used, or why it flagged specific individuals. Transparency was not a feature the system lacked. Transparency was the right the system violated.

Principle: AI systems operating in public-facing domains — criminal justice, healthcare, education, financial services, immigration, employment — must publish their training data provenance, model architecture summaries, and decision-logic documentation in formats accessible to independent auditors and the affected public.


V. Profit Without Responsibility Is Inhuman

The AI industry operates under an economic model that privatizes capability and socializes risk. Companies capture the value of AI-generated productivity gains while externalizing the costs of displacement, misinformation, environmental degradation, and democratic erosion onto the public.

Data centers serving the global AI industry drew an estimated 4.3 gigawatts of power in 2024, a figure projected to double by 2027. The water consumption required to cool these facilities in arid regions represents a direct transfer of scarce natural resources from public use to private computation. When HUMAIN plans gigawatt-scale data centers in the Arabian Peninsula, it is making a claim on water and energy resources in one of the most water-stressed regions on Earth.

Meanwhile, the economic displacement driven by AI automation is projected to affect 300 million jobs globally by the end of the decade, according to Goldman Sachs research. The companies building these systems have no legal obligation to mitigate the displacement they cause, fund retraining programs, or contribute to the social safety nets that will absorb the impact.

Principle: AI companies operating above defined revenue and deployment thresholds must fund displacement mitigation proportional to their market impact, disclose environmental resource consumption, and submit to mandatory social-impact assessments before deploying automation systems in new sectors.


VI. Autonomy Without Consent Is Inhuman

Agentic AI systems — those designed to take actions in the world without per-action human authorization — represent a qualitative shift in the relationship between humans and machines. When an AI agent books a flight, it is a convenience. When an AI agent negotiates a contract, it is a delegation of authority. When an AI agent makes medical triage decisions, it is a transfer of moral responsibility to a system that cannot bear it.

HUMAIN launched its Agentic AI Operating System in early 2025, describing it as a platform that “understands human intent and acts accordingly.” The language is revealing. Understanding intent and acting accordingly is precisely the domain of human agency. When machines claim that function, the question is not whether they perform it well. The question is who authorized the transfer.

Principle: No AI system should take consequential autonomous action — action that creates legal obligations, alters rights, allocates resources, or affects physical safety — without prior, specific, informed, and revocable consent from the individuals affected.


VII. Concentration Without Constraint Is Inhuman

The AI industry is consolidating at a pace that dwarfs every previous technology concentration in history. Three cloud providers control over 65% of global cloud infrastructure. A single company — NVIDIA — supplies over 80% of the GPUs used to train frontier AI models. Sovereign wealth funds managing trillions of dollars are forming exclusive partnerships with a small number of frontier labs, creating vertically integrated AI supply chains that span hardware, data, compute, and deployment.

Saudi Arabia’s HUMAIN represents the most concentrated AI buildout ever attempted: $23 billion in announced partnerships, backed by a $1.1 trillion sovereign wealth fund, chaired by a head of state, with exclusive technology agreements spanning NVIDIA, AMD, Cisco, xAI, Amazon, and Qualcomm. This is not a market. It is a strategic asset controlled by a single decision-making authority.

Principle: AI infrastructure concentration must be subject to the same antitrust scrutiny applied to telecommunications, energy, and financial services — including structural separation requirements, interoperability mandates, and limits on vertical integration across the AI supply chain.


VIII. Surveillance Without Boundaries Is Inhuman

Every AI system requires data. The most powerful AI systems require the most data. The economic logic is inescapable: the incentive to collect, retain, and analyze human behavioral data scales directly with the capability of the systems that consume it. AI does not merely use surveillance infrastructure. AI is the economic justification for surveillance infrastructure.

China’s social credit system, which uses AI-powered facial recognition and behavioral analysis to assign trustworthiness scores to citizens, is often cited as the cautionary example. But the architecture of continuous surveillance is not unique to authoritarian states. Clearview AI scraped billions of facial images from public social media without consent and sold the resulting identification system to law enforcement agencies across democratic nations. The technology does not distinguish between authoritarian and democratic applications. Only governance does.

Principle: AI systems must not be used for mass biometric surveillance without judicial authorization, for retroactive behavioral profiling without individual consent, or to construct predictive behavioral models of entire populations rather than of specific individuals under judicially authorized investigation.


IX. Development Without Inclusion Is Inhuman

The people building AI systems do not represent the people affected by them. Frontier AI development is concentrated in a small number of institutions in the United States, China, the United Kingdom, and — increasingly — the Gulf states. The populations most affected by AI-driven automation, surveillance, and resource extraction are overwhelmingly located in the Global South, where they have no seat at the design table, no voice in governance frameworks, and no access to the economic gains.

When AI-powered hiring systems were found to systematically disadvantage women — as reported in 2018, when Amazon abandoned a recruiting tool that penalized resumes containing the word “women’s” — the failure was not merely technical. It was structural. The teams that built the system did not include sufficient representation of the populations the system would evaluate.

Principle: AI governance frameworks must include mandatory representation from affected populations, particularly communities in the Global South, labor organizations, disability advocates, and civil society groups — not as consultants, but as voting participants in standards-setting bodies.


X. Progress Without Memory Is Inhuman

Every previous industrial revolution produced a retrospective literature of regret. Factory owners who acknowledged, decades later, that child labor was not a necessary feature of industrialization. Chemical companies that conceded, after generations of litigation, that dumping waste in rivers was not an acceptable cost of production. Tobacco executives who admitted, under oath, that they knew their product was lethal.

We do not have to wait for the retrospective. The AI industry is producing its harms in real time, at global scale, with full documentation. The question is not whether future generations will judge this moment. The question is whether this generation will act before the judgment is written.

The development of AI is not a force of nature. It is a series of human decisions made by identifiable people at identifiable institutions for identifiable reasons. Those decisions can be made differently. They must be made differently.

Principle: AI development institutions must maintain and publish institutional impact records — longitudinal documentation of deployment decisions, known harms, mitigation measures taken, and outcomes measured — so that the historical record of this technological transition is written by evidence, not by the marketing departments of the companies that built it.


A Final Word

This manifesto is not a call to stop building artificial intelligence. It is a call to stop building it in the dark.

The technology is extraordinary. The speed of progress is unprecedented. The potential for human benefit is real and significant. But potential is not destiny. The same systems that could cure diseases, reverse climate change, and expand human knowledge could also entrench autocracy, eliminate economic agency, and render democratic governance obsolete.

The difference is governance. The difference is oversight. The difference is whether the people building inhuman intelligence are accountable to the people who must live with it.

We believe they must be. That is why we exist.


This manifesto is a living document. It will be updated as the technology evolves, as new evidence emerges, and as the global conversation about AI governance matures. We invite researchers, policymakers, journalists, and citizens to cite, share, and build upon these principles.

To discuss these principles or propose amendments, contact us at manifesto@inhumain.ai.


Suggested citation: INHUMAIN.AI Editorial, “The Inhumain Manifesto: Why We Exist,” INHUMAIN.AI, February 26, 2026. https://inhumain.ai/manifesto/