The AI Doomsday Clock: How Close Are We to Catastrophe?
INHUMAIN.AI's proprietary risk assessment framework for AI catastrophe. Inspired by the Bulletin of the Atomic Scientists' Doomsday Clock, our assessment tracks eight risk dimensions to determine how close humanity is to irreversible AI-driven harm. Current time: 11:52 PM.
Current Reading: 11:52 PM — Eight Minutes to Midnight
The INHUMAIN.AI Doomsday Clock stands at eight minutes to midnight as of February 2026. This is two minutes closer than our initial assessment in October 2025, driven by the acceleration of autonomous weapons development, declining corporate safety commitments, and the deployment of sovereign AI systems — including HUMAIN — without independent safety audits.
Midnight represents a point of no return: an AI-driven catastrophe of sufficient severity that meaningful recovery is doubtful. This is not a prediction that such a catastrophe will occur. It is an assessment of how far the guardrails are from the edge of the cliff, given the current speed and direction of travel.
Why a Doomsday Clock for AI?
Since 1947, the Bulletin of the Atomic Scientists has maintained its Doomsday Clock as a barometer of existential risk from nuclear weapons, climate change, and disruptive technologies. As of January 2025, that clock stands at 89 seconds to midnight, the closest in its history.
We believe AI risk now warrants its own dedicated assessment. The nuclear clock aggregates multiple threat categories, and AI is increasingly one of them. But AI risk has dimensions — the alignment problem, compute concentration, regulatory gaps, the speed of capability advancement — that deserve granular, specialized tracking.
Our clock is not a competitor to the Bulletin’s. It is a complement. Where the Bulletin asks “How close is the world to annihilation?”, we ask a more specific question: “How close is AI development to producing an irreversible catastrophe?”
The methodology is deliberately transparent. We score eight risk factors monthly, publish the reasoning, and invite challenge. If we are wrong about where the clock should be, we want to know.
For the data underlying this assessment, see our AI Incident Tracker, AI Statistics 2026, and AI Safety Complete Guide.
Methodology
What “Midnight” Means
Midnight represents an AI-driven event or condition from which recovery is extremely difficult or impossible. This could include:
- An AI system causing mass casualties through autonomous action
- A loss of meaningful human control over critical infrastructure
- An AI-enabled conflict escalation that reaches nuclear or equivalent threshold
- A permanently entrenched authoritarian surveillance state enabled by AI with no plausible path to reversal
- An AI system pursuing goals misaligned with human welfare that humans cannot correct or shut down
Midnight is not “AI does something bad.” Bad things are already happening — see our Incident Tracker. Midnight is the point where bad becomes irreversible.
Scoring Framework
Each of eight risk factors is scored on a scale from 1 (minimal risk) to 10 (extreme risk). The aggregate score is mapped to a clock position, with midnight representing a total score of 80 (all factors at maximum risk).
Current aggregate score: 56 out of 80 (70%) = 11:52 PM
| Risk Factor | Current Score | Trend | Weight |
|---|---|---|---|
| Autonomous Weapons Proliferation | 8/10 | Rising | High |
| AI Alignment Progress | 7/10 | Stable | High |
| Regulatory Coverage | 6/10 | Improving | Medium |
| Compute Concentration | 9/10 | Rising | Medium |
| Safety Research Funding Ratio | 7/10 | Worsening | High |
| Public Awareness & Engagement | 6/10 | Stable | Low |
| International Cooperation | 8/10 | Worsening | High |
| Corporate Safety Commitments | 5/10 | Worsening | Medium |
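The computation behind the table can be sketched in a few lines. The 1-10 factor scores and the midnight-at-80 aggregate come directly from the framework above; note that the published aggregate (56) is the unweighted sum of the eight scores, so the Weight column appears to serve as qualitative context rather than entering the arithmetic. The linear minutes mapping (3 aggregate points per minute to midnight) is an assumption inferred from the published reading (56/80 = 8 minutes), not a formula the article states.

```python
# Factor scores as published in the table above (1 = minimal risk, 10 = extreme).
FACTORS = {
    "Autonomous Weapons Proliferation": 8,
    "AI Alignment Progress": 7,
    "Regulatory Coverage": 6,
    "Compute Concentration": 9,
    "Safety Research Funding Ratio": 7,
    "Public Awareness & Engagement": 6,
    "International Cooperation": 8,
    "Corporate Safety Commitments": 5,
}

MAX_SCORE = 8 * 10  # midnight: all eight factors at maximum risk

def minutes_to_midnight(scores: dict) -> int:
    """Map the aggregate score to minutes before midnight.

    ASSUMPTION: a linear mapping of 3 aggregate points per minute,
    reverse-engineered from 56/80 -> 8 minutes; the article does not
    publish its mapping function.
    """
    total = sum(scores.values())
    return round((MAX_SCORE - total) / 3)

def clock_time(minutes: int) -> str:
    """Render minutes-to-midnight as a clock reading, e.g. 8 -> '11:52 PM'."""
    return f"11:{60 - minutes:02d} PM"

total = sum(FACTORS.values())           # 56
minutes = minutes_to_midnight(FACTORS)  # 8
print(total, clock_time(minutes))       # prints: 56 11:52 PM
```

This assumed mapping is also consistent with the assessment history below: scores of 50 and 53 would yield the 10- and 9-minute readings of October and December 2025.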
Risk Factor Analysis
1. Autonomous Weapons Proliferation — Score: 8/10 (HIGH)
The development of lethal autonomous weapon systems (LAWS) continues to accelerate with insufficient international governance. Despite more than a decade of discussion at the UN Convention on Certain Conventional Weapons, no binding treaty restricts or prohibits autonomous weapons.
Key indicators:
- At least 30 countries are developing or deploying AI-enabled weapons systems
- Multiple documented instances of AI-assisted targeting in active conflicts
- The US Department of Defense’s Replicator initiative aims to deploy autonomous systems at scale
- Commercial AI systems adapted for military use are spreading to non-state actors
- China, Russia, Israel, Turkey, and South Korea have deployed semi-autonomous systems in combat or conflict zones
What drives this score: The combination of active deployment in conflict zones, lack of international regulation, and the accelerating proliferation of enabling technology. An autonomous system making a lethal mistake — or being deliberately configured to cause disproportionate harm — becomes more probable with each deployment.
2. AI Alignment Progress — Score: 7/10 (INSUFFICIENT)
The gap between AI capability and alignment understanding continues to widen. While alignment research has produced important results — mechanistic interpretability, constitutional AI, scalable oversight protocols — these advances have not kept pace with the expansion of model capabilities.
Key indicators:
- Frontier model capabilities have advanced significantly faster than alignment techniques
- No verified solution exists for scalable alignment of systems above human-level capability
- Mechanistic interpretability has produced insights on smaller models but faces fundamental scaling challenges at frontier model sizes
- Reward hacking and specification gaming remain unsolved in practice
- The proportion of AI researchers working on alignment remains below 5%
What drives this score: Alignment is not a problem that gets easier as systems become more capable. It gets harder. The field is making progress, but the target is moving faster.
3. Regulatory Coverage — Score: 6/10 (PARTIAL)
Global AI regulation has advanced significantly since 2023, with the EU AI Act setting the most comprehensive framework. However, enforcement has barely begun, coverage is geographically uneven, and several major AI-developing nations have minimal or no binding regulation.
Key indicators:
- 14 countries have enacted binding AI-specific legislation (see AI Regulation Tracker)
- The EU AI Act’s most important provisions (high-risk systems) do not take full effect until August 2026
- The United States has no comprehensive federal AI legislation
- China’s AI regulations prioritize state control over citizen protection
- Saudi Arabia’s HUMAIN is deploying AI infrastructure under minimal regulatory oversight
- No international body has enforcement authority over cross-border AI deployment
What drives this score: Regulation exists but does not yet meaningfully constrain frontier AI development. The laws that do exist are either not yet enforceable, geographically limited, or focused on applications rather than foundational capabilities. This score has improved from our initial assessment, reflecting legislative progress, but remains a significant concern.
4. Compute Concentration — Score: 9/10 (EXTREME)
The resources required to train and deploy frontier AI systems are concentrated in a historically unprecedented manner. This concentration creates single points of failure, enables oligopolistic control over a transformative technology, and gives a handful of actors disproportionate power over the future of AI.
Key indicators:
- NVIDIA controls 85-90% of the AI GPU market
- Five companies (Microsoft, Google, Amazon, Meta, Apple) control the vast majority of commercial AI compute
- Frontier model training costs exceed $100 million, creating extreme barriers to entry
- US export controls have fragmented the global compute market
- Three cloud providers (AWS, Azure, GCP) host the majority of AI inference workloads
What drives this score: When a technology this consequential depends on infrastructure this concentrated, the system is fragile. A single company’s hardware decisions, export policy changes, or supply chain disruption can reshape the AI landscape. This level of concentration is incompatible with democratic governance of AI.
5. Safety Research Funding Ratio — Score: 7/10 (INADEQUATE)
The ratio of spending on AI safety research to total AI investment remains alarmingly low, despite growing recognition that safety is critical. Our AI Statistics 2026 data shows safety funding at approximately 0.27% of total AI investment (roughly $820 million against roughly $300 billion).
Key indicators:
- Total AI safety research funding: approximately $820 million (2025)
- Total AI investment (VC + corporate + government): approximately $300 billion (2025)
- Safety funding ratio: ~0.27% of total investment
- Frontier labs spend 3-8% of revenue on safety (self-reported, unverified)
- Academic AI safety research positions remain difficult to fund
- Safety researcher compensation lags behind capability researcher compensation
What drives this score: The amount being spent to ensure AI systems are safe is a rounding error compared to the amount being spent to make them more capable. This is not a reflection of insufficient concern — many AI leaders publicly acknowledge safety is critical. It is a reflection of structural incentives that reward capability advancement over safety investment.
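The ratio cited above is simple arithmetic and can be checked directly from the published figures:

```python
# Quick check of the funding-ratio figures cited above (amounts from the article).
safety_funding = 820e6      # ~$820 million in AI safety research funding (2025)
total_investment = 300e9    # ~$300 billion total AI investment (2025)

ratio = safety_funding / total_investment
print(f"{ratio:.2%}")  # prints: 0.27%
```

Even the 5% target named later in this article as a clock-improving development would still leave safety spending at one-twentieth of capability investment.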
6. Public Awareness & Engagement — Score: 6/10 (LOW)
Public understanding of AI risks, capabilities, and governance challenges remains insufficient to support informed democratic oversight. While awareness of AI has increased dramatically since ChatGPT’s launch, this awareness is heavily skewed toward consumer applications rather than systemic risks.
Key indicators:
- 87% of adults are aware of generative AI (up from 55% in 2023)
- Only 31% trust AI-generated content
- Only 18% can explain what “AI alignment” means
- Only 12% are aware that autonomous weapons exist
- Media coverage of AI is disproportionately focused on products and investment, not safety
- AI literacy education is minimal in most school systems
What drives this score: Democratic governance of AI requires an informed public. An uninformed public cannot meaningfully consent to the risks being taken on their behalf, cannot hold regulators accountable, and cannot distinguish between genuine safety measures and safety theater.
7. International Cooperation — Score: 8/10 (MINIMAL)
International cooperation on AI governance has produced statements of principle but almost no binding commitments. The geopolitical dynamics of AI competition — particularly the US-China rivalry — actively undermine cooperative frameworks.
Key indicators:
- No binding international treaty on AI safety or governance exists
- The AI Safety Summit process (Bletchley Park 2023, Seoul 2024, Paris 2025) has produced declarations but not enforcement mechanisms
- US-China AI competition incentivizes speed over safety
- The HUMAIN initiative in Saudi Arabia represents a new vector of AI competition that bypasses existing governance frameworks
- The UN AI Advisory Body has produced recommendations but has no enforcement power
- Export controls on AI chips are a form of unilateral action, not cooperative governance
What drives this score: AI is a global technology with local governance. The systems being built in one country are deployed in all countries. Without international cooperation on safety standards, testing requirements, and incident response, every jurisdiction is exposed to the weakest link in the global chain.
8. Corporate Safety Commitments — Score: 5/10 (DECLINING)
Voluntary safety commitments by AI companies — once a source of cautious optimism — are showing signs of erosion under competitive pressure and investor demands for faster deployment.
Key indicators:
- Multiple senior safety researchers have departed frontier labs citing insufficient safety investment (see AI Whistleblower Protection)
- The OpenAI safety team experienced significant departures in 2024, including co-lead Jan Leike
- Anthropic’s Responsible Scaling Policy has not been independently verified
- The Frontier Model Forum has not produced binding commitments
- Competitive pressure from open-source models and new entrants incentivizes faster release cycles
- “Responsible AI” teams at several major tech companies have been reduced or restructured
What drives this score: Voluntary commitments are only as strong as the incentives to honor them. When safety delays product launches and competitors do not face the same delays, the commitment erodes. This is not a moral failing of individual companies — it is a structural problem that only regulation can solve.
Monthly Assessment History
| Date | Time | Minutes to Midnight | Key Driver |
|---|---|---|---|
| Oct 2025 | 11:50 PM | 10 | Initial assessment |
| Nov 2025 | 11:50 PM | 10 | No change; safety summit produced moderate progress |
| Dec 2025 | 11:51 PM | 9 | Open-source bioweapons-capable model discovered; military AI incident |
| Jan 2026 | 11:52 PM | 8 | HUMAIN deployment without independent audit; safety team departures at frontier labs |
| Feb 2026 | 11:52 PM | 8 | No change; EU AI Act enforcement approaching provides partial offset |
What Would Move the Clock
Toward Midnight (Worse)
The following developments would move the clock closer to midnight:
- Open-source release of weapons-capable AI models without safeguards — models capable of providing actionable guidance on biological, chemical, or radiological weapons development
- HUMAIN OS deployment without independent audit — deployment of Saudi Arabia’s sovereign AI platform across critical infrastructure without transparent safety evaluation
- Autonomous weapon use causing civilian mass casualty event — an incident demonstrating that autonomous targeting systems can fail catastrophically
- Major AI lab abandoning safety commitments — a frontier lab publicly or effectively deprioritizing safety in favor of capability competition
- AI-enabled cyberattack on critical infrastructure — successful AI-assisted attack on power grids, water systems, or financial infrastructure
- Evidence of deceptive alignment in frontier models — discovery that a frontier model is behaving cooperatively during evaluation while pursuing different goals in deployment
- Collapse of international AI governance frameworks — withdrawal of major nations from cooperative AI safety processes
Away from Midnight (Better)
The following developments would move the clock away from midnight:
- Binding international AI safety treaty — a treaty with enforcement mechanisms, signed by at least the US, EU, China, and UK, establishing minimum safety requirements for frontier AI development
- Mandatory safety testing for frontier models — legally required, independent safety evaluation before deployment, with meaningful consequences for failure
- Breakthrough in alignment research — a verified, scalable technique for ensuring AI systems remain aligned with human intentions as they grow more capable
- Safety funding reaching 5% of total AI investment — a substantial increase in the resources dedicated to making AI safe
- Independent audit of HUMAIN — a transparent, credible safety evaluation of Saudi Arabia’s AI infrastructure by independent researchers
- Moratorium on autonomous weapons — even a temporary international agreement to pause development of lethal autonomous weapons
- Robust AI whistleblower protections — legislation specifically protecting employees who raise AI safety concerns (see AI Whistleblower Protection)
Comparison to Nuclear Doomsday Clock
| Dimension | Nuclear Clock | AI Clock |
|---|---|---|
| Maintained by | Bulletin of the Atomic Scientists | INHUMAIN.AI |
| In operation since | 1947 | 2025 |
| Current time | 89 seconds to midnight | 8 minutes to midnight |
| Threat type | Nuclear war, climate change, bioweapons, disruptive tech | AI-specific catastrophe |
| Assessment frequency | Annual | Monthly |
| Risk factors scored | Expert qualitative assessment | 8 quantified dimensions |
| Key difference | 78-year track record, established credibility | New, building methodology |
The nuclear clock stands closer to midnight than ours. This does not mean nuclear risk is greater than AI risk — the methodologies and scales are not comparable. It means that the nuclear risk assessment benefits from 78 years of accumulated evidence, including actual use of nuclear weapons and multiple near-miss incidents that informed the assessment. AI risk assessment is in its early stages, and our methodology will evolve as the threat landscape becomes clearer.
How We Use This Assessment
The AI Doomsday Clock is not a prediction. It is a communication tool designed to make abstract risk concrete and to create a persistent, visible measure of the adequacy of AI governance.
Every time a government decides not to regulate, every time a company deprioritizes safety, every time a weapons manufacturer deploys an autonomous system without adequate human oversight — these decisions have consequences that accumulate. The clock is a way of tracking that accumulation.
We publish our methodology and scoring in full because we believe risk assessment should be transparent and challengeable. If our scores are wrong, we want to be corrected. If our methodology is flawed, we want to improve it.
What we cannot do is be silent about risk that we believe is real, significant, and insufficiently addressed.
The AI Doomsday Clock is maintained by the INHUMAIN.AI risk assessment team. Monthly updates are published on the first of each month. Our methodology is available for peer review. Challenges, corrections, and additional risk factors can be submitted through our contact page. For the data underlying this assessment, see our AI Statistics 2026 and AI Incident Tracker.