INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7

AI Prediction Scorecard: Tracking Who Got It Right

A rigorous accounting of public predictions by AI leaders, researchers, and commentators. Who predicted what, when, with what deadline — and whether they were right. Tracking AGI timelines, capability claims, market forecasts, and risk assessments with verifiable outcomes.

Predictions are easy to make and hard to track. The AI industry is saturated with bold claims about timelines, capabilities, and consequences — claims that shape policy, investment, and public perception. Yet remarkably few of these predictions are ever systematically evaluated against reality.

This scorecard changes that. We track specific, verifiable predictions made by AI leaders, researchers, and public intellectuals. We record what was predicted, when it was said, what deadline was given (explicit or implicit), and whether the prediction turned out to be correct. Where predictions remain unresolved, we track them until they can be evaluated.

The goal is not to humiliate people who get things wrong. Prediction is hard, and the honest acknowledgment of uncertainty is a virtue. The goal is to build a public record that distinguishes careful forecasters from habitual hype merchants, and to create accountability for claims that influence billions of dollars in investment and the lives of billions of people.

For related data, see our AI Statistics 2026, AI Doomsday Clock, and AI Safety Complete Guide.


Methodology

Inclusion Criteria

A prediction is included if it meets all of the following:

  • Made publicly (interview, publication, social media, testimony)
  • Attributable to a specific individual
  • Contains a verifiable claim about a future event or state
  • Includes an explicit or strongly implied deadline

We do not include vague aspirational statements (“AI will transform everything”), hedged probabilities without specific outcomes, or private communications.
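The four criteria above amount to a simple filter. The sketch below is illustrative only, not our production tooling; the `Candidate` record and its field names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    """A candidate prediction captured from a public source (hypothetical schema)."""
    predictor: str           # attributable to a specific individual
    claim: str               # the claim about a future event or state
    source_public: bool      # interview, publication, social media, or testimony
    deadline: Optional[str]  # explicit or strongly implied deadline, else None
    verifiable: bool         # editorial judgment: can the claim be checked?

def meets_inclusion_criteria(c: Candidate) -> bool:
    """A prediction is included only if all four criteria hold."""
    return (
        c.source_public
        and bool(c.predictor.strip())
        and c.verifiable
        and c.deadline is not None
    )

# A vague aspirational statement fails: no deadline, nothing to verify.
vague = Candidate("Unnamed CEO", "AI will transform everything",
                  source_public=True, deadline=None, verifiable=False)
print(meets_inclusion_criteria(vague))  # False
```

In practice the `verifiable` flag is the judgment call; the other three criteria are mechanical.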

Evaluation Standards

Status | Criteria
Correct | The predicted event or condition has occurred within the specified timeframe
Wrong | The deadline has passed and the predicted event has not occurred
Partially Correct | Some aspects of the prediction have materialized but key elements have not
Pending | The deadline has not yet passed
Unfalsifiable | The prediction is too vague to evaluate (noted but not scored)

Scoring Notes

We evaluate predictions charitably where reasonable. If someone predicted “AGI by 2025” and we are assessing in February 2026, we treat the prediction as wrong — but note if progress toward the predicted outcome was substantial. Conversely, for predictions that something would not happen, we distinguish errors of timing from errors of direction: a cautious underestimate is not penalized unless the prediction was clearly directionally wrong.
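The status definitions and the deadline rule above can be sketched as a small evaluation function. This is an illustrative sketch; the `evaluate` signature and its inputs are our own invention, not a published API:

```python
from datetime import date
from enum import Enum
from typing import Optional

class Status(Enum):
    CORRECT = "Correct"
    WRONG = "Wrong"
    PARTIALLY_CORRECT = "Partially Correct"
    PENDING = "Pending"
    UNFALSIFIABLE = "Unfalsifiable"

def evaluate(deadline: Optional[date], occurred: Optional[bool],
             partial: bool = False, today: Optional[date] = None) -> Status:
    """Map a prediction's state to a scorecard status.

    occurred=None means the claim is too vague to check; partial=True means
    some but not all elements of the prediction have materialized.
    """
    today = today or date.today()
    if occurred is None:
        return Status.UNFALSIFIABLE          # noted but not scored
    if occurred:
        return Status.PARTIALLY_CORRECT if partial else Status.CORRECT
    if deadline is not None and today <= deadline:
        return Status.PENDING                # deadline has not yet passed
    return Status.PARTIALLY_CORRECT if partial else Status.WRONG

# "AGI by 2025", assessed in February 2026: the deadline has passed.
print(evaluate(date(2025, 12, 31), occurred=False, today=date(2026, 2, 1)))
# Status.WRONG
```

The charitable notes about substantial progress live outside this function; they annotate a Wrong verdict rather than change it.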


AGI Timeline Predictions

The most consequential predictions in AI are about when — if ever — artificial general intelligence will be achieved. These predictions directly influence investment decisions, regulatory urgency, and public preparedness. See our glossary for the definition of AGI.

Predictor | Prediction | Date Made | Deadline | Status | Notes
Ray Kurzweil | Human-level AI by 2029 | 2005 | 2029 | Pending | Restated consistently for 20 years; Kurzweil defines this as AI passing a valid Turing test
Ray Kurzweil | Technological Singularity by 2045 | 2005 | 2045 | Pending | Recursive self-improvement leads to intelligence explosion
Elon Musk | AGI by 2025 | Dec 2023 | 2025 | Wrong | Musk defined AGI as “smarter than any single human”; no system achieved this by end of 2025
Elon Musk | AI will be smarter than any single human by end of 2025 | Apr 2024 | End 2025 | Wrong | No system demonstrated comprehensive superiority across all cognitive domains
Elon Musk | AGI by 2026, ASI by 2029 | Nov 2024 | 2026/2029 | Pending | Revised from earlier 2025 prediction
Sam Altman | AGI could be achieved by 2025 | Nov 2023 | 2025 | Wrong | Altman used equivocal language (“could be”) but the claim shaped market expectations
Sam Altman | AGI is “a few thousand days away” | Sep 2024 | ~2032 | Pending | Roughly 8-year timeline from statement
Dario Amodei | “Powerful AI” (near-AGI) by 2026-2027 | Oct 2024 | 2027 | Pending | Amodei avoids the term “AGI” but describes comparable capabilities
Demis Hassabis | AGI within 5-10 years | 2023 | 2028-2033 | Pending | Hassabis defines AGI conservatively; has updated his estimate multiple times
Demis Hassabis | AI could solve major scientific problems by 2030 | 2024 | 2030 | Pending | Partially validated by AlphaFold; full claim awaits broader demonstration
Yann LeCun | Current approaches will not achieve AGI | 2023 | Ongoing | Pending | LeCun argues autoregressive LLMs lack world models; resolution requires either an alternative approach to succeed or current approaches to achieve AGI
Yann LeCun | Human-level AI is decades away | 2023 | ~2043+ | Pending | Contrarian position relative to peers
Geoffrey Hinton | AGI could happen within 5-20 years | May 2023 | 2028-2043 | Pending | Wide range reflects genuine uncertainty; Hinton revised his timeline dramatically after leaving Google
Gary Marcus | LLMs will hit a capability wall without fundamental new approaches | 2022 | Ongoing | Partially Correct | Scaling has continued to produce gains, but diminishing returns on benchmarks and persistent reliability issues support parts of this thesis
Ben Goertzel | AGI by 2027 | 2023 | 2027 | Pending | Goertzel (SingularityNET) has a long history of optimistic AGI predictions
Shane Legg | 50% probability of AGI by 2028 | 2023 | 2028 | Pending | DeepMind co-founder; one of the earliest quantified AGI predictions

Capability Predictions

Predictor | Prediction | Date Made | Deadline | Status | Notes
Sam Altman | AI will be able to do “most cognitive jobs” a human can | 2023 | ~2028 | Pending | Implies broad cognitive capability replacement
Jensen Huang | AI will pass any human test within 5 years | Mar 2024 | 2029 | Pending | NVIDIA CEO; extremely broad claim
Satya Nadella | AI copilots will be standard in every knowledge worker’s workflow | 2023 | 2025 | Partially Correct | Copilots are widely available but adoption is uneven; many workers still do not use them regularly
Mark Zuckerberg | Meta will build general intelligence and open-source it | Jan 2024 | Unspecified | Pending | No timeline given; Llama models are open-weight but not AGI
Elon Musk | Tesla robotaxis with no steering wheel by 2024 | 2019 | 2024 | Wrong | Tesla launched supervised FSD but not fully autonomous robotaxis
Elon Musk | Optimus robot will be available for purchase for $20K-$25K | 2024 | ~2026 | Pending | Prototype demonstrated; no commercial sales
Sundar Pichai | AI will be more transformative than fire or electricity | 2023 | Long-term | Unfalsifiable | Cannot be evaluated on any reasonable timeline
Ilya Sutskever | LLMs may already be “slightly conscious” | 2022 | N/A | Unfalsifiable | No agreed-upon test for consciousness; included for its influence on discourse
Andrej Karpathy | AI will write 80%+ of code within 5 years | 2024 | 2029 | Pending | GitHub Copilot produces ~40-55% of code in some contexts, but the full prediction requires much broader adoption

Market & Industry Predictions

Predictor | Prediction | Date Made | Deadline | Status | Notes
Goldman Sachs | Generative AI will add $7 trillion to global GDP | Jun 2023 | ~2034 | Pending | 10-year projection; currently on pace for significantly lower impact
McKinsey | Generative AI will add $2.6-4.4 trillion annually | Jun 2023 | ~2030 | Pending | Annual value-add; current measurable impact is well below this range
IDC | Worldwide AI spending will reach $300B by 2026 | 2024 | 2026 | Pending | Current trajectory supports this; see AI Statistics 2026
Sequoia Capital | AI companies need $600B in annual revenue to justify infrastructure investment | Sep 2024 | ~2027 | Pending | The “AI’s $600B question”; current AI revenue is estimated at $100-150B
David Cahn (Sequoia) | AI infrastructure is in a bubble | 2024 | ~2026 | Pending | Comparison to dot-com and telecom bubbles
Cathie Wood (ARK) | AI will add $200 trillion to global GDP by 2030 | 2023 | 2030 | Pending | The most extreme market prediction tracked; most analysts consider this implausible
Sam Altman | OpenAI will reach $100B in revenue | 2024 | ~2029 | Pending | OpenAI revenue was ~$3.4B in 2024
Various | The AI bubble will burst by 2026 | 2023-2024 | 2026 | Pending | Multiple commentators; AI spending has continued to accelerate through early 2026

Risk & Safety Predictions

Predictor | Prediction | Date Made | Deadline | Status | Notes
Geoffrey Hinton | AI poses an existential threat within decades | May 2023 | ~2043 | Pending | Hinton’s departure from Google to warn about AI risk was a landmark event
Yoshua Bengio | Without regulation, AI will cause catastrophic harm | 2023 | ~2030 | Pending | Turing Award winner; increasingly vocal about risk
Stuart Russell | Autonomous weapons will be used in conflict within 5 years | 2019 | 2024 | Correct | AI-assisted targeting has been documented in multiple conflicts
Gary Marcus | A major AI-caused disaster will occur before AGI | 2023 | Before AGI | Partially Correct | Multiple serious incidents documented (see AI Incident Tracker), though no single “disaster” of the scale Marcus implies
Eliezer Yudkowsky | AI alignment is not being solved fast enough; likely doom | 2022 | N/A | Unfalsifiable | Yudkowsky’s extreme pessimism has been influential but cannot be scored without a specified timeline
Timnit Gebru | AI bias will cause systemic harm to marginalized communities | 2020 | Ongoing | Correct | Extensively documented; see AI Incident Tracker bias category
Max Tegmark | Without a pause, AI development will produce uncontrollable systems within 10 years | 2023 | 2033 | Pending | FLI founder; co-authored the 6-month pause letter
CAIS Statement signatories | AI poses an extinction-level risk | May 2023 | N/A | Unfalsifiable | One-sentence statement signed by hundreds of researchers; no timeline or specific mechanism

Regulation & Policy Predictions

Predictor | Prediction | Date Made | Deadline | Status | Notes
Various EU officials | EU AI Act will be fully enforceable by 2026 | 2023 | Aug 2026 | Pending | On track; prohibited-practices provisions active since Feb 2025
Tech industry lobbyists | EU AI Act will drive AI companies out of Europe | 2023 | 2026 | Wrong | No major AI company has left the EU market; compliance costs have been manageable
Multiple US lawmakers | Comprehensive federal AI legislation by 2025 | 2023 | 2025 | Wrong | No comprehensive federal AI law was enacted by end of 2025; only sector-specific measures
China analysts | China will develop comprehensive AI regulation by 2025 | 2023 | 2025 | Correct | China enacted multiple AI regulations covering deepfakes, generative AI, and algorithmic recommendations
INHUMAIN.AI | HUMAIN will deploy without independent safety audit | Oct 2025 | Feb 2026 | Correct | No independent audit has been published; see HUMAIN Tracker

INHUMAIN.AI Predictions

We believe accountability should apply to us as well. These are our own predictions, with explicit deadlines:

Prediction | Date Made | Deadline | Status
The EU AI Act’s first enforcement fine will exceed EUR 10 million | Feb 2026 | Dec 2026 | Pending
HUMAIN will deploy AI systems in Saudi critical infrastructure without publishing a safety assessment | Oct 2025 | Jun 2026 | Pending
At least one frontier lab will experience a significant safety researcher exodus (10+ departures) in 2026 | Feb 2026 | Dec 2026 | Pending
No binding international AI safety treaty will be signed in 2026 | Feb 2026 | Dec 2026 | Pending
AI-generated deepfake content will be used to attempt to influence at least 5 national elections in 2026 | Feb 2026 | Dec 2026 | Pending
Total documented AI incidents will exceed 1,200 by end of 2026 | Feb 2026 | Dec 2026 | Pending
US federal AI legislation will remain patchwork and sector-specific through 2026 | Feb 2026 | Dec 2026 | Pending
AGI (by any rigorous definition) will not be achieved in 2026 | Feb 2026 | Dec 2026 | Pending
AI safety funding will remain below 2% of total AI investment through 2026 | Feb 2026 | Dec 2026 | Pending
At least one autonomous weapons incident will cause civilian casualties and trigger international investigation in 2026 | Feb 2026 | Dec 2026 | Pending

We will evaluate these predictions publicly on December 31, 2026, and update this page accordingly. If we are wrong, we will say so clearly and analyze where our reasoning failed.


Predictor Track Records

Cumulative Accuracy (Evaluated Predictions Only)

Predictor | Correct | Wrong | Partially Correct | Pending | Accuracy
Stuart Russell | 1 | 0 | 0 | 2 | 100% (small sample)
Timnit Gebru | 1 | 0 | 0 | 0 | 100% (small sample)
China analysts (composite) | 1 | 0 | 0 | 1 | 100% (small sample)
INHUMAIN.AI | 1 | 0 | 0 | 10 | 100% (small sample)
Gary Marcus | 0 | 0 | 2 | 1 | N/A (no clear correct/wrong)
Satya Nadella | 0 | 0 | 1 | 0 | 50% (partial credit)
Elon Musk | 0 | 3 | 0 | 3 | 0% (evaluated only)
Sam Altman | 0 | 1 | 0 | 3 | 0% (evaluated only)
Tech industry lobbyists | 0 | 1 | 0 | 0 | 0%
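One plausible convention for the accuracy column, assuming the half credit implied by the Satya Nadella row (a Partially Correct outcome counts as 0.5), can be sketched as follows; the `accuracy` helper is hypothetical:

```python
def accuracy(correct: int, wrong: int, partial: int) -> str:
    """Accuracy over evaluated predictions only, with half credit for
    Partially Correct; Pending and Unfalsifiable entries are excluded."""
    evaluated = correct + wrong + partial
    if evaluated == 0:
        return "N/A"
    return f"{100 * (correct + 0.5 * partial) / evaluated:.0f}%"

print(accuracy(1, 0, 0))  # 100%  (e.g. Stuart Russell)
print(accuracy(0, 0, 1))  # 50%   (e.g. Satya Nadella, partial credit)
print(accuracy(0, 3, 0))  # 0%    (e.g. Elon Musk, evaluated only)
```

Because pending predictions are excluded from the denominator, small samples dominate the current table.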

Context: Most predictions are still pending because they concern events in the 2027-2035 range. Accuracy percentages on small samples should be interpreted cautiously. What the data does show is a pattern: those who predict rapid AGI timelines have consistently been wrong, while those who predict specific harms from current systems have been more accurate.


What This Tells Us

Three patterns emerge from the scorecard:

1. AGI timelines are consistently overestimated by those with financial or reputational incentives to do so. Elon Musk, Sam Altman, and other figures with significant investments in AI companies have repeatedly predicted AGI timelines that have not materialized. This does not mean AGI is impossible or distant — it means that the people making the loudest claims have the least reliable track records.

2. Harm predictions are consistently underestimated by the same people. The individuals who are most optimistic about AGI timelines are often the same individuals who are most dismissive of current AI harms. The data shows the opposite pattern: specific harms are materializing faster than predicted, while AGI remains elusive.

3. Uncertainty is the honest position. The predictors with the best track records are those who express genuine uncertainty, provide wide ranges, or focus on specific near-term developments rather than sweeping timeline claims. Geoffrey Hinton’s “5-20 years” range is more honest than Elon Musk’s “by next year.”

For how these predictions relate to our risk assessment, see the AI Doomsday Clock.


This scorecard is updated quarterly. New predictions are added as they are made publicly. Evaluation status is updated as deadlines pass. Corrections and additional documented predictions can be submitted through our contact page.