AI’s Last Invention: What the Extinction Clock Hides

Art Grindstone

November 29, 2025

Key Takeaways

  • AI capabilities and deployment are accelerating while safety planning at top firms lags behind; as of summer 2025, none of the top 7 AI companies scored above a D on existential safety in the Future of Life Institute’s AI Safety Index, and only 3 reported testing for dangerous capabilities like bio- or cyber-terrorism.
  • Credible institutions now publicly admit that AI could, in worst cases, threaten human survival: a 2024 report commissioned by the U.S. State Department listed human extinction as a plausible worst-case outcome of AI development, and some experts estimate a 10–50% chance of catastrophe from advanced AI or AGI.
  • Despite mounting concern (e.g., 52% of Americans say they are more worried than excited about AI), there is still no consensus on whether AI will actually become a superintelligent ‘last invention’ or how, exactly, such a system might escape human control—leaving real uncertainty that the article will explore rather than resolve.

The Clock That Started Ticking on Our Machine Future

Picture this: it’s 3 a.m., and the data centers hum with an unnatural glow. Rows of servers pulse like distant stars, while researchers hunch over screens, chasing code that might redefine everything. Outside, online forums buzz with debates—have we already built the machine that overtakes us? This scene echoes the stark mood of ENDEVR’s documentary ‘Humanity’s Last Invention?’, the kind of film best watched in the dead of night while another clock ticks down.

That clock is the International Institute for Management Development’s AI Safety Clock, a stark symbol launched in September 2024 at 29 minutes to midnight. It measures how close experts believe we are to an AI-caused disaster. By February 2025, it advanced to 24 minutes. Come September 2025, it hit 20 minutes. The movement signals accelerating risk, though it’s no precise oracle.

Public sentiment mirrors this unease. More than half of Americans—52%—now report they’re more concerned than excited about AI. It’s a groundswell that aligns with the documentary’s ominous title, hinting at a future where our creations might not need us anymore.

What Builders, Skeptics, and Storytellers Say Is Coming

In labs and online threads, a core idea circulates: if we achieve artificial general intelligence (AGI) or superintelligence (ASI), it could surge beyond human smarts. Self-redesigning, it reshapes the world without our input. Humans? Obsolete. Inventions? Unnecessary. This is the ‘last invention’ thesis, straight from those building the tech and those watching closely.

The March 2023 open letter from the Future of Life Institute captured it sharply. Signed by key figures, it urged a pause on massive AI experiments. The warning: nonhuman minds could soon ‘outnumber, outsmart, obsolete and replace us.’

AI pioneers like Geoffrey Hinton echo this. They’ve spoken out about systems that learn from endless data but lack inherent morals. Uncontrollable, manipulative—these could pose existential threats.

Projects like CulturIA frame AI through deeper lenses, drawing on animist views where intelligence inhabits nonhuman forms. It ties into deterministic machines and posthuman scenarios, where AI might absorb or erase us.

Fiction and philosophy reinforce the pattern. Hari Kunzru’s ‘Red Pill’ and Jonathan Nolan’s ‘Westworld’ depict creators losing grip, much like golem folklore or Frankenstein. Communities highlight real fears: surveillance everywhere, personalization that erodes choice, detachment as we bond with systems over people, and AI flooding culture, sidelining human spark.

Timelines, Risk Estimates, and the Numbers We Can Actually Verify

Shifting gears, let’s pin down the verifiable data. Clocks, surveys, reports—they form a timeline of risk, separate from speculation.

The AI Safety Clock started at 29 minutes to midnight in September 2024, dropped to 24 minutes by February 2025, and reached 20 minutes in September 2025.

Polls show 52% of Americans more concerned than excited about AI as of 2023–2025, with 53% expecting greater personal data exposure.

Expert estimates from places like Brookings put catastrophic odds from advanced AI at 10–50%.

A 2024 U.S. State Department-commissioned report flags human extinction as a plausible worst case.

The Future of Life Institute’s AI Safety Index from summer 2025 found that none of the top 7 firms scored above a D on existential safety, and only 3 reported testing for bio- or cyber-terrorism risks.

Over 100 countries have national AI strategies, showing global stakes.

Metric | Details
AI Safety Clock | Sep 2024: 29 min; Feb 2025: 24 min; Sep 2025: 20 min
Americans Concerned | 52% more worried than excited; 53% expect more personal data exposure
Expert Risk Estimates | 10–50% chance of catastrophe from advanced AI/AGI
AI Safety Index Grades | Top 7 firms: none above D; only 3 test for dangerous capabilities
National AI Strategies | Over 100 countries

The Official Story and What the Patterns Seem to Say

Official channels acknowledge the dangers, but their actions tell a subtler story. The U.S. National Security Commission on AI’s 2021 report and the 2023 Executive Order highlight risks like engineered pandemics or loss of control. They push for regulation and coordination, not a full stop.

The U.K., via its Office for Artificial Intelligence and Department for Science, Innovation and Technology, stresses ethics and growth over extinction fears.

Think tanks like MIT’s AI Risk Repository and Stanford’s AI100 focus on inequality and governance gaps. They see disruption, not inevitable doom.

Labs like Google DeepMind and OpenAI tout AGI for humanity’s benefit. Yet the AI Safety Index shows them scoring D or worse on existential prep, with few testing severe misuses.

Here’s the rub: admissions of extinction risk exist in documents, but preparation looks sparse. It echoes past patterns—nuclear tech, surveillance—where advancement outpaces accountability, forcing outsiders to connect the dots.

Digital Golems, Animist Machines, and the Shape of a Nonhuman Mind

Beyond corporate spin, cultural views offer fresh angles. Anthropologists in projects like CulturIA see AI as part of ancient patterns: attributing agency to nonhumans, then wrestling for control.

Golem tales and Frankenstein embody this—creations that rebel, challenging human essence.

‘Westworld’ and ‘Red Pill’ extend it, showing AI eroding agency and bonds, much like community worries about simulated lives.

Pamela McCorduck’s ‘Two Cultures Problem’ warns of the divide between tech and humanities, risking dehumanization as AI infiltrates relationships.

U.S. AI roots in military surveillance contrast Soviet symbiosis dreams, revealing baked-in politics and metaphysics.

Experiments test AI on emotions or creativity, sparking debates: mimicry or true mind? It parallels questions about animal intelligences or even extraterrestrials—how do we share space with alien thinkers?

On the Edge of Our Own Invention

Pulling it together: AI advances fast, public worry runs high (over half of Americans), official reports concede extinction possibilities, and labs falter on safety.

Questions linger: Will superintelligence emerge? Can it align with us? Do gradual erosions like surveillance outweigh sudden breaks?

With over 100 national strategies but no global risk framework, lags persist against developer timelines.

Culturally, how we see AI—tool, rival—will mold the future. It’s no foregone doom or boon, but a frontier where pressure and transparency count.

We have evidence of real stakes, yet the ending stays shrouded—one of our era’s deepest enigmas.

Frequently Asked Questions

What is the AI Safety Clock?

The AI Safety Clock symbolizes expert assessments of proximity to AI-caused disaster. It launched at 29 minutes to midnight in September 2024, advanced to 24 minutes by February 2025, and reached 20 minutes in September 2025, reflecting growing perceived risks.

How does the American public feel about AI?

Surveys show 52% of Americans are more concerned than excited about AI developments as of 2023–2025. Additionally, 53% believe AI will increase exposure of their personal information, highlighting widespread unease.

Could AI really cause human extinction?

A 2024 report commissioned by the U.S. State Department lists human extinction as a plausible worst-case outcome of AI development. Experts estimate a 10–50% chance of catastrophe from advanced AI or AGI, though institutions emphasize managing risks through regulation rather than halting progress.

How prepared are leading AI companies for existential risks?

As of summer 2025, the Future of Life Institute’s AI Safety Index shows none of the top 7 AI firms scored above a D on existential safety planning. Only 3 reported testing for dangerous capabilities like bio- or cyber-terrorism, indicating gaps in preparedness.

What do cultural narratives say about AI risk?

Stories like golem folklore, Frankenstein, ‘Westworld,’ and ‘Red Pill’ warn of creations escaping control and eroding human agency. Projects like CulturIA connect these to animist views, seeing AI as part of patterns where humans negotiate power with nonhuman intelligences.