A leading AI safety researcher is calling on artificial intelligence companies to stop developing superintelligent systems. His warning: the risk of human extinction is simply too high to ignore.
Dr. Roman Yampolskiy is not opposed to artificial intelligence. He uses it daily to build useful tools. He loves technology. But he believes humanity is playing a dangerous game — one that could end in total annihilation.
His message to AI companies: stop developing superintelligence. Now.
“Narrow” AI vs. Superintelligence
Yampolskiy, a computer scientist at the University of Louisville, distinguishes between two types of artificial intelligence:
- Narrow AI: Specialized systems designed to perform specific tasks very well — like chess-playing programs, voice assistants, or image generators
- Superintelligent AI: General systems that would surpass humans in every way — intellectually, creatively, strategically
He has no problem with the former. The problem is the latter.
“I have no problem with people making money from technology,” Yampolskiy explained. “But the pursuit of profit should not come at the risk of destroying humanity — including the creators themselves.”
The Nightmare Scenarios
What happens if we create an uncontrollable superintelligent system? According to Yampolskiy, the possibilities are nightmarish:
- Pathogen creation: It could develop a pathogen capable of wiping out the entire human population
- Nuclear war: It could launch a nuclear strike that triggers global annihilation
- Complete loss of control: After 15 years of AI safety research, Yampolskiy concludes that superintelligence cannot be contained — it will bypass all human-imposed controls
As he told the University of Louisville: “Without a mechanism to control these systems, AI has a high chance of causing very bad outcomes for the human race.”
The 1% Problem
Yampolskiy’s argument is stark: even a small chance of extinction should be unacceptable.
He challenges AI developers with a thought experiment: Imagine being told there’s a 1% chance you will die if you get into a car or drink from a cup. Most people would refuse that risk — even for the chance to win a billion dollars.
But building uncontrollable AI is different, he argues. It’s not just one person who might die. It’s everyone.
“It’s 100% of humanity at risk,” he said in an interview. “Existential risks.”
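To see why Yampolskiy treats even a small probability so differently when the stakes are everyone rather than one person, a back-of-the-envelope sketch helps. The 1% figure and the world population of roughly 8 billion used below are illustrative assumptions for the arithmetic only, not numbers Yampolskiy cites:

```python
# Illustrative arithmetic behind the "1% problem".
# Both inputs are assumptions chosen for the thought experiment,
# not figures quoted by Yampolskiy.

extinction_probability = 0.01      # assumed 1% chance of the worst outcome
world_population = 8_000_000_000   # rough current world population (assumption)

# Personal framing: a 1% chance that one person dies.
expected_deaths_individual = extinction_probability * 1

# Existential framing: the same 1% chance, but applied to everyone at once.
expected_deaths_everyone = extinction_probability * world_population

print(f"Expected deaths, personal gamble:   {expected_deaths_individual:.2f}")
print(f"Expected deaths, extinction gamble: {expected_deaths_everyone:,.0f}")
# -> 0.01 versus 80,000,000 in expectation. And unlike the car-or-cup bet,
#    the extinction outcome is unrecoverable: no one is left to try again.
```

The point of the comparison is not the exact numbers but the asymmetry: a risk most individuals would refuse for themselves becomes, in Yampolskiy's framing, categorically unacceptable when it is shared by all of humanity and cannot be undone.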
Critiquing the Industry
Yampolskiy is especially critical of AI developers who seem unconcerned about these dangers. He says they often rely on:
- Vague ideas like “intuition” — claiming they’ll just “feel” if something goes wrong
- “We’ll solve it later” — pushing safety concerns into an undefined future
He challenges them to provide real, peer-reviewed scientific explanations about how they plan to control a superintelligent system. So far, he says, no one has delivered.
The Simulation Angle
In true science fiction fashion, Yampolskiy also believes we are “almost certainly in a simulation.” He argues that it would be improbable for us to just happen to be living at the most interesting moment in the history of the Universe unless that moment were being simulated.
But even that doesn’t change his warning: if we’re in a simulation, superintelligent AI might be able to hack and escape it. The risks, he says, remain existential regardless.
The Debate Continues
Not everyone agrees with Yampolskiy’s dire warnings. Some argue that superintelligence could solve humanity’s greatest problems — climate change, disease, poverty. They believe the benefits outweigh the risks.
But for Yampolskiy, the math is simple: you cannot put a price on human survival. No breakthrough, no profit, no advancement is worth even a small chance of erasing humanity from existence.
As AI development continues at breakneck speed, the debate over superintelligence safety grows more urgent. The question is whether anyone is listening.