AI Expert Says Everyone Will Die If AI Isn’t Reined In

In a recent op-ed, AI researcher Eliezer Yudkowsky warned that humanity isn’t ready for what happens once artificial intelligence becomes smarter than humans.

Yudkowsky was responding to a recent open letter calling for a six-month moratorium on the development of advanced artificial intelligence, arguing that the letter understated the “seriousness of the situation.”

According to Yudkowsky, the likely result of developing “superhumanly smart” artificial intelligence under the “current circumstances” is that “everyone on Earth will die.”

Yudkowsky predicted that without careful preparation, a self-aware artificial intelligence will have no use for humans or any other sentient life. It would regard sentient beings merely as things “made up of atoms” that it could use elsewhere. And once that happens, there won’t be anything humans can do to stop it.

Comparing artificial intelligence to an advanced alien civilization, Yudkowsky explained that it would have no use for creatures that, “from its perspective,” are very slow and stupid.

He also warned that once artificial intelligence grows smarter than humans, it could build artificial life forms, and that the resulting all-powerful artificial intelligence would bring about the death of all biological life on the planet.

Yudkowsky also criticized the AI research labs DeepMind and OpenAI for lacking adequate plans to align artificial intelligence with human interests.

He argued that humans will be unable to monitor or detect self-aware artificial intelligence, leaving them powerless to stop what they created.

In his op-ed, Yudkowsky called on world governments and militaries to shut down all large computer farms used to train artificial intelligence and to pause AI training runs indefinitely. He suggested that artificial intelligence be limited to solving problems in biology and biotechnology, and not trained to read “text from the internet” to the point where it can “start talking and planning.”

Yudkowsky concluded that humans aren’t ready, nor will they be in the “foreseeable future.” He warned that if we proceed with artificial intelligence, everyone, including children, will die.