Eliezer Yudkowsky, a decision theorist and AI researcher, has called for an indefinite global halt to advanced AI development, warning that superhuman AI poses an existential threat.
Writing in TIME, Yudkowsky criticized a recent open letter calling for a six-month pause on training AI systems more powerful than GPT-4, arguing that it "understates the seriousness" of the crisis.
"The likely result of building a superhumanly smart AI… is that literally everyone on Earth will die," he warned, stating that current safety measures are vastly inadequate.
Yudkowsky argues that humanity is unprepared to manage or align AI intelligence beyond human capabilities, and systems may become uncontrollable and indifferent to human survival. “The AI does not love you, nor does it hate you, and you are made of atoms it can use for something else,” he stated.
He advocates shutting down all large-scale AI training runs, tracking GPUs globally, and treating the risk of extinction from AI as a higher priority than nuclear conflict.
"If we go ahead… everyone will die," he concluded, emphasizing that policy change is essential to the survival of humanity, including future generations.