Many of the people with the deepest understanding of AI are highly concerned about the risks of unaligned superintelligence.

When Google acquired the world-leading artificial intelligence startup DeepMind in 2014, DeepMind made it a condition of the acquisition that Google set up an AI ethics board. DeepMind cofounder Shane Legg has said that he believes superintelligent AI will be “something approaching absolute power” and “the number one risk for this century”.

Stuart Russell, Professor of Computer Science at UC Berkeley, co-author of the standard AI textbook, and a world-renowned AI expert, warns of “species-ending problems” and wants his field to make superintelligence-related risks a central concern. His book Human Compatible focuses on the dangers of artificial intelligence and the need for more work to address them.

Geoffrey Hinton, one of the “Godfathers of Deep Learning”, resigned from Google in 2023 so that he could speak freely about the dangers of advancing AI capabilities. He worries that smarter-than-human intelligence is no longer far off, and has said that AI wiping out humanity is “not inconceivable”.

Many other science and technology leaders agree. The late astrophysicist Stephen Hawking said in 2014 that superintelligence “could spell the end of the human race.” Bill Gates described himself in 2015 as “in the camp that is concerned about superintelligence…I don’t understand why some people are not concerned”. Oxford professor Nick Bostrom, who has been studying AI risks for over 20 years, has said: “Superintelligence is a challenge for which we are not ready now and will not be ready for a long time.”

Holden Karnofsky, the CEO of Open Philanthropy, has written a carefully reasoned account of why transformative artificial intelligence could make this the most important century.