Intro to AI safety
AI might soon become smarter than humans — and that may cause human extinction.
Wait, really?
Yes. In recent years, AI has exceeded many people’s expectations in a wide variety of domains. Current AI systems can compose human-like text, play Go, write code, and model protein folding. It may not be long until we create AI systems that are as capable as humans at carrying out most intellectual tasks.
Eventually — perhaps soon after — AI could surpass human intelligence altogether, including in science, technology, economic competition, and strategic planning.
Such advanced AI could provide great benefits, but if designed unsafely or deployed recklessly, it could also cause unprecedented disasters — even human extinction.
Making sure that future, highly-advanced AI is beneficial to humanity will involve technical, political, and social challenges. These challenges are the focus of AI safety, a research field about how to prevent risks from advanced artificial intelligence, including existential risk: a risk of human extinction or the destruction of humanity's long-term potential.
If you're new to AI safety, the links below and in the sidebar can give you a good overview of the field.