Objections and responses

Is smarter-than-human AI unrealistic?
Is AI alignment easy?
Flawed proposals for setting AI goals
Flawed proposals for controlling AI
Dealing with misaligned AGI after deployment
Other issues from AI
Morality
Objections to AI safety research
Miscellaneous arguments

You’ve heard about existential risks from AI, but you’re not convinced. Maybe you think that AGI is far off or downright impossible, that misaligned AGI wouldn’t pose a major threat, or that AI alignment is easy. Maybe you think worries about existential risk distract from AI’s more concrete harms, such as job loss and the amplification of bias, discrimination, and misinformation. Or maybe you just need arguments to convince your friends that this is serious. You’ve come to the right place!

The topic of existential risks from AI is complicated, and most simple objections to AI risk break down upon closer examination. The articles in this section discuss some of the most common reasons people are unsure about the risks, and why we think this problem deserves our attention nonetheless.

Keep reading

Start with the first entry in "Objections and responses": How can an AGI be smarter than all of humanity?

