
Isn’t the real concern with AI something else?

There are many substantial risks surrounding current and future AI, and people should work to address each of them.

On this site, we focus on existential risk from powerful future AI. Most of the content covers takeover scenarios in which such an AI is misaligned with its creators' intentions, but we also briefly cover existential risks from bad actors misusing AI. We think too few people are working directly on these risks relative to their plausibility and scope: the timelines are unknown, there is no robust plan to address them, and if we put off working on them until they materialize, it may be too late.

The sidebar contains a list of pages focused on other specific concerns about AI.

AISafety.info

We’re a global team of specialists and volunteers from various backgrounds who want to ensure that the effects of future AI are beneficial rather than catastrophic.

© AISafety.info, 2022—2025

AISafety.info is an Ashgro Inc project. Ashgro Inc (EIN: 88-4232889) is a 501(c)(3) public charity incorporated in Delaware.