Answer to What are the differences between “AI safety”, “AGI safety”, “AI alignment” and “AI existential safety”?

From Stampy's Wiki

AI alignment is the research field focused on developing the tools to align AIs with specific goals, such as human values. This becomes crucial as AIs grow highly capable, since a misaligned superintelligence could mean the end of human civilization.

AGI safety is the field working to ensure that when we build artificial general intelligences, they are safe and do not harm humanity. It overlaps strongly with AI alignment, since misalignment would be the main cause of unsafe behavior in AGIs, but it also covers misuse and other governance issues.

AI existential safety is a slightly broader term than AGI safety, covering risks from AI systems that pose an existential threat without necessarily being as generally capable as humans.

The term AI safety was originally used by the existential risk reduction movement for work on reducing the risks from misaligned superintelligence, but in recent years it has also been adopted by researchers and others studying nearer-term, less catastrophic risks from AI.


Original by: plex
