Answer to What is artificial general intelligence safety / AI alignment?


AI alignment is a field focused on ensuring that the goals of future superintelligent artificial systems align with human values, so that they behave in ways compatible with our survival and flourishing. This may be an extremely hard problem, especially with deep learning, and is likely to determine the outcome of the most important century. Alignment research is strongly interdisciplinary, drawing on computer science, mathematics, neuroscience, philosophy, and the social sciences.

AGI safety is a related concept that strongly overlaps with AI alignment. AGI safety is concerned with making sure that building AGI systems doesn't cause things to go badly wrong, and the main way things can go badly wrong is through misalignment. AGI safety also includes policy work aimed at preventing the building of dangerous AGI systems, or at reducing misuse risks from AGI systems aligned to actors who don't have humanity's best interests in mind.



Canonical Answer Info
Original by: LVCɅ (edits by plex, LVCɅ, ccstan99, RoseMcClelland)
