What convinced people working on AI alignment that it was worth spending their time on this cause?


I'm not yet convinced that AI risk is realistic on its own, that is, that AI will actually work well enough for the risk to become serious. I guess the most interesting way to be convinced is simply to read what convinced other people; that's what I ask most people I meet about any subject I have doubts about.


Tags: outreach
Canonical Question Info
Asked by: Arthur
Origin: Wiki
Date: 2021-09-12
