|Main Question: What about AI concerns other than existential safety?|
There are many things in the world that need our attention, so a common response to the suggestion that we should focus on AI existential safety is "But what about working on <x> instead?". This tag collects responses to common values of <x>, both within AI and in other cause areas.
"The real concern" isn't a particularly meaningful concept here. Deep learning has proven to be a very powerful technology, with far reaching implications across a number of aspects of human existence. There are significant benefits to be found if we manage the technology properly, but that management means addressing a broad range of concerns, one of which is the alignment problem.
Unanswered non-canonical questions
Shouldn't we work on things other than AI alignment, like climate change, global pandemics, factory farming, or social inequality?