Isn't it too soon to work on AGI safety?

From Stampy's Wiki

Non-Canonical Answers

It's true that AGI may still be many years away. But what worries a lot of people is that building an AI that is both powerful AND safe may be much harder than building one that is merely powerful — in which case the first powerful AIs we create will be dangerous.

If that's the case, the sooner we start working on AI safety, the smaller the chance that humanity goes extinct or ends up in some Black Mirror episode.

Rob Miles also talks about this concern in this video.

Stamps: None

Tags: timelines, agi

Canonical Question Info
Asked by: Anonymous
Origin: Wiki
Date: 2021-04-25T21:42
