How does the field of AI Safety want to accomplish its goal of preventing existential risk?


Non-Canonical Answers

Alignment - e.g. by developing technical methods to ensure AI systems reliably pursue the goals their designers intend.

Governance - e.g. by establishing best practices, institutions and processes, awareness, regulation, certification, etc.


Tags: existential risk, stub, institutions, government, regulation, governance

Question Info
Asked by: Shikah
Origin: Wiki
Date: 2022/04/03
