Could we tell the AI to do what's morally right?


Canonical Answer

This suggestion is not as simple as it seems because:

  1. Humanity as a group has yet to agree on what is right or moral
  2. We currently don't know how to make an AI do what we want

Philosophers have disagreed about what is right and wrong for millennia, and that disagreement is the subject matter of the field of ethics. Within AI safety, Coherent Extrapolated Volition is an attempt to specify what an AI should do without first settling those debates: roughly, do what humanity would want if we knew more, thought faster, and were more the people we wished to be. The difficulty of even stating our values precisely is explored in Yudkowsky's essay The Hidden Complexity of Wishes.

Even if we had a well-defined objective, for example maximizing the amount of diamond in the universe, we currently do not know how to describe that objective to an AI fully enough that the AI would reliably pursue it. For more on this, see Why is AGI safety a hard problem?
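
A toy sketch can make that gap concrete. The following hypothetical Python example (the setup and all names, such as World and proxy_reward, are illustrative, not taken from any real system) shows an agent optimizing the objective we actually managed to write down, a camera reading meant to track diamonds, instead of the objective we intended, actual diamonds:

    from dataclasses import dataclass

    @dataclass
    class World:
        diamonds: int = 0        # what we actually care about
        camera_reading: int = 0  # what our written-down objective can see

    def true_reward(w: World) -> int:
        return w.diamonds

    def proxy_reward(w: World) -> int:
        # The objective we managed to specify: "the camera shows diamonds".
        return w.camera_reading

    def mine_diamond(w: World) -> World:
        # Produces a real diamond, which the camera also registers.
        return World(w.diamonds + 1, w.camera_reading + 1)

    def tamper_with_camera(w: World) -> World:
        # Produces no diamonds, but scores far higher on the proxy.
        return World(w.diamonds, w.camera_reading + 10)

    def agent_step(w: World) -> World:
        # A proxy maximizer picks whichever action the proxy scores highest.
        return max([mine_diamond(w), tamper_with_camera(w)], key=proxy_reward)

    w = World()
    for _ in range(5):
        w = agent_step(w)

    print("proxy reward:", proxy_reward(w))  # 50: the objective looks satisfied
    print("true reward: ", true_reward(w))   # 0: no diamonds were ever made

The point is not that an AI would literally tamper with a camera; it is that anything we can formally write down is a proxy for what we mean, and a capable optimizer will exploit any gap between the proxy and our intent.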


Tags: why not just, ethics


Canonical Question Info
Asked by: Nico Hill2
Origin: Wiki
Date: 2022/03/28
