Aprillion's Answer to "Might trying to build a hedonium-maximizing AI be easier and more likely to work than trying for eudaimonia?"

Being able to prove that a potential AGI is aligned with any objective expressed in human language seems like an important stepping stone towards AGI alignment, even if that objective is as controversial as hedonium maximization, which most people find undesirable.

Since hedonium maximization might be easier to model mathematically than more complex (and more desirable) objectives, it might also be easier to optimize for. Developing optimization techniques with provable alignment for such a simple objective might then generalize, helping us optimize for the desirable objectives too.
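
As a toy sketch of why the hedonium objective might be simpler to formalize (every symbol below is an illustrative assumption, not an established model), a hedonium maximizer could optimize a single expected sum of one scalar hedonic-value function $h$ over states $s_t$ under a policy $\pi$:

$$U_{\text{hedonium}} = \max_\pi \; \mathbb{E}_\pi \Big[ \sum_t h(s_t) \Big]$$

whereas a eudaimonic objective would also need some aggregation $f$ over many terms that lack any agreed-upon formalization:

$$U_{\text{eudaimonia}} = \max_\pi \; \mathbb{E}_\pi \Big[ \sum_t f\big(h(s_t), \text{meaning}(s_t), \text{autonomy}(s_t), \dots\big) \Big]$$

On this sketch, a proof of alignment would only need to reason about $h$ in the first case, but about $f$ and every one of its components in the second.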
