Aprillion's Answer to: Might trying to build a hedonium-maximizing AI be easier and more likely to work than trying for eudaimonia?

From Stampy's Wiki


Being able to prove that a potential AGI is aligned to any objective expressed in human language would be an important stepping stone toward AGI alignment, even for a controversial objective like hedonium maximization, which most people find undesirable.

Since hedonium maximization might be easier to model mathematically than more complex (and more desirable) objectives, it might also be easier to optimize for. Developing optimization techniques with provable alignment for such a simple objective might then generalize, helping us optimize for desirable objectives as well.
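As a toy illustration of why a hedonium-style objective might be simpler to formalize, compare a single-scalar utility with a multi-attribute one. All function names, components, and weights below are hypothetical placeholders for the sake of the sketch, not a real proposal for either objective:

```python
# Toy sketch: a hedonium-style objective can be expressed as one scalar
# to maximize, while a eudaimonia-style objective needs many components
# whose definitions and relative weights are contested.

def hedonium_utility(pleasure_units: float) -> float:
    # A single measurable quantity (hypothetical unit of pleasure).
    return pleasure_units

def eudaimonia_utility(pleasure: float, autonomy: float,
                       meaning: float, relationships: float) -> float:
    # Several hard-to-specify components; the equal weights here are
    # arbitrary placeholders, and real human values may not even be
    # decomposable this way.
    weights = {"pleasure": 0.25, "autonomy": 0.25,
               "meaning": 0.25, "relationships": 0.25}
    return (weights["pleasure"] * pleasure
            + weights["autonomy"] * autonomy
            + weights["meaning"] * meaning
            + weights["relationships"] * relationships)

print(hedonium_utility(10.0))                   # 10.0
print(eudaimonia_utility(10.0, 5.0, 2.0, 8.0))  # 6.25
```

The point of the sketch is only that proving properties about the first function is plausibly easier than about the second, since the second embeds many disputed modeling choices.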