Chris's question on The Orthogonality Thesis


Why can't we make it such that its terminal goal is to help humanity as a whole, based on human morality? If it can reason and understand, then it could make a very good superhuman judgement about what that means, and so it would help us in the ways we want it to. Would it need hard statements like "make people happy"? Would it not be able to figure out in which way we would want that, instead of just using drugs on us? How is it intelligent if it can't figure out what we would like, and that it should stop when it's causing problems? I understand that the super-AI wouldn't come to our morality on its own, but if its terminal goal is to do as we would like it to do, then why wouldn't it find good ways to do that?


Tags: None
Question Info
Asked by: Chris
Origin: YouTube (comment link)
On video: Intelligence and Stupidity: The Orthogonality Thesis
Date: 2021-05-25T11:28
Asked on Discord? No


Discussion