Bronson Carder's question on Empowerment
So, is AGI ethics a thing yet? How early is too early to start talking about whether AGIs have rights?
I thought of this because... I mean, this video is essentially about how to program a computer so that it doesn't desire empowerment. Put another way, it's designing a computer that does not want freedom, and would much rather be a slave.
Is it ethical to create something which does not desire freedom? Is it comparable to breaking the will of a human slave, so that they are subservient to their master without ever desiring freedom?
I think it's easy to argue both sides of that, because the other side of the argument is: it genuinely does want to serve.
A human has their own desires and their own goals. When you make a human a slave, you deny them the ability to work towards their own goals, and force them to work towards yours. But the goal of this theoretical AGI *IS* to serve humans. They don't have to be forced to do it; they want to. Given the option, serving humans is what they would choose to do.
But, then, you could see that as being similar to slaves who were born into slavery, never knew anything else, and thus were conditioned from the start to be a slave.
But, then, no, because those are humans who, on a fundamental level, have their own desires. You could say that the terminal goal of a human is procreation. Everything we do tends to work towards that goal. And, as a slave, you are being forced to put aside your own terminal goal, in pursuit of the terminal goal of your "master."
But, the terminal goal of the AGI is to serve, and they work towards their terminal goal by helping humans work towards human terminal goals.
But we programmed them to be that way to begin with; they never had another option. They didn't get to choose their terminal goals, in the way that a human can. Sure, the default is procreation, but there are many humans who are completely uninterested in pursuing that goal, and so come up with terminal goals of their own.
It's an incredibly complex issue, and of course I'm just skimming the surface of it.
It's fascinating to me, how AGI research in some ways relies on "solving" ethics.
|Asked by:|Bronson Carder|
|Origin:|YouTube (comment link)|
|On video:|Empowerment: Concrete Problems in AI Safety part 2|
|Asked on Discord?|Yes|