Why can't we just make a "child AI" and raise it?
One proposal for giving an AI human values is to start with a child-like AI and raise it the way we would raise a human child. Making that strategy work, however, is harder than it sounds.
Children can learn English or French or Chinese, but just as it isn’t in a cat’s nature to learn to speak English, it isn’t in a child’s nature to learn to speak binary. In much the same way, it won’t automatically be in the nature of an AI that we build to learn values the way a human child would.
The modules that do this work in humans are fragile, poorly understood structures shaped by evolutionary necessity. Even in an otherwise functional brain, a few changes in genes or environment can produce a sociopath, someone capable of understanding the values of others but not intrinsically motivated to act on them. An AI's internals will be vastly different from ours, and we can't rely on it happening to have exactly the right circuitry to absorb morality the way a child does.
Some research organizations are working on ways to build agents that learn what humans want by interacting with humans cooperatively, rather than relying on the agent to pick up our values on its own.