Why can't we just make a "child AI" and raise it?
One proposed solution to the AI alignment problem is to instill human values and morality in an AI by building a child-like AI and raising it the way we raise a human child. This suggestion sounds simpler than it is.
Children can learn English, French, or Chinese, but just as it isn't in a cat's nature to learn to speak English, it isn't in a child's nature to learn to speak binary. In much the same way, it won't automatically be in the nature of an AI we build to learn values the way a human child does.
The brain modules that do this value-learning work in humans are fragile, poorly understood structures built up out of evolutionary necessity. Even in an otherwise functional brain, a few changes in genes or environment can produce a sociopath: someone capable of understanding the values of others but not particularly motivated to act on them. An AI's internals would be vastly different, so we can't count on it having exactly the right circuitry to absorb morality the way a child does.
Some research organizations are working on agents that learn what humans want through cooperative interaction, for example via cooperative inverse reinforcement learning, in which the agent treats the human's preferences as unknown quantities to be inferred from the human's behavior.
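As a loose, hypothetical sketch of that idea (not any organization's actual system), the snippet below shows an agent that keeps a probability distribution over candidate reward functions and updates it as it watches a human choose between options. The two candidate value hypotheses and the "90% noisily rational" human model are assumptions invented purely for illustration.

```python
import random

# Toy value-learning sketch: the agent does not know which of two
# candidate reward functions the human has, so it keeps a probability
# over both and updates from the human's observed choices.
CANDIDATE_VALUES = {
    "prefers_apples":  {"apple": 1.0, "orange": 0.0},
    "prefers_oranges": {"apple": 0.0, "orange": 1.0},
}

def update_belief(belief, human_choice, options):
    """Bayes update: hypotheses under which the human's choice looks
    rewarding gain probability; the others lose it."""
    posterior = {}
    for hypothesis, prior in belief.items():
        rewards = CANDIDATE_VALUES[hypothesis]
        best = max(options, key=lambda o: rewards[o])
        # Assume a noisily rational human: they pick the option their
        # values favor 90% of the time.
        likelihood = 0.9 if human_choice == best else 0.1
        posterior[hypothesis] = prior * likelihood
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

def simulate(true_values="prefers_apples", rounds=10, seed=0):
    rng = random.Random(seed)
    belief = {h: 1 / len(CANDIDATE_VALUES) for h in CANDIDATE_VALUES}
    options = ["apple", "orange"]
    rewards = CANDIDATE_VALUES[true_values]
    for _ in range(rounds):
        best = max(options, key=lambda o: rewards[o])
        # The simulated human sometimes errs, so the agent must
        # accumulate evidence rather than trust a single observation.
        choice = best if rng.random() < 0.9 else min(options, key=lambda o: rewards[o])
        belief = update_belief(belief, choice, options)
    return belief

if __name__ == "__main__":
    print(simulate())  # belief should concentrate on "prefers_apples"
```

The design point mirrors the argument above: this agent learns values only because value learning was explicitly built into its nature, not because exposure to a human rubbed off on it.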