
Why can't we just make a "child AI" and raise it?

One proposal for giving an AI human values is to start with a child-like AI and raise it the way we would raise a human child. However, making that strategy work is harder than it sounds.

Children can learn English or French or Chinese, but just as it isn’t in a cat’s nature to learn to speak English, it isn’t in a child’s nature to learn to speak binary. In much the same way, it won’t automatically be in the nature of an AI that we build to learn values the way a human child would.

The mental machinery that does this value-learning in humans is a fragile, poorly understood tower built of evolutionary necessity. Even in an otherwise functional brain, a few changes in genes or environment can produce a sociopath capable of understanding the values of others but not intrinsically motivated to act on them. AIs have vastly different internals, and we can't rely on them having exactly the right circuitry to absorb morality like a child.

Some organizations are currently working on ways to program agents that can cooperatively interact with humans to learn what humans want.
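As a rough illustration of the flavor of such approaches, here is a minimal Python sketch of an agent that maintains a posterior over candidate human reward functions and updates it by watching which option a (noisily rational) human picks. The hypotheses, options, and "Boltzmann-rational" human model are all simplified assumptions for this toy example, not any organization's actual method:

```python
import math

# Candidate hypotheses about what the human values (illustrative only).
HYPOTHESES = {
    "likes_apples": {"apple": 1.0, "cake": 0.2},
    "likes_cake":   {"apple": 0.2, "cake": 1.0},
}

# Start with a uniform prior over the hypotheses.
posterior = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}

def choice_likelihood(rewards, options, chosen, rationality=3.0):
    """P(human picks `chosen` from `options`) under a Boltzmann-rational
    model: the human mostly picks higher-reward options, with some noise."""
    exps = {o: math.exp(rationality * rewards[o]) for o in options}
    return exps[chosen] / sum(exps.values())

def observe(options, chosen):
    """Bayesian update on seeing the human choose `chosen` from `options`."""
    global posterior
    unnormalized = {
        h: posterior[h] * choice_likelihood(HYPOTHESES[h], options, chosen)
        for h in posterior
    }
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}

# The human picks cake over apples twice; the agent's belief about what
# the human values shifts accordingly.
observe(["apple", "cake"], "cake")
observe(["apple", "cake"], "cake")
for hypothesis, probability in posterior.items():
    print(f"{hypothesis}: {probability:.2f}")
```

The point of the sketch is the structure, not the scale: instead of hard-coding values, the agent treats the human's values as something initially unknown and uses the human's behavior as evidence about them. Real proposals along these lines (such as cooperative inverse reinforcement learning) face much harder versions of the same problem, since human choices are far noisier and the space of possible values far larger than two hypotheses.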
