Answer to Is there a danger in anthropomorphizing AIs and trying to understand them in human terms?

From Stampy's Wiki

Using some human-related metaphors (e.g. what an AGI ‘wants’ or ‘believes’) is almost unavoidable, since our language is built around experience with humans, but we should be aware that these metaphors may lead us astray.

Many paths to AGI would result in a mind very different from a human or animal, and it would be hard to predict in detail how it would act. We should not trust intuitions trained on humans to predict what an AGI or superintelligence would do. High-fidelity Whole Brain Emulations are one exception, where we would expect the system to be fairly human at least initially, though it may diverge depending on its environment and on what modifications are applied to it.

There has been some discussion about how language models trained on lots of human-written text seem likely to pick up human concepts and think in a somewhat human way, and how we could use this to improve alignment.

Stamps: Aprillion


Canonical Answer Info
Original by: plex (edits by Aprillion)
