whole brain emulation
Canonically answered
What are the ethical challenges related to whole brain emulation?
Unless there were a way to cryptographically ensure otherwise, whoever runs an emulation would have essentially complete control over its environment and could reset it to any state it was previously in. This opens up the possibility of powerful interrogation and torture of digital people.
Imperfect uploading might cause damage that makes the em suffer while leaving it useful enough to be run, for example, as a test subject for research. We would also have a much greater ability to modify digital brains, and edits made for research or economic purposes might cause suffering. See this fictional piece for an exploration of what a world with widespread em suffering might look like.
These problems are exacerbated by the likelihood that digital people could be run much faster than biological humans, so it would plausibly be possible for an em to live through hundreds of subjective years in minutes or hours of real time without any checks on its wellbeing.
Is there a danger in anthropomorphizing AIs and trying to understand them in human terms?
Using some human-related metaphors (e.g. what an AGI ‘wants’ or ‘believes’) is almost unavoidable, as our language is built around experiences with humans, but we should be aware that these may lead us astray.
Many paths to AGI would result in a mind very different from a human or animal one, and it would be hard to predict in detail how it would act. We should not trust intuitions trained on humans to predict what an AGI or superintelligence would do. High-fidelity whole brain emulations are one exception: there, we would expect the system to be at least initially fairly human, though it may diverge depending on its environment and on what modifications are applied to it.
There has been some discussion about how language models trained on lots of human-written text seem likely to pick up human concepts and think in a somewhat human way, and how we could use this to improve alignment.
What safety problems are associated with whole brain emulation?
It seems improbable that whole brain emulation (WBE) will arrive before neuromorphic AI, because the improved understanding of the brain needed for WBE would probably first enable the development of the latter.
Even if WBE were to arrive first, there is some debate about whether it would be safer than synthetic AI. An accelerated WBE might be a safe template for an AGI, as it would directly inherit the subject's way of thinking, but some safety problems could still arise:
- We don't know how human psychology would react to being run so far off-distribution. As an intuition pump: very high-IQ individuals are at higher risk of psychological disorders.
- A superintelligent WBE would get a large amount of power, which historically has tended to corrupt humans.
- High speed might make interactions with normal-speed humans difficult, as explored in Robin Hanson's The Age of Em.
- It is unclear whether a WBE would be more predictable in its dynamics than an AI engineered by competent, safety-conscious programmers.
- Even if WBE arrives before AGI, Bostrom argues we should expect a second (potentially dangerous) transition to fully synthetic AGI, due to its greater efficiency compared to WBE.
Nonetheless, Yudkowsky believes that emulations would probably be the better option, even if this path is unlikely.
The degree to which an artificial superintelligence (ASI) would resemble us depends heavily on how it is implemented, but some differences seem unavoidable. If AI were accomplished through whole brain emulation and we made a concerted effort to make it as human as possible (including giving it a humanoid body), it could probably be said to think like a human; however, by definition an ASI would be much smarter. Differences in substrate and body would open up numerous possibilities (such as immortality, different sensors, easy self-improvement, the ability to make copies, etc.), and its social experience and upbringing would likely also be entirely different. All of this could significantly change the ASI's values and outlook on the world, even if it still used the same algorithms as we do. This is essentially the best-case scenario for human resemblance: whole brain emulation is somewhat separate from mainstream AI research, even though both aim to build intelligent machines, and most approaches to AI are vastly different, so most ASIs would likely not have humanoid bodies. At present it seems much easier to create a machine that is intelligent than a machine that is exactly like a human (the former is certainly a bigger target).
Non-canonical answers
Chris's question on Intro to AI Safety
Excellent question! This has been discussed under the terms "uploads" or "whole brain emulation". It could be a much safer path to AGI, but the main problem is that getting a sufficiently high-fidelity model of a human brain requires research which would likely allow neuromorphic AI (AI inspired by the human brain, but not close enough to it that we would expect it to reliably have human-like values) to be created first, as explained here. A second major problem is that uploads don't come with any mathematical guarantees around alignment (which we could plausibly get from a system with a cleaner architecture); the approach basically amounts to turning someone into a god and hoping they do nice things.
Rob has another video on a different approach to making human-like AI, called Quantilizers, but unfortunately this is unlikely to be practical; it is more relevant as a theoretical tool for thinking about milder forms of optimization than utility maximization.
What are the ethical challenges related to whole brain emulation?
The safety problems related to whole brain emulation arise both during the process of uploading and after the mind has been uploaded.
When uploading, it's important to have technology that can transfer everything that makes up a person's mind, since there is a difference between a copy of a mind and an identical mind.[1] One risk of uploading is creating a philosophical zombie, which can act like the person who was uploaded while not being identical to them in all respects. Whether or not the brain emulation has become a philosophical zombie, there are questions about the legal personhood of emulations and about how an emulation stands in relation to the original person and their relatives.[2] This can cause conflicts of interest, for example over whether the emulation could decide that it's time to pull the plug on the original person if they become sick.
After uploading, computer viruses or malware might be able to change or erase brain emulations, or force them into experiments; such access could also be used to hold emulations for ransom.
What is "whole brain emulation"?
Whole brain emulation (WBE), or ‘mind uploading’, is a computer emulation of all the cells and connections in a human brain. Even if the underlying principles of general intelligence prove difficult to discover, we might still emulate an entire human brain and run it at a million times its normal speed, since computer circuits communicate much faster than neurons do. Such a WBE could do more thinking in one second than a normal human can in eleven days, and more in an hour than a normal human can in a century. This would not immediately lead to smarter-than-human intelligence, but it would lead to faster-than-human intelligence. A WBE could be backed up (leading to a kind of immortality), and it could be copied so that hundreds or millions of WBEs could work on separate problems in parallel. If WBEs are created, they may therefore be able to solve scientific problems far more rapidly than ordinary humans, accelerating further technological progress.
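As a sanity check on these numbers, here is a minimal Python sketch of the conversion between real time and subjective time; the million-fold speedup is the illustrative figure from the paragraph above, not a prediction:

```python
# Minimal sketch: subjective time experienced by an emulation running
# `speedup` times faster than a biological brain. The 1,000,000x factor
# is the illustrative assumption used in the text, not an estimate.

SECONDS_PER_DAY = 24 * 60 * 60
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

def subjective_seconds(wall_clock_seconds: float, speedup: float = 1_000_000) -> float:
    """Convert real elapsed time into the emulation's subjective time."""
    return wall_clock_seconds * speedup

print(subjective_seconds(1) / SECONDS_PER_DAY)      # 1 real second -> ~11.6 subjective days
print(subjective_seconds(3600) / SECONDS_PER_YEAR)  # 1 real hour   -> ~114 subjective years
```

The same arithmetic underlies the earlier point that an em could live through hundreds of subjective years in mere hours of real time.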
See also:
- Sandberg & Bostrom, Whole Brain Emulation: A Roadmap
- Blue Brain Project
Could emulated minds do AI alignment research?
Emulated minds would have the same behavior as the biological minds they emulate, so they could do anything the emulated human mind can do. Provided the mind being emulated is capable of learning how to do alignment research, its emulation would be too. It should be noted that we do not currently have the technology to emulate human minds.
Unanswered non-canonical questions