whole brain emulation
Main Question: What is whole brain emulation?
Whole Brain Emulation or WBE is a proposed technique which involves transferring the information contained within a brain onto a computing substrate. The brain can then be simulated, creating a machine intelligence. The concept is often discussed in the context of scanning the brain of a particular person, a process known as Mind Uploading.
WBE is sometimes seen as an easy path to creating intelligent computers, as the only innovations strictly necessary are greatly increased processor speed and scanning resolution. Advocates of WBE claim technological improvement rates such as Moore's law will make WBE inevitable.
The exact level of detail required for an accurate simulation of a brain's mind is presently uncertain, and will determine the difficulty of creating WBE. The feasibility of such a project has been examined in detail in the Future of Humanity Institute's Whole Brain Emulation: A Roadmap. The Roadmap concluded that a human brain emulation would be possible before mid-century, provided that current technology trends continued and sufficient investment was made.
Several approaches for WBE have been suggested:
- A brain could be cut into small slices, which would then be scanned into a computer.[#fn1 1]
- Brain-computer interfaces could slowly replace portions of the brain with computers and allow the mind to grow onto a computing substrate.[#fn2 2][#fn3 3]
- Resources such as personality tests and a person's writings could be used to construct a model of the person.[#fn4 4]
A digitally emulated brain could have several advantages over a biological one[#fn5 5]. It might be able to run faster than biological brains, copy itself, and take advantage of backups while experimenting with self-modification.
Whole brain emulation would also create a number of ethical challenges relating to the nature of personhood, rights, and social inequality. Robin Hanson proposes that an uploaded mind might copy itself to work until the wage for a copy's labour fell to the cost of running it, vastly increasing the amount of wealth in the world but also causing mass unemployment[#fn6 6]. The ability to copy uploads could also lead to drastic changes in society's values, with the values of the most-copied uploads coming to dominate.
A world populated by emulated brains could also have severe negative consequences, such as:
- An inherent inability to be conscious, if some philosophers are right [#fn7 7] [#fn8 8] [#fn9 9] [#fn10 10].
- The elimination of culture in general, due to a steeply increasing penalty for inefficiencies such as flamboyant displays [#fn11 11]
- Near-zero costs of reproduction, pushing most emulations to live at a subsistence level. [#fn12 12]
See also:
- Economic consequences of AI and whole brain emulation
- Emulation argument for human-level AI
- Simulation hypothesis
- Neuromorphic AI
Further reading:
- The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil
- Whole Brain Emulation: A Roadmap. Report by The Future of Humanity Institute.
- Hans Moravec's Estimation of Human Brain Processing Capacity
- A world survey of artificial brain projects, Part I: Large-scale brain simulations by Hugo de Garis, Chen Shuo, Ben Goertzel, and Lian Ruiting, 2010
- If Uploads Come First: The crack of a future dawn by Robin Hanson
- Whole Brain Emulation and the Evolution of Superorganisms
- International Journal of Machine Consciousness Special Issue on Mind Uploading
- A framework for approaches to transfer of a mind's substrate by Sim Bamford
- Coalescing Minds: Brain Uploading-related Group Mind Scenarios by Kaj Sotala and Harri Valpola
References:
- Whole Brain Emulation: A Roadmap[#fnref1 ↩]
- Strout, J. Uploading by the Nanoreplacement Procedure. http://www.ibiblio.org/jstrout/uploading/nanoreplacement.html[#fnref2 ↩]
- Sotala, K., & Valpola, H. (2012). Coalescing minds: brain uploading-related group mind scenarios. International Journal of Machine Consciousness, 4(01), 293-312. http://singularity.org/files/CoalescingMinds.pdf[#fnref3 ↩]
- Rothblatt, M. (2012). The Terasem Mind Uploading Experiment. International Journal of Machine Consciousness, 4(01), 141-158. http://www.terasemcentral.org/docs/Terasem%20Mind%20Uploading%20Experiment%20IJMC.pdf[#fnref4 ↩]
- Sotala, K. (2012). Advantages of artificial intelligences, uploads, and digital minds. International Journal of Machine Consciousness, 4(01), 275-291. http://singularity.org/files/AdvantagesOfAIs.pdf[#fnref5 ↩]
- Hanson, R. (1994). If uploads come first. Extropy, 6(2), 10-15. http://hanson.gmu.edu/uploads.html[#fnref6 ↩]
- Lucas, J. (1961). Minds, machines, and Gödel. Philosophy, 36, pp. 112–127[#fnref7 ↩]
- Dreyfus, H. (1972). What Computers Can't Do. New York: Harper & Row.[#fnref8 ↩]
- Penrose, R. (1994). Shadows of the Mind. Oxford: Oxford University Press.[#fnref9 ↩]
- Block, N. (1981). Psychologism and behaviorism. Philosophical Review, 90, pp. 5–43.[#fnref10 ↩]
- Bostrom, N. (2004). "The future of human evolution". Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy (Ria University Press: Palo Alto, California, 2004): pp. 339-371. Available at: http://www.nickbostrom.com/fut/evolution.pdf[#fnref11 ↩]
Using some human-related metaphors (e.g. what an AGI ‘wants’ or ‘believes’) is almost unavoidable, as our language is built around experiences with humans, but we should be aware that these may lead us astray.
Many paths to AGI would result in a mind very different from a human or animal, and it would be hard to predict in detail how it would act. We should not trust intuitions trained on humans to predict what an AGI or superintelligence would do. High-fidelity whole brain emulations are one exception: we would expect such a system to be fairly human, at least initially, though it may diverge depending on its environment and what modifications are applied to it.
There has been some discussion about how language models trained on lots of human-written text seem likely to pick up human concepts and think in a somewhat human way, and how we could use this to improve alignment.
The degree to which an Artificial Superintelligence (ASI) would resemble us depends heavily on how it is implemented, but it seems that differences are unavoidable. If AI is accomplished through whole brain emulation and we make a big effort to make it as human as possible (including giving it a humanoid body), the AI could probably be said to think like a human. However, by the definition of ASI it would be much smarter. Differences in substrate and body might open up numerous possibilities (such as immortality, different sensors, easy self-improvement, the ability to make copies, etc.). Its social experience and upbringing would likely also be entirely different. All of this could significantly change the ASI's values and outlook on the world, even if it still used the same algorithms as we do. This is essentially the "best case scenario" for human resemblance, but whole brain emulation is a somewhat separate field from AI, even if both aim to build intelligent machines. Most approaches to AI are vastly different, and most ASIs would likely not have humanoid bodies. At present it seems much easier to create a machine that is intelligent than a machine that is exactly like a human (it's certainly a bigger target).
This has been discussed under the terms "uploads" and "Whole Brain Emulation". It could be a much safer path to AGI, but the main problem is that building a sufficiently high-fidelity model of a human brain requires research which would likely allow neuromorphic AI (AI inspired by the human brain, but not close enough to it that we would expect it to reliably have human-like values) to be created first, as explained here. A second major problem is that uploads don't come with any mathematical guarantees about alignment (which we could plausibly get from a system with a cleaner architecture); the approach basically amounts to turning someone into a god and hoping they do nice things.
Rob has another video on a different approach to making human-like AI called Quantilizers but unfortunately this is not likely to be practical, and is more relevant as a theoretical tool for thinking about more mild forms of optimization than utility maximizers.
Whole Brain Emulation (WBE) or ‘mind uploading’ is a computer emulation of all the cells and connections in a human brain. So even if the underlying principles of general intelligence prove difficult to discover, we might still emulate an entire human brain and make it run at a million times its normal speed (computer circuits communicate much faster than neurons do). Such a WBE could do a year's worth of thinking in about 31 seconds. So this would not lead immediately to smarter-than-human intelligence, but it would lead to faster-than-human intelligence. A WBE could be backed up (leading to a kind of immortality), and it could be copied so that hundreds or millions of WBEs could work on separate problems in parallel. If WBEs are created, they may therefore be able to solve scientific problems far more rapidly than ordinary humans, accelerating further technological progress.
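The speed-up arithmetic above can be checked directly. A back-of-the-envelope sketch (the millionfold factor is the illustrative figure used in this paragraph, not a prediction):

```python
# Back-of-the-envelope arithmetic for a hypothetical 1,000,000x speed-up.
SPEEDUP = 1_000_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

# Wall-clock time for the emulation to do one subjective year of thinking:
wall_clock_seconds = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year takes ~{wall_clock_seconds:.1f} wall-clock seconds")

# Subjective time experienced per wall-clock day:
subjective_years_per_day = 24 * 3600 * SPEEDUP / SECONDS_PER_YEAR
print(f"One wall-clock day yields ~{subjective_years_per_day:.0f} subjective years")
```

The same arithmetic underlies the later point about running an emulation for hundreds of subjective years in hours: at this speed-up, a single wall-clock hour corresponds to over a century of subjective time.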
The safety problems related to whole brain emulation arise both during the uploading process and once the emulation is running.
When uploading, it is important to have technology that can transfer what makes up a person's mind, since there is a difference between a copy of a mind and an identical mind. One risk of uploading is creating a philosophical zombie, which can act like the person that was uploaded while not being identical in all respects. Whether or not the brain emulation has become a philosophical zombie, there are questions about the legal personhood of emulations and about the emulation's relationship to the original person and their relatives. This can cause conflicts of interest, for example over whether the brain emulation could decide that it is time to pull the plug on the original person if they fall sick.
After uploading, computer viruses or malware might be able to change or erase brain emulations, including forcing them into experiments; this access could also be used to hold emulations for ransom.
Unless there was a way to cryptographically ensure otherwise, whoever runs the emulation has basically perfect control over their environment and can reset them to any state they were previously in. This opens up the possibility of powerful interrogation and torture of digital people.
Imperfect uploading might lead to damage that causes the EM to suffer while still remaining useful enough to be run, for example as a test subject for research. We would also have a greater ability to modify digital brains, and edits made for research or economic purposes might cause suffering. See this fictional piece for an exploration of what a world with a lot of EM suffering might look like.
These problems are exacerbated by the likely outcome that digital people can be run much faster than biological humans, so it would plausibly be possible to run an EM for hundreds of subjective years in minutes or hours without any checks on the wellbeing of the EM in question.