What safety problems are associated with whole brain emulation?

It seems improbable that whole brain emulation (WBE) will arrive before neuromorphic AI, because the improved understanding of the brain needed for WBE would probably first enable the development of neuromorphic systems. The research path to WBE is therefore likely to accelerate AI capabilities and shorten timelines.

Even if WBE were to arrive first, there is debate about whether it would be less prone to producing existential risks than synthetic AI. An accelerated WBE might seem a safer template for an AGI, since it would directly inherit its subject's way of thinking, but several safety problems could still arise:

  • Being emulated and run with vastly enhanced speed and capability would be a very strange experience for a human psychology, and we cannot be sure how the resulting mind would react. As an intuition pump, very high-IQ individuals appear to be at higher risk for some psychological disorders; this suggests we have no guarantee that a process recreating a human brain with vastly greater capabilities would retain the relative stability of its biological ancestors.

  • A superintelligent WBE might acquire a large amount of power, and power has historically tended to corrupt humans.

  • Running at high speed might make an emulation's interactions with normal-speed humans difficult, as explored in Robin Hanson's The Age of Em.

  • It is unclear whether a WBE would be any more predictable than an AI engineered by competent, safety-conscious programmers.

  • Even if WBE arrives before AGI, Nick Bostrom argues that we should expect a second (and potentially dangerous) transition to fully synthetic AGI, since synthetic systems would eventually be more efficient than emulations.

Nonetheless, Eliezer Yudkowsky believes that emulations, though unlikely to come first, would probably be safer.