Main Question: What about AI concerns other than misalignment?

Canonically answered

What safety problems are associated with whole brain emulation?

It seems improbable that whole brain emulation (WBE) will arrive before neuromorphic AI, because a better understanding of the brain would probably also speed up the development of the latter.

Even if WBE were to arrive first, there is some debate about whether it would be safer than synthetic AI. An accelerated WBE might be a comparatively safe template for an AGI, since it would directly inherit the subject's way of thinking, but several safety problems could still arise:

  • We don't know how human psychology would react to operating so far off-distribution. As an intuition pump, very high-IQ individuals are already at higher risk for psychological disorders.
  • A superintelligent WBE would get a large amount of power, which historically has tended to corrupt humans.
  • High speed might make interactions with normal-speed humans difficult, as explored in Robin Hanson's The Age of Em.
  • It is unclear whether a WBE would be more predictable than an AI engineered by competent, safety-conscious programmers.
  • Even if WBE arrives before AGI, Bostrom argues we should expect a second (potentially dangerous) transition to fully synthetic AGI because of its greater efficiency compared to WBE.

Nonetheless, Yudkowsky believes that emulations coming first would probably be the better outcome, even if it is unlikely.

Non-canonical answers

Isn't the real concern technological unemployment?

"The real concern" isn't a particularly meaningful concept here. Deep learning has proven to be a very powerful technology, with far reaching implications across a number of aspects of human existence. There are significant benefits to be found if we manage the technology properly, but that management means addressing a broad range of concerns, one of which is the alignment problem.

Isn't the real concern autonomous weapons?

An AI using autonomous weapons, especially a nuclear arsenal, is a concern, but it seems downstream of the central problem: giving an unaligned AI any capability to impact the world.

Triggering a nuclear war is only one of many ways a power-seeking AI might try to take control, and it seems an unlikely one, since the resources the AI would want to control (or the AI itself) would likely be destroyed in the process.

Isn't the real concern AI being misused by terrorists or other bad actors?

The key concern with regard to AGI is that once it surpasses human-level intelligence, it would likely become uncontrollable, and we would essentially hand our dominant position on the planet over to it. Whether the first human-level AI is deployed by terrorists, a government, or a major research organization makes no difference to that fact. While the latter two might have more interest than terrorists in deploying aligned AGI, they won't be able to do so unless we solve the alignment problem.

As far as narrow AI is concerned, the danger of misuse by bad actors is indeed a problem. As the capabilities of narrow AI systems grow and we get closer to AGI, this problem will only become more severe over the coming years and decades.

However, leading experts consider it more than 50% likely that human-level AI will be reached by the end of this century. On the forecasting platform Metaculus, the median forecast (as of September 2022) is as early as 2043.

Accordingly, we have no time to lose in solving the alignment problem, with or without the danger of terrorists misusing narrow AI systems.

Unanswered canonical questions