Isn't the real concern AI being misused by terrorists or other bad actors?
The key concern regarding AGI is that once it surpasses human-level intelligence, it would likely become uncontrollable, and we would essentially hand our dominant position on the planet over to it. Whether the first human-level AI is deployed by terrorists, a government, or a major research organization makes no difference to that fact. While the latter two might have more interest than terrorists in deploying aligned AGI, they won't be able to do so unless we solve the alignment problem.
As far as narrow AI is concerned, the danger of misuse by bad actors is indeed a problem. As the capabilities of narrow AI systems grow on the path to AGI, this problem will only become more severe in the coming years and decades.
At the same time, leading experts consider it more than 50% likely that we reach human-level AI by the end of this century. On the forecasting platform Metaculus, the current (September 2022) median forecast is as early as 2043.
Accordingly, we have no time to lose in solving the alignment problem, with or without the danger of terrorists misusing narrow AI systems.