Will there be a discontinuity in AI capabilities?
While researchers agree that AI capabilities could increase quickly, there are still debates around whether the increase would take the form of a continuous rise or of a (seemingly) discontinuous jump.
Arguments for continuous takeoff
Paul Christiano believes that growth in AI capabilities will also lead to growth in economic productivity. He expects to see world GDP double in shorter and shorter periods of time, with AI contributions to AI R&D creating a feedback loop that results in hyperbolic growth. On this model, takeoff is continuous but still fast.
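The feedback loop Christiano describes can be illustrated with a toy model: if output grows at a rate that itself increases with the level of output (because AI contributions to R&D compound), successive doubling times shrink rather than staying constant. The sketch below is a minimal illustration of that dynamic, not Christiano's actual model; the parameter values are arbitrary.

```python
# Toy model of superexponential ("hyperbolic") growth:
#   dG/dt = k * G**(1 + eps), with eps > 0,
# so the growth rate itself rises as G rises. Unlike plain exponential
# growth (eps = 0), each doubling of G takes less time than the last.
# All parameters here are illustrative assumptions.

def doubling_times(k=0.03, eps=0.5, g0=1.0, dt=0.001, max_doublings=5):
    """Euler-integrate dG/dt = k * G**(1 + eps) and record the elapsed
    time for each successive doubling of G."""
    g, t, last_t = g0, 0.0, 0.0
    next_target, times = 2 * g0, []
    while len(times) < max_doublings:
        g += k * g ** (1 + eps) * dt
        t += dt
        if g >= next_target:
            times.append(t - last_t)
            last_t = t
            next_target *= 2
    return times

times = doubling_times()
# Doubling times strictly shrink: growth is faster than exponential.
assert all(earlier > later for earlier, later in zip(times, times[1:]))
```

With eps = 0 the same code would produce roughly constant doubling times (ordinary exponential growth); any eps > 0 makes them shrink toward zero, which is the qualitative signature of the hyperbolic growth Christiano's model predicts.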
John Wentworth explored, in the form of a story, the possibility that enhanced cognitive capabilities are not the true bottleneck to taking over the world. In this scenario, the more significant bottlenecks are coordinated human pushback and the need to acquire and deploy physical resources.1
As an example, an artificial superintelligence working on fusion power would still need years to run physical experiments and to acquire and deploy resources, however quickly it could think.
Arguments for discontinuous takeoff
Eliezer Yudkowsky expects AI to have relatively little effect on global GDP before a discontinuous "intelligence explosion". One argument for this is that superintelligent AIs can lie to us: if there exists an artificial general intelligence that conceals its capabilities, we could experience a hard takeoff, a transition from human-level AI to superintelligent AI that goes very quickly, giving us no time to react.
Yudkowsky also points to examples from evolution where the transition from chimps to humans led to (what feels like) a discontinuous gap in capabilities. A much more comprehensive public debate about the matter was held between Yudkowsky and Christiano, which is summarized here.
Different views on takeoff speeds and (dis)continuity have different implications for how best (and potentially whether) to work on AI safety.
“On fusion power, for instance, at most a 100x speedup compared to the current human pace of progress is realistic, but most of that comes from cutting out the slow and misaligned funding mechanism. Building and running the physical experiments will speed up by less than a factor of 10. Given the current pace of progress in the area, I estimate at least 2 years just to figure out a viable design. It will also take time beforehand to acquire resources, and time after to scale it up and build plants - the bottleneck for both those steps will be acquisition and deployment of physical resources, not cognition. And that’s just fusion power - nanobots are a lot harder.” - Wentworth, John (2021), Potential Bottlenecks to Taking Over The World ↩︎