How might we get from artificial general intelligence to a superintelligent system?
Once a system is at least as capable as the most capable humans at AI research, it may become the driver of its own development and initiate a process of recursive self-improvement.
Recursive self-improvement: self-improvement that leads to further self-improvement in a self-reinforcing feedback loop.
Intelligence explosion: a hypothetical scenario where machines become more intelligent very quickly, driven by recursive self-improvement.
There is much debate about the shape of this transition: whether there would be a substantial period during which the AI partially drives its own development, with humans becoming gradually less important, or whether the shift to fully AI-automated AI capability research would be sudden. However, the core idea that there is some capability threshold beyond which a system's capabilities would begin to grow rapidly is hard to dispute, and it is a significant consideration when developing alignment strategies.
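To make the feedback-loop intuition concrete, here is a minimal toy simulation. It is our illustration, not a model from the source: the function simulate, its parameters human_rate, ai_return, and threshold, and all numeric values are assumptions chosen purely for exposition. It contrasts a regime where progress comes only from roughly constant human research effort with one where, past a capability threshold, the system also contributes to its own development in proportion to its current capability.

```python
# Toy model (illustrative assumption, not from the source) of the
# capability feedback loop: below a threshold, progress comes only from
# a constant human research effort; above it, the system also
# contributes to its own development at a rate proportional to its
# current capability.

def simulate(steps=120, human_rate=1.0, ai_return=0.05, threshold=60.0):
    """Return the capability trajectory under the toy feedback model."""
    capability = 0.0
    trajectory = []
    for _ in range(steps):
        growth = human_rate  # human-driven research, roughly constant
        if capability >= threshold:
            # Recursive self-improvement: the system's own research
            # contribution scales with its current capability.
            growth += ai_return * capability
        capability += growth
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    traj = simulate()
    # Before the threshold, capability grows linearly; once the
    # threshold is crossed, growth compounds roughly exponentially.
    for t in range(0, 120, 20):
        print(f"step {t:3d}: capability = {traj[t]:.1f}")
```

The point of the sketch is purely qualitative: crossing the threshold changes the growth regime from roughly linear to compounding, which is the dynamic that the debate over gradual versus sudden transitions concerns.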