How might we get from artificial general intelligence to a superintelligent system?

Once a system is at least as capable as the most capable humans at AI research, it may become the driver of its own development and initiate a process of recursive self-improvement known as the intelligence explosion, leading to an extremely powerful system. A general framing of this process is Holden Karnofsky’s concept of a Process for Automating Scientific and Technological Advancement (PASTA).

There is much debate about whether there would be a substantial period during which the AI would only partially drive its own development, with humans becoming gradually less important, or whether the transition to fully AI-automated AI capability research would be sudden. However, the core idea that there is some capability threshold beyond which a system would begin to ascend rapidly is hard to dispute, and it is a significant consideration for developing alignment strategies.
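The threshold dynamic can be illustrated with a deliberately simple toy model (not a prediction; all constants and the functional form are arbitrary assumptions): capability grows at a fixed human-driven rate until it crosses a threshold, after which the system also contributes to its own development in proportion to its current capability.

```python
def simulate(steps, human_rate=1.0, threshold=100.0, feedback=0.05):
    """Toy model of an intelligence explosion.

    Capability grows linearly while humans do all the research; once it
    crosses `threshold`, a self-improvement term proportional to current
    capability kicks in, and growth becomes roughly exponential.
    """
    capability = 0.0
    history = []
    for _ in range(steps):
        rate = human_rate
        if capability >= threshold:
            # The system now contributes to its own development.
            rate += feedback * capability
        capability += rate
        history.append(capability)
    return history

trajectory = simulate(200)
```

In this sketch, growth is linear before the threshold and compounds afterward; a smoother model (where the AI's contribution ramps up gradually) would instead show the "humans becoming gradually less important" picture described above.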



AISafety.info