Predictions about future AI

Building a more accurate picture of how AI will develop lets us make better decisions, increasing the odds that AI transforms the world for the better rather than for the worse.

One aspect of how AI will develop is “timelines”: how soon will advanced AI be created? “Advanced AI” can be further specified as “human-level AI”, “transformative AI”, etc. We can’t estimate timelines precisely, but we can get some hints from data about hardware and software progress, money and human capital going into AI research, biological brains and their evolution, past technological breakthroughs, the performance and scaling of current systems, and possible designs of future systems. An indirect way to get estimates is by aggregating the predictions of experts and forecasters.
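To make the idea of aggregating forecasts concrete, here is a minimal Python sketch (the numbers are invented for illustration, not real survey data) showing three common ways to pool different forecasters' probabilities that advanced AI arrives before some fixed year:

```python
import numpy as np

# Hypothetical forecasts: each value is one forecaster's probability
# that "advanced AI" (however defined) arrives before some fixed year.
# These numbers are invented purely for illustration.
forecasts = np.array([0.15, 0.30, 0.45, 0.60, 0.80])

# Arithmetic mean of probabilities: the simplest pooling rule.
mean_pool = forecasts.mean()

# Median: robust to a few extreme forecasters.
median_pool = np.median(forecasts)

# Geometric mean of odds: a pooling rule often favored in forecasting
# work because it handles confident outlier views more gracefully.
odds = forecasts / (1 - forecasts)
pooled_odds = np.exp(np.log(odds).mean())
geo_pool = pooled_odds / (1 + pooled_odds)

print(f"mean: {mean_pool:.2f}  median: {median_pool:.2f}  geo-mean of odds: {geo_pool:.2f}")
```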

Another aspect is “takeoff speed”: how quickly will advanced (e.g., human-level) systems be followed by much smarter (superintelligent) systems? This is another area where we don’t have precise predictions, but different estimates are informed by different models of how hard it is to design superintelligent systems, how much AI will help speed up AI research, how quickly initial successes will draw new investment, whether intelligence is more like one big idea or many small ideas, how gradual or sudden past breakthroughs were, and so on.
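As a toy illustration of why the strength of that feedback loop matters, here is a simple Python model (the assumptions and numbers are made up, not a real forecast) in which capability compounds at a rate controlled by a feedback exponent:

```python
# Toy model of takeoff (assumptions are invented for illustration):
# capability C improves each step in proportion to C**r, where r captures
# how strongly current AI feeds back into speeding up AI research itself.
#   r < 1: diminishing returns  -> gradual ("slow") takeoff
#   r > 1: compounding returns  -> accelerating ("fast") takeoff

def simulate(r, steps=30, c0=1.0, rate=0.1):
    c = c0
    for _ in range(steps):
        c += rate * c ** r  # research progress this step scales with C**r
    return c

slow = simulate(r=0.5)
fast = simulate(r=1.5)
print(f"capability after 30 steps: slow takeoff ~ {slow:.1f}, fast takeoff ~ {fast:.2e}")
```

Nothing about this toy model settles the question; it only shows how sensitive the outcome is to how much AI accelerates further AI research.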

In addition to questions of when we’ll have advanced AI, there are questions of what the consequences will be — for example, how likely is it that advanced AI will result in an existential disaster? This is sometimes called “P(doom)”, short for “the probability of doom”, where “doom” refers to human extinction and similarly bad outcomes (without implying inevitability). The answer depends, among other things, on how hard it is to align AI to human values, on how hard we can expect people to try, and on whether we can recover from failure.
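One common way to reason about this is to break the overall probability into a rough chain of conditional factors. The sketch below is purely illustrative; the factor names and numbers are hypothetical, not estimates from this article or anyone in particular:

```python
# Purely illustrative: the factor names and numbers are hypothetical,
# chosen only to show how a rough chain of conditionals can be combined.
p_alignment_unsolved = 0.4   # alignment turns out hard and isn't solved in time
p_deployed_anyway    = 0.5   # powerful misaligned systems get built and deployed regardless
p_no_recovery        = 0.3   # we fail to detect and correct the failure afterwards

# Treating these as a chain of conditional probabilities:
p_doom = p_alignment_unsolved * p_deployed_anyway * p_no_recovery
print(f"P(doom) under these toy numbers: {p_doom:.2f}")  # -> 0.06
```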

There are many other questions about the dynamics of advanced AI: Will there end up being one superintelligent system that can prevent any threats to its control, or many superintelligent systems that compete or collaborate? What kinds of AI will we develop, and will they act autonomously in the world or be used as tools? Will similar generally intelligent systems do many different tasks, or will there be specialized systems for each? What kind of groups (corporations, governments, university research groups, international institutions) will develop the most advanced future AI systems?

All these questions are related in complicated ways. For example:

  • A sudden takeoff might make it harder for humans to react, which might increase P(doom).

  • Timelines affect which actors are likely to build AGI, and vice versa.

  • Less agentic, less general, and more tool-like systems might make existential disasters from misalignment less likely.

  • A sudden takeoff might be more likely than a slow takeoff to result in a single superintelligent system taking control.


