What evidence do experts usually base their timeline predictions on?
Some researchers have tried to model timelines more rigorously. For example:
- Ajeya Cotra's "biological anchors" model projects the computing power used for future ML training runs based on advances in hardware and increases in willingness to spend. It then compares the results to several different "biological anchors": for example, the assumption that training a transformative AI takes as much computing power as a human brain uses during a lifetime, or as much computing power as all brains used in our evolutionary history, or, more likely, something in between. (A toy version of this projection is given in the first sketch after this list.)
- A major part of the biological anchors model is a probability distribution over how much computing power it would take to build transformative AI with current algorithms and ideas. Daniel Kokotajlo has argued for a different way to estimate this quantity: instead of drawing analogies to the human brain, he bases his estimates on intuitions about what kinds of AI systems could be built.
- Tom Davidson's approach based on "semi-informative priors" looks at the statistical distribution of timelines for past inventions. These inventions are taken from a few reference classes, such as highly ambitious STEM R&D goals. (A toy calculation in this spirit appears in the second sketch below.)
- Robin Hanson has collected expert guesses of what fraction of the way to human-level AI individual subfields have come, and extrapolated from the observed rate of progress. (See the third sketch below.)
- Matthew Barnett has done calculations on when we can expect scaling laws (the relationship between a model's performance and the amount of compute used to train it) to take language models to the point where they generate sufficiently human-like text. The idea is that if AI text is hard enough to distinguish from human text, this implies at least human-like competence. (A toy extrapolation of this kind is given in the last sketch below.)
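To make the biological-anchors logic concrete, here is a minimal sketch in Python. All of the numbers (current frontier compute, growth rate, anchor thresholds) are rough illustrative placeholders rather than Cotra's actual parameters, and her real model works with probability distributions rather than point estimates:

```python
# Toy "biological anchors" projection: grow the compute available for the
# largest training run each year, and report when it crosses each anchor.
# All constants are illustrative assumptions, not values from Cotra's report.

LARGEST_RUN_2023 = 1e25    # FLOP for a frontier training run (assumption)
ANNUAL_GROWTH = 10 ** 0.5  # ~3.2x/year from hardware + spending (assumption)

ANCHORS = {  # rough order-of-magnitude thresholds in FLOP (assumptions)
    "human lifetime compute": 1e24,
    "evolutionary history": 1e41,
}

for name, threshold in ANCHORS.items():
    year, compute = 2023, LARGEST_RUN_2023
    while compute < threshold and year < 2300:
        compute *= ANNUAL_GROWTH
        year += 1
    print(f"{name} anchor crossed around {year}")
```

The point here is only the structure of the calculation; the actual report spreads probability across several anchors and across uncertainty in every parameter.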
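Davidson's framework generalizes Laplace's rule of succession. Below is a much-simplified sketch, assuming a single reference class, one "virtual success", and a hypothetical first-year success probability; his report aggregates several such update rules, so the output should not be read as his estimate:

```python
# Simplified semi-informative-priors calculation: treat each calendar year of
# AI research as a trial. With first-trial probability p1 and one virtual
# success, the chance of success after n failed years is 1 / (1/p1 + n).
# Parameters are illustrative assumptions, not Davidson's.

def p_agi_by(year, start=1956, p1=0.01, asof=2023):
    """P(AGI by `year`), given no success from `start` through `asof`."""
    def p_no_success_through(y):
        prob = 1.0
        for n in range(y - start):
            prob *= 1 - 1 / (1 / p1 + n)
        return prob
    return 1 - p_no_success_through(year) / p_no_success_through(asof)

print(f"P(AGI by 2100) ~ {p_agi_by(2100):.0%}")  # ~32% with these toy inputs
```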
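Hanson's survey method amounts to a linear extrapolation: if respondents think their subfield has covered some fraction of the distance to human level over their careers, the remaining time at that rate follows directly. With illustrative numbers roughly in the range respondents reported:

```python
# Linear extrapolation from perceived progress. The inputs are assumptions
# loosely matching the ~5-10% per ~20 years that survey respondents reported.
years_observed = 20
fraction_covered = 0.07

remaining_years = years_observed * (1 - fraction_covered) / fraction_covered
print(f"~{remaining_years:.0f} more years at the observed rate")  # ~266 years
```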
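Barnett's direct approach can be thought of as inverting a scaling law: pick a loss level at which model text would be hard to distinguish from human text, solve for the compute needed to reach it, and convert that compute into a calendar year. Here is a sketch with hypothetical constants (loosely Chinchilla-flavored, but not taken from his notebook):

```python
import math

# Assume loss follows L(C) = E + a * C**(-b); all constants are hypothetical
# placeholders, not fitted values from Barnett's Colab notebook.
E, a, b = 1.69, 1070.0, 0.155
TARGET_LOSS = 1.70           # hypothetical "human-indistinguishable" loss
CURRENT_COMPUTE = 1e25       # FLOP for a frontier run (assumption)
GROWTH_PER_YEAR = 10 ** 0.5  # ~3.2x/year compute growth (assumption)

# Invert L(C) = E + a * C**(-b)  =>  C = (a / (L - E)) ** (1 / b)
needed = (a / (TARGET_LOSS - E)) ** (1 / b)
years_away = max(0.0, math.log(needed / CURRENT_COMPUTE, GROWTH_PER_YEAR))
print(f"needed ~{needed:.1e} FLOP, reached around {2023 + years_away:.0f}")
```

Small changes to the target loss or the growth rate shift the answer by decades, which is one reason estimates built this way come with wide error bars.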
These approaches give very different answers to the question of when we'll first have advanced AI. Cotra's model originally gave a median of 2050, but she later updated to 2040. The Colab notebook that uses Barnett's direct method also shows (as of February 2023) a 2040 median. But on the shorter side, Kokotajlo has argued for a median before 2030. And on the longer side, Davidson's report gives only an 18% probability for AGI by 2100, and (based on estimates made between 2012 and 2017) Hanson's method also "suggests at least a century until human-level AI". Different experts put different weights on these models and lines of evidence, which is a large part of why their predictions differ so much.