What are the differences between AGI, transformative AI, and superintelligence?
These terms are all related attempts to define AI capability milestones — roughly, "the point at which artificial intelligence becomes truly intelligent" — but with different meanings:
- AGI stands for "artificial general intelligence" and refers to AI programs that aren't just skilled at a narrow task (like playing board games or driving cars) but that can apply their intelligence to roughly as wide a range of domains as humans can. Some call systems like Gato AGI because they can solve many tasks with the same model. However, the term is more often used for systems with at least human-level general competence, so AGI is more typically seen as a potential future development. The term suffers from ambiguity, to the point where some people avoid using it; still, it remains the most common term for the cluster of concepts discussed on this page.
- Transformative AI is any AI powerful enough to transform society. (The term is unrelated to the transformer architecture.) Holden Karnofsky defines it as AI that causes at least as big an impact as the Agricultural or Industrial Revolutions, each of which increased the rate of economic growth many times over. Ajeya Cotra's report gives the example of a "virtual professional", a program that can do most jobs that can be done remotely, as a system that would have such an impact.
- Superintelligence is defined by Nick Bostrom as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". This is by far the highest bar out of all the concepts listed here, but it may be reached a short time after the others, e.g., because of an intelligence explosion.
Other terms that are sometimes used include:
- Advanced AI is any AI that's much more powerful than current AI. The term is sometimes used as a loose placeholder for the other concepts here.
- Human-level AI is any AI that can solve most of the cognitive problems an average human can solve. Current AI has a very different profile of strengths and weaknesses than humans, and this is likely to remain true of future AI: before AI is at least human-level at all tasks, it will probably be vastly superhuman at some important tasks while still being weaker at others.
- Strong AI was defined by John Searle as the philosophical thesis that computer programs can have "a mind in exactly the same sense human beings have minds", but the term is sometimes used outside this context as more or less interchangeable with "AGI" or "human-level AI".
- Seed AI is any AI with enough skill at AI programming to set off a recursive self-improvement process that could take it all the way to superintelligence. An AI might not have to qualify as AGI initially to have sudden and dangerous impacts in this way.
- Turing Test-passing AI is any AI smart enough to fool human judges into thinking it's human. The level of capability required depends on how intense the scrutiny is: current language models trained to imitate human text can already seem human to a casual observer, despite not having general human-level intelligence. On the other hand, imitating an intelligence can be harder than outperforming it (in the same way that it’s harder to walk exactly like a turtle than to walk faster than a turtle), so it's also possible for smarter-than-human AI to fail the Turing Test.
- APS-AI is a term introduced by Joe Carlsmith in his report on existential risk from power-seeking AI. APS stands for Advanced, Planning, and Strategically aware. "Advanced" means it's more powerful than humans at important tasks; "Planning" means it's an agent that pursues goals by using its world models; "Strategically aware" means it has good models of its strategic situation with respect to humans in the real world. Carlsmith argues that these properties together create the risk of AI takeover.
- PASTA is an acronym for "Process for Automating Scientific and Technological Advancement", introduced by Holden Karnofsky in a series of blog posts. His thesis is that any AI powerful enough to automate human R&D is sufficient for explosive impacts, even if it doesn't qualify as AGI.
- Uncontrollable AI means an AI that can circumvent or counter any measures humans take to correct its decisions or restrict its influence. An uncontrollable AI doesn’t have to be an AGI or superintelligence. It could, for example, just have powerful hacking skills that make it practically impossible to shut it down or remove it from the internet. An AI could also become uncontrollable by becoming very skilled at manipulating humans.
- The t-AGI framework, proposed by Richard Ngo, benchmarks the difficulty of a task by how long it would take a human to do it. For instance, an AI that can recognize objects in an image, answer trivia questions, etc., is a "1-second AGI", because it can do most tasks that would take a human one second to do, while an AI that can do things like develop new apps and review scientific papers is a "1-month AGI".