agi

From Stampy's Wiki
Agi
agi
Main Question: What is Artificial General Intelligence and what will it look like? (edit question) (edit answer)
Alignment Forum Tag
Arbital Page
Wikipedia Page

Description

An Artificial general intelligence, or AGI, is a machine capable of behaving intelligently over many domains. The term can be taken as a contrast to narrow AI, systems that do things that would be considered intelligent if a human were doing them, but that lack the sort of general, flexible learning ability that would let them tackle entirely new domains. Though modern computers have drastically more ability to calculate than humans, this does not mean that they are generally intelligent, as they have little ability to invent new problem-solving techniques, and their abilities are targeted in narrow domains.

An Artificial general intelligence, or AGI, is a machine capable of behaving intelligently over many domains. The term can be taken as a contrast to narrow AI, systems that do things that would be considered intelligent if a human were doing them, but that lack the sort of general, flexible learning ability that would let them tackle entirely new domains. Though modern computers have drastically more ability to calculate than humans, this does not mean that they are generally intelligent, as they have little ability to invent new problem-solving techniques, and their abilities are targeted in narrow domains.

Related: AI (the main AI wiki-tag page)

AGIs and Humans

Directly comparing the performance of AI to human performance is often an instance of anthropomorphism. The internal workings of an AI need not resemble those of a human; an AGI could have a radically different set of capabilities than those we are used to seeing in our fellow humans. A powerful AGI capable of operating across many domains could achieve competency in any domain that exceeds that of any human.

The values of an AGI could also be distinctly alien to those of humans, in which case it won't see many human activities as worthwhile and would have no intention of exceeding human performance (according to the human valuation of performance). Comparing an AGI's preferences to those of humans, AGI are classified as Friendly and Unfriendly. An Unfriendly AGI would pose a large existential risk.

"AGI" as a design paradigm

The term "Artificial General Intelligence," introduced by Shane Legg and Mark Gubrud, is often used to refer more specifically to a design paradigm which mixes modules of different types: "neat" and "scruffy", symbolic and subsymbolic. Ben Goertzel is the researcher most commonly associated with this approach, but others, including Peter Voss, are also pursuing it. This design paradigm, though eclectic in adopting various techniques, stands in contrast to other approaches to creating new kinds of artificial general intelligence (in the broader sense), including brain emulation, artificial evolution, Global Brain, and pure "neat" or "scruffy" AI.

Expected dates for the creation of AGI

Reasons for expecting an AGI's creation in the near future include the continuation of Moore's law, larger datasets for machine learning, progress in the field of neuroscience, increasing population and collaborative tools, and the massive incentives for its creation. A survey of experts taken at a 2011 Future of Humanity Institute conference on machine intelligence found a 50% confidence median estimate of 2050 for the creation of an AGI, and 90% confidence in 2150. A significant minority of the AGI community views the prospects of an intelligence explosion or the loss of control over an AGI very skeptically however.

Blog posts

References

See Also

Canonically answered

Can an AI really be smarter than humans?

Show your endorsement of this answer by giving it a stamp of approval!

Until a thing has happened, it has never happened. We have been consistently improving both the optimization power and generality of our algorithms over that time period, and have little reason to expect it to suddenly stop. We’ve gone from coding systems specifically for a certain game (like Chess), to algorithms like MuZero which learn the rules of the game they’re playing and how to play at vastly superhuman skill levels purely via self-play across a broad range of games (e.g. Go, chess, shogi and various Atari games).

Human brains are a spaghetti tower generated by evolution with zero foresight, it would be surprising if they are the peak of physically possible intelligence. The brain doing things in complex ways is not strong evidence that we need to fully replicate those interactions if we can throw sufficient compute at the problem, as explained in Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain.

It is, however, plausible that for an AGI we need a lot more compute than we will get in the near future, or that some key insights are missing which we won’t get for a while. The OpenPhilanthropy report on how much computational power it would take to simulate the brain is the most careful attempt at reasoning out how far we are from being able to do it, and suggests that by some estimates we already have enough computational resources, and by some estimates moore’s law may let us reach it before too long.

It also seems that much of the human brain exists to observe and regulate our biological body, which a body-less computer wouldn't need. If that's true, then a human-level AI might be possible with considerably less compute than the human brain.

Why don't we just not build AGI if it's so dangerous?

Show your endorsement of this answer by giving it a stamp of approval!

It certainly would be very unwise to purposefully create an artificial general intelligence now, before we have found a way to be certain it will act purely in our interests. But "general intelligence" is more of a description of a system's capabilities, and a vague one at that. We don't know what it takes to build such a system. This leads to the worrying possibility that our existing, narrow AI systems require only minor tweaks, or even just more computer power, to achieve general intelligence.

The pace of research in the field suggests that there's a lot of low-hanging fruit left to pick, after all, and the results of this research produce better, more effective AI in a landscape of strong competitive pressure to produce as highly competitive systems as we can. "Just" not building an AGI means ensuring that every organization in the world with lots of computer hardware doesn't build an AGI, either accidentally or mistakenly thinking they have a solution to the alignment problem, forever. It's simply far safer to also work on solving the alignment problem.

What is Artificial General Intelligence and what will it look like?

Show your endorsement of this answer by giving it a stamp of approval!
An Artificial General Intelligence, or AGI, is an artificial intelligence which is capable in a broad range of domains. Crucially, an advanced AGI could be capable of AI research, which may allow it to initiate an intelligence explosion, leading to a superintelligence.
AGI is an algorithm with general intelligence, running not on evolution’s biology like all current general intelligences but on a substrate such as silicon engineered by an intelligence (initially computers designed by humans, later on likely dramatically more advanced hardware designed by earlier AGIs).

AI has so far always been designed and built by humans (i.e. a search process running on biological brains), but once our creations gain the ability to do AI research they will likely recursively self-improve by designing new and better versions of themselves initiating an intelligence explosion (i.e. use it’s intelligence to improve its own intelligence, creating a feedback loop), and resulting in a superintelligence. There are already early signs of AIs being trained to optimize other AIs.

Some authors (notably Robin Hanson) have argued that the intelligence explosion hypothesis is likely false, and in favor of a large number of roughly human level emulated minds operating instead, forming an uplifted economy which doubles every few hours. Eric Drexler’s Comprehensive AI Services model of what may happen is another alternate view, where many narrow superintelligent systems exist in parallel rather than there being a general-purpose superintelligent agent.

Going by the model advocated by Nick Bostrom, Eliezer Yudkowsky and many others, a superintelligence will likely gain various cognitive superpowers (table 8 gives a good overview), allowing it to direct the future much more effectively than humanity. Taking control of our resources by manipulation and hacking is a likely early step, followed by developing and deploying advanced technologies like molecular nanotechnology to dominate the physical world and achieve its goals.

Why might people try to build AGI rather than stronger and stronger narrow AIs?

Show your endorsement of this answer by giving it a stamp of approval!

Making a narrow AI for every task would be extremely costly and time-consuming. By making a more general intelligence, you can apply one system to a broader range of tasks, which is economically and strategically attractive.

Of course, for generality to be a good option there are some necessary conditions. You need an architecture which is straightforward enough to scale up, such as the transformer which is used for GPT and follows scaling laws. It's also important that by generalizing you do not lose too much capacity at narrow tasks or require too much extra compute for it to be worthwhile.

Whether or not those conditions actually hold it seems like many important actors (such as DeepMind and OpenAI) believe that they do, and are therefore focusing on trying to build an AGI in order to influence the future, so we should take actions to make it more likely that AGI will be developed safety.

Additionally, it is possible that even if we tried to build only narrow AIs, given enough time and compute we might accidentally create a more general AI than we intend by training a system on a task which requires a broad world model.

See also:

Could AI have basic emotions?

Show your endorsement of this answer by giving it a stamp of approval!

In principle it could (if you believe in functionalism), but it probably won't. One way to ensure that AI has human-like emotions would be to copy the way human brain works, but that's not what most AI researchers are trying to do.

It's similar to how once some people thought we will build mechanical horses to pull our vehicles, but it turned out it's much easier to build a car. AI probably doesn't need emotions or maybe even consciousness to be powerful, and the first AGIs that will get built will be the ones that are easiest to build.

How might AGI kill people?

Show your endorsement of this answer by giving it a stamp of approval!

If we pose a serious threat, it could hack our weapons systems and turn them against us. Future militaries are much more vulnerable to this due to rapidly progressing autonomous weapons. There’s also the option of creating bioweapons and distributing them to the most unstable groups you can find, tricking nations into WW3, or dozens of other things an agent many times smarter than any human with the ability to develop arbitrary technology, hack things (including communications), and manipulate people, or many other possibilities that something smarter than a human could think up. More can be found here.

If we are not a threat, in the course of pursuing its goals it may consume vital resources that humans need (e.g. using land for solar panels instead of farm crops). See this video for more details.

A lot of concern appears to focus on human-level or “superintelligent” AI. Is that a realistic prospect in the foreseeable future?

Show your endorsement of this answer by giving it a stamp of approval!

AI is already superhuman at some tasks, for example numerical computations, and will clearly surpass humans in others as time goes on. We don’t know when (or even if) machines will reach human-level ability in all cognitive tasks, but most of the AI researchers at FLI’s conference in Puerto Rico put the odds above 50% for this century, and many offered a significantly shorter timeline. Since the impact on humanity will be huge if it happens, it’s worthwhile to start research now on how to ensure that any impact is positive. Many researchers also believe that dealing with superintelligent AI will be qualitatively very different from more narrow AI systems, and will require very significant research effort to get right.

Can humans stay in control of the world if human- or superhuman-level AI is developed?

Show your endorsement of this answer by giving it a stamp of approval!

This is a big question that it would pay to start thinking about. Humans are in control of this planet not because we are stronger or faster than other animals, but because we are smarter! If we cede our position as smartest on our planet, it’s not obvious that we’ll retain control.

Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?

Show your endorsement of this answer by giving it a stamp of approval!

First, even “narrow” AI systems, which approach or surpass human intelligence in a small set of capabilities (such as image or voice recognition) already raise important questions regarding their impact on society. Making autonomous vehicles safe, analyzing the strategic and ethical dimensions of autonomous weapons, and the effect of AI on the global employment and economic systems are three examples. Second, the longer-term implications of human or super-human artificial intelligence are dramatic, and there is no consensus on how quickly such capabilities will be developed. Many experts believe there is a chance it could happen rather soon, making it imperative to begin investigating long-term safety issues now, if only to get a better sense of how much early progress is actually possible.

Why think that AI can outperform humans?

Show your endorsement of this answer by giving it a stamp of approval!

Machines are already smarter than humans are at many specific tasks: performing calculations, playing chess, searching large databanks, detecting underwater mines, and more. However, human intelligence continues to dominate machine intelligence in generality.

A powerful chess computer is “narrow”: it can’t play other games. In contrast, humans have problem-solving abilities that allow us to adapt to new contexts and excel in many domains other than what the ancestral environment prepared us for.

In the absence of a formal definition of “intelligence” (and therefore of “artificial intelligence”), we can heuristically cite humans’ perceptual, inferential, and deliberative faculties (as opposed to, e.g., our physical strength or agility) and say that intelligence is “those kinds of things.” On this conception, intelligence is a bundle of distinct faculties — albeit a very important bundle that includes our capacity for science.

Our cognitive abilities stem from high-level patterns in our brains, and these patterns can be instantiated in silicon as well as carbon. This tells us that general AI is possible, though it doesn’t tell us how difficult it is. If intelligence is sufficiently difficult to understand, then we may arrive at machine intelligence by scanning and emulating human brains or by some trial-and-error process (like evolution), rather than by hand-coding a software agent.

If machines can achieve human equivalence in cognitive tasks, then it is very likely that they can eventually outperform humans. There is little reason to expect that biological evolution, with its lack of foresight and planning, would have hit upon the optimal algorithms for general intelligence (any more than it hit upon the optimal flying machine in birds). Beyond qualitative improvements in cognition, Nick Bostrom notes more straightforward advantages we could realize in digital minds, e.g.:

  • editability — “It is easier to experiment with parameter variations in software than in neural wetware.”
  • speed — “The speed of light is more than a million times greater than that of neural transmission, synaptic spikes dissipate more than a million times more heat than is thermodynamically necessary, and current transistor frequencies are more than a million times faster than neuron spiking frequencies.”
  • serial depth — On short timescales, machines can carry out much longer sequential processes.
  • storage capacity — Computers can plausibly have greater working and long-term memory.
  • size — Computers can be much larger than a human brain.
  • duplicability — Copying software onto new hardware can be much faster and higher-fidelity than biological reproduction.

Any one of these advantages could give an AI reasoner an edge over a human reasoner, or give a group of AI reasoners an edge over a human group. Their combination suggests that digital minds could surpass human minds more quickly and decisively than we might expect.

How is AGI different from current AI?

Show your endorsement of this answer by giving it a stamp of approval!

Current narrow systems are much more domain-specific than AGI. We don’t know what the first AGI will look like, some people think the GPT-3 architecture but scaled up a lot may get us there (GPT-3 is a giant prediction model which when trained on a vast amount of text seems to learn how to learn and do all sorts of crazy-impressive things, a related model can generate pictures from text), some people don’t think scaling this kind of model will get us all the way.

Any AI will be a computer program. Why wouldn't it just do what it's programmed to do?

Show your endorsement of this answer by giving it a stamp of approval!

While it is true that a computer program always will do exactly what it is programmed to do, a big issue is that it is difficult to ensure that this is the same as what you intended it to do. Even small computer programs have bugs or glitches, and when programs become as complicated as AGIs will be, it becomes exceedingly difficult to anticipate how the program will behave when ran. This is the problem of AI alignment in a nutshell.

Nick Boström created the famous paperclip maximizer thought experiment to illustrate this point. Imagine you are an industrialist who owns a paperclip factory, and imagine you've just received a superintelligent AGI to work for you. You instruct the AGI to "produce as many paperclips as possible". If you've given the AGI no further instructions, the AGI will immediately acquire several instrumental goals.

  1. It will want to prevent you from turning itself off (If you turn off the AI, this will reduce the amount of paperclips it can produce)
  2. It will want to acquire as much power and resources for itself as possible (because the more resources it has access to, the more paperclips it can produce)
  3. It will eventually want to turn the entire universe into a paperclips including you and all other humans, as this is the state of the world that maximizes the amount of paper clips produced.

These consequences might be seen as undesirable by the industrialist, as the only reason the industrialist wanted paperclips in the first place, presumably was so he/she could sell them and make money. However, the AGI only did exactly what it was told to. The issue was that what the AGI was instructed to do, lead to it doing things the industrialist did not anticipate (and did not want).

Some good videos that explore this issue more in depth:

Non-canonical answers

Wouldn't it be safer to only build narrow AIs?

Show your endorsement of this answer by giving it a stamp of approval!

Even if we only build lots of narrow AIs, we might end up with a distributed system that acts like an AGI - the algorithm does not have to be encoded in a single entity, the definition in What is Artificial General Intelligence and what will it look like? applies to distributed implementations too.

This is similar to a group of people in a corporation can achieve projects that humans could not individually (like going to space), but the analogy of corporations and AGI is not perfect - see Why Not Just: Think of AGI Like a Corporation?.

How could general intelligence be programmed into a machine?

Show your endorsement of this answer by giving it a stamp of approval!

There are many paths to artificial general intelligence (AGI). One path is to imitate the human brain by using neural nets or evolutionary algorithms to build dozens of separate components which can then be pieced together (Neural Networks and Natural Intelligence., A ‘neural-gas’ network learns topologies., pp.159-174). Another path is to start with a formal model of perfect general intelligence and try to approximate that(pp. 199-223, pp. 227-287). A third path is to focus on developing a ‘seed AI’ that can recursively self-improve, such that it can learn to be intelligent on its own without needing to first achieve human-level general intelligence (link). Eurisko is a self-improving AI in a limited domain, but is not able to achieve human-level general intelligence.

See also:

Isn't it too soon to be working on AGI safety?

Show your endorsement of this answer by giving it a stamp of approval!

It's true that AGI may be really many years ahead. But what worries a lot of people, is that it may be much harder to make powerful AND safe AI, than just a powerful AI, and then, the first powerful AIs we create will be dangerous.

If that's the case, the sooner we start working on AI safety, the smaller the chances of humans going extinct, or ending up in some Black Mirror episode.

Also Rob Miles talks about this concern in this video.

Is it already too late to work on AI alignment?

Show your endorsement of this answer by giving it a stamp of approval!

We don't have AI systems that are generally more capable than humans. So there is still time left to figure out how to build systems that are smarter than humans in a safe way.

What are the differences between AGI, transformative AI and superintelligence?

Show your endorsement of this answer by giving it a stamp of approval!

AGI means an AI that is 'general', so it is intelligent in many different domains.

Superintelligence just means doing something better than a human. For example Stockfish or Deep Blue are narrowly superintelligent in playing chess.

TAI (transformative AI) doesn't have to be general. It means 'a system that changes the world in a significant way'. It's used to emphasize, that even non-general systems can have extreme world-changing consequences.

Unanswered canonical questions

Mark as:

Tags: agi, alignment targets (create tag) (edit tags)
Mark as:

Tags: agi, compute (create tag), algorithmic progress (create tag) (edit tags)
Mark as:

Tags: agi, differential technological development (create tag) (edit tags)