AI takeoff

From Stampy's Wiki

Description

AI Takeoff refers to the process of an Artificial General Intelligence going from a certain threshold of capability (often discussed as "human-level") to being super-intelligent and capable enough to control the fate of civilization. There has been much debate about whether AI takeoff is more likely to be slow vs fast, i.e., "soft" vs "hard".

See also: AI Timelines, Seed AI, Singularity, Intelligence explosion, Recursive self-improvement

AI takeoff is sometimes casually referred to as AI FOOM.

Soft takeoff

A soft takeoff refers to an AGI that would self-improve over a period of years or decades. This could happen either because the learning algorithm is too demanding for the available hardware, or because the AI relies on feedback from the real world that has to play out in real time. Possible routes to a soft takeoff, which would slowly build on human-level intelligence, are Whole brain emulation, Biological Cognitive Enhancement, and software-based strong AGI [1]. By maintaining control over the AGI's ascent, it should be easier for a Friendly AI to emerge.

Vernor Vinge and Hans Moravec have both expressed the view that a soft takeoff is preferable to a hard takeoff, as it would be both safer and easier to engineer.

Hard takeoff

A hard takeoff (or an AI going "FOOM" [2]) refers to AGI expansion in a matter of minutes, days, or months. It is a fast, abrupt, and local increase in capability. This scenario is widely considered much more precarious, as it involves an AGI rapidly ascending in power without human control. This may result in unexpected or undesired behavior (i.e. an Unfriendly AI). It is one of the main ideas supporting the Intelligence explosion hypothesis.

The feasibility of hard takeoff has been addressed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom, and Michael Anissimov. It is widely agreed that a hard takeoff is something to be avoided due to the risks. Yudkowsky points out several possibilities that would make a hard takeoff more likely than a soft takeoff, such as the existence of large resource overhangs, or the fact that small improvements can have a large impact on a mind's general intelligence (e.g. the small genetic difference between humans and chimps led to huge increases in capability) [3].
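
The disagreement between the soft and hard takeoff views can be framed as a disagreement about the returns to self-improvement. The toy model below is an illustrative sketch only; the growth law, constants, and threshold are assumptions, not taken from the posts cited above. It treats capability as growing according to dC/dt = k * C^n: with diminishing returns (n < 1) capability creeps upward for a long time, while with compounding returns (n > 1) it crosses any fixed threshold abruptly.

    # Toy model of takeoff speed: capability C grows as dC/dt = k * C**n.
    # All numbers here are arbitrary illustrations, not empirical estimates.

    def steps_to_superintelligence(n, k=0.05, threshold=1e6, max_steps=1000):
        """Euler-integrate dC/dt = k * C**n from C = 1 ("human-level") and
        return the step at which C first exceeds `threshold`, if ever."""
        capability = 1.0
        for step in range(1, max_steps + 1):
            capability += k * capability ** n
            if capability >= threshold:
                return step
        return None  # never crossed the threshold within the horizon

    print(steps_to_superintelligence(n=0.5))  # diminishing returns: None ("soft", gradual)
    print(steps_to_superintelligence(n=1.5))  # compounding returns: crosses quickly ("hard")

Which regime the real world is in depends on exactly the considerations discussed above, such as resource overhangs and how much extra capability a small algorithmic improvement buys.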

Notable posts

External links

References

  1. http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html
  2. http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/
  3. http://lesswrong.com/lw/wf/hard_takeoff/

Canonically answered

If the AI system were deceptively aligned (i.e. pretending to be nice until it was in control of the situation), or had been in stealth mode while getting things in place for a takeover, the transition could quite possibly happen within hours. We may get more warning with weaker systems, if the AGI does not feel at all threatened by us, or if a complex ecosystem of AI systems is built up over time and we gradually lose control.

Paul Christiano has written a story of alignment failure which shows a relatively fast transition.

Non-canonical answers

Consider Codex or GPT-3.

Making a narrow AI is costly and time-consuming, and those are resources you are not spending elsewhere. By making a more general intelligence, you get more leverage and can reuse what you have built. There is another incentive: making an AI narrow means training it on a specific dataset and building in a lot of behaviour. Codex, at the moment, is mostly trained on Python, but a natural development would be to want it to be able to code in any language.

Of course, there are some conditions for that to apply. It would need to be fairly easy to scale up in terms of structure, for one, which does seem to be the case, judging by how throwing more computational power at GPT leads to better results. It also assumes that you do not lose too much capability by making the training broader.
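
For intuition about that scaling point, published language-model scaling work fits loss as a rough power law in training compute. The sketch below uses made-up constants purely to illustrate the shape of such a curve; it is not the fitted law for GPT-3 or Codex.

    # Illustrative power-law scaling curve: loss falls smoothly as compute grows.
    # The constants c0 and alpha are placeholders, not fitted values.

    def predicted_loss(compute, c0=1.0, alpha=0.05):
        """Toy scaling law: loss = (c0 / compute) ** alpha."""
        return (c0 / compute) ** alpha

    for compute in (1e3, 1e6, 1e9):  # arbitrary compute units
        print(f"compute={compute:.0e}  predicted loss={predicted_loss(compute):.3f}")

The smoothness of curves like this is one reason to expect that broader training and more compute keep paying off, which is the assumption the argument above leans on.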

Ultimately, however, it does not really matter whether those gains actually exist, but whether people perceive that they do. There seem to be a lot of people who expect that with AGI they could capture a very advantageous position in the market.

That is probably true: there is no way to interact with GPT other than through OpenAI's API, and they set whatever pricing they want. The better their current AI is, the faster they can improve it, so even a short lead in achieving AGI could translate into a significant advantage over competitors.

But even if it is not true, the fact that they expect to gain that advantage means they will try to attain it, and we should take the corresponding safety measures, whatever those turn out to be.

Unanswered questions