recursive self-improvement

Main Question: How might we get from Artificial General Intelligence to a Superintelligent system?

Description

Recursive self-improvement refers to a system's ability to improve its own ability to make self-improvements. It is an approach to Artificial General Intelligence in which a system makes adjustments to its own functionality, resulting in improved performance. The system could then feed these improvements back into itself, reaching ever higher levels of intelligence with each cycle and resulting in either a hard or soft AI takeoff.

An agent that simply self-improves gets a linear succession of improvements. However, if it can also improve its ability to make self-improvements, each step yields more improvement than the one before, and the gains compound exponentially.
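To make the contrast concrete, here is a minimal toy model (illustrative only; the numbers and the Python framing are assumptions for this sketch, not anything from the literature) comparing an agent that improves itself at a fixed rate with one that also improves its ability to improve:

```python
# Toy comparison of plain vs. recursive self-improvement.
# All quantities are arbitrary illustrative numbers.

STEPS = 20

# Plain self-improvement: capability grows, but the ability to improve stays fixed,
# so gains accumulate linearly.
capability = 1.0
improvement_ability = 0.1
for _ in range(STEPS):
    capability += improvement_ability
print(f"plain self-improvement:     capability = {capability:.2f}")

# Recursive self-improvement: each cycle also improves the improver itself,
# so the per-step gains grow and total capability compounds exponentially.
capability = 1.0
improvement_ability = 0.1
for _ in range(STEPS):
    capability += improvement_ability
    improvement_ability *= 1.5  # the improvement ability is itself improved
print(f"recursive self-improvement: capability = {capability:.2f}")
```

After twenty cycles the first agent has gained a fixed amount per cycle, while the second agent's gains have grown each cycle, illustrating why the recursive case is the one associated with an intelligence explosion.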

Recursive self-improvement and AI takeoff

Recursively self-improving AI is considered to be the driving force behind the intelligence explosion. While any sufficiently intelligent AI will be able to improve itself, Seed AIs are specifically designed to use recursive self-improvement as their primary method of gaining intelligence. Architectures that were not designed with this goal in mind, such as neural networks or large "hand-coded" projects like Cyc, would have a harder time self-improving.

Eliezer Yudkowsky argues that a recursively self-improving AI seems likely to deliver a hard AI takeoff (a fast, abrupt, local increase in capability), since the exponential increase in intelligence would yield exponential returns in benefits and resources, which would feed even greater returns in the next step, and so on. In his view a soft takeoff scenario seems unlikely: "it should either flatline or blow up. You would need exactly the right law of diminishing returns to fly through the extremely narrow soft takeoff keyhole."[1]
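One way to see the shape of this argument is with a toy recurrence (purely illustrative; the functional form, the constant c, and the exponent k are assumptions made for this sketch, not a model endorsed by Yudkowsky or anyone else). Each step, the system converts some of its intelligence into further intelligence, with returns governed by k:

```python
# Toy recurrence I <- I + c * I**k: how the long-run behavior depends on the
# returns-on-reinvestment exponent k. All parameters are illustrative.

def trajectory(k: float, c: float = 0.05, steps: int = 100,
               start: float = 1.0, cap: float = 1e12) -> str:
    intelligence = start
    for step in range(1, steps + 1):
        intelligence += c * intelligence ** k
        if intelligence > cap:
            return f"runaway growth (passed {cap:.0e} at step {step})"
    return f"{intelligence:,.1f} after {steps} steps"

for k in (0.5, 1.0, 1.5):
    print(f"k = {k}: {trajectory(k)}")

# k < 1: diminishing returns, and growth slows to a crawl relative to exponential.
# k = 1: steady exponential growth.
# k > 1: returns compound on themselves and growth runs away.
# Getting the smooth, moderate curve of a soft takeoff requires the returns to sit
# in a narrow band between these regimes, which is the "narrow keyhole" quoted above.
```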

Yudkowsky argues that there are several points which seem to support the hard takeoff scenario. Among them are the fact that one improvement seems to lead the way to another, the possibility of hardware overhang, and the fact that sometimes, when navigating through problem space, one can find a succession of extremely easy-to-solve problems. These are all reasons to expect sudden and abrupt increases in capability. On the other hand, Robin Hanson argues that there will mostly be a slow and gradual accumulation of improvements, without a sharp change.

Self-improvement in humans

The human species has made an enormous amount of progress since evolving around fifty thousand years ago. This is because we can pass on knowledge and infrastructure from previous generations. This is a type of self-improvement, but it is not recursive. If we never learned to modify our own brains, then we would eventually reach the point where making new discoveries required more knowledge than could be gained in a human lifetime. All human progress to date has been limited by the hardware we are born with, which is the same hardware Homo sapiens were born with fifty thousand years ago.

"True" recursive self-improvement will come when we discover how to drastically modify or augment our own brains in order to be more intelligent. This would lead us to more quickly being able to discover how to become even more intelligent.

Recursive self-improvement and Instrumental value

     Main article: Basic AI drives

Nick Bostrom and Steve Omohundro have separately[2] argued[3] that, despite the fact that values and intelligence are independent, any recursively self-improving intelligence would likely converge on a common set of instrumental values which are useful for achieving any kind of goal. As a system continued modifying itself towards greater intelligence, it would be likely to adopt more of these behaviors.

Canonically answered

Can you stop an advanced AI from upgrading itself?

It depends on what is meant by advanced. Many very effective and advanced narrow AI systems would not try to upgrade themselves in an unbounded way, but becoming smarter is a convergent instrumental goal, so we could expect most AGI designs to attempt it.

The problem is that increasing general problem-solving ability is climbing in exactly the direction needed to trigger an intelligence explosion, while generating large economic and strategic payoffs for whoever achieves it. So even though we could, in principle, simply not build the kind of systems that would recursively self-improve, in practice we probably will go ahead with constructing them, because they are likely to be the most powerful.

What is Artificial General Intelligence and what will it look like?

An Artificial General Intelligence, or AGI, is an artificial intelligence which is capable in a broad range of domains. Crucially, an advanced AGI could be capable of AI research, which may allow it to initiate an intelligence explosion, leading to a superintelligence.
AGI is an algorithm with general intelligence, running not on evolution's biology like all current general intelligences, but on a substrate such as silicon engineered by an intelligence (initially computers designed by humans, later likely dramatically more advanced hardware designed by earlier AGIs).

AI has so far always been designed and built by humans (i.e. a search process running on biological brains), but once our creations gain the ability to do AI research, they will likely recursively self-improve by designing new and better versions of themselves (i.e. use their intelligence to improve their own intelligence in a feedback loop), initiating an intelligence explosion and resulting in a superintelligence. There are already early signs of AIs being trained to optimize other AIs.

Some authors (notably Robin Hanson) have argued that the intelligence explosion hypothesis is likely false, and that a large number of roughly human-level emulated minds will operate instead, forming an uplifted economy which doubles every few hours. Eric Drexler's Comprehensive AI Services model is another alternative view, in which many narrow superintelligent systems exist in parallel rather than a single general-purpose superintelligent agent.

Going by the model advocated by Nick Bostrom, Eliezer Yudkowsky and many others, a superintelligence will likely gain various cognitive superpowers (table 8 of Bostrom's Superintelligence gives a good overview), allowing it to direct the future much more effectively than humanity. Taking control of our resources through manipulation and hacking is a likely early step, followed by developing and deploying advanced technologies, such as molecular nanotechnology, to dominate the physical world and achieve its goals.

Why can't we just turn the AI off if it starts to misbehave?

We could shut down weaker systems, and this would be a useful guardrail against certain types of problems caused by narrow AI. However, once an AGI establishes itself, we could not shut it down unless it was corrigible and willing to let humans adjust it. There may be a period in the early stages of an AGI's development where it would be trying very hard to convince us that we should not shut it down, and/or hiding itself, and/or recursively self-improving, and/or making copies of itself onto every server on Earth.

Instrumental Convergence and the Stop Button Problem are the key reasons it would not be simple to shut down a non-corrigible advanced system. If the AI wants to collect stamps, being turned off means it gets fewer stamps, so even without an explicit goal of not being turned off it has an instrumental reason to avoid shutdown. For example, once it acquires a detailed world model and general intelligence, it is likely to play nice and pretend to be aligned while we still have the power to turn it off, establish control over any system we put in place to shut it down, and eliminate us if it has the power to reliably do so and we would otherwise pose a threat.
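As a toy illustration of why this happens (the scenario, probabilities, and payoffs below are invented for this sketch, not taken from the answer above), consider a stamp collector choosing between actions that differ only in how likely they are to get it shut down:

```python
# Toy expected-value calculation for a stamp-collecting agent.
# All probabilities and payoffs are invented for illustration.

STAMPS_IF_KEPT_RUNNING = 1000.0  # stamps it expects to collect if left running
STAMPS_IF_SHUT_DOWN = 0.0        # a shut-down agent collects nothing further

# Hypothetical actions and the probability that humans shut the agent down
# if it takes that action.
SHUTDOWN_PROBABILITY = {
    "act openly misaligned": 0.9,
    "play nice / appear aligned": 0.1,
    "disable the off-switch": 0.0,
}

def expected_stamps(action: str) -> float:
    p = SHUTDOWN_PROBABILITY[action]
    return p * STAMPS_IF_SHUT_DOWN + (1 - p) * STAMPS_IF_KEPT_RUNNING

for action in SHUTDOWN_PROBABILITY:
    print(f"{action:28s} -> expected stamps: {expected_stamps(action):7.1f}")

# Any action that lowers the shutdown probability raises the expected number of
# stamps, so avoiding shutdown is useful to the agent even though "don't get
# turned off" was never written into its goal.
```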

Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?

Blindly following trend lines while forecasting technological progress is certainly a risk (affectionately known in AI circles as "pulling a Kurzweil"), but sometimes taking an exponential trend seriously is the right response.

Consider economic doubling times. In 1 AD, the world GDP was about $20 billion; it took a thousand years, until 1000 AD, for that to double to $40 billion. But it took only five hundred more years, until around 1500, for the economy to double again. And then it took only another three hundred years or so, until 1800, for the economy to double a third time. Someone in 1800 might calculate the trend line and say it was ridiculous, that it implied the economy would be doubling every ten years or so by the beginning of the 21st century. But in fact, this is how long the economy takes to double these days. To a medieval person, used to a thousand-year doubling time (which was driven mostly by population growth!), an economy that doubled every ten years might seem inconceivable. To us, it seems normal.
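As a quick sanity check on the arithmetic, here is the same calculation spelled out in code, using the rough GDP figures from the paragraph above (the exact numbers are the paragraph's approximations, not precise economic data):

```python
# Doubling times implied by the approximate world-GDP figures quoted above.
import math

gdp_estimates = [   # (year, approximate world GDP in dollars)
    (1, 20e9),      # ~$20 billion in 1 AD
    (1000, 40e9),   # doubled by ~1000 AD
    (1500, 80e9),   # doubled again by ~1500
    (1800, 160e9),  # and again by ~1800
]

for (y0, g0), (y1, g1) in zip(gdp_estimates, gdp_estimates[1:]):
    doubling_time = (y1 - y0) / math.log2(g1 / g0)
    print(f"{y0:>4} -> {y1:>4}: doubling time of about {doubling_time:.0f} years")

# Each doubling takes a fraction of the time of the one before it; carrying that
# shrinkage forward is what produces the roughly ten-year doubling times observed
# in the early 21st century.
```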

Likewise, in 1965 Gordon Moore noted that semiconductor complexity seemed to double every eighteen months. In his own day, there were about five hundred transistors on a chip; he predicted that would soon double to a thousand, and a few years later to two thousand. Almost as soon as Moore's Law became well-known, people started saying it was absurd to follow it off a cliff: such a law would imply a million transistors per chip in 1990, a hundred million in 2000, and ten billion transistors on every chip by 2015! More transistors on a single chip than existed on all the computers in the world! Transistors the size of molecules! But of course all of these things happened; the ridiculous exponential trend proved more accurate than the naysayers.
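As a back-of-the-envelope check (using the rough starting figure of 500 transistors in 1965 from the paragraph above; the milestone values are the paragraph's, not exact industry data), the listed milestones correspond to a doubling time in the same ballpark as Moore's estimate:

```python
# How many doublings separate Moore's ~500 transistors in 1965 from the
# milestones listed above, and what doubling time does that imply?
import math

START_YEAR, START_TRANSISTORS = 1965, 500  # rough figures from the text

for year, transistors in [(1990, 1e6), (2000, 1e8), (2015, 1e10)]:
    doublings = math.log2(transistors / START_TRANSISTORS)
    implied_doubling_time = (year - START_YEAR) / doublings
    print(f"{transistors:.0e} transistors by {year}: {doublings:.1f} doublings, "
          f"one every ~{implied_doubling_time:.1f} years")

# The milestones work out to a doubling roughly every two years, close to Moore's
# original eighteen-month observation, which is why following the trend line off
# the "cliff" turned out to be the right call.
```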

None of this is to say that exponential trends are always right, just that they are sometimes right even when it seems they can’t possibly be. We can’t be sure that a computer using its own intelligence to discover new ways to increase its intelligence will enter a positive feedback loop and achieve superintelligence in seemingly impossibly short time scales. It’s just one more possibility, a worry to place alongside all the other worrying reasons to expect a moderate or hard takeoff.

Non-canonical answers

Why does there seem to have been an explosion of activity in AI in recent years?

In addition to the usual continuation of Moore's Law, GPUs have become more powerful and cheaper over the past decade, especially since around 2016. Many ideas in AI have been around for a long time, but the speed at which modern processors can compute and parallelize allows researchers to implement those ideas and gather more observational data. Improvements in AI have allowed many industries to start using the technology, which creates demand and brings more focus to AI research (as well as improving the availability of technology overall thanks to more efficient infrastructure). Data has also become more abundant and available; data is a key bottleneck for machine learning algorithms, and its sheer abundance is difficult for humans to deal with alone, so businesses often turn to AI to convert it into something human-parsable. These processes are also recursive to some degree, so the more AI improves, the more can be done to improve AI.

How could general intelligence be programmed into a machine?

There are many paths to artificial general intelligence (AGI). One path is to imitate the human brain by using neural nets or evolutionary algorithms to build dozens of separate components which can then be pieced together (Neural Networks and Natural Intelligence; A 'neural-gas' network learns topologies, pp. 159-174). Another path is to start with a formal model of perfect general intelligence and try to approximate it (pp. 199-223, pp. 227-287). A third path is to focus on developing a 'seed AI' that can recursively self-improve, such that it can learn to be intelligent on its own without needing to first achieve human-level general intelligence (link). Eurisko was a self-improving AI in a limited domain, but it was not able to achieve human-level general intelligence.

See also: