Incentives

Description

An Incentive is a motivating factor, such as monetary reward, the risk of legal sanctions, or social feedback. Many systems are best understood by looking at the incentives of the people with power over them.

Inadequate Equilibria covers many problems that arise when there are poor incentives.

Related pages: Game Theory, Moloch, Moral Mazes

Canonically answered

We could, but we won't. Each advance in capabilities that brings us closer to an intelligence explosion also brings vast profits to whoever develops it (smarter digital personal assistants like Siri, more automation of cognitive tasks, better recommendation algorithms for Facebook, etc.). The incentives are all wrong: any actor (nation or corporation) who stops will simply be overtaken by more reckless ones, and everyone knows this.

Non-canonical answers

Consider Codex or GPT-3.

Making a narrow AI is costly and time-consuming, and it uses resources you are not spending elsewhere. By making a more general intelligence, you get more leverage and can reuse what you have built. There is another incentive: making an AI narrow means training it on a specific dataset and building in a lot of behaviour by hand. Codex, for example, is currently trained mostly on Python, but a natural next step is to want it to be able to code in any language.

Of course, there are some conditions for this to apply. For one, the architecture needs to scale up fairly easily, which does seem to be the case, judging by how throwing more computational power at GPT keeps yielding better results. It also assumes that you do not lose too much capability by making the training broader.
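
As a rough illustration of that scaling claim, here is a minimal sketch of the compute power law reported in Kaplan et al. (2020), "Scaling Laws for Neural Language Models". The exponent and constant below are that paper's approximate fits, quoted as illustration rather than as authoritative numbers:

```python
# Sketch of the compute scaling law L(C) = (C_c / C)**alpha from
# Kaplan et al. (2020). Constants are the paper's approximate fits
# (alpha_C ~ 0.050, C_c ~ 3.1e8 PF-days) and are illustrative only.

ALPHA_C = 0.050  # fitted exponent for compute
C_C = 3.1e8      # fitted constant, in petaflop/s-days of training compute

def predicted_loss(compute_pf_days: float) -> float:
    """Predicted language-model test loss (nats/token) at a given compute budget."""
    return (C_C / compute_pf_days) ** ALPHA_C

# Each doubling of compute multiplies the loss by 2**-0.050, a steady
# ~3.4% improvement, which is why "just add compute" keeps looking attractive.
for c in (1e0, 1e2, 1e4):
    print(f"{c:8.0e} PF-days -> predicted loss ~ {predicted_loss(c):.2f}")
```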

Ultimately, however, it does not really matter whether those gains actually exist; what matters is whether people perceive that they do. Plenty of people seem to expect that getting to AGI first would hand them a very advantageous position in the market.

That is probably true: there is no way to interact with GPT other than through OpenAI's API, and OpenAI sets whatever pricing it wants. The better their current AI is, the faster they can improve the next one, so even a short lead in achieving AGI could turn into a significant advantage over competitors.
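
To make that gatekeeping concrete, here is a minimal sketch of what GPT-3 access looked like through the original openai Python client (the pre-1.0 API of the GPT-3/Codex era); the engine name and placeholder key are illustrative. The point is simply that the model only exists behind an endpoint whose availability and pricing OpenAI controls:

```python
# Minimal sketch: GPT-3 is reachable only through OpenAI's hosted API.
# Uses the pre-1.0 `openai` Python client of the GPT-3/Codex era.
import openai

openai.api_key = "sk-..."  # issued (and revocable or repriceable) by OpenAI

response = openai.Completion.create(
    engine="davinci",  # you choose only from the menu OpenAI exposes
    prompt="The main incentive to build general AI is",
    max_tokens=40,
)
print(response["choices"][0]["text"])
```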

But even if it is not true, the fact that they expect to gain that advantage means they will try to attain it, and we should take the corresponding safety precautions, whatever those turn out to be.