Why might people try to build AGI rather than stronger and stronger narrow AIs?



Non-Canonical Answers

Consider Codex or GPT-3.

Making a narrow AI is costly and time-consuming, and those are resources you are not spending elsewhere. By building a more general intelligence, you get more leverage and can reuse what you have made. There is another incentive: making an AI narrow means training it on a specific dataset and building in a lot of behaviour by hand. Codex, for example, is at the moment mostly trained on Python, but a natural next step would be wanting it to be able to code in any language.

Of course, some conditions need to hold for that to apply. For one, it would need to be fairly easy to scale the system up, which, given how throwing more computational power at GPT keeps producing better results, does seem to be the case. It also assumes that you do not lose too much capability by making the training broader.

Ultimately, however, it does not really matter whether those gains actually exist, only whether people perceive that they do. Many people seem to expect that whoever builds AGI will hold a very advantageous position in the market.

That is probably true: there is no way to interact with GPT other than through OpenAI's API, and they set whatever pricing they want. The better their current AI is, the faster they can improve it, so even a short lead in achieving AGI could turn into a significant advantage over competitors.

But even if it is not true, the fact that these organizations expect to gain that advantage means they will try to attain it, and we should take the corresponding safety precautions, whatever those turn out to be.

Question Info
Asked by: Severin
Origin: Wiki
Date: 2021-08-06

