|Alignment Forum Tag|
Automation is "the use of largely automatic equipment in a system of manufacturing or other production process" (New Oxford American Dictionary). This includes, but is not limited to, most of the processes involved in car manufacturing, and many of the ways that computer programs are applied.
This depends on how we program it. Automated systems certainly can be autonomous: even now we have autonomous vehicles, flight control systems, and many other examples.
Even though it's possible to build such systems, it may be better if they actively ask humans for supervision, for example in cases where they are uncertain what to do.
Consider Codex or GPT-3.
Making a narrow AI is costly and time-consuming, and those are resources you're not spending elsewhere. By making a more general intelligence, you gain leverage and can reuse what you've built. There is another incentive: making an AI narrow means training it on a specific dataset and building in a lot of behaviour. Codex, for instance, is currently trained mostly on Python, but a natural development would be wanting it to code in any language.
Of course, some conditions have to hold for that to apply. For one, the system would need to be fairly easy to scale up structurally, and the way that throwing more computational power at GPT leads to better results suggests this is the case. It also assumes that you do not lose too much capability by making the training broader.
Ultimately, however, what matters is not whether those gains really exist, but whether people perceive that they do. Many people seem to expect that achieving AGI would give them a very advantageous position in the market.
That is probably true: the only way to interact with GPT is through OpenAI's API, and they set whatever pricing they want. The better their current AI is, the faster they can improve it, so even a short head start in achieving AGI could translate into a significant lead over competitors.
But even if it is not true, the fact that they expect to gain that advantage means they will try to attain it, and we should take the corresponding safety precautions, whatever those turn out to be.
Will superintelligence make a large part of humanity unemployable?
Some economists say that human wants are infinite, so there will always be new and currently unimaginable kinds of jobs for people to do.
Others say this won't be true if AGI can do _anything_ human minds can do.