Answer to Why don't we just not build AGI if it's so dangerous?

From Stampy's Wiki

It would certainly be very unwise to purposefully create an artificial general intelligence now, before we have found a way to be certain it will act purely in our interests. But "general intelligence" is a description of a system's capabilities, and a vague one at that; we don't know what it takes to build such a system. This leaves open the worrying possibility that our existing, narrow AI systems might need only minor tweaks, or even just more computing power, to achieve general intelligence.

After all, the pace of research in the field suggests that there is still a lot of low-hanging fruit to pick, and that research takes place in a landscape of strong competitive pressure to build the most capable systems possible. "Just" not building an AGI would mean ensuring that every organization in the world with access to large amounts of computing hardware never builds one, whether by accident or in the mistaken belief that it has solved the alignment problem, forever. It is far safer to also work on solving the alignment problem.

Stamps: plex


Canonical Answer Info
Original by: SlimeBunnyBat (edits by plex)
