Jeremyg's Answer to What milestones are there between us and AGI?

From Stampy's Wiki

Working out which milestone tasks we expect to be achieved before AGI is reached can be difficult. Some tasks, like "continuous learning", intuitively seem like they will need to be solved before someone builds AGI. Continuous learning means learning bit by bit, as more data arrives. Current ML systems usually don't do this; instead, they learn everything at once from a big dataset. Because humans learn continuously, it seems like this ability might be required for AGI. However, reasoning like this calls for caution, because the first generally capable artificial intelligence may work quite differently from a human. The first AGI might be designed to avoid needing continuous learning, for example by undergoing a full retraining process every day. This could still allow it to match humans at almost every task without the "continuous learning" problem ever being solved.
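The batch-versus-continuous distinction can be made concrete with a toy sketch. This is purely illustrative (a running mean stands in for a real learned model; no actual ML system trains this way), but it shows the structural difference: the batch learner needs the whole dataset up front, while the online learner updates its estimate one example at a time and could keep absorbing new data indefinitely.

```python
# Toy contrast: batch learning vs. continuous (online) learning.
# A running mean stands in for a "model"; this is an illustration,
# not any real system's training loop.

def batch_fit(data):
    # Batch learning: see the whole dataset at once, fit in one pass.
    return sum(data) / len(data)

class OnlineMean:
    # Continuous learning: update the estimate one example at a time,
    # never holding the full dataset in memory.
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        # Incremental update: nudge the estimate toward the new example.
        self.mean += (x - self.mean) / self.n
        return self.mean

data = [2.0, 4.0, 6.0, 8.0]
model = OnlineMean()
for x in data:
    model.update(x)

# Both approaches reach the same estimate on a fixed dataset,
# but only the online learner can keep updating as new data arrives.
print(batch_fit(data), model.mean)
```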

Because of arguments like the above, it's not always clear whether a given task is "required" for AGI.

Some potential big milestone tasks might be:

  • ARC challenge (tests the ability to generate the "simplest explanation" for patterns)
  • Human-level sample efficiency at various tasks (EfficientZero already achieves this on Atari games)


This Metaculus question lists four very specific milestones that it considers to be requirements for "weak AGI".