We don't yet know which AI architectures are safe; learning more about this is one of the goals of FLI's grants program. AI researchers are generally responsible people who want their work to benefit humanity. If certain AI designs turn out to be unsafe, researchers will want to know so they can develop safer alternatives.
We don't know yet. Opinion has shifted with the ebb and flow of new research, but the current trend points toward scaling being sufficient for AGI.
External opinions on this subject:
- Gwern on the scaling hypothesis
- Daniel Kokotajlo on what we could do with +12 OOMs of compute
- Rohin Shah on the likelihood of getting to AGI with current techniques
- Rich Sutton's "The Bitter Lesson" on why more computation beats leveraging existing human knowledge
- Gary Marcus on why LLMs are limited and why scaling will not help
- A collection of evidence against current methods