I’d like to get deeper into the AI alignment literature. Where should I look?
The AGI Safety Fundamentals course is arguably the best way to get up to speed on alignment. You can sign up to go through it alongside a cohort of other students with mentorship, or read the materials independently.
Other great ways to explore include:
- The AI Safety Papers database is a search and browsing interface for most of the transformative AI literature.
- Reading posts on the Alignment Forum can be valuable (see their curated posts and tags).
- Taking a deep dive into Yudkowsky's models of the challenges of aligning AI, via the Arbital Alignment pages.
- Signing up for the Alignment Newsletter for an overview of current developments, and reading through some of the archives (or listening to the podcast).
- Reading some of the introductory books.
- Exploring further resources on AI Safety Support's list of links, Nonlinear's list of technical courses, reading lists, and curricula, Stampy's canonical answers list, and Vika's resources list.