Questions by Anonymous which have been answered
AI Safety Support offers free calls to advise people interested in a career in AI Safety, so that's a great place to start. We're working on creating a bunch of detailed information for Stampy to use, but in the meantime check out these resources:
- EA Cambridge AGI Safety Fundamentals curriculum
- 80,000 Hours AI safety syllabus
- Adam Gleave's Careers in Beneficial AI Research document
- Rohin Shah's FAQ on career advice for AI alignment researchers
- AI Safety Support has lots of other good resources, such as their links page, slack, newsletter, and events calendar.
- Safety-aligned research training programs (under construction).
If you like interactive FAQs, you're in the right place already! Joking aside, some great entry points are the AI alignment playlist on YouTube; "The Road to Superintelligence" and "Our Immortality or Extinction" posts on Wait But Why for a fun, accessible introduction; and Vox's "The case for taking AI seriously as a threat to humanity" as a high-quality mainstream explainer piece.
The free online Cambridge course on AGI Safety Fundamentals provides a strong grounding in much of the field, along with a cohort and mentor to learn with. There's even an Anki deck for people who like spaced repetition!
This post on Levelling Up in AI Safety Research Engineering collects many resources, with a list of other guides at the bottom. There is also a Twitter thread with some programs for general upskilling and others for safety-specific learning.