How can I work on public AI safety outreach?
AI alignment is a research field about how to prevent risks from advanced artificial intelligence, and public outreach is one way to contribute to it.
Rob Miles applies a standard for public outreach where, if he puts out a video, it has to be the best video on its particular topic (it can be the best in some specific way, or for some specific audience). A somewhat lower standard works for things like podcasts, which are worth doing as long as they're quite good, and a lower standard still for things like talks, where being the best doesn't matter.
If you’re doing this kind of work, it’s important to have a strong understanding of the issues. Keep reading relevant materials and be familiar with the questions that people ask most often.
To find out if you’re good at it, and to get better, start with low-profile projects and ask trusted sources for feedback on how you could improve. If it seems to be going well, you can repeat this with increasingly high-profile projects over time.
One area that could have a significant impact is creating AI safety materials in non-English languages such as Chinese or Russian.