How can I build a career in AI alignment?
We have articles with specific advice on building a career in AI alignment, depending on which part of the problem you want to help solve.
To work on the theory of how to align an AI:
- How can I do conceptual, mathematical, or philosophical work on AI alignment?
- How can I use a background in the social sciences to help with AI alignment?
To use your engineering skills to advance AI alignment:
To work on coordinating to avoid AI existential risk:
To work on advocacy and outreach for AI safety ideas:
To work on supporting AI safety research and other efforts:
- How can I do organizational or operations work around AI alignment?
- I’d like to be a lead researcher / mentor / advisor. How can I do this?
- How can I work on assessing AI alignment projects and distributing grants?
To prepare or orient yourself to work on AI safety:
- How do I know whether I'm a good fit for work on AI safety?
- What subjects should I study at university to prepare myself for alignment research?
- What should I do with my idea for helping with AI alignment?
- I'd like to help with AI safety, but after reading the other articles, I'm still not clear on how. How can I figure this out?