How and why should I form my own views about AI safety?
Forming an “inside view” of AI safety
A key part of forming inside views is taking the time to think carefully about how the strategic landscape will unfold and how different interventions might affect it. If you want to steer the future reliably, it’s worth investing in well-grounded models: they improve both your own decision-making and the AI safety movement’s collective epistemics.
In AI safety, as in most areas, it helps to read other people’s ideas and compare your predictions and frames against alternatives to inform your own perspective. Reading through different researchers’ threat models and success stories can be particularly valuable. Some other resources to explore:
- The Machine Intelligence Research Institute’s “Why AI safety?” page links to relevant research.
- The Effective Altruism Forum post “How I formed my own views on AI safety” may be helpful.
- This article from Vox is also relevant.
- The Robert Miles YouTube video below can be a good place to start.