What are astronomical suffering risks (s-risks)?
(Astronomical) suffering risks, also known as s-risks, are risks of the creation of intense suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.
S-risks are an example of existential risk (also known as x-risks) according to Nick Bostrom.
Within the space of x-risks, we can distinguish those that involve immense suffering, those that involve human extinction, those that involve both, and those that involve neither. For example:
| | extinction risk | non-extinction risk |
|---|---|---|
| **suffering risk** | Misaligned AGI wipes out humans, simulates many suffering alien civilizations. | Misaligned AGI tiles the universe with experiences of severe suffering. |
| **non-suffering risk** | Misaligned AGI wipes out humans. | Misaligned AGI keeps humans as "pets," limiting growth but not causing immense suffering. |
A related concept is hyperexistential risk, the risk of "fates worse than death" on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se. But arguably all s-risks are hyperexistential, since "tiling the universe with experiences of severe suffering" would likely be worse than death.
There are two EA organizations with s-risk prevention research as their primary focus: the Center on Long-Term Risk (CLR) and the Center for Reducing Suffering. Much of CLR's work is on suffering-focused AI safety and crucial considerations. To a much lesser extent, the Machine Intelligence Research Institute and the Future of Humanity Institute have also investigated strategies to prevent s-risks.
Another approach to reducing s-risk is to "expand the moral circle" and raise concern for suffering, so that future (post)human civilizations and AI are less likely to instrumentally cause suffering to non-human minds such as animals or digital minds. The Sentience Institute works on this value-spreading problem.
External links
- Reducing Risks of Astronomical Suffering: A Neglected Global Priority (FRI)
- Introductory talk on s-risks (FRI)
- Risks of Astronomical Future Suffering (FRI)
- Suffering-focused AI safety: Why "fail-safe" measures might be our top intervention (PDF) (FRI)
- Artificial Intelligence and Its Implications for Future Suffering (FRI)
- Expanding our moral circle to reduce suffering in the far future (Sentience Politics)
- The Importance of the Far Future (Sentience Politics)