How can I use a background in the social sciences to help with AI alignment?

In the post AI alignment as “navigating the space of intelligent behaviour”, Nora Ammann describes “three epistemic strategies for making progress on the alignment problem: 1) tinkering, 2) idealization and 3) intelligence-in-the-wild”. Research in the social sciences, biology, philosophy, and other fields can inform alignment efforts by shedding light on “intelligence-in-the-wild”. (As the examples below illustrate, such research often still involves mathematics as well.)

Some examples of approaches taken from the post:

Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS) is a group that runs a summer research fellowship and offers book and video recommendations.

Other such research agendas exist. Consider them examples of what alignment-relevant research with varying amounts of math and computer science can look like:

See also the EA Forum post “Social scientists interested in AI safety should consider doing direct technical AI safety research, (possibly meta-research), or governance, support roles, or community building instead”.


