What is DeepMind's safety team working on?

DeepMind has both a machine learning safety team focused on near-term risks and an alignment team working on risks from artificial general intelligence. The alignment team pursues a number of different research agendas.

Their work includes:

See Rohin Shah's comment for more on the research they are doing, including a description of some work that is currently unpublished.