Take AISafety.info’s 3 minute survey to help inform our strategy and priorities

Take the survey
Alignment research

Current techniques
Benchmarks and evals
Prosaic alignment
Interpretability
Agent foundations
Other alignment approaches
Organizations and agendas
Researchers

What is the AI control research agenda?

The AI control agenda1

is a safety research program, championed by Redwood Research, that aims to ensure that advanced AI systems remain under human control even as such systems become more capable. In contrast to safety proposals that attempt to align AI, it concentrates on ensuring an AI can’t cause harm even if it’s misaligned and scheming.

AI control involves evaluating whether an AI is deceptive based on adversarial training. For example, capable AI models could be monitored for misbehavior by models that are weaker but that we trust. Researchers could test whether these weaker models work by exposing them to synthetic examples of misbehavior and seeing whether they catch them.

Such evaluations could be implemented and tested with today’s AIs, but are intended to scale with AI capabilities up to human-level.2

These techniques are meant to work even on AIs that are “black boxes” (in the sense that we have no idea how they work on the inside) and that are actively trying to subvert the evaluation process.

Controlling AI is not meant to replace aligning AI, but to work as a second line of defense. Proponents believe that if we can use AI control to ensure that a human-level AI cannot cause harm, we could then use it to reduce risks from even smarter AIs — for instance, by having it increase cybersecurity or by having it help us align these smarter AIs.3

Some critics argue that using weaker AI to align stronger AI is a hopeless endeavor,4

which would mean that this agenda fails to address the most dangerous part of the problem. Others point out that this agenda might not work if AI develops dangerous manipulation capabilities before it becomes broadly human-level.


  1. The term “AI control” has sometimes been used more generally to refer to the problem of making advanced AI behave well as a whole (“the control problem”), or to any techniques that involve restricting its capabilities, such as putting it in a box. This article focuses on a particular research agenda proposed in recent years. “AI control” is also not to be confused with Control AI, an AI safety advocacy group. ↩︎

  2. Proponents acknowledge that these evaluations might no longer work once AI achieves broadly superhuman capabilities. ↩︎

  3. The strategy of using an AI to help with the alignment/control problem of even more powerful AI is, oddly enough, unnamed. However, there have been quite a few alignment proposals, including the ones pursued by the leading AI labs, along these lines. For instance, Iterated Distillation and Amplification, Superalignment, AI safety via debate, etc. ↩︎

  4. “Like asking Godzilla to prevent Mega-Godzilla from terrorizing Japan” —John Wentworth in Godzilla strategies. ↩︎

Keep Reading

Continue with the next entry in "Alignment research"
What is Sam Bowman researching?
Next
Or jump to a related question


AISafety.info

We’re a global team of specialists and volunteers from various backgrounds who want to ensure that the effects of future AI are beneficial rather than catastrophic.

© AISafety.info, 2022—2025

Aisafety.info is an Ashgro Inc Project. Ashgro Inc (EIN: 88-4232889) is a 501(c)(3) Public Charity incorporated in Delaware.