What are the limitations of compute governance?

Although compute governance has some features that make it more promising than other approaches to AI governance, Sastry et al. list some ways in which it may cause harm:

  1. Monitoring compute can create risks of private personal data being leaked.

  2. Similarly, sensitive data might end up being shared in ways that aid the commercial or geopolitical rivals of the actors being monitored.

  3. Restricting access to compute makes it harder to use compute for economically valuable activities. Preventing the creation and use of AI systems forgoes not only their risks but also their benefits.

  4. Centralizing control over compute creates technical security risks because control mechanisms and stores of information could be compromised. On a political level, it also makes it possible for governments to use these mechanisms toward bad ends.

The authors also list some ways compute governance may end up ineffective:

  1. Although training frontier models currently requires a lot of hardware, and the frontier will probably shift toward larger and larger training runs, progress in hardware technology and algorithms will increasingly allow smaller training runs to produce powerful models as well. New technology may also allow training runs to be more decentralized. If training runs don’t take place in large data centers, monitoring and regulating them will be much harder.

  2. There may be ways of using much less computing power that still lead to dangerous outcomes. These include creating specialized models for protein folding or other narrow applications; fine-tuning or building scaffolding around existing models; and running existing models, which requires much less compute than training them.

  3. Like the controls placed on nuclear and other hazardous materials, compute governance measures can be circumvented in various ways. In the longer term, stricter measures might create incentives to build an entirely new hardware supply chain. They could also increase pressure to shift research toward more powerful algorithms.

The authors suggest (in subsection 5.B) some guardrails to mitigate the harms that compute governance could cause, such as only applying such governance to large-scale compute for AI and implementing privacy-preserving technologies.



AISafety.info