What about AI-enabled surveillance?
AI-enabled surveillance and control might be a dangerous form of AI misuse.
Some AI ethics-focused groups and thinkers have raised concerns about contemporary AI applications, such as facial recognition and predictive policing, being used to exert social control, especially over marginalized communities. These capabilities are expected to grow, and civil liberties organizations such as the EFF have reported on such uses of AI in both democracies and autocratic regimes.
Security expert Bruce Schneier has argued that AI is enabling a shift from general surveillance (e.g., pervasive use of CCTV) to personalized surveillance of any citizen. For instance, AI can quickly and cheaply search all phone calls and CCTV footage in a city to build a detailed profile of one individual, which was previously only possible through laborious human effort.1 Furthermore, traditional spying could only gather information after the target was identified as a person of interest, whereas the combination of AI and mass recording allows a target’s past behavior to be inspected retroactively.
In the future, more powerful AI surveillance, along with other AI-enabled technologies like autonomous weapons, might allow authoritarian or totalitarian states to make dissent virtually impossible, potentially enabling the rise of a stable global totalitarian state.
As of early 2024, access to the most advanced models is mediated through APIs controlled by Western corporations, which allows these corporations to restrict uses of their models that they do not condone. They are incentivized not to collaborate with totalitarian governments, or to authorize use of their models by projects perceived as authoritarian, lest they face public backlash. As capabilities increase and powerful models become more accessible to smaller actors2, this state of affairs might change.
A number of prominent researchers who mainly focus on risks from misalignment (rather than misuse) nevertheless view AI-enabled surveillance as one of the most salient risks that could arise from near-term AI:
- Daniel Kokotajlo has speculated that LLMs could be used as powerful persuasion tools that disproportionately aid authoritarian regimes.
- Nick Bostrom has discussed the potential incentives for widespread surveillance systems augmented by AI, based on how states might respond to concerns about living in an extremely risky and “vulnerable” world.
- Buck Shlegeris claims that risks of AI-enabled totalitarianism are “at least 10% as important as the risks [he] works on as an AI alignment researcher”.
- Richard Ngo has claimed that outsourcing the task of maintaining control (e.g., through surveillance) to AI makes it easier to consolidate power, which in the limit leads to authoritarianism.
While measures to mitigate these kinds of misuse risks are important, they are not sufficient; even well-intentioned actors could accidentally pose an existential risk when deploying AGI because of misalignment.
1. Schneier calls this a shift from “mass surveillance” to “mass spying”, although other authors do not separate these two categories.
2. This could happen, for instance, if powerful models are open-sourced.