Persuasion
Alignment Forum Tag

Description

AI which is highly capable of persuading people might have significant effects on humanity.

Canonically answered

Is large-scale automated AI persuasion and propaganda a serious concern?

Language models can be used to produce propaganda by acting as bots that interact with users on social media. This can be done to push a political agenda or to make fringe views appear more popular than they are.

I'm envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won't be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.

-- Wei Dai, quoted in Persuasion Tools: AI takeover without AGI or agency?

As of 2022, this is not within the reach of current models. However, on the current trajectory, AI may within a few years be able to write articles and produce other media for propagandistic purposes that outperform human-made equivalents. These could be precisely tailored to individuals, using things like social media feeds and personal digital data.

Additionally, recommender systems on content platforms like YouTube, Twitter, and Facebook use machine learning, and the content they recommend can influence the opinions of billions of people. Some research has looked, for example, at the tendency of platforms to promote extremist political views and thereby help radicalize their user base.
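To make the mechanism concrete, here is a minimal sketch (in Python, with entirely hypothetical names and numbers, not any platform's actual system) of engagement-driven ranking: if candidate items are scored purely by a model's predicted engagement, whatever users react to most strongly gets amplified, regardless of its accuracy or extremity.

```python
# A minimal sketch of engagement-driven ranking. All names and numbers are
# hypothetical; this illustrates the optimization pressure, not any
# platform's actual system.

from dataclasses import dataclass
from typing import List


@dataclass
class Item:
    title: str
    predicted_engagement: float  # e.g. output of a learned click/watch-time model


def recommend(items: List[Item], k: int = 3) -> List[Item]:
    """Rank candidates purely by predicted engagement, highest first."""
    return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)[:k]


feed = [
    Item("Measured policy analysis", 0.12),
    Item("Outrage-bait conspiracy clip", 0.47),  # engaging, not accurate
    Item("Cute cat compilation", 0.31),
]

for item in recommend(feed, k=2):
    print(item.title)  # the most engaging items win, whatever their content
```

Real recommender systems are far more complex, but the underlying pressure, selecting for whatever maximizes engagement, is the same optimization loop.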

In the long term, misaligned AI might use its persuasion abilities to gain influence and take control over the future. This could look like convincing its operators to let it out of a box or to give it resources, or creating political chaos in order to disable mechanisms that would prevent takeover, as in this story.

See Risks from AI persuasion for a deep dive into the distinct risks from AI persuasion.

How might things go wrong with AI even without an agentic superintelligence?

Failures can happen even with narrow, non-agentic systems, mostly when humans fail to anticipate safety-relevant decisions that are made too quickly for anyone to react, much like in the 2010 flash crash.

A helpful metaphor draws on self-driving cars: by relying more and more on an automated process to make decisions, people become worse drivers, since they no longer train themselves to react to the unexpected. Then the unexpected happens, the software reacts in an unsafe way, and the human is too slow to regain control.

This generalizes to broader tasks. A human using a powerful system to make better decisions (say, as the CEO of a company) might not understand those decisions very well, get trapped in an equilibrium without realizing it, and essentially lose control over the entire process.

More detailed examples in this vein are described by Paul Christiano in What failure looks like.

Another source of failure is AI-mediated stable totalitarianism. The limiting factor on pervasive surveillance, policing, and armed force today is manpower; drones and other automated tools decrease the number of personnel needed to ensure security and extract resources.

As capabilities improve, political dissent could become impossible, and checks and balances could break down, since only a minimal number of key actors would be needed to stay in power.

Non-canonical answers

How can I convince others and present the arguments well?

The page Things Skeptics Commonly Say, and links to refutations goes over most of the common objections, along with some of the ways in which each is not fatal to the AI x-risk arguments.

Vael Gates's project links to lots of example transcripts of persuading senior AI capabilities researchers.

Should I engage in political or collective action like signing petitions or sending letters to politicians?

Perhaps. There is a chance that directly lobbying politicians could help, but there's also a chance that such actions end up being net-negative. It would be great if we could slow down AI, but doing so might simply mean that a nation less concerned about safety produces AI first. We could ask politicians to pass regulations or standards related to AGI, but passing ineffective regulation might interfere with passing more effective regulation later down the track, as people may consider the issue dealt with. Or the requirements of complying with bureaucracy might prove to be a distraction from building safe AI.

If you are concerned about this issue, you should probably try to learn as much about it as possible, and also spend a lot of time brainstorming downside risks and seeing what risks other people have identified.