Surveys
From Stampy's Wiki

Description

Surveys and polls of users of LessWrong and related communities, results, and analysis of the resulting data.

Canonically answered

How doomed is humanity?

Expert opinions are all over the place, according to this 2021 survey. Someone has also collected a database of existential risk estimates.

On the pessimistic end you find people like Eliezer Yudkowsky, who said: "I consider the present gameboard to look incredibly grim, and I don't actually see a way out through hard work alone. We can hope there's a miracle that violates some aspect of my background model, and we can try to prepare for that unknown miracle; preparing for an unknown miracle probably looks like "Trying to die with more dignity on the mainline" (because if you can die with more dignity on the mainline, you are better positioned to take advantage of a miracle if it occurs)."

At the optimistic end, you have people like Ben Garfinkel, who puts the probability of AI causing an existential catastrophe in the next century at more like 0.1-1%. Most people lie somewhere in between.

Non-canonical answers

How long will it be until superintelligent AI is created?

Nobody knows for sure when we will have ASI, or whether it is even possible. Predictions of AI timelines are notoriously variable, but recent surveys about the arrival of human-level AGI have median dates between 2040 and 2050, although the median for (optimistic) AGI researchers and futurists is in the early 2030s (source).

What will happen if and when we are able to build human-level AGI is a point of major contention among experts. One survey asked (mostly) experts to estimate the likelihood that a human-level AI would improve until it greatly surpassed all humans in most professions, within either 2 or 30 years. Median answers were 10% for "within 2 years" and 75% for "within 30 years".

We know little about the limits of intelligence and whether increasing it will bring accelerating or diminishing returns. Of particular interest to the control problem is the fast or hard takeoff scenario. It has been argued that the increase from a relatively harmless level of intelligence to a dangerously superhuman level might happen in a matter of seconds, minutes, or hours: too fast for human controllers to stop it before they know what is happening. Moving from human to superhuman level might be as simple as adding computational resources, and depending on the implementation, the AI might be able to quickly absorb large amounts of knowledge from the internet.

Once we have an AI that is better at AGI design than the team that made it, the system could improve itself or create the next generation of even more intelligent AIs, which could then self-improve further or create an even more intelligent generation, and so on. If each generation can improve on itself by a fixed or increasing percentage per unit of time, we would see an exponential increase in intelligence: an intelligence explosion.
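As a toy illustration of this compounding dynamic, here is a minimal Python sketch of the fixed-rate case. The 10% per-generation improvement rate, the generation count, and the function name are illustrative assumptions, not figures from any survey:

    # Toy model of recursive self-improvement (illustrative assumptions only).
    # Each generation improves capability by a fixed fraction of its current
    # level -- the simplest case described above. Real systems need not
    # behave this way.
    def self_improvement_curve(initial=1.0, rate=0.10, generations=50):
        """Capability level of each successive self-improving generation."""
        levels = [initial]
        for _ in range(generations):
            levels.append(levels[-1] * (1 + rate))  # compound growth
        return levels

    levels = self_improvement_curve()
    # Closed form: initial * (1 + rate)**n, i.e. exponential in n.
    # At 10% per generation, capability is ~117x the start after 50 steps.
    print(f"Generation 50: {levels[-1]:.0f}x starting capability")

With a fixed rate this is ordinary compound growth; if the rate itself rises with each generation, growth becomes super-exponential, which is the runaway dynamic the term "intelligence explosion" points to.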

Are expert surveys on AI safety available?

The organisation AI Impacts did a survey of AI experts in 2016, and another in 2022.