Experts' Predictions about the Future of AI

From Stampy's Wiki

Channel: Robert Miles
Published: 2018-03-31T12:12:37Z
Views: 63797
Likes: 3391
45 questions on this video!
Question | YouTube Likes | Asked on Discord? | Answered By
Bacon CheeseCake's question on Experts on the Future of AI | 288 | true |
Twirlip Of The Mists's question on Experts on the Future of AI | 8 | false |
J. Stronsky's question on Experts on the Future of AI | 1 | false |
Scientious's question on Experts on the Future of AI | 1 | false |
Carl Lewis's question on Experts on the Future of AI | 1 | false |
Dennis Haupt's question on Experts on the Future of AI | 0 | false |
Joshuawhere's question on Experts on the Future of AI | 0 | false |
Gerry o sullivan's question on Experts on the Future of AI | 0 | false |
Nulono's question on Experts on the Future of AI | 0 | false |
Austin Glugla's question on Experts on the Future of AI | 0 | false |
David Brosnahan's question on Experts on the Future of AI | 0 | false |
Bjørn Gulliksen's question on Experts on the Future of AI | 0 | false |
Plutonion2's question on Experts on the Future of AI | 0 | false |
Maciek300's question on Experts on the Future of AI | 0 | false |
sgatea74's question on Experts on the Future of AI | 0 | true |
Marc Right's question on Experts on the Future of AI | 0 | true |
Paul skirton's question on Experts on the Future of AI | 0 | true |
Alex Martin's question on Experts on the Future of AI | 0 | false |
Niemand Wirklich's question on Experts on the Future of AI | 0 | true |
Shikogo's question on Experts on the Future of AI | 0 | true |
RedPlayerOne's question on Experts on the Future of AI | 0 | true |
peterbrehmj's question on Experts on the Future of AI | 0 | true |
Sara L's question on Experts on the Future of AI | 0 | true | Robertskmiles's Answer to Experts on the Future of AI on 2020-11-09T06:23:59 by Sara L
Newmaidumosa's question on Experts on the Future of AI | 0 | true |
ExaltedDuck's question on Experts on the Future of AI | 0 | false |
Bloginton Blakley's question on Experts on the Future of AI | 0 | true |
Syncrossus BAR's question on Experts on the Future of AI | 0 | true |
Able Reason's question on Experts on the Future of AI | 0 | true |
Julian Danzer's question on Experts on the Future of AI | 0 | true |
Danielle Wilson's question on Experts on the Future of AI | 0 | false |
Seth Moore's question on Experts on the Future of AI | 0 | false |
Cucumber Fan's question on Experts on the Future of AI | 0 | false |
Rahn127's question on Experts on the Future of AI | 0 | false |
TimeMachine Bikes's question on Experts on the Future of AI | 0 | false |
Rich While-Cooper's question on Experts on the Future of AI | 0 | false |
Andew Tarjanyi's question on Experts on the Future of AI | 0 | false |
Friendly Raid's question on Experts on the Future of AI | 0 | false |
Wiktor Migaszewski's question on Experts on the Future of AI | 0 | false |
Florian Matel's question on Experts on the Future of AI | 0 | false |
Dojan5's question on Experts on the Future of AI | 0 | false |
The Great Steve's question on Experts on the Future of AI | 0 | false |
Jonas Thörnvall's question on Experts on the Future of AI | 0 | false |
Jupiter Belic's question on Experts on the Future of AI | 0 | false |
Daniel Houck's question on Experts on the Future of AI | 0 | true |
Klaus Gartenstiel's question on Experts on the Future of AI | 0 | false |

Description

When will AI systems surpass human performance? I don't know, do you? No you don't. Let's see what 352 top AI researchers think.

[CORRECTION: I mistakenly stated that the survey was before AlphaGo beat Lee Sedol. The 12 year prediction was for AI to outperform humans *after having only played as many games as a human plays in their lifetime*]


The paper: https://arxiv.org/pdf/1705.08807.pdf
The blogpost which has lots of nice data visualisations: https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/

The Instrumental Convergence video: https://www.youtube.com/watch?v=ZeecOKBus3Q
The Negative Side Effects video: https://www.youtube.com/watch?v=lqJUIqZNzP8

With thanks to my excellent Patrons at https://www.patreon.com/robertskmiles :

Jason Hise
Steef
Jason Strack
Chad Jones
Stefan Skiles
Jordan Medina
Manuel Weichselbaum
1RV34
Scott Worley
JJ Hepboin
Alex Flint
James McCuen
Richárd Nagyfi
Ville Ahlgren
Alec Johnson
Simon Strandgaard
Joshua Richardson
Jonatan R
Michael Greve
The Guru Of Vision
Fabrizio Pisani
Alexander Hartvig Nielsen
Volodymyr
David Tjäder
Paul Mason
Ben Scanlon
Julius Brash
Mike Bird
Tom O'Connor
Gunnar Guðvarðarson
Shevis Johnson
Erik de Bruijn
Robin Green
Alexei Vasilkov
Maksym Taran
Laura Olds
Jon Halliday
Robert Werner
Paul Hobbs
Jeroen De Dauw
Konsta
William Hendley
DGJono
robertvanduursen
Scott Stevens
Michael Ore
Dmitri Afanasjev
Brian Sandberg
Einar Ueland
Marcel Ward
Andrew Weir
Taylor Smith
Ben Archer
Scott McCarthy
Kabs Kabs
Phil
Tendayi Mawushe
Gabriel Behm
Anne Kohlbrenner
Jake Fish
Bjorn Nyblad
Jussi Männistö
Mr Fantastic
Matanya Loewenthal
Wr4thon
Dave Tapley
Archy de Berker
Kevin
Vincent Sanders
Marc Pauly
Andy Kobre
Brian Gillespie
Martin Wind
Peggy Youell
Poker Chen
Kees
Darko Sperac
Paul Moffat
Noel Kocheril
Jelle Langen
Lars Scholz

Transcript

Hi. There's a lot of disagreement about the future of AI, but there's also a lot of disagreement about what the experts think about the future of AI. I sometimes hear people saying that all of this concern about AI risk just comes from watching too much sci-fi, and that the actual AI researchers aren't worried about it at all. When it comes to timelines, some people will claim that the experts agree that AGI is hundreds of years away.

Prediction, as they say, is very difficult, especially about the future, and that's because we don't have data about it yet. But expert opinion about the future exists in the present, so we can do science on it: we can survey the experts and find the expert consensus. That's what this paper is trying to do. It's called "When Will AI Exceed Human Performance? Evidence from AI Experts".
These researchers, from the Future of Humanity Institute at the University of Oxford, the AI Impacts project, and Yale University, ran a survey. They asked every researcher who published at ICML or NIPS in 2015 (those two are pretty much the most prestigious AI conferences right now), so the survey reached 352 of the top AI researchers, and asked them all sorts of questions about the future of AI. The experts all agreed that they did not agree with each other, and Robert Aumann didn't even agree with that. There was a lot of variation in people's predictions, but that's to be expected, and the paper uses statistical methods to aggregate these opinions into something we can use.

For example, here's the graph showing when the respondents think we'll achieve high-level machine intelligence, which is defined as the point when unaided machines can accomplish every task better and more cheaply than human workers. That's roughly equivalent to what I mean when I say "superintelligence". The gray lines show how the graph would look with different randomly chosen subsets of the forecasts, and there's a lot of variation there, but the aggregate forecast, in red, shows that overall the experts think we'll pass a 50% chance of achieving high-level machine intelligence about 45 years from now. Well, that's from 2016, so more like 43 years from now. And they give a 10% chance of it happening within nine years, which is seven years now. So it's probably not too soon to be concerned about it.
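To make that aggregation concrete, here is a minimal sketch in Python. It is an illustration under assumptions, not the paper's exact procedure (the paper fits a distribution to each respondent's actual answers); the toy gamma parameters below are invented stand-ins for the survey data.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
years = np.linspace(0, 200, 400)  # years from the 2016 survey

# Toy stand-ins for the 352 individual forecasts: each expert's
# "years until HLMI" distribution is a gamma with random parameters.
# (Invented for illustration; the paper fits these to real answers.)
expert_cdfs = np.array([
    gamma.cdf(years, a=rng.uniform(1, 6), scale=rng.uniform(5, 40))
    for _ in range(352)
])

# The aggregate forecast (the red line): pointwise mean of the CDFs.
aggregate = expert_cdfs.mean(axis=0)

# The gray lines: the same aggregation over random subsets of the
# forecasts, showing how sensitive the curve is to who is sampled.
gray_curves = [
    expert_cdfs[rng.choice(352, size=100, replace=False)].mean(axis=0)
    for _ in range(20)
]

# Read off the headline numbers: where the aggregate crosses 10% and 50%.
print("10% year:", years[np.searchsorted(aggregate, 0.10)])
print("50% year:", years[np.searchsorted(aggregate, 0.50)])
```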
A quick side point about surveys, by the way. In a 2010 poll, 44% of Americans said that they supported homosexuals serving openly in the military. In the same poll, 58% of respondents said they supported gay men and lesbians serving openly in the military. Implicitly, fourteen percent of respondents supported gay men and lesbians, but did not support homosexuals. Something similar seems to be going on in this survey, because when the researchers were asked when they thought all occupations would be fully automated (defined as: for any occupation, machines could be built to carry out the task better and more cheaply than human workers), they gave their 50% estimate at a hundred and twenty-two years, compared to forty-five for high-level machine intelligence. These are very similar questions. From this we can conclude that AI experts are really uncertain about this, and that precise wording in surveys can have a surprisingly big effect on the results.

Figure 2 in the paper shows the median estimates for lots of different AI milestones. This is really interesting, because it gives a nice overview of how difficult AI researchers expect these different things to be. For example, human-level StarCraft play seems like it will take about as long as human-level laundry folding. Also interesting here is the game of Go. Remember, this is before AlphaGo: the AI experts expected Go to take about 12 years, and that's why AlphaGo was such a big deal; it was about eleven years ahead of people's expectations. But what milestone is at the top? What task do the AI researchers think will take the longest to achieve, longer even than high-level machine intelligence that's able to do all human tasks? That's right: it's AI research.
Anyway, on to questions of safety and risk. This section is for those who think that people like me should stop making a fuss about AI safety because the AI experts all agree that it's not a problem. First of all, the AI experts don't all agree about anything. But let's look at the questions. This one asks about the expected outcome of high-level machine intelligence. The researchers are fairly optimistic overall, giving on average a 25% chance for a good outcome and a 20% chance for an extremely good outcome, but they nonetheless gave a 10% chance for a bad outcome, and 5% for an outcome described as "extremely bad, for example human extinction". A 5% chance of human-extinction-level badness is a cause for concern.

Moving on, this question asks the experts to read Stuart Russell's argument for why highly advanced AI might pose a risk, which is very closely related to the arguments I've been making on YouTube. It says: the primary concern with highly advanced AI is not spooky emergent consciousness, but simply the ability to make high-quality decisions, where quality refers to the expected outcome utility of actions taken. Now we have a problem. One: the utility function may not be perfectly aligned with the values of the human race, which are at best very difficult to pin down. Two: any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources, not for their own sake but to succeed in its assigned task. A system that is optimizing a function of n variables, where the objective depends on a subset of size k < n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution may be highly undesirable. This is essentially the old story of the genie in the lamp, or the Sorcerer's Apprentice, or King Midas: you get exactly what you asked for, not what you want.
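That n-variables point can be demonstrated in a few lines. The following is a hypothetical toy of my own, not from the paper or the video: the objective rewards only hours worked and waste dumped (the subset of size k), says nothing about the river cleanliness we care about, and the optimizer duly drives the waste variable to its extreme.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy of Russell's point: the objective depends only on a
# subset of the variables. Output rises with hours worked and with waste
# dumped in the river; river cleanliness appears nowhere in the objective.
def negative_output(x):
    hours, waste = x
    return -(hours + 2.0 * waste)  # minimize the negative = maximize output

bounds = [(0, 12), (0, 10)]  # hours in [0, 12], waste in [0, 10]
result = minimize(negative_output, x0=[4.0, 0.1], bounds=bounds,
                  method="L-BFGS-B")

hours, waste = result.x
cleanliness = 1.0 - waste / 10.0  # the unconstrained thing we care about
print(f"hours={hours:.1f}  waste={waste:.1f}  cleanliness={cleanliness:.2f}")
# The optimizer sets `waste` to its extreme value (10.0), because nothing
# in the objective says we care about the river: cleanliness collapses to 0.
```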
So, do the AI experts agree with that? Well, 11% of them think no, it's not a real problem, and 19% think no, it's not an important problem, but the remainder, 70% of the AI experts, agree that this is at least a moderately important problem. And how much do the AI experts think that society should prioritize AI safety research? Well, 48% of them think we should prioritize it more than we currently are, and only 11% think we should prioritize it less.

So there we are: AI experts are very unclear about what the future holds, but they think catastrophic risks are possible and that this is an important problem, so we need to do more AI safety research.

I want to end the video by saying thank you so much to my excellent Patreon supporters, these people, and in this video I'm especially thanking Jason Hise, who's been a patron for a while now. We've had some quite interesting discussions over Patreon chat; it's been fun. So thank you, Jason, and thank you all for watching. I'll see you next time.

[Music]