AI Safety at EAGlobal2017 Conference

From Stampy's Wiki

Channel: Robert Miles
Published: 2017-11-16T19:21:00Z
Views: 15703
Likes: 1090

Description

I attended a charity conference to learn about AI Safety!

Correction: Allan Dafoe is funded by a grant from the Open Philanthropy Project, but does not work for them.

The conference's YouTube channel: https://www.youtube.com/channel/UCEfASxwPxzsHlG5Rf1-4K9w
The Website: https://www.eaglobal.org/events/ea-global-2017-uk/

Jobs at FHI: https://www.fhi.ox.ac.uk/vacancies/

My Concrete Problems in AI Safety series: https://www.youtube.com/watch?v=lqJUIqZNzP8&list=PLqL14ZxTTA4fEp5ltiNinNHdkPuLK4778

With thanks to my Patrons! (https://www.patreon.com/robertskmiles)
Steef
Sara Tjäder
Jason Strack
Chad Jones
Stefan Skiles
Katie Byrne
Ziyang Liu
Jordan Medina
Kyle Scott
Jason Hise
Heavy Empty
James McCuen
Richárd Nagyfi
Ammar Mousali
Scott Zockoll
Charles Miller
Joshua Richardson
Jonatan R
Michael Greve
robertvanduursen
The Guru Of Vision
Fabrizio Pisani
Alexander Hartvig Nielsen
Volodymyr
David Tjäder
Paul Mason
Ben Scanlon
Julius Brash
Mike Bird
Taylor Winning
Ville Ahlgren
Johannes David
Andrew Pearce
Gunnar Guðvarðarson
Shevis Johnson
Erik de Bruijn
Robin Green
Roman Nekhoroshev
Peggy Youell
Konsta
William Hendley
Adam Dodd
DGJono
Matthias Meger
Scott Stevens
Michael Ore
Robert Bridges
Dmitri Afanasjev
Brian Sandberg
Einar Ueland
Lo Rez
Stephen Paul
Marcel Ward
Andrew Weir
Pontus Carlsson
Taylor Smith
Ben Archer
Ivan Pochesnev
Scott McCarthy
Kabs Kabs Kabs
Phil
Christopher Askin
Tendayi Mawushe
Gabriel Behm
Anne Kohlbrenner
Jake Fish
David Rasmussen
Filip
Bjorn Nyblad
Stefan Laurie
Tom O'Connor
pmilian
Jussi Männistö
Cameron Kinsel
Matanya Loewenthal
Wr4thon
Dave Tapley
Archy de Berker

https://www.patreon.com/robertskmiles

Transcript

This weekend I went to Imperial College London to attend the Effective Altruism Global conference. The conference isn't actually about AI, it's about charity. The idea is, like, if you want to save human lives and you've got a hundred pounds to spend on that, you have to make a decision about which charity to give that money to. They'll all say that they're good, but which charity is going to save the most lives per pound, on average? It's a difficult question to answer, but it turns out that there are popular charities trying to solve the same problem where one charity is a hundred or a thousand times more effective than the other. It's kind of insane, but it can happen because, apart from these guys, nobody's really paying attention; people don't really do the work to figure out which charities are actually effective at what they're trying to do. So that's pretty interesting, but it's not why I attended.
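To make the "lives per pound" comparison concrete, here is a minimal illustrative sketch. The charity names and cost-per-life figures are entirely hypothetical and are not taken from the talk; they just show how a hundredfold difference in cost-effectiveness plays out.

```python
# Illustrative only: hypothetical charities and made-up cost figures,
# to show what a "lives saved per pound" comparison looks like.
donation = 100.0  # pounds available to give

charities = {
    "Charity A": 3_000.0,    # hypothetical cost (in pounds) to save one life
    "Charity B": 300_000.0,  # hypothetical cost for a charity addressing the same problem
}

for name, cost_per_life in charities.items():
    lives_per_pound = 1.0 / cost_per_life
    print(f"{name}: {lives_per_pound:.6f} lives per pound "
          f"({donation / cost_per_life:.4f} lives for £{donation:.0f})")

# With these made-up numbers, Charity A is 100x more cost-effective than Charity B.
```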
See, there's an argument that if people like me are right about artificial intelligence, then giving money to help fund AI safety research might actually be an effective way to use charitable donations to help the world. Not everybody agrees, of course, but they take the issue seriously enough that they invited a bunch of experts to speak at the conference to help people understand the issue better. So this charity conference turns out to be a great place to hear the perspectives of a lot of AI safety experts. Victoria Krakovna from DeepMind's safety team and Owain Evans from the Future of Humanity Institute gave a talk together about careers in technical AI safety research, which is basically what this channel is about. I'm not going to include much from these talks because they were professionally recorded and they'll go live on YouTube at some point; I'll put a link in the description as and when that happens. But yeah, Vika talked about what the problems are, what the field involves, and what it's like to work in AI safety, and Owain talked about the places you can go, the things you should do, what things you'll need to study, and what qualifications you might or might not need, as the case may be. They answered questions afterwards; the sound I recorded for this really sucks.
But yeah, the general consensus was: there are lots of interesting problems, hardly anyone's working on them, and we need at least ten times as many AI safety researchers as we've got. DeepMind is hiring; the Future of Humanity Institute is hiring (actually, there will be a link in the description to a specific job posting that they have); and Owain is working on a new thing called Ought, which isn't up yet but will be hiring soon. Lots of opportunities here. Oh, and some people were there about a different cause: if animals can experience suffering in a way that's morally relevant, then maybe factory farming is actually the biggest cause of preventable suffering and death on earth, and fixing that would be an effective way to use our charity money. So I tried out their virtual reality thing that lets you experience the inside of a slaughterhouse from the perspective of a cow. Worst VR experience of my life; 7.8 out of 10. Helen Toner, an analyst at the Open Philanthropy Project, talked about their work on artificial intelligence: analysing how likely different scenarios are, thinking about strategy and policy, you know, how we can tackle this problem as a civilization, and how they're helping to fund the technical research that we'll need. In the questions she had some advice about talking to people about this subject and about doing the work yourself.
Here's Allan Dafoe, also from the Open Philanthropy Project, who went into some detail about their analysis of the landscape for AI in the coming years. I really recommend this talk to help people understand the difference between when people are trying to tell interesting stories about what might happen in the future, and when people are seriously and diligently trying to figure out what might happen in the future because they want to be ready for it. Some really interesting things in that talk, and I'd strongly recommend checking it out when it goes up online. Probably my favorite talk was from Shahar Avin from the Centre for the Study of Existential Risk at the University of Cambridge. He was there talking about a report that they're going to release very soon about preventing and mitigating the misuse of artificial intelligence. Really interesting stuff. "Dr. Avin is very wise and correct about everything; to consume it in a more engaging video way, watch Miles." That's all for now. The next video will be the next section of Concrete Problems in AI Safety: scalable supervision. So subscribe and click the bell if you want to be notified when that comes out, and I'll see you next time. There's cashews everywhere; this is a great conference!
I want to thank my wonderful patrons, who made this channel possible by supporting me on Patreon: all of these excellent people. In this video I'm especially thanking Kyle Scott, who's done more for this channel than just about anyone else. You guys should see some big improvements to the channel over the coming months, and a lot of that is down to Kyle, so thank you so, so much.
Okay, well, there's cashews here. This is a great conference!