Sharing the Benefits of AI: The Windfall Clause

From Stampy's Wiki

Channel: Robert Miles
Published: 2020-07-06T16:53:53Z
Views: 59466
Likes: 5929
Question | YouTube Likes | Asked on Discord? | Answered By
Leonefoscolo's question on The Windfall Clause | 263 | true |
ImpHax0r's question on The Windfall Clause | 20 | true |
Joshua Hillerup's question on The Windfall Clause | 18 | true |
Peter Smythe's question on The Windfall Clause | 15 | false |
The Great of Beam's question on The Windfall Clause | 6 | true |
Melon Collie's question on The Windfall Clause | 5 | true |
Peter Bonnema's question on The Windfall Clause | 5 | false |
Mera Flynn's question on The Windfall Clause | 5 | false | Aprillion's Answer to Mera Flynn's question on The Windfall Clause
HuggyBearx64's question on The Windfall Clause | 4 | false |
Michael Render's question on The Windfall Clause | 3 | false |
Sudeep Ambati's question on The Windfall Clause | 2 | false |
Upcycle Electronics's question on The Windfall Clause | 2 | false |
Kirill Tsukanov's question on The Windfall Clause | 2 | true |
Yooless's question on The Windfall Clause | 2 | true |
Nick Hill's question on The Windfall Clause | 2 | true |
Laplace's question on The Windfall Clause | 1 | false |
Jalae Lain Casaus's question on The Windfall Clause | 1 | false |
AkantorJojo's question on The Windfall Clause | 1 | false |
David Valouch's question on The Windfall Clause | 1 | false |
Sabelch's question on The Windfall Clause | 1 | true |
Brotle1000's question on The Windfall Clause | 1 | false |
Faustin Gashakamba's question on The Windfall Clause | 1 | false |
Chris's question on The Windfall Clause | 1 | true |
SunGod97's question on The Windfall Clause | 1 | true |
Uni Realm's question on The Windfall Clause | 1 | false |
Illesizs's question on The Windfall Clause | 1 | false |
Traywor's question on The Windfall Clause | 1 | false |
Boarattackboar's question on The Windfall Clause | 1 | true | Plex's Answer to The Windfall Clause on 2020-07-07T07:10:36 by boarattackboar
Elijah's question on The Windfall Clause | 1 | false |
Peter Smythe's question on The Windfall Clause | 1 | false |
Simon Schouten's question on The Windfall Clause | 1 | false |
Micheal Angelo's question on The Windfall Clause | 1 | false |
Centauri's question on The Windfall Clause | 1 | false |
QueenDaisy's question on The Windfall Clause | 1 | false |
James Dodd's question on The Windfall Clause | 1 | false |
Jon H's question on The Windfall Clause | 1 | false |
Jared SS's question on The Windfall Clause | 1 | false |
Bfece cadaei's question on The Windfall Clause | 1 | true |
Robert The Wise's question on The Windfall Clause | 1 | false |
Lemon Party's question on The Windfall Clause | 1 | false |
Bob Ross's question on The Windfall Clause | 1 | false |
Alex Potts's question on The Windfall Clause | 1 | false |
Arkk0n's question on The Windfall Clause | 0 | false |
Дмитрий Лжетцов's question on The Windfall Clause | 0 | false |
Nullius in verba's question on The Windfall Clause | 0 | false |
George Michael Sherry's question on The Windfall Clause | 0 | false |
Physi ra's question on The Windfall Clause | 0 | false |
Viola Buddy's question on The Windfall Clause | 0 | true |
Joshua Coppersmith's question on The Windfall Clause | 0 | false |
Clara Bisson's question on The Windfall Clause | 0 | false |
... further results


AI might create enormous amounts of wealth, but how is it going to be distributed?

The Paper:
The Post:

With thanks to my excellent Patreon supporters:

Scott Worley
JJ Hepboin
Pedro A Ortega
Said Polat
Chris Canal
Jake Ehrlich
Kellen lask
Francisco Tolmasky
Michael Andregg
David Reid
Peter Rolf
Chad Jones
Teague Lasser
Andrew Blackledge
Frank Marsman
Brad Brookshire
Cam MacFarlane
Jason Hise
Erik de Bruijn
Alec Johnson
Clemens Arbesser
Ludwig Schubert
Bryce Daifuku
Allen Faure
Eric James
Matheson Bayley
Qeith Wreid
jugettje dutchking
Owen Campbell-Moore
Atzin Espino-Murnane
Phil Moyer
Jacob Van Buren
Jonatan R
Ingvi Gautsson
Michael Greve
Julius Brash
Tom O'Connor
Shevis Johnson
Laura Olds
Jon Halliday
Paul Hobbs
Jeroen De Dauw
Lupuleasa Ionuț
Tim Neilson
Eric Scammell
Igor Keller
Ben Glanton
anul kumar sinha
Sean Gibat
Duncan Orr
Cooper Lawton
Will Glynn
Tyler Herrmann
Tomas Sayder
Ian Munro
Jérôme Beaulieu
Nathan Fish
Taras Bobrovytsky
Vaskó Richárd
Benjamin Watkin
Euclidean Plane
Andrew Harcourt
Luc Ritchie
Nicholas Guyett
James Hinchcliffe
Oliver Habryka
Chris Beacham
Zachary Gidwitz
Nikita Kiriy
Andrew Schreiber
Dmitri Afanasjev
Marcel Ward
Andrew Weir
Ben Archer
Miłosz Wierzbicki
Tendayi Mawushe
Jannik Olbrich
Jake Fish
Jussi Männistö
Martin Ottosen
Archy de Berker
Andy Kobre
Poker Chen
Paul Moffat
Robert Valdimarsson
Anders Öhrt
Marco Tiraboschi
Michael Kuhinica
Fraser Cain
Robin Scharf
Klemen Slavic
Patrick Henderson
Oct todo22
Melisa Kostrzewski
Daniel Munter
Alex Knauth
Rob Dawson
Bryan Egan
Robert Hildebrandt
James Fowkes
Alan Bandurka
Ben H
Tatiana Ponomareva
Michael Bates
Simon Pilkington
Daniel Kokotajlo
Andreas Blomqvist
Bertalan Bodor
David Morgan
Ben Schultz
Daniel Eickhardt
Ihor Mukha
Jason Cherry
Igor (Kerogi) Kostenko
Thomas Dingemanse
Stuart Alldritt
Alexander Brown
Devon Bernard
Ted Stokes
Jesper Andersson
Jim T
Chris Dinant
Raphaël Lévy
Marko Topolnik
Johannes Walter
Matt Stanton
Garrett Maring
Mo Hossny
Anthony Chiu
Frank Kurka
Ghaith Tarawneh
Josh Trevisiol
Julian Schulz
Stellated Hexahedron
Scott Viteri
Clay Upton
Brent ODell
Conor Comiconor
Michael Roeschter
Georg Grass
Matthias Hölzl
Jim Renney
Michael V brown
Martin Henriksen
Edison Franklin
Daniel Steele
Piers Calderwood
Krzysztof Derecki
Mikhail Tikhomirov
Richárd Nagyfi
Richard Otto
Alston Sleet
Matt Brauer
Jaeson Booker
Mateusz Krzaczek
Artem Honcharov
Evan Ward
Michael Walters
Tomasz Gliniecki
Mihaly Barasz
Mark Woodward
Neil Palmere
Rajeen Nabid


Since the beginning, one of the main goals of the field of artificial intelligence has been to create very capable AI systems: systems which match or exceed human capabilities across a wide range of tasks. Given this, it's somewhat surprising just how recently people have started to take the possibility seriously and to ask what would happen if we actually succeeded at this challenge. What if we managed it? The answer is: it looks like we have a problem. See, just because a system is very capable, just because it's able to do very well at very difficult things, does not mean that it's trying to do good things. We could easily end up with AI systems that are trying to do things that we really don't want them to do, and an AI system that works very effectively in the service of values that are different from human values might be hugely destructive from the perspective of human values. AI safety is an attempt to deal with this problem: how do we create AI systems that are aligned with our goals, that are robustly beneficial, that are trying to do what we want them to do?
But then you might ask: okay, what if we succeed at that? What if we manage to create AI systems that are very capable and also trying to do what their creators wanted them to do? Rather than destroying everything that humans value, as a non-aligned system might, such systems would presumably create enormous amounts of value. Might we still have a problem in that circumstance? I think a lot of people would say yeah, we probably do still have a problem, because what happens when you create enormous amounts of wealth and that wealth all belongs to a small number of people?

One aspect of this is the possibility that automation will result in large-scale unemployment. This has never really happened much in the past, but advanced AI might be an exception: if there are AI systems that can do every task that a human can, it becomes difficult to employ humans to do most tasks. So then you have a situation where you can think of the world as having two types of people: people who make money by selling their labor, and people who make money by owning AI systems. In this scenario you've dramatically increased the money-making ability of one of those types of people while dramatically decreasing it for the other. What happens to people who work for a living when companies can produce goods and services without employing anyone? What happens to labor when capital has no need for it? Hands up, who's excited to find out?
Yeah, me neither. This possible outcome, in which artificial intelligence transforms the world economy but in the process creates massive wealth and income inequality, with its associated political and social problems, certainly seems suboptimal. Better than extinction, sure, but still not the outcome that we really want. So how do we get the winners in this scenario to share their newly created wealth? Well, some researchers at the Centre for the Governance of AI at the University of Oxford have an idea for something that they think might help, which we're going to talk about in this video. It's called the Windfall Clause. They define it as "an ex ante commitment to share extreme benefits of AI". So basically it's a contract a company can sign that says: if at some point in the future we make huge windfall profits, of the kind that you can really only get by transforming the world economy with AI, we will share some significant percentage of them. So, obvious questions.
First, what do we mean by "share it"? Well, that could be a variety of things, ranging from having a charitable foundation that uses the money to alleviate inequality according to some set of principles, to just writing everyone a check. The other question is: when does this actually happen? What counts as extreme profits? Setting an absolute number here doesn't really work, because who knows how far into the future this might happen, and anyway, what we really care about is relative profits. So it's defined as profits above a certain percentage of the world's gross domestic product, say 1%. In other words, if your profits are more than 1% of the world's total economic output, you agree to share them. This is a level of profitability higher than any company in history, but it's actually pretty plausible if your company creates AGI. In practice there would probably be several levels of this, where as profits go up as a percentage of world GDP, so does the percentage of the profits that's shared, kind of like a progressive marginal taxation scheme. Speaking of which, why not just do this with taxes? What advantages does this have over taxation?
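As an aside, the progressive marginal structure just described can be sketched in a few lines of code. The bracket thresholds (as fractions of gross world product) and marginal rates below are made-up illustrative numbers, not figures from the paper:

```python
# Hypothetical schedule: (upper bracket bound as a fraction of gross world
# product, marginal share owed on profits falling in that bracket).
# These thresholds and rates are illustrative assumptions only.
BRACKETS = [(0.001, 0.00), (0.01, 0.20), (0.10, 0.50), (float("inf"), 0.70)]

def windfall_obligation(profits: float, gwp: float) -> float:
    """Amount owed under the clause, applied marginally like income tax."""
    owed, lower = 0.0, 0.0
    for upper_frac, rate in BRACKETS:
        upper = upper_frac * gwp
        if profits > lower:
            # Only the slice of profit inside this bracket is taxed at `rate`.
            owed += (min(profits, upper) - lower) * rate
        lower = upper
    return owed

gwp = 85e12                                  # rough gross world product in USD
print(windfall_obligation(0.02 * gwp, gwp))  # a firm earning 2% of world GDP
```

With these assumed brackets, a firm whose profits were 2% of gross world product would owe about $578 billion, while a merely very profitable company below the 0.1% threshold would owe nothing, which is the point: the clause only binds in genuinely transformative scenarios.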
Well, the first thing to note is that this isn't instead of taxes; it's something companies would agree to over and above whatever taxes governments may impose. But it does have some important advantages over taxation. Firstly, governments are not actually great at spending money effectively. Giving money to, for example, the United States federal government is not necessarily the most effective way to spend money to improve the world. This isn't really controversial: a 2011 poll found that Republicans think 52% of their federal tax money is wasted, while Democrats think it's 47%. So if you had a bunch of money and were looking for the best way to spend it to improve the lives of Americans, giving it to the federal government would be pretty low on the list.

But actually it's worse than that, because this is a global issue, not a national one, and tax money tends to stay in the country it's collected in. Countries like the US and the UK spend less than 1% of their taxes on foreign aid, and much of that is military aid. Suppose DeepMind creates AGI and starts accounting for more than 1% of the world's gross domestic product, making just absurdly giant amounts of money. Even if the UK taxes those profits very heavily, someone in India or China or the USA is going to see basically none of that money, so you still have this problem of enormous inequality. And the thing is, if an AGI superintelligence turns out to be misaligned and decides to kill everyone, it's not going to stop at the borders of the country it was created in. Everyone in the world shares pretty much equally in the risks from advanced artificial intelligence, but if you just use national taxes, only a small minority of people actually get any share of the benefits. Does that seem right to you?

That said, one advantage of taxes is that they're not voluntary: you can actually make companies pay them. But this isn't as big an advantage as it seems; in practice, getting companies to pay their taxes is not that easy, and it's possible they'd be more likely to pay something they actually chose to sign up to. But then why would they want to sign up in the first place? I mean, why volunteer to give away a load of money? One thing is, the decision-makers might be human beings: the executives of these companies certainly talk a big game about wanting to improve the world, and we can't rule out the possibility that they might mean it. What about the shareholders, though? Won't shareholders have something to say about the companies they've invested in agreeing to give away huge amounts of money? Corporate executives do have a legal obligation to act in the interests of their shareholders, but legally, at least, it would probably be fine. The Windfall Clause is a form of corporate philanthropy, and when shareholders have sued executives on the grounds that their philanthropy was a violation of their duty to shareholders, they've won those cases zero out of seven times. But actually, they probably wouldn't even want to sue.
In fact, even if the executives and the shareholders are all hypothetically complete sociopaths, they still have a good reason to sign something like a Windfall Clause: namely, appearing to not be sociopaths. This is sometimes also known as public relations. Signing a Windfall Clause is a clear and legally binding show of goodwill. It improves your company's relationship with the public and with public opinion, which tech companies certainly value; it improves your relationship with governments, which is very important for any large company; and it improves your relationship with your employees, who in this case actually have a lot of bargaining power. Don't forget that if you're a highly skilled tech company employee, as some of my viewers are, you have a surprisingly large amount of power over the direction your company takes; look at things like Project Maven, for example. So from the perspective of a tech company executive, signing a Windfall Clause is a lot of great PR, the kind of thing you'd usually have to pay a lot of money for, but it's all free, for now at least. It only costs anything at all if you end up with giant windfall profits, which might never happen, and if it does, it's probably a long time in the future, when you've probably already retired.

Now, this is why it's important that it's an ex ante agreement, made before we know how things are going to turn out: it's much easier to persuade people to agree to give away something they don't have than something they do. For example, two people might agree to both buy lottery tickets and to share the winnings if either of them wins. This halves how much they'd win but doubles their chance of winning, which might be something you'd want to do, depending on your risk tolerance and marginal utility of money.
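The arithmetic behind that lottery argument can be checked directly: pooling two tickets leaves each person's expected winnings unchanged but halves the variance, which is exactly why it appeals to anyone risk-averse. A minimal sketch, with an arbitrary win probability and jackpot:

```python
p, W = 1e-6, 1_000_000_000   # assumed win probability and jackpot size

# Solo: you keep your own ticket's winnings, a Bernoulli(p) payout of W.
ev_solo = p * W
var_solo = p * (1 - p) * W ** 2

# Shared: two independent tickets, total winnings split evenly.
# Each person receives (X1 + X2) * W / 2 with X1, X2 ~ Bernoulli(p).
ev_shared = 2 * p * (W / 2)                  # expectation is unchanged: p * W
var_shared = 2 * p * (1 - p) * (W / 2) ** 2  # exactly half the solo variance

print(ev_solo == ev_shared, var_shared / var_solo)  # True 0.5
```

The Windfall Clause applies the same logic at the level of AI companies: agreeing to share before anyone knows who wins costs nothing in expectation terms that a risk-neutral actor would object to, and reduces everyone's downside.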
But note that the commitment has to be binding, and it has to be made before the lottery numbers are drawn; you won't have much luck trying to set that up afterwards. And as long as we don't
know who, if anyone, is going to be making these giant profits from AI, it could be in everyone's interest to sign this thing and encourage others to sign it too. For an AI company in a world where all of the other major AI companies have agreed to something like this, choosing not to join them is effectively saying: "Screw you guys, there's going to be one winner here and it's going to be me. I intend to make absurd amounts of money and not share any of it." You could do that, but if you do, you might find that others don't want to cooperate with you as much. You might face boycotts, and maybe governments wouldn't feel like giving you the contracts you'd like or the regulatory environment you'd prefer. You might find it hard to get good collaborators and to hire and keep the best researchers. You might find that it's actually kind of hard to get things done in the world when you've effectively stuck a big sticky label on your own forehead that reads "I am a [ __ ]".
Can I say that? Anyway, there's an issue of timing: we want to set this kind of thing up sooner rather than later, because the more uncertainty there is about who, if anyone, is going to get windfall profits, the less likely it is for any individual company to think "well, I don't care about anyone's opinion, I can do this all by myself without the cooperation of the rest of the world". Hopefully we can get all of the major players to agree to something like a Windfall Clause, and that should help mitigate the inequality problems that high-level AI might bring.

So how do we help make that happen? Well, if I see something online about 20th-century military history and I'm not sure what to make of it, I ask my uncle, because he's really into that stuff and I respect his opinion on the subject. I think we've all got people like that for various things, and when it comes to AI, if you're the kind of person who watches this channel, you might be that person for some of the people you know. So if at some point some AI company signs something like a Windfall Clause, people might ask you about it, and you can tell them that, yeah, it's pretty legit; it's not just a publicity stunt. I mean, it probably is a publicity stunt, but it's not only a publicity stunt, right? It probably would be legally binding, and it would actually help. It's good for people to understand that, because the better the reaction that first company gets, the more likely other companies will be to follow suit, and that's what we want.
Generally this channel focuses more on the technical research that goes into trying to make sure that advances in AI result in good outcomes, but there's also a lot of research on the more human side of things: AI strategy, AI policy, and AI governance research. It's something I don't know as much about, but if there's interest I can make more videos like this one, exploring the research going on into the legal, political, and economic aspects of the future of AI. Is that the kind of thing you'd be interested in seeing? Let me know in the comments.
Thank you so much to all my excellent patrons, all these wonderful people here. In this video I'm especially thanking Michael Andregg, who I actually happened to bump into at a conference recently. We had a great talk about his company, which is developing special optical computing hardware; very fascinating stuff. Anyway, thank you, Michael. And thank you also to Cullen O'Keefe, the primary author of the Windfall Clause paper, who was kind enough to have a call with me and explain it. I've uploaded that whole conversation for patrons, so do consider becoming one if you want more in-depth information. Thank you so much to those who do, and thank you all for watching. I'll see you next time.