Alexander Kirko's question on 10 Reasons to Ignore AI Safety


The problem, in my opinion, is that people have a very hard time imagining what an AGI would be like. We are used to human levels of intelligence and processing speed; maybe someone has heard about the YouTube algorithm or the like. But it's hard to imagine what it would be like if we made a robot that's 5000 times smarter than the smartest human, connected it to the Internet, and said "make sure I always get my coffee in the morning". It might start enslaving people, put you into cryostasis (which it will invent in 1.3 seconds), or do anything else, really. People just imagine AGI as something anthropomorphic, but it has no reason to be, unless we develop it in a simulation where it thinks it's a human.


Tags: None
Question Info
Asked by: Alexander Kirko
Origin: YouTube (comment link)
On video: 10 Reasons to Ignore AI Safety
Date: 2020-06-27T14:59
Asked on Discord: No


Discussion