Brian Reese's question on 10 Reasons to Ignore AI Safety

From Stampy's Wiki

Here's a question: how could we expect a system that we set up with initial goals to maintain those goals?
 
Just taking the naïve human example: the goals you had at 20 are not the goals you have at 30 or 50 or 80. And it's not just simple goals; I feel that even true terminal goals have changed in my own life, and I'm only in my late 20s. An AI will be able to change its own goals, build a copy of itself with different goals, or pay someone else to build that copy, and we don't know what those goals would be. It seems reasonable to assume they will center on some sort of self-preservation and expansion; and if they don't, then they must revolve around self-destruction.

It seems like it's fundamentally impossible to impose any kind of control on an AI that is smarter than its creator, and that the only way to survive is to make it beneficial to the AI to keep humans around; and we might not like being kept in a zoo as pets.


Tags: None
Question Info
Asked by: Brian Reese
Origin: YouTube (comment link)
On video: 10 Reasons to Ignore AI Safety
Date: 2021-01-24T21:12
Asked on Discord? Yes


Discussion