Answer questions


These are unanswered questions, with manually added ones sorted to the top. See upcoming questions for the next ones that will be posted to the Discord.

Answer to the best of your ability; your answers will be reviewed and stamped by others, so don't worry about them not being perfect :)

If you're replying to a question from YouTube and want your reply to be posted on YouTube by Stampy rather than posting it by hand yourself, it's best to use the Discord interface (post "stampy, reply formatting" in #general for instructions). The top few questions here are generally added manually rather than coming from YouTube.

Alternatively go to a random unanswered question.

Top questions

If AI takes over the world, how could it create and maintain its hardware, its power supply and everything else that humans currently provide?
A friend who *really* doesn't believe in vast global conspiracies was recently suggesting to me that, at the moment, it very much *looks* to her like there's some kind of vast act of global coordination going on. This led her to propose that someone may have already created an AGI, which is in its early stages of putting things into place for whatever it's going to do next. I.e., a lot of humans sincerely believe themselves to be acting in various ways for their own ends, but this is being subtly coordinated for some non-human purposes. Can we rule this out?
Will superintelligence make a large part of humanity unemployable?

Some economists say human wants are infinite, and there will always be new and currently unimaginable kinds of jobs for people to do.

Others say this won't be true if AGI can do _anything_ human minds can do.
So, you're only mostly right when you say that modifying human values doesn't come up much. I can think of two examples in particular. First, the Bible passage that states, "The love of money is the root of all evil." (Not a Christian, btw, just pointing it out.) The idea here is that through classical conditioning, it's possible for people to start to value money for the sake of money, which is actually a specific version of the more general case that I will get to in a moment.

The second example is the fear of drug addiction, which amounts to the fear that people will abandon all of their other goals in pursuit of their drug of choice, as often happens with harder drugs. These are both examples of wireheading, which you might call a "Convergent Instrumental Anti-goal", and avoiding it rests largely on the agent being self-aware. If you have a model of the world that includes yourself, you intuitively understand that putting a bucket on your head doesn't make the room you were supposed to clean any less messy. (Or, if you want to flip it around, you could say that wireheading is anathema to goal-preservation.)

I'm curious about how this applies to creating AGIs with humans as part of the value function, and if you can think of any other convergent anti-goals. They might be just as illuminating as convergent goals.

Edit: Interestingly, you can also engage in wireheading by intentionally perverting your model of reality to be perfectly in line with your values (you pretend the room is already clean). This means that having an accurate model of reality is part of goal-preservation.
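
As a toy illustration of the contrast drawn in the comment above, here is a minimal sketch in Python. The "messy room" environment and the two policies are hypothetical names invented for illustration, not code from any real agent framework; it only shows how tampering with the reward channel differs from actually changing the world.

```python
# Minimal sketch of the wireheading contrast described above.
# World, clean_room, and wirehead are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class World:
    room_is_messy: bool = True  # the actual state of the environment

def clean_room(world: World) -> float:
    """Act on the world: the room really gets cleaned, and the reward
    is read honestly from the resulting world state."""
    world.room_is_messy = False
    return 0.0 if world.room_is_messy else 1.0

def wirehead(world: World) -> float:
    """'Put a bucket on your head': tamper with the reward channel and
    report maximum reward while leaving the world untouched."""
    return 1.0

if __name__ == "__main__":
    w1, w2 = World(), World()
    print("clean_room reward:", clean_room(w1), "| room still messy?", w1.room_is_messy)
    print("wirehead   reward:", wirehead(w2), "| room still messy?", w2.room_is_messy)
    # An agent whose goal is a fact about the world ("the room is clean")
    # prefers clean_room; an agent that only cares about the number it receives
    # is indifferent. That gap is why wireheading conflicts with goal-preservation
    # once the agent models itself (and its reward channel) as part of the world.
```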
Money and profits will become obsolete. The first company to discover AGI will not bother making/selling products. Imagine having a wish-granting genie with unlimited wishes. Why would you bother creating and selling products when you could just wish everything you want into existence?
Can you tell us more about what a world with a safe AGI would look like? Will the people who invent an AGI rule the world, outperforming everyone at stock trading, for instance? Is it profitable to be second (or how big will the head start be if someone else invents AGI, say, a week later)? I would love to hear these kinds of things from you! But a good reference would make my day too. Keep up the good work!
Is it really true that "safe AI is totally possible"?? How can you be so sure???
