Answer questions


The new version of this page is over on Coda.

These are unanswered questions which we've reviewed and decided are within Stampy's scope. Feel free to answer them if you want to help out. Your answers will be reviewed, stamped, and possibly improved by others, so don't worry about them not being perfect :)

There are 85 unanswered canonical questions! (And if we get through those, there are 28 incoming ones to review, and 2595 from YouTube which could be raw material for more.)

See also

Questions without an answer on-wiki


Many of the questions below already have a bullet-point answer sketch by Rob over on this Google Doc; you are encouraged to turn those into full answers!

What is "logical decision theory"?
What is the "universal prior"?

There is a general consensus that any AGI would be very dangerous, because it is not necessarily aligned. But if the AGI does not have any reward function and is a pattern matcher like GPT, how would it go about leading to X-risks, or being impossible to put into a box or shut down?
I can definitely imagine it being dangerous, or its answers having a continuity which might be problematic, but the whole going exponential and valuing its own survival does not seem to necessarily apply?

See more...