Answer questions

The new version of this page is over on Coda.

These are unanswered questions that we've reviewed and decided are within Stampy's scope. Feel free to answer them if you want to help out. Your answers will be reviewed, stamped, and possibly improved by others, so don't worry about them not being perfect :)

There are 85 unanswered canonical questions! (but if we get through those, there are 28 incoming ones to review, and 2595 from YouTube which could be raw material for more)

See also

Questions without an answer on-wiki


Many of the questions below already have a bullet-point answer sketch by Rob over on this Google Doc; you are encouraged to turn those into full answers!

I'm not yet convinced that AI risk by itself is realistic, that is, that AI will work so well that the risk can really grow. I guess the most interesting way to be convinced is just to read what convinced other people, so that's what I ask most people I meet about any subject I have doubts about.

What is the "universal prior"?
Reader UI

There is a general consensus that any AGI would be very dangerous because it is not necessarily aligned. But if the AGI does not have any reward function and is a pattern matcher like GPT, how would it go about causing X-risks, or end up being impossible to box or shut down?
I can definitely imagine it being dangerous, or its continuity across answers being problematic, but the whole picture of it going exponential and valuing its own survival does not seem to necessarily apply?

What is "logical decision theory"?
Reader UI
