Answer questions
The new version of this page is over on Coda.
These are unanswered questions which we've reviewed and decided are within Stampy's scope. Feel free to answer them if you want to help out. Your answers will be reviewed, stamped, and possibly improved by others, so don't worry about them not being perfect :)
There are 85 unanswered canonical questions! (but if we get through those there are 28 incoming ones to review, and 2595 from YouTube which could be raw material for more)
See also
- Questions
- Canonical questions
- Upcoming questions - next questions which will be posted to the Discord
- Random unanswered question
Questions without an answer on-wiki
Many of the questions below already have a bullet-point answer sketch by Rob over on this Google Doc; you are encouraged to turn those sketches into full answers!
There is a general consensus that any AGI would be very dangerous because it is not necessarily aligned. But if the AGI does not have any reward function and is a pattern matcher like GPT, how would it lead to X-risks, resist being put in a box, or avoid being shut down?
I can definitely imagine it being dangerous, or having continuity in its answers which might be problematic, but the whole scenario of it going exponential and valuing its own survival does not seem to necessarily apply?