Here are the questions that have been added to the wiki but have not yet been answered (they may be re-ordered).
There are 134 unanswered questions to curate!
There is a general consensus that any AGI would be very dangerous because it is not necessarily aligned. But if the AGI does not have any reward function and is a pattern matcher like GPT, how would it lead to X-risks, or resist being boxed or shut down?
I can definitely imagine it being dangerous, or its continuity across answers being problematic, but the whole "going exponential and valuing its own survival" scenario does not seem to necessarily apply?
I'm not yet convinced that AI risk by itself is realistic, i.e. that AI will work well enough to keep improving. I guess the most interesting way to be convinced is to read what convinced other people, so that's what I ask most people I meet about any subject I have doubts about.
Will superintelligence make a large part of humanity unemployable?
Some economists say human wants are infinite, and there will always be new and currently unimaginable kinds of jobs for people to do.
Others say this won't be true if AGI can do _anything_ human minds can do.
How hard is the alignment problem? Hard like 'building a secure OS that works on the first try'? Hard like 'the engineering/logistics/implementation portion of the Manhattan Project'? Both? Some other option?
A friend who *really* doesn't believe in vast global conspiracies was recently suggesting to me that, at the moment, it very much *looks* to her like there's some kind of vast act of global coordination going on. This led her to propose that someone may have already created an AGI, which is in its early stages of putting things into place for whatever it's going to do next. I.e., a lot of humans sincerely believe themselves to be acting in various ways for their own ends, but this is being subtly coordinated for some non-human purposes. Can we rule this out?