Non-canonical answers to questions without a canonical answer

From Stampy's Wiki

Canonical answers may be served to readers by Stampy, so only answers with a reasonably high stamp score should be marked as canonical. All canonical answers are open to collaborative editing and updating, and they should represent a consensus response (written from the Stampy Point Of View) to a question within Stampy's scope.

Answers to YouTube questions should not be marked as canonical, and will generally remain as originally written, since they contain details specific to an idiosyncratic question. A YouTube answer may be forked into a wiki answer in order to better respond to a particular question, in which case the YouTube question should have its canonical version field set to the new, more widely useful question.

Non-canonical answers to questions without a canonical answer

Answer to But superintelligences are very smart. Aren’t they smart enough not to make silly mistakes in comprehension?
Answer to Can we test a weak or human-level AI to make sure that it’s not going to do bad things after it achieves superintelligence?
Answer to Can we specify a code of rules that the AI has to follow?
Answer to Once we notice that the superintelligence working on calculating digits of pi is starting to try to take over the world, can’t we turn it off, reprogram it, or otherwise correct its mistake?
Answer to Even if hostile superintelligences are dangerous, why would we expect a superintelligence to ever be hostile?
Answer to Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?
Answer to AIs aren’t as smart as rats, let alone humans. Isn’t it sort of early to be worrying about this kind of thing?
Answer to Isn’t it immoral to control and impose our values on AI?
Answer to Why might we expect a moderate takeoff?
Filip's Answer to Do you need a PhD to work on AI Safety?
Answer to Who is Professor Nick Bostrom?
Answer to How might a superintelligence technologically manipulate humans?
Plex's Answer to How long will it be until AGI is created?
Aprillion's Answer to Do we actually need AGI?
Filip's Answer to What are the differences between AGI, TAI, and Superintelligence?
Morpheus's Answer to Is humanity doomed?
Filip's Answer to People talk about "Aligning AI with human values" but which humans' values are we talking about?
Plex's Answer to What is the Stampy project?
Answer to Why does AI need goals in the first place? Can’t it be intelligent without any agenda?
CyberByte's Answer to How long will it be until AGI is created?
MIRI's Answer to Why work on AI safety early?
MIRI's Answer to How long will it be until AGI is created?
Answer to Can’t we just program the superintelligence not to harm us?
Answer to Can we teach a superintelligence a moral code with machine learning?
Answer to How could general intelligence be programmed into a machine?
Answer to What is Coherent Extrapolated Volition?
Answer to What can we expect the motivations of a superintelligent machine to be?
Answer to Can we just tell an AI to do what we want right now, based on the desires of our non-surgically altered brains?
Answer to Can we program the superintelligence to maximize human pleasure or desire satisfaction?
Answer to What is AI Safety?
Answer to Why is AI Safety important?
Answer to What would an actually good solution to the control problem look like?
Answer to Can we tell an AI just to figure out what we want, then do that?
Answer to If superintelligence is a real risk, what do we do about it?
Answer to Why does takeoff speed matter?
Answer to Why might we expect a fast takeoff?