Non-canonical answers to questions without a canonical answer

From Stampy's Wiki

Canonical answers may be served to readers by Stampy, so only answers with a reasonably high stamp score should be marked as canonical. All canonical answers are open to collaborative editing and updating, and they should represent a consensus response (written from the Stampy Point Of View) to a question within Stampy's scope.

Answers to questions from YouTube comments should not be marked as canonical, and will generally remain as originally written, since they contain details specific to an idiosyncratic question. A YouTube answer may be forked into a wiki answer in order to better respond to a particular question, in which case the YouTube question should have its canonical version field set to the new, more widely useful question.


These 46 answers answer a canonical question which does not yet have a canonical answer.


TJ6K's Answer to What beneficial things would an aligned superintelligence be able to do?
Aprillion's Answer to Might trying to build a hedonium-maximizing AI be easier and more likely to work than trying for eudaimonia?
Filip's Answer to Will AI learn to be independent from people or will it always ask for our orders?
Answer to Wouldn't a superintelligence be smart enough not to make silly mistakes in its comprehension of our instructions?
Aprillion's Answer to Can AI be creative?
Plex's Answer to What is a verified account on Stampy's Wiki?
Murphant's Answer to What is "metaphilosophy" and how does it relate to AI safety?
Murphant's Answer to What are the "win conditions"/problems that need to be solved?
Murphant's Answer to How can I contribute in the area of community building?
Murphant's Answer to How much resources did the processes of biological evolution use to evolve intelligent creatures?
Tinytitan's Answer to Could we get significant biological intelligence enhancements long before AGI?
Plex's Answer to What’s a good AI alignment elevator pitch?
Murphant's Answer to Could I contribute by offering coaching to alignment researchers? If so, how would I go about this?
Murphant's Answer to What are some important examples of specialised terminology in AI alignment?
Severin's Answer to How can I be a more productive student/researcher?
Severin's Answer to What are the leading theories in moral philosophy and which of them might be technically the easiest to encode into an AI?
Severin's Answer to Isn't the real concern AI being misused by terrorists or other bad actors?
QueenDaisy's Answer to Might an aligned superintelligence force people to "upload" themselves, so as to more efficiently use the matter of their bodies?
QueenDaisy's Answer to Are any major politicians concerned about this?
QueenDaisy's Answer to What could a superintelligent AI do, and what would be physically impossible even for it?
Sudonym's Answer to What does alignment failure look like?
's Answer to How quickly would the AI capabilities ecosystem adopt promising new advances in AI alignment?
Chlorokin's Answer to Could emulated minds do AI alignment research?
Chlorokin's Answer to What are "coherence theorems" and what do they tell us about AI?
Chlorokin's Answer to What if we put the AI in a box and have a second, more powerful, AI with the goal of preventing the first one from escaping?
Chlorokin's Answer to What is a "pivotal act"?
Chlorokin's Answer to Will superintelligence make a large part of humanity unemployable?
Casejp's Answer to Should I engage in political or collective action like signing petitions or sending letters to politicians?
Jeremyg's Answer to What milestones are there between us and AGI?
QZ's Answer to Where can I find mentorship and advice for becoming a researcher?
Robertskmiles's Answer to Is merging with AI through brain-computer interfaces a potential solution to safety problems?
Casejp's Answer to What if we put the AI in a box and have a second, more powerful, AI with the goal of preventing the first one from escaping?
TapuZuko's Answer to Is the question of whether we're living in a simulation relevant to AI safety? If so, how?
TapuZuko's Answer to Might an aligned superintelligence immediately kill everyone and then go on to create a "hedonium shockwave"?
TapuZuko's Answer to Isn't the real concern autonomous weapons?
Quintin Pope's Answer to Will superintelligence make a large part of humanity unemployable?
Redshift's Answer to In "aligning AI with human values", which humans' values are we talking about?
Plex's Answer to What is "agent foundations"?
Answer to Why is AI safety important?
Filip's Answer to What are the differences between AGI, transformative AI and superintelligence?
Morpheus's Answer to Is it already too late to work on AI alignment?
MIRI's Answer to How long will it be until superintelligent AI is created?
CyberByte's Answer to How long will it be until superintelligent AI is created?
Plex's Answer to How long will it be until superintelligent AI is created?
Answer to Can we tell an AI just to figure out what we want and then do that?
Answer to AIs aren’t as smart as rats, let alone humans. Isn’t it far too early to be worrying about this kind of thing?
Answer to Who is Nick Bostrom?
Answer to What is "whole brain emulation"?
Answer to What is "friendly AI"?
Answer to What is "coherent extrapolated volition"?
Linnea's Answer to What are OpenAI Codex and GitHub Copilot?
Filip's Answer to We already have psychopaths who are "misaligned" with the rest of humanity, but somehow we deal with them. Can't we do something similar with AI?
Filip's Answer to Isn't it too soon to be working on AGI safety?
Answer to Can we program the superintelligence to maximize human pleasure or satisfaction of human desires?
Answer to Can we add "friendliness" to any artificial intelligence design?
Matthew1970's Answer to What are the editorial protocols for Stampy questions and answers?
Plex's Answer to Will there be a discontinuity in AI capabilities? If so, at what stage?
Filip's Answer to Are AI researchers trying to make conscious AI?
Answer to Can we teach a superintelligence a moral code with machine learning?
Answer to How might a superintelligence technologically manipulate humans?
Answer to Why might we expect a moderate AI takeoff?
Aprillion's Answer to Wouldn't it be safer to only build narrow AIs?
Answer to Why does AI need goals in the first place? Can’t it be intelligent without any agenda?
SlimeBunnyBat's Answer to Isn't the real concern technological unemployment?
Answer to Can’t we just program the superintelligence not to harm us?
Answer to What can we expect the motivations of a superintelligent machine to be?
Answer to Why might we expect a fast takeoff?
Answer to How could general intelligence be programmed into a machine?
Filip's Answer to What about having a human supervisor who must approve all the AI's decisions before executing them?
Aprillion's Answer to What's meant by calling an AI "agenty" or "agentlike"?
Filip's Answer to Do you need a PhD to work on AI Safety?
MIRI's Answer to Why work on AI safety early?
Answer to Isn’t it immoral to control and impose our values on AI?
Zekava's Answer to Why does there seem to have been an explosion of activity in AI in recent years?
Aprillion's Answer to If an AI became conscious, how would we ever know?