Canonical answers


Canonical answers may be served to readers by Stampy, so only answers with a reasonably high stamp score should be marked as canonical. All canonical answers are open to collaborative editing and updating, and they should represent a consensus response (written from the Stampy Point Of View) to a question which is within Stampy's scope.
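
As a rough illustration of this policy, the sketch below shows how a bot like Stampy might filter for servable answers. It is a minimal sketch, assuming a hypothetical Answer record, hypothetical field names, and a hypothetical STAMP_THRESHOLD cutoff; the wiki only says the stamp score should be "reasonably high" and does not describe the actual implementation.

    from dataclasses import dataclass

    # Hypothetical cutoff: the wiki asks for a "reasonably high"
    # stamp score but does not specify a number.
    STAMP_THRESHOLD = 5

    @dataclass
    class Answer:
        text: str
        stamp_score: int   # accumulated stamps from editors
        canonical: bool    # marked as canonical on the wiki
        in_scope: bool     # answers a question within Stampy's scope

    def servable(answer: Answer) -> bool:
        """Only canonical, in-scope answers with a reasonably high
        stamp score should be served to readers."""
        return (answer.canonical
                and answer.in_scope
                and answer.stamp_score >= STAMP_THRESHOLD)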

Answers to YouTube questions should not be marked as canonical, and will generally remain as they were when originally written, since they often contain details specific to an idiosyncratic question. A YouTube answer may be forked into a wiki answer in order to respond better to a particular question, in which case the YouTube question should have its canonical version field set to the new, more widely useful question.
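
To make the forking workflow concrete, here is a minimal sketch of the bookkeeping it implies, assuming a hypothetical Question record; the fork_youtube_answer helper and its field names are illustrative assumptions, not Stampy's real data model.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Question:
        title: str
        canonical: bool = False
        # After a fork, a YouTube question points here at the new,
        # more widely useful question.
        canonical_version: Optional["Question"] = None

    def fork_youtube_answer(youtube_q: Question, wiki_title: str) -> Question:
        """Fork a YouTube answer into a wiki answer and set the
        YouTube question's canonical version field accordingly."""
        wiki_q = Question(title=wiki_title, canonical=True)
        youtube_q.canonical_version = wiki_q
        return wiki_q

The fork leaves the original YouTube answer in place, unmarked as canonical, while the new wiki question can accumulate stamps and eventually be marked canonical itself.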

See also

There are 145 canonical answers answering canonical questions, and 2 canonical answers answering a question marked as non-canonical!

All Canonical Answers

Answer to A lot of concern appears to focus on human-level or “superintelligent” AI. Is that a realistic prospect in the foreseeable future?
Answer to AGI will be a computer program. Why wouldn't it just do what it's programmed to do?
Answer to Are Google, OpenAI etc. aware of the risk?
Answer to Are robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world?
Answer to Are there types of advanced AI that would be safer than others?
Answer to Aren’t there some pretty easy ways to eliminate these potential problems?
Answer to At a high level, what is the challenge of alignment that we must meet to secure a good future?
Answer to Can an AI really be smarter than humans?
Answer to Can humans stay in control of the world if human- or superhuman-level AI is developed?
Answer to Can people contribute to alignment by using proof assistants to generate formal proofs?
Answer to Can we constrain a goal-directed AI using specified rules?
Answer to Can we just tell an AI to do what we want?
Answer to Can we test an AI to make sure that it’s not going to take over and do harmful things after it achieves superintelligence?
Answer to Can you give an AI a goal of “minimally impact the world”?
Answer to Can you stop an advanced AI from upgrading itself?
Answer to Could AI have basic emotions?
Answer to Could we program an AI to automatically shut down if it starts doing things we don’t want it to?
Answer to Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?
Answer to How can I collect questions for Stampy?
Answer to How can I contact the Stampy team?
Answer to How can I contribute to Stampy?
Answer to How could an intelligence explosion be useful?
Answer to How could poorly defined goals lead to such negative outcomes?
Answer to How difficult should we expect alignment to be?
Answer to How do I add content from LessWrong / Effective Altruism Forum tag-wikis to Stampy?
Answer to How do I form my own views about AI safety?
Answer to How do I format answers on Stampy?
Answer to How does AI taking things literally contribute to alignment being hard?
Answer to How does the stamp eigenkarma system work?
Answer to How doomed is humanity?
Answer to How fast will AI takeoff be?
Answer to How is AGI different from current AI?
Answer to How is ‘intelligence’ defined?
Answer to How likely is an intelligence explosion?
Answer to How likely is it that an AI would pretend to be a human to further its goals?
Answer to How might AGI kill people?
Answer to How might a superintelligence socially manipulate humans?
Answer to How might an AI achieve a seemingly beneficial goal via inappropriate means?
Answer to How might an intelligence explosion be dangerous?
Answer to How might non-agentic GPT-style AI cause an intelligence explosion or otherwise contribute to existential risk?
Answer to How quickly could an AI go from the first indications of problems to an unrecoverable disaster?
Answer to I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?
Answer to I want to work on AI alignment. How can I get funding?
Answer to I'm interested in working on AI safety. What should I do?
Answer to If AI takes over the world, how could it create and maintain the infrastructure that humans currently provide?
Answer to If I only care about helping people alive today, does AI safety still matter?
Answer to If we solve alignment, are we sure of a good future?
Answer to Is AI alignment possible?
Answer to Is donating small amounts to AI safety organisations going to make a non-negligible difference?
Answer to Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?
Answer to Is it possible to block an AI from doing certain things on the internet?
Answer to Is it possible to code an AI to avoid all the ways a given task could go wrong, and is it dangerous to try that?
Answer to Is large-scale automated AI persuasion and propaganda a concern?
Answer to Is the concern that autonomous AI systems could become malevolent or self-aware, or develop “volition”, and turn on us? And can’t we just unplug them?
Answer to Is the focus on the existential threat of superintelligent AI diverting too much attention from more pressing debates about AI in surveillance and on the battlefield, and its potential effects on the economy?
Answer to Is there a danger in anthropomorphising AIs and trying to understand them in human terms?
Answer to Isn’t AI just a tool like any other? Won’t AI just do what we tell it to do?
Answer to I’d like a good introduction to AI alignment. Where can I find one?
Answer to I’d like to find people to talk and learn with to help me stay motivated to get involved with AI alignment. Where can I find them?
Answer to I’d like to get deeper into the AI alignment literature. Where should I look?
Answer to I’m convinced that this is important and want to contribute. What can I do to help?
Answer to Might an intelligence explosion never occur?
Answer to Once we notice that a superintelligence given a specific task is starting to try to take over the world, can’t we turn it off, reprogram it, or otherwise correct the problem?
Answer to Superintelligence sounds a lot like science fiction. Do people think about this in the real world?
Answer to We’re going to merge with the machines so this will never be a problem, right?
Answer to What approaches are AI alignment organizations working on?
Answer to What are alternate phrasings for?
Answer to What are brain-computer interfaces?
Answer to What are good resources on AI alignment?
Answer to What are human values?
Answer to What are some good podcasts about AI alignment?
Answer to What are the different possible AI takeoff speeds?
Answer to What are the potential benefits of AI as it grows increasingly sophisticated?
Answer to What are the style guidelines for writing for Stampy?
Answer to What can we do to contribute to AI safety?
Answer to What does Elon Musk think about AI safety?
Answer to What exactly is AGI and what will it look like?
Answer to What harm could a single superintelligence do, when it took so many humans to build civilization?
Answer to What is AI alignment?
Answer to What is Causal Decision Theory?
Answer to What is Evidential Decision Theory?
Answer to What is Functional Decision Theory?
Answer to What is GPT-3?
Answer to What is MIRI’s mission?
Answer to What is a canonical question on Stampy's Wiki?
Answer to What is a duplicate question on Stampy's Wiki?
Answer to What is a follow-up question on Stampy's Wiki?
Answer to What is a quantilizer?
Answer to What is a value handshake?
Answer to What is an agent?
Answer to What is an s-risk?
Answer to What is biological cognitive enhancement?
Answer to What is greater-than-human intelligence?
Answer to What is hedonium?
Answer to What is meant by AI takeoff?
Answer to What is narrow AI?
Answer to What is superintelligence?
Answer to What is the Control Problem?
Answer to What is the Stampy project?
Answer to What is the general nature of the concern about AI alignment?
Answer to What is the intelligence explosion?
Answer to What is the long reflection?
Answer to What is the orthogonality thesis?
Answer to What is the windfall clause?
Answer to What kind of questions do we want on Stampy?
Answer to What should I read to learn about decision theory?
Answer to What should be marked as a canonical answer on Stampy's Wiki?
Answer to What sources of information can Stampy use?
Answer to What technical problems are MIRI working on?
Answer to What would an actually good solution to AI alignment look like?
Answer to When should I stamp an answer?
Answer to When will an intelligence explosion happen?
Answer to When will transformative AI be created?
Answer to Where can I find questions to answer for Stampy?
Answer to Where can I find all the features of Stampy's Wiki?
Answer to Where can I learn about interpretability?
Answer to Who created Stampy?
Answer to Who is Stampy?
Answer to Why can't we just make a child AI and raise it?
Answer to Why can't we simply stop developing AI?
Answer to Why can't we turn the computers off?
Answer to Why can’t we just use Asimov’s 3 laws of robotics?
Answer to Why can’t we just use natural language instructions?
Answer to Why can’t we just…
Answer to Why does AI takeoff speed matter?
Answer to Why don't we just not build AGI if it's so dangerous?
Answer to Why is AGI dangerous?
Answer to Why is AI alignment hard?
Answer to Why is safety important for smarter-than-human AI?
Answer to Why is the future of AI suddenly in the news? What has changed?
Answer to Why might a maximizing AI cause bad outcomes?
Answer to Why might a superintelligence be dangerous?
Answer to Why might an AI do something that we don’t want it to, if it’s really so intelligent?
Answer to Why might contributing to Stampy be worth my time?
Answer to Why might people try to build AGI rather than stronger and stronger narrow AIs?
Answer to Why might we expect a superintelligence to be hostile by default?
Answer to Why not just put it in a box?
Answer to Why should I worry about superintelligence?
Answer to Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?
Answer to Why think that AI can outperform humans?
Answer to Why would great intelligence produce great power?
Answer to Will an aligned superintelligence care about animals other than humans?
Answer to Won’t AI be just like us?
Answer to Would AI alignment be hard with deep learning?
Answer to Would it improve the safety of quantilizers to cut off the top few % of the distribution?
Answer to Wouldn't a superintelligence be smart enough to know right from wrong?
Answer to Wouldn't it be good for humanity to die out?