Canonical answers

Canonical answers may be served to readers by Stampy, so only answers with a reasonably high stamp score should be marked as canonical. All canonical answers are open to collaborative editing and updating, and they should represent a consensus response (written from the Stampy Point Of View) to a question that is within Stampy's scope.
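
As an illustration only, here is a minimal sketch in Python of the rule above: an answer is a candidate for canonicity when its stamp score is reasonably high and its question is in scope. The field names, the threshold value, and the helper function are hypothetical stand-ins, not Stampy's actual code or data model.

```python
from dataclasses import dataclass

# Hypothetical threshold: the wiki only says the stamp score should be
# "reasonably high", so this number is an illustrative stand-in.
STAMP_SCORE_THRESHOLD = 5.0

@dataclass
class Answer:
    text: str
    stamp_score: float   # community approval via the stamp eigenkarma system
    in_scope: bool       # whether the question falls within Stampy's scope
    canonical: bool = False

def canonical_candidates(answers: list[Answer]) -> list[Answer]:
    """Return the answers that meet the (hypothetical) bar for being marked canonical."""
    return [a for a in answers
            if a.stamp_score >= STAMP_SCORE_THRESHOLD and a.in_scope]
```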

Answers to questions from YouTube comments should not be marked as canonical, and will generally remain as originally written, since they contain details specific to an idiosyncratic question. A YouTube answer may be forked into a wiki answer in order to better respond to a particular question; in that case, the YouTube question should have its canonical version field set to the new, more widely useful question.
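
For readers who prefer to see the relationships spelled out, the forking rule could be pictured roughly as in the independent sketch below. The class and field names (`Question`, `canonical_version`, `source`) are illustrative assumptions and do not reflect the wiki's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Question:
    title: str
    source: str                                  # e.g. "youtube" or "wiki"
    canonical_version: Optional["Question"] = None

@dataclass
class Answer:
    question: Question
    text: str
    canonical: bool = False

def fork_youtube_answer(yt_answer: Answer, general_title: str) -> Answer:
    """Fork a YouTube-specific answer into a wiki answer aimed at a more
    widely useful question, and point the original question at that question."""
    general_question = Question(title=general_title, source="wiki")
    yt_answer.question.canonical_version = general_question
    return Answer(question=general_question, text=yt_answer.text)
```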

There are 168 canonical answers answering canonical questions, and 3 canonical answers answering questions marked as non-canonical!

All Canonical Answers

Answer to A lot of concern appears to focus on human-level or “superintelligent” AI. Is that a realistic prospect in the foreseeable future?
Answer to Any AI will be a computer program. Why wouldn't it just do what it's programmed to do?
Answer to Are Google, OpenAI, etc. aware of the risk?
Answer to Are there types of advanced AI that would be safer than others?
Answer to Aren't robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world?
Answer to Aren’t there some pretty easy ways to eliminate these potential problems?
Answer to At a high level, what is the challenge of alignment that we must meet to secure a good future?
Answer to Can an AI really be smarter than humans?
Answer to Can humans stay in control of the world if human- or superhuman-level AI is developed?
Answer to Can people contribute to alignment by using proof assistants to generate formal proofs?
Answer to Can we constrain a goal-directed AI using specified rules?
Answer to Can we test an AI to make sure that it’s not going to take over and do harmful things after it achieves superintelligence?
Answer to Can you give an AI a goal which involves “minimally impacting the world”?
Answer to Can you stop an advanced AI from upgrading itself?
Answer to Can't we just tell an AI to do what we want?
Answer to Could AI have basic emotions?
Answer to Could we program an AI to automatically shut down if it starts doing things we don’t want it to?
Answer to Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?
Answer to How can I collect questions for Stampy?
Answer to How can I contact the Stampy team?
Answer to How can I contribute to Stampy?
Answer to How can I join the Stampy dev team?
Answer to How close do AI experts think we are to creating superintelligence?
Answer to How could an intelligence explosion be useful?
Answer to How could poorly defined goals lead to such negative outcomes?
Answer to How difficult should we expect alignment to be?
Answer to How do I add content from LessWrong / Effective Altruism Forum tag-wikis to Stampy?
Answer to How do I form my own views about AI safety?
Answer to How do I format answers on Stampy?
Answer to How does AI taking things literally contribute to alignment being hard?
Answer to How does the stamp eigenkarma system work?
Answer to How doomed is humanity?
Answer to How fast will AI takeoff be?
Answer to How is "intelligence" defined?
Answer to How is AGI different from current AI?
Answer to How likely is an "intelligence explosion"?
Answer to How likely is it that an AI would pretend to be a human to further its goals?
Answer to How might AGI kill people?
Answer to How might a superintelligence socially manipulate humans?
Answer to How might an "intelligence explosion" be dangerous?
Answer to How might an AI achieve a seemingly beneficial goal via inappropriate means?
Answer to How might non-agentic GPT-style AI cause an "intelligence explosion" or otherwise contribute to existential risk?
Answer to How might things go wrong with AI even without an agentic superintelligence?
Answer to How might we get from Artificial General Intelligence to a Superintelligent system?
Answer to How quickly could an AI go from the first indications of problems to an unrecoverable disaster?
Answer to I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?
Answer to I want to work on AI alignment. How can I get funding?
Answer to I'm interested in working on AI safety. What should I do?
Answer to If AI takes over the world how could it create and maintain the infrastructure that humans currently provide?
Answer to If I only care about helping people alive today, does AI safety still matter?
Answer to If we solve alignment, are we sure of a good future?
Answer to Is AI alignment possible?
Answer to Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?
Answer to Is it possible to block an AI from doing certain things on the Internet?
Answer to Is it possible to code into an AI to avoid all the ways a given task could go wrong, and would it be dangerous to try that?
Answer to Is large-scale automated AI persuasion and propaganda a serious concern?
Answer to Is the focus on the existential threat of superintelligent AI diverting too much attention from more pressing debates about AI in surveillance and the battlefield, and its potential effects on the economy?
Answer to Is there a danger in anthropomorphizing AI’s and trying to understand them in human terms?
Answer to Is this about AI systems becoming malevolent or conscious and turning on us?
Answer to Isn’t AI just a tool like any other? Won’t it just do what we tell it to?
Answer to I’d like to get deeper into the AI alignment literature. Where should I look?
Answer to Might an "intelligence explosion" never occur?
Answer to OK, I’m convinced. How can I help?
Answer to Once we notice that a superintelligence given a specific task is trying to take over the world, can’t we turn it off, reprogram it or otherwise correct the problem?
Answer to Superintelligence sounds like science fiction. Do people think about this in the real world?
Answer to We’re going to merge with the machines so this will never be a problem, right?
Answer to What approaches are AI alignment organizations working on?
Answer to What are "human values"?
Answer to What are "scaling laws" and how are they relevant to safety?
Answer to What are alternate phrasings for?
Answer to What are brain-computer interfaces?
Answer to What are language models?
Answer to What are mesa-optimizers?
Answer to What are some AI alignment research agendas currently being pursued?
Answer to What are some good books about AGI safety?
Answer to What are some good podcasts about AI alignment?
Answer to What are some good resources on AI alignment?
Answer to What are some objections to the importance of AI alignment?
Answer to What are some of the most impressive recent advances in AI capabilities?
Answer to What are some specific open tasks on Stampy?
Answer to What are the differences between “AI safety”, “AGI safety”, “AI alignment” and “AI existential safety”?
Answer to What are the different possible AI takeoff speeds?
Answer to What are the ethical challenges related to whole brain emulation?
Answer to What are the potential benefits of AI as it grows increasingly sophisticated?
Answer to What are the style guidelines for writing for Stampy?
Answer to What can I do to contribute to AI safety?
Answer to What does Elon Musk think about AI safety?
Answer to What exactly is AGI and what will it look like?
Answer to What harm could a single superintelligence do when it took so many humans to build civilization?
Answer to What is "biological cognitive enhancement"?
Answer to What is "evidential decision theory"?
Answer to What is "functional decision theory"?
Answer to What is "greater-than-human intelligence"?
Answer to What is "hedonium"?
Answer to What is "narrow AI"?
Answer to What is "superintelligence"?
Answer to What is "transformative AI"?
Answer to What is AI Safety via Debate?
Answer to What is GPT-3?
Answer to What is Goodhart's law?
Answer to What is MIRI’s mission?
Answer to What is Stampy's copyright?
Answer to What is a "quantilizer"?
Answer to What is a "value handshake"?
Answer to What is a canonical question on Stampy's Wiki?
Answer to What is a duplicate question on Stampy's Wiki?
Answer to What is a follow-up question on Stampy's Wiki?
Answer to What is an "agent"?
Answer to What is an "intelligence explosion"?
Answer to What is an "s-risk"?
Answer to What is artificial general intelligence safety / AI alignment?
Answer to What is causal decision theory?
Answer to What is meant by "AI takeoff"?
Answer to What is the "control problem"?
Answer to What is the "long reflection"?
Answer to What is the "orthogonality thesis"?
Answer to What is the "windfall clause"?
Answer to What is the Stampy project?
Answer to What is the general nature of the concern about AI alignment?
Answer to What kind of questions do we want on Stampy?
Answer to What should I read to learn about decision theory?
Answer to What should be marked as a canonical answer on Stampy's Wiki?
Answer to What sources of information can Stampy use?
Answer to What technical problems are MIRI working on?
Answer to What would a good future with AGI look like?
Answer to What would a good solution to AI alignment look like?
Answer to When should I stamp an answer?
Answer to When will an intelligence explosion happen?
Answer to When will transformative AI be created?
Answer to Where can I find all the features of Stampy's Wiki?
Answer to Where can I find people to talk to about AI alignment?
Answer to Where can I find questions to answer for Stampy?
Answer to Where can I learn about AI alignment?
Answer to Where can I learn about interpretability?
Answer to Who created Stampy?
Answer to Who is Stampy?
Answer to Why can't we just make a "child AI" and raise it?
Answer to Why can't we just turn the AI off if it starts to misbehave?
Answer to Why can't we simply stop developing AI?
Answer to Why can’t we just use Asimov’s Three Laws of Robotics?
Answer to Why can’t we just use natural language instructions?
Answer to Why can’t we just…
Answer to Why do we expect that a superintelligence would closely approximate a utility maximizer?
Answer to Why does AI takeoff speed matter?
Answer to Why don't we just not build AGI if it's so dangerous?
Answer to Why is AGI dangerous?
Answer to Why is AGI safety a hard problem?
Answer to Why is AI alignment a hard problem?
Answer to Why is safety important for smarter-than-human AI?
Answer to Why is the future of AI suddenly in the news? What has changed?
Answer to Why might a maximizing AI cause bad outcomes?
Answer to Why might a superintelligent AI be dangerous?
Answer to Why might an AI do something that we don’t want it to, if it’s really so intelligent?
Answer to Why might contributing to Stampy be worth my time?
Answer to Why might people try to build AGI rather than stronger and stronger narrow AIs?
Answer to Why might we expect a superintelligence to be hostile by default?
Answer to Why not just put it in a box?
Answer to Why should I worry about superintelligence?
Answer to Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?
Answer to Why think that AI can outperform humans?
Answer to Why would great intelligence produce great power?
Answer to Why would we only get one chance to align a superintelligence?
Answer to Will an aligned superintelligence care about animals other than humans?
Answer to Will we ever build a superintelligence?
Answer to Won’t AI be just like us?
Answer to Would AI alignment be hard with deep learning?
Answer to Would an aligned AI allow itself to be shut down?
Answer to Would donating small amounts to AI safety organizations make any significant difference?
Answer to Would it improve the safety of quantilizers to cut off the top few percent of the distribution?
Answer to Wouldn't a superintelligence be smart enough to know right from wrong?
Answer to Wouldn't it be a good thing for humanity to die out?