Canonical answers

From Stampy's Wiki

Canonical answers may be served to readers by Stampy, so only answers with a reasonably high stamp score should be marked as canonical. All canonical answers are open to collaborative editing and updating, and they should represent a consensus response (written from the Stampy Point Of View) to a question within Stampy's scope.

Answers to YouTube questions should not be marked as canonical, and will generally remain as originally written, since they contain details specific to an idiosyncratic question. A YouTube answer may be forked into a wiki answer in order to respond better to a particular question; in that case, the YouTube question should have its canonical version field set to the new, more widely useful question.

There are 91 canonical answers, but 7 of them answer a question marked as non-canonical!

All Canonical Answers

Answer to A lot of concern appears to focus on human-level or “superintelligent” AI. Is that a realistic prospect in the foreseeable future?
Answer to Are Google, OpenAI etc. aware of the risk?
Answer to Are robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world?
Answer to Are there types of advanced AI that would be safer than others?
Answer to Aren’t there some pretty easy ways to eliminate these potential problems?
Answer to Can an AI really be smarter than humans?
Answer to Can humans and a superintelligence co-exist without the superintelligence destroying the humans?
Answer to Can humans stay in control of the world if human- or superhuman-level AI is developed?
Answer to Can you give an AI a goal of “minimally impact the world”?
Answer to Can you stop an advanced AI from upgrading itself?
Answer to Could AI have basic emotions?
Answer to Could we program an AI to automatically shut down if it starts doing things we don’t want it to?
Answer to How can I collect questions for Stampy?
Answer to How can I contact the Stampy team?
Answer to How can I contribute to Stampy?
Answer to How could an intelligence explosion be useful?
Answer to How could poorly defined goals lead to such negative outcomes?
Answer to How is AGI different from current AI?
Answer to How is ‘intelligence’ defined?
Answer to How likely is it that an AI would pretend to be a human to further its goals?
Answer to How might AGI kill people?
Answer to How might a superintelligence socially manipulate humans?
Answer to How might an AI achieve a seemingly beneficial goal via inappropriate means?
Answer to How might an intelligence explosion be dangerous?
Answer to How might non-agentic GPT-style AI cause an intelligence explosion or otherwise contribute to existential risk?
Answer to How soon will transformative AI likely come and why?
Answer to I want to work on AI alignment. How can I get funding?
Answer to I'm interested in working on AI Safety. What should I do?
Answer to Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?
Answer to Is it possible to block an AI from doing certain things on the internet?
Answer to Is it possible to code into an AI to avoid all the ways a given task could go wrong - and is it dangerous to try that?
Answer to Is the concern that autonomous AI systems could become malevolent or self aware, or develop “volition”, and turn on us? And can’t we just unplug them?
Answer to Is the focus on the existential threat of superintelligent AI diverting too much attention from more pressing debates about AI in surveillance and the battlefield and its potential effects on the economy?
Answer to Is there a danger in anthropomorphising AI’s and trying to understand them in human terms?
Answer to Isn’t AI just a tool like any other? Won’t AI just do what we tell it to do?
Answer to I’d like a good introduction to AI alignment. Where can I find one?
Answer to Might an intelligence explosion never occur?
Answer to On a scale of 1 to 100 how doomed is humanity?
Answer to Superintelligence sounds a lot like science fiction. Do people think about this in the real world?
Answer to We’re going to merge with the machines so this will never be a problem, right?
Answer to What approaches are AI alignment organizations working on?
Answer to What are alternate phrasings for?
Answer to What are brain-computer interfaces?
Answer to What are some good external resources to link to when editing Stampy's Wiki?
Answer to What are the core challenges between us today and aligned superintelligence?
Answer to What are the potential benefits of AI as it grows increasingly sophisticated?
Answer to What can we do to contribute to AI safety?
Answer to What do you mean by “fast takeoff”?
Answer to What exactly is AGI and what will it look like?
Answer to What harm could a single superintelligence do, when it took so many humans to build civilization?
Answer to What is MIRI’s mission?
Answer to What is Stampy's scope?
Answer to What is a canonical question on Stampy's Wiki?
Answer to What is a duplicate question on Stampy's Wiki?
Answer to What is a follow-up question on Stampy's Wiki?
Answer to What is a verified account on Stampy's Wiki?
Answer to What is biological cognitive enhancement?
Answer to What is greater-than-human intelligence?
Answer to What is superintelligence?
Answer to What is the Control Problem?
Answer to What is the general nature of the concern about AI safety?
Answer to What is the intelligence explosion?
Answer to What should I read to learn about decision theory?
Answer to What should be marked as a canonical answer on Stampy's Wiki?
Answer to What technical problems are MIRI working on?
Answer to When will an intelligence explosion happen?
Answer to Where can I learn about interpretability?
Answer to Why can't we just make a child AI and raise it?
Answer to Why can't we simply stop developing AI?
Answer to Why can't we turn the computers off?
Answer to Why can’t we just use Asimov’s 3 laws of robotics?
Answer to Why can’t we just use natural language instructions?
Answer to Why can’t we just…
Answer to Why don't we just not build AGI if it's so dangerous?
Answer to Why is AGI dangerous?
Answer to Why is AI Safety hard?
Answer to Why is safety important for smarter-than-human AI?
Answer to Why is the future of AI suddenly in the news? What has changed?
Answer to Why might a fast takeoff be dangerous?
Answer to Why might an AI do something that we don’t want it to, if it’s really so intelligent?
Answer to Why might people try to build AGI rather than stronger and stronger narrow AIs?
Answer to Why not just put it in a box?
Answer to Why should I worry about superintelligence?
Answer to Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?
Answer to Why think that AI can outperform humans?
Answer to Why would great intelligence produce great power?
Answer to Won’t AI be just like us?
Answer to Would it improve the safety of quantilizers to cut off the top few % of the distribution?
Answer to Wouldn’t it be intelligent enough to know right from wrong?