Canonically answered questions

From Stampy's Wiki

Canonical questions are those we have checked are in scope and not duplicates, so we want answers to them. They may be edited to represent a class of question more broadly, rather than keeping all their idiosyncrasies. Once they have a canonical answer, Stampy will serve them to readers.

There are 334 canonical questions; 195 of them are answered, and 151 have canonical answers.

All canonically answered questions

A lot of concern appears to focus on human-level or “superintelligent” AI. Is that a realistic prospect in the foreseeable future?
Any AI will be a computer program. Why wouldn't it just do what it's programmed to do?
Are Google, OpenAI, etc. aware of the risk?
Are there types of advanced AI that would be safer than others?
Aren't robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world?
Aren’t there some pretty easy ways to eliminate these potential problems?
At a high level, what is the challenge of alignment that we must meet to secure a good future?
Can an AI really be smarter than humans?
Can humans stay in control of the world if human- or superhuman-level AI is developed?
Can people contribute to alignment by using proof assistants to generate formal proofs?
Can we constrain a goal-directed AI using specified rules?
Can we test an AI to make sure that it’s not going to take over and do harmful things after it achieves superintelligence?
Can you give an AI a goal which involves “minimally impacting the world”?
Can you stop an advanced AI from upgrading itself?
Can't we just tell an AI to do what we want?
Could AI have basic emotions?
Could we program an AI to automatically shut down if it starts doing things we don’t want it to?
How can I collect questions for Stampy?
How can I contact the Stampy team?
How can I contribute to Stampy?
How can I join the Stampy dev team?
How could an intelligence explosion be useful?
How could poorly defined goals lead to such negative outcomes?
How difficult should we expect alignment to be?
How do I add content from LessWrong / Effective Altruism Forum tag-wikis to Stampy?
How do I form my own views about AI safety?
How do I format answers on Stampy?
How does AI taking things literally contribute to alignment being hard?
How does the stamp eigenkarma system work?
How doomed is humanity?
How fast will AI takeoff be?
How is "intelligence" defined?
How is AGI different from current AI?
How likely is an "intelligence explosion"?
How likely is it that an AI would pretend to be a human to further its goals?
How might AGI kill people?
How might a superintelligence socially manipulate humans?
How might an "intelligence explosion" be dangerous?
How might an AI achieve a seemingly beneficial goal via inappropriate means?
How might non-agentic GPT-style AI cause an "intelligence explosion" or otherwise contribute to existential risk?
How quickly could an AI go from the first indications of problems to an unrecoverable disaster?
I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?
I want to work on AI alignment. How can I get funding?
I'm interested in working on AI safety. What should I do?
If AI takes over the world how could it create and maintain the infrastructure that humans currently provide?
If I only care about helping people alive today, does AI safety still matter?
If we solve alignment, are we sure of a good future?
Is AI alignment possible?
Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?
Is it possible to block an AI from doing certain things on the Internet?
Is it possible to code into an AI to avoid all the ways a given task could go wrong, and would it be dangerous to try that?
Is large-scale automated AI persuasion and propaganda a serious concern?
Is the focus on the existential threat of superintelligent AI diverting too much attention from more pressing debates about AI in surveillance and the battlefield, and its potential effects on the economy?
Is there a danger in anthropomorphizing AIs and trying to understand them in human terms?
Is this about AI systems becoming malevolent or conscious and turning on us?
Isn’t AI just a tool like any other? Won’t it just do what we tell it to?
I’d like to get deeper into the AI alignment literature. Where should I look?
I’m convinced that this is important and want to contribute. What can I do to help?
Might an "intelligence explosion" never occur?
Once we notice that a superintelligence given a specific task is trying to take over the world, can’t we turn it off, reprogram it or otherwise correct the problem?
Superintelligence sounds like science fiction. Do people think about this in the real world?
We’re going to merge with the machines so this will never be a problem, right?
What approaches are AI alignment organizations working on?
What are "human values"?
What are alternate phrasings for?
What are brain-computer interfaces?
What are some good podcasts about AI alignment?
What are some good resources on AI alignment?
What are some of the most impressive recent advances in AI capabilities?
What are some specific open tasks on Stampy?
What are the different possible AI takeoff speeds?
What are the potential benefits of AI as it grows increasingly sophisticated?
What are the style guidelines for writing for Stampy?
What can I do to contribute to AI safety?
What does Elon Musk think about AI safety?
What harm could a single superintelligence do when it took so many humans to build civilization?
What is "AI alignment"?
What is "biological cognitive enhancement"?
What is "evidential decision theory"?
What is "functional decision theory"?
What is "greater-than-human intelligence"?
What is "hedonium"?
What is "narrow AI"?
What is "superintelligence"?
What is Artificial General Intelligence and what will it look like?
What is GPT-3?
What is MIRI’s mission?
What is a "quantilizer"?
What is a "value handshake"?
What is a canonical question on Stampy's Wiki?
What is a duplicate question on Stampy's Wiki?
What is a follow-up question on Stampy's Wiki?
What is an "agent"?
What is an "intelligence explosion"?
What is an "s-risk"?
What is causal decision theory?
What is meant by "AI takeoff"?
What is the "control problem"?
What is the "long reflection"?
What is the "orthogonality thesis"?
What is the "windfall clause"?
What is the Stampy project?
What is the general nature of the concern about AI alignment?
What kind of questions do we want on Stampy?
What should I read to learn about decision theory?
What should be marked as a canonical answer on Stampy's Wiki?
What sources of information can Stampy use?
What technical problems is MIRI working on?
What would a good solution to AI alignment look like?
When should I stamp an answer?
When will an intelligence explosion happen?
When will transformative AI be created?
Where can I find all the features of Stampy's Wiki?
Where can I find people to talk to about AI alignment?
Where can I find questions to answer for Stampy?
Where can I learn about AI alignment?
Where can I learn about interpretability?
Who created Stampy?
Who is Stampy?
Why can't we just make a "child AI" and raise it?
Why can't we just turn the AI off if it starts to misbehave?
Why can't we simply stop developing AI?
Why can’t we just use Asimov’s Three Laws of Robotics?
Why can’t we just use natural language instructions?
Why can’t we just “put the AI in a box” so that it can’t influence the outside world?
Why can’t we just…
Why do we expect that a superintelligence would closely approximate a utility maximizer?
Why does AI takeoff speed matter?
Why don't we just not build AGI if it's so dangerous?
Why is AGI dangerous?
Why is AGI safety a hard problem?
Why is safety important for smarter-than-human AI?
Why is the future of AI suddenly in the news? What has changed?
Why might a maximizing AI cause bad outcomes?
Why might a superintelligent AI be dangerous?
Why might an AI do something that we don’t want it to, if it’s really so intelligent?
Why might contributing to Stampy be worth my time?
Why might people try to build AGI rather than stronger and stronger narrow AIs?
Why might we expect a superintelligence to be hostile by default?
Why should I worry about superintelligence?
Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?
Why think that AI can outperform humans?
Why would great intelligence produce great power?
Will an aligned superintelligence care about animals other than humans?
Will we ever build a superintelligence?
Won’t AI be just like us?
Would AI alignment be hard with deep learning?
Would donating small amounts to AI safety organizations make any significant difference?
Would it improve the safety of quantilizers to cut off the top few percent of the distribution?
Wouldn't a superintelligence be smart enough to know right from wrong?
Wouldn't it be a good thing for humanity to die out?