Answers

From Stampy's Wiki

This page lists all answers except those written for questions from YouTube.

See also

150 wiki answers, 92 of them canonical!

All wiki-added and imported Questions

Answer to A lot of concern appears to focus on human-level or “superintelligent” AI. Is that a realistic prospect in the foreseeable future?
Answer to AIs aren’t as smart as rats, let alone humans. Isn’t it sort of early to be worrying about this kind of thing?
Answer to Are Google, OpenAI etc. aware of the risk?
Answer to Are robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world?
Answer to Are there types of advanced AI that would be safer than others?
Answer to Aren’t there some pretty easy ways to eliminate these potential problems?
Answer to But superintelligences are very smart. Aren’t they smart enough not to make silly mistakes in comprehension?
Answer to Can an AI really be smarter than humans?
Answer to Can humans and a superintelligence co-exist without the superintelligence destroying the humans?
Answer to Can humans stay in control of the world if human- or superhuman-level AI is developed?
Answer to Can we add friendliness to any artificial intelligence design?
Answer to Can we just tell an AI to do what we want right now, based on the desires of our non-surgically altered brains?
Answer to Can we program the superintelligence to maximize human pleasure or desire satisfaction?
Answer to Can we specify a code of rules that the AI has to follow?
Answer to Can we teach a superintelligence a moral code with machine learning?
Answer to Can we tell an AI just to figure out what we want, then do that?
Answer to Can we test a weak or human-level AI to make sure that it’s not going to do bad things after it achieves superintelligence?
Answer to Can you give an AI a goal of “minimally impact the world”?
Answer to Can you stop an advanced AI from upgrading itself?
Answer to Can’t we just program the superintelligence not to harm us?
Answer to Could AI have basic emotions?
Answer to Could we program an AI to automatically shut down if it starts doing things we don’t want it to?
Answer to Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?
Answer to Even if hostile superintelligences are dangerous, why would we expect a superintelligence to ever be hostile?
Answer to How can I collect questions for Stampy?
Answer to How can I contact the Stampy team?
Answer to How can I contribute to Stampy?
Answer to How could an intelligence explosion be useful?
Answer to How could general intelligence be programmed into a machine?
Answer to How could poorly defined goals lead to such negative outcomes?
Answer to How is AGI different from current AI?
Answer to How is ‘intelligence’ defined?
Answer to How likely is it that an AI would pretend to be a human to further its goals?
Answer to How might AGI kill people?
Answer to How might a superintelligence socially manipulate humans?
Answer to How might a superintelligence technologically manipulate humans?
Answer to How might an AI achieve a seemingly beneficial goal via inappropriate means?
Answer to How might an intelligence explosion be dangerous?
Answer to How might non-agentic GPT-style AI cause an intelligence explosion or otherwise contribute to existential risk?
Answer to How quickly could an AI go from the first indications of problems to an unrecoverable disaster?
Answer to How soon will transformative AI likely come and why?
Answer to I want to work on AI alignment. How can I get funding?
Answer to I'm interested in working on AI Safety. What should I do?
Answer to If superintelligence is a real risk, what do we do about it?
Answer to Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?
Answer to Is it possible to block an AI from doing certain things on the internet?
Answer to Is it possible to code into an AI to avoid all the ways a given task could go wrong - and is it dangerous to try that?
Answer to Is the concern that autonomous AI systems could become malevolent or self-aware, or develop “volition”, and turn on us? And can’t we just unplug them?
Answer to Is the focus on the existential threat of superintelligent AI diverting too much attention from more pressing debates about AI in surveillance and the battlefield and its potential effects on the economy?
Answer to Is there a danger in anthropomorphising AIs and trying to understand them in human terms?
Answer to Isn’t AI just a tool like any other? Won’t AI just do what we tell it to do?
Answer to Isn’t it immoral to control and impose our values on AI?
Answer to I’d like a good introduction to AI alignment. Where can I find one?
Answer to Might an intelligence explosion never occur?
Answer to On a scale of 1 to 100 how doomed is humanity?
Answer to Once we notice that the superintelligence working on calculating digits of pi is starting to try to take over the world, can’t we turn it off, reprogram it, or otherwise correct its mistake?
Answer to Superintelligence sounds a lot like science fiction. Do people think about this in the real world?
Answer to We’re going to merge with the machines so this will never be a problem, right?
Answer to What approaches are AI alignment organizations working on?
Answer to What are alternate phrasings for?
Answer to What are brain-computer interfaces?
Answer to What are some good external resources to link to when editing Stampy's Wiki?
Answer to What are the core challenges between us today and aligned superintelligence?
Answer to What are the potential benefits of AI as it grows increasingly sophisticated?
Answer to What are the style guidelines for writing for Stampy?
Answer to What can we do to contribute to AI safety?
Answer to What can we expect the motivations of a superintelligent machine to be?
Answer to What do you mean by “fast takeoff”?
Answer to What exactly is AGI and what will it look like?
Answer to What harm could a single superintelligence do, when it took so many humans to build civilization?
Answer to What is AI Safety?
Answer to What is Coherent Extrapolated Volition?
Answer to What is Friendly AI?
Answer to What is MIRI’s mission?
Answer to What is Stampy's scope?
Answer to What is a canonical question on Stampy's Wiki?
Answer to What is a duplicate question on Stampy's Wiki?
Answer to What is a follow-up question on Stampy's Wiki?
Answer to What is a verified account on Stampy's Wiki?
Answer to What is biological cognitive enhancement?
Answer to What is greater-than-human intelligence?
Answer to What is superintelligence?
Answer to What is the Control Problem?
Answer to What is the general nature of the concern about AI safety?
Answer to What is the intelligence explosion?
Answer to What should I read to learn about decision theory?
Answer to What should be marked as a canonical answer on Stampy's Wiki?
Answer to What technical problems are MIRI working on?
Answer to What would an actually good solution to the control problem look like?
Answer to When will an intelligence explosion happen?
Answer to Where can I learn about interpretability?
Answer to Who is Professor Nick Bostrom?
Answer to Why can't we just make a child AI and raise it?
Answer to Why can't we simply stop developing AI?
Answer to Why can't we turn the computers off?
Answer to Why can’t we just use Asimov’s 3 laws of robotics?
Answer to Why can’t we just use natural language instructions?
Answer to Why can’t we just…
Answer to Why does AI need goals in the first place? Can’t it be intelligent without any agenda?
Answer to Why does takeoff speed matter?
Answer to Why don't we just not build AGI if it's so dangerous?
Answer to Why is AGI dangerous?
Answer to Why is AI Safety hard?
Answer to Why is AI Safety important?
Answer to Why is safety important for smarter-than-human AI?
Answer to Why is the future of AI suddenly in the news? What has changed?
Answer to Why might a fast takeoff be dangerous?
Answer to Why might an AI do something that we don’t want it to, if it’s really so intelligent?
Answer to Why might people try to build AGI rather than stronger and stronger narrow AIs?
Answer to Why might we expect a fast takeoff?
Answer to Why might we expect a moderate takeoff?
Answer to Why not just put it in a box?
Answer to Why should I worry about superintelligence?
Answer to Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?
Answer to Why think that AI can outperform humans?
Answer to Why would great intelligence produce great power?
Answer to Won’t AI be just like us?
Answer to Would AI alignment be hard with deep learning?
Answer to Would it improve the safety of quantilizers to cut off the top few % of the distribution?
Answer to Wouldn’t it be intelligent enough to know right from wrong?
Aprillion's Answer to Can AI be creative?
Aprillion's Answer to Do we actually need AGI?
Aprillion's Answer to If an AI became conscious, how would we ever know?
Aprillion's Answer to If an AI system is smart, could it figure out the moral way to behave?
Aprillion's Answer to Is it possible to limit an AGI from full access to the internet?
Aprillion's Answer to Isn’t AI just a tool like any other?
Aprillion's Answer to What's meant by calling an AI "agenty" or "agentlike"?
CyberByte's Answer to How long will it be until AGI is created?
Filip's Answer to Are AI researchers trying to make conscious AI?
Filip's Answer to Do you need a PhD to work on AI Safety?
Filip's Answer to Isn't it too soon to work on AGI safety?
Filip's Answer to People talk about "Aligning AI with human values" but which humans' values are we talking about?
Filip's Answer to We already have psychopaths who are misaligned with the rest of the people, but somehow we deal with them. Can't we do the same with AI?
Filip's Answer to What about having a human supervisor who must approve all the AI's decisions before executing them?
Filip's Answer to What are the differences between AGI, TAI, and Superintelligence?
Filip's Answer to Will AI learn to be independent from people or will it always ask for our orders?
Luke Muehlhauser's Answer to What is superintelligence?
MIRI's Answer to How long will it be until AGI is created?
MIRI's Answer to Why can’t we just “put the AI in a box” so it can’t influence the outside world?
MIRI's Answer to Why work on AI safety early?
Morpheus's Answer to Is humanity doomed?
NotaSentientAI's Answer to Why not just put it in a box?
Plex's Answer to Are robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world?
Plex's Answer to How does the stamp eigenkarma system work?
Plex's Answer to How long will it be until AGI is created?
Plex's Answer to What is the Stampy project?
SlimeBunnyBat's Answer to Isn't the real concern technological unemployment?
Zekava's Answer to Why does there seem to have been an explosion of activity in AI in recent years?