Questions

This page shows all non-duplicate in-scope wiki-added questions.

114 wiki questions and 77 imported FAQ questions; 139 of them are answered, and 90 have canonical answers!

All wiki-added and imported questions

A lot of concern appears to focus on human-level or “superintelligent” AI. Is that a realistic prospect in the foreseeable future?
AI is a computer program. Why wouldn't it just do what it's programmed to do?
AIs aren’t as smart as rats, let alone humans. Isn’t it sort of early to be worrying about this kind of thing?
Are AI researchers trying to make conscious AI?
Are Google, OpenAI etc. aware of the risk?
Are robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world?
Is there anything that can actually be done in the amount of time left?
Are there types of advanced AI that would be safer than others?
Aren’t there some pretty easy ways to eliminate these potential problems?
Can AI be creative?
Can an AI really be smarter than humans?
Can humans and a superintelligence co-exist without the superintelligence destroying the humans?
Can humans stay in control of the world if human- or superhuman-level AI is developed?
Can people contribute to alignment by using proof assistants to generate formal proofs?
Can we add friendliness to any artificial intelligence design?
Can we just tell an AI to do what we want right now, based on the desires of our non-surgically altered brains?
Can we program the superintelligence to maximize human pleasure or desire satisfaction?
Can we specify a code of rules that the AI has to follow?
Can we teach a superintelligence a moral code with machine learning?
Can we tell an AI just to figure out what we want, then do that?
Can we test a weak or human-level AI to make sure that it’s not going to do bad things after it achieves superintelligence?
Can you give an AI a goal of “minimally impact the world”?
Can you stop an advanced AI from upgrading itself?
Can’t we just program the superintelligence not to harm us?
Could AI have basic emotions?
Could an AGI already be at large?
Could we program an AI to automatically shut down if it starts doing things we don’t want it to?
Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?
Do you need a PhD to work on AI Safety?
Even if we are rationally convinced about the urgency of existential risks, it can be hard to feel that emotionally, because the danger is quite abstract. How can this gap be bridged?
How can I collect questions for Stampy?
How can I contact the Stampy team?
How can I contribute to Stampy?
How can I get hired by an organization working on AI alignment?
How can I join the Stampy dev team?
How could an intelligence explosion be useful?
How could general intelligence be programmed into a machine?
How could poorly defined goals lead to such negative outcomes?
How does the stamp eigenkarma system work?
How good is the world model of GPT-3?
How is AGI different from current AI?
How is ‘intelligence’ defined?
How likely is an intelligence explosion?
How likely is it that an AI would pretend to be a human to further its goals?
How long will it be until AGI is created?
How might AGI kill people?
How might a superintelligence socially manipulate humans?
How might a superintelligence technologically manipulate humans?
How might an AI achieve a seemingly beneficial goal via inappropriate means?
How might an intelligence explosion be dangerous?
How might non-agentic GPT-style AI cause an intelligence explosion or otherwise contribute to existential risk?
How quickly could an AI go from the first indications of problems to an unrecoverable disaster?
How should I decide which quality level to place a question in?
How successfully have institutions managed risks from novel technology in the past?
I want to work on AI alignment. How can I get funding?
I'm interested in working on AI Safety. What should I do?
If AI takes over the world, how could it create and maintain the infrastructure that humans currently provide?
If an AI became conscious, how would we ever know?
If an AI system is smart, could it figure out the moral way to behave?
If superintelligence is a real risk, what do we do about it?
If we solve alignment, are we sure of a good future?
In what ways are real world machine learning systems different from expected utility maximizers?
Is donating small amounts to AI safety organisations going to make a non-negligible difference?
Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?
Is humanity doomed?
Is it likely that hardware will allow an exponential takeoff?
Is it possible to block an AI from doing certain things on the internet?
Is it possible to code an AI to avoid all the ways a given task could go wrong, and is it dangerous to try that?
Is it possible to limit an AGI from full access to the internet?
Is the concern that autonomous AI systems could become malevolent or self-aware, or develop “volition”, and turn on us? And can’t we just unplug them?
Is the focus on the existential threat of superintelligent AI diverting too much attention from more pressing debates about AI in surveillance and the battlefield and its potential effects on the economy?
Is there a danger in anthropomorphising AIs and trying to understand them in human terms?
Isn't it too soon to work on AGI safety?
Isn't the real concern AI being misused by terrorists or other bad actors?
Isn't the real concern AI enabled totalitarianism?
Isn't the real concern autonomous weapons?
Isn't the real concern technological unemployment?
Isn’t AI just a tool like any other? Won’t AI just do what we tell it to do?
Isn’t it immoral to control and impose our values on AI?
I’d like a good introduction to AI alignment. Where can I find one?
I’d like to find people to talk and learn with to help me stay motivated to get involved with AI alignment. Where can I find them?
I’d like to get deeper into the AI alignment literature. Where should I look?
I’m convinced that this is important and want to contribute. What can I do to help?
Might an intelligence explosion never occur?
On a scale of 1 to 100, how doomed is humanity?
Once we notice that the superintelligence working on calculating digits of pi is starting to try to take over the world, can’t we turn it off, reprogram it, or otherwise correct its mistake?
People talk about "Aligning AI with human values" but which humans' values are we talking about?
Shouldn't we work on things other than AI alignment?
Superintelligence sounds a lot like science fiction. Do people think about this in the real world?
We already have psychopaths who are misaligned with the rest of humanity, but somehow we deal with them. Can't we do the same with AI?
We’re going to merge with the machines so this will never be a problem, right?
What about AI concerns other than existential safety?
What about having a human supervisor who must approve all the AI's decisions before executing them?
What approaches are AI alignment organizations working on?
What are alternate phrasings for?
What are brain-computer interfaces?
What are good external resources to link to when editing Stampy's Wiki?
What are low-cost things that people who won't become researchers can do to contribute?
What are some important terms in AI alignment?
What are the core challenges between us today and aligned superintelligence?
What are the differences between AGI, TAI, and Superintelligence?
What are the different possible AI takeoff speeds?
What are the potential benefits of AI as it grows increasingly sophisticated?
What can we do to contribute to AI safety?
What can we expect the motivations of a superintelligent machine to be?
What convinced people working on AI alignment that it was worth spending their time on this cause?
What do the different difficulty levels mean on Stampy's Wiki?
What evidence do experts usually base their timeline predictions on?
What external content would be useful to the Stampy project?
What harm could a single superintelligence do, when it took so many humans to build civilization?
What if we put AI in a box, and have a second more powerful AI with a goal to prevent the first one from escaping?
What is 'Transformative AI'?
What is AGI and what will it look like?
What is AI Safety?
What is AI alignment?
What is Codex / GitHub Copilot?
What is Coherent Extrapolated Volition?
What is Friendly AI?
What is GPT-3?
What is MIRI’s mission?
What is the Stampy Point Of View?
What is Stampy's scope?
What is a canonical question on Stampy's Wiki?
What is a canonical version of a question on Stampy's Wiki?
What is a duplicate question on Stampy's Wiki?
What is a follow-up question on Stampy's Wiki?
What is a quantilizer?
What is a verified account on Stampy's Wiki?
What is biological cognitive enhancement?
What is greater-than-human intelligence?
What is meant by AI takeoff?
What is narrow AI?
What is superintelligence?
What is the Control Problem?
What is the Stampy project?
What is the general nature of the concern about AI safety?
What is the intelligence explosion?
What is the orthogonality thesis?
What is whole brain emulation?
What organizations are working on AI existential safety?
What research agendas are most relevant to x-risk reduction?
What should I read to learn about decision theory?
What should be marked as a canonical answer on Stampy's Wiki?
What should be marked as a related question on Stampy's Wiki?
What technical problems is MIRI working on?
What would an actually good solution to the control problem look like?
What's especially worrisome about autonomous weapons?
What's meant by calling an AI "agenty" or "agentlike"?
When should I stamp an answer?
When will an intelligence explosion happen?
When will transformative AI be created?
Where can I learn about interpretability?
Which country will AGI be created by, and does this matter?
Who helped create Stampy?
Who is Professor Nick Bostrom?
Who is Stampy?
Why can't we just make a child AI and raise it?
Why can't we simply stop developing AI?
Why can't we turn the computers off?
Why can’t we just use Asimov’s Three Laws of Robotics?
Why can’t we just use natural language instructions?
Why can’t we just “put the AI in a box” so it can’t influence the outside world?
Why can’t we just…
Why does AI need goals in the first place? Can’t it be intelligent without any agenda?
Why does takeoff speed matter?
Why does there seem to have been an explosion of activity in AI in recent years?
Why don't we just not build AGI if it's so dangerous?
Why is AGI dangerous?
Why is AI Safety hard?
Why is AI Safety important?
Why is safety important for smarter-than-human AI?
Why is the future of AI suddenly in the news? What has changed?
Why might a fast takeoff be dangerous?
Why might an AI do something that we don’t want it to, if it’s really so intelligent?
Why might people try to build AGI rather than stronger and stronger narrow AIs?
Why might we expect a fast takeoff?
Why might we expect a moderate AI takeoff?
Why might we expect a superintelligence to be hostile by default?
Why should I worry about superintelligence?
Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?
Why think that AI can outperform humans?
Why work on AI safety early?
Why would great intelligence produce great power?
Will AI learn to be independent from people, or will it always ask for our orders?
Will superintelligence make a large part of humanity unemployable?
Won’t AI be just like us?
Would it improve the safety of quantilizers to cut off the top few % of the distribution?
Wouldn't a superintelligence be smart enough not to make silly mistakes in its comprehension of our instructions?
Wouldn't it be safer to only build narrow AIs?
Wouldn’t it be intelligent enough to know right from wrong?