Alternate phrasings

297 canonical questions without alternate phrasings, 37 with!

Canonical questions without alternate phrasings

A lot of concern appears to focus on human-level or “superintelligent” AI. Is that a realistic prospect in the foreseeable future?
Any AI will be a computer program. Why wouldn't it just do what it's programmed to do?
Are AI researchers trying to make conscious AI?
Are any major politicians concerned about this?
Are expert surveys on AI safety available?
Are there any AI alignment projects which governments could usefully put a very large amount of resources into?
Are there any plausibly workable proposals for regulating or banning dangerous AI research?
Are there promising ways to make AI alignment researchers smarter?
Aren't robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world?
At a high level, what is the challenge of alignment that we must meet to secure a good future?
Can AI be creative?
Can an AI really be smarter than humans?
Can humans stay in control of the world if human- or superhuman-level AI is developed?
Can people contribute to alignment by using proof assistants to generate formal proofs?
Can we add "friendliness" to any artificial intelligence design?
Can we ever be sure that an AI is aligned?
Can we get AGI by scaling up architectures similar to current ones, or are we missing key insights?
Can we tell an AI just to figure out what we want and then do that?
Can we test an AI to make sure that it’s not going to take over and do harmful things after it achieves superintelligence?
Can you stop an advanced AI from upgrading itself?
Can't we just tell an AI to do what we want?
Can’t we just program the superintelligence not to harm us?
Considering how hard it is to predict the future, why do we think we can say anything useful about AGI today?
Could AI have basic emotions?
Could I contribute by offering coaching to alignment researchers? If so, how would I go about this?
Could an AGI have already been created and currently be affecting the world?
Could divesting from AI companies without good safety culture be useful, or would this be likely to have a negligible impact?
Could emulated minds do AI alignment research?
Could we build provably beneficial AI systems?
Could we get significant biological intelligence enhancements long before AGI?
Could we program an AI to automatically shut down if it starts doing things we don’t want it to?
Could we tell the AI to do what's morally right?
Could weak AI systems help with alignment research?
Do you need a PhD to work on AI Safety?
Does it make sense to focus on scenarios where change is rapid and due to a single actor, or slower and dependent on getting agreements between several relevant actors?
Does the importance of AI risk depend on caring about transhumanist utopias?
Even if we are rationally convinced about the urgency of existential AI risk, it can be hard to feel that emotionally because the danger is so abstract. How can this gap be bridged?
How can I be a more productive student/researcher?
How can I collect questions for Stampy?
How can I contribute in the area of community building?
How can I contribute to Stampy?
How can I get hired by an organization working on AI alignment?
How can I join the Stampy dev team?
How can I support alignment researchers to be more productive?
How could an intelligence explosion be useful?
How could general intelligence be programmed into a machine?
How could poorly defined goals lead to such negative outcomes?
How difficult should we expect alignment to be?
How do I add content from LessWrong / Effective Altruism Forum tag-wikis to Stampy?
How do I form my own views about AI safety?
How do I know whether I'm a good fit for work on AI safety?
How do I stay motivated and productive?
How do I stay updated about AI progress?
How do organizations do adversarial training and red teaming?
How do the incentives in markets increase AI risk?
How does AI taking things literally contribute to alignment being hard?
How does the current global microchip supply chain work, and who has political power over it?
How fast will AI takeoff be?
How good is the world model of GPT-3?
How hard is it for an AGI to develop powerful nanotechnology?
How important is research closure and OPSEC for capabilities-synergistic ideas?
How is "intelligence" defined?
How is AGI different from current AI?
How is metaethics relevant to AI alignment?
How likely are AI organizations to respond appropriately to the risks of their creations?
How likely is an "intelligence explosion"?
How likely is it that AGI will first be developed by a large established organization, rather than a small startup, an academic group or a government?
How likely is it that an AI would pretend to be a human to further its goals?
How likely is it that governments will play a significant role? What role would be desirable, if any?
How long will it be until superintelligent AI is created?
How might "acausal trade" affect alignment?
How might AGI interface with cybersecurity?
What might a real-world AI system that receives orders in natural language and does what you mean look like?
How might a superintelligence socially manipulate humans?
How might a superintelligence technologically manipulate humans?
How might an "intelligence explosion" be dangerous?
How might an AI achieve a seemingly beneficial goal via inappropriate means?
How might non-agentic GPT-style AI cause an "intelligence explosion" or otherwise contribute to existential risk?
How might things go wrong with AI even without an agentic superintelligence?
How might we reduce the chance of an AI arms race?
How might we reduce the diffusion of dangerous AI technology to insufficiently careful actors?
How many resources did the processes of biological evolution use to evolve intelligent creatures?
How possible (and how desirable) is it to change which path humanity follows to get to AGI?
How powerful will a mature superintelligence be?
How quickly could an AI go from the first indications of problems to an unrecoverable disaster?
How quickly would the AI capabilities ecosystem adopt promising new advances in AI alignment?
How should I change my financial investments in response to the possibility of transformative AI?
How should I decide which quality level to attribute to a proposed question?
How should I personally prepare for when transformative AI arrives?
How software- and/or hardware-bottlenecked are we on AGI?
How successfully have institutions managed risks from novel technology in the past?
How tractable is it to get governments to play a good role (rather than a bad role) and/or to get them to play a role at all (rather than no role)?
How would I know if AGI were imminent?
How would we know if an AI were suffering?
I want to help with AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?
I want to work on AI alignment. How can I get funding?
If AGI comes from a new paradigm, how likely is it to arise late in the paradigm when it is already deployed at scale, versus early on when only a few people are exploring the idea?
If AI takes over the world, how could it create and maintain the infrastructure that humans currently provide?
If an AI became conscious, how would we ever know?
If we solve alignment, are we sure of a good future?
In "aligning AI with human values", which humans' values are we talking about?
In what ways are real-world machine learning systems different from expected utility maximizers?
Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?
Is it likely that hardware will allow an exponential takeoff?
Is it possible to code an AI to avoid all the ways a given task could go wrong, and would it be dangerous to try that?
Is large-scale automated AI persuasion and propaganda a serious concern?
Is merging with AI through brain-computer interfaces a potential solution to safety problems?
Is the UN concerned about existential risk from AI?
Is the focus on the existential threat of superintelligent AI diverting too much attention from more pressing debates about AI in surveillance and the battlefield, and its potential effects on the economy?
Is the question of whether we're living in a simulation relevant to AI safety? If so, how?
Is there a Chinese AI safety community? Are there safety researchers working at leading Chinese AI labs?
Is there a danger in anthropomorphizing AIs and trying to understand them in human terms?
Is there something useful we can ask governments to do for AI alignment?
Is this about AI systems becoming malevolent or conscious and turning on us?
Isn't it hard to make a significant difference as a person who isn't going to be a world-class researcher?
Isn't it too soon to be working on AGI safety?
Isn't the real concern AI being misused by terrorists or other bad actors?
Isn't the real concern AI-enabled totalitarianism?
Isn't the real concern autonomous weapons?
Isn't the real concern technological unemployment?
Isn’t it immoral to control and impose our values on AI?
I’d like to get deeper into the AI alignment literature. Where should I look?
I’m convinced that this is important and want to contribute. What can I do to help?
Might an "intelligence explosion" never occur?
Might an aligned superintelligence force people to "upload" themselves, so as to more efficiently use the matter of their bodies?
Might an aligned superintelligence force people to have better lives and change more quickly than they want?
Might an aligned superintelligence immediately kill everyone and then go on to create a "hedonium shockwave"?
Might attempting to align AI cause a "near miss" which results in a much worse outcome?
Might humanity create astronomical amounts of suffering when colonizing the universe after creating an aligned superintelligence?
Might trying to build a hedonium-maximizing AI be easier and more likely to work than trying for eudaimonia?
Once we notice that a superintelligence given a specific task is trying to take over the world, can’t we turn it off, reprogram it or otherwise correct the problem?
Should I engage in political or collective action like signing petitions or sending letters to politicians?
Should we expect "warning shots" before an unrecoverable catastrophe?
Superintelligence sounds like science fiction. Do people think about this in the real world?
This all seems rather abstract. Isn't promoting love, wisdom, altruism or rationality more important?
To what extent are there meaningfully different paths to AGI, versus just one path?
We already have psychopaths who are "misaligned" with the rest of humanity, but somehow we deal with them. Can't we do something similar with AI?
We’re going to merge with the machines so this will never be a problem, right?
What about having a human supervisor who must approve all the AI's decisions before executing them?
What actions can I take in under five minutes to contribute to the cause of AI safety?
What alignment strategies are scalably safe and competitive?
What approaches are AI alignment organizations working on?
What are "coherence theorems" and what do they tell us about AI?
What are "human values"?
What are "scaling laws" and how are they relevant to safety?
What are "selection theorems" and can they tell us anything useful about the likely shape of AGI systems?
What are OpenAI Codex and GitHub Copilot?
What are alternate phrasings for?
What are brain-computer interfaces?
What are likely to be the first transformative applications of AI?
What are plausible candidates for "pivotal acts"?
What are some AI alignment research agendas currently being pursued?
What are some good podcasts about AI alignment?
What are some good resources on AI alignment?
What are some helpful AI policy ideas?
What are some important examples of specialised terminology in AI alignment?
What are some of the leading AI capabilities organizations?
What are some of the most impressive recent advances in AI capabilities?
What are some open research questions in AI alignment?
What are some practice or entry-level problems for getting into alignment research?
What are some problems in philosophy that are related to AI safety?
What are some specific open tasks on Stampy?
What are the "win conditions"/problems that need to be solved?
What are the differences between AGI, transformative AI and superintelligence?
What are the different possible AI takeoff speeds?
What are the different versions of decision theory?
What are the editorial protocols for Stampy questions and answers?
What are the leading theories in moral philosophy and which of them might be technically the easiest to encode into an AI?
What are the potential benefits of AI as it grows increasingly sophisticated?
What beneficial things would an aligned superintelligence be able to do?
What can I do to contribute to AI safety?
What can we expect the motivations of a superintelligent machine to be?
What convinced people working on AI alignment that it was worth spending their time on this cause?
What could a superintelligent AI do, and what would be physically impossible even for it?
What does Elon Musk think about AI safety?
What does a typical work day in the life of an AI safety researcher look like?
What does alignment failure look like?
What evidence do experts usually base their timeline predictions on?
What external content would be useful to the Stampy project?
What harm could a single superintelligence do when it took so many humans to build civilization?
What if technological progress stagnates and we never achieve AGI?
What if we put the AI in a box and have a second, more powerful, AI with the goal of preventing the first one from escaping?
What is "AI alignment"?
What is "Do What I Mean"?
What is "HCH"?
What is "agent foundations"?
What is "biological cognitive enhancement"?
What is "coherent extrapolated volition"?
What is "evidential decision theory"?
What is "friendly AI"?
What is "functional decision theory"?
What is "greater-than-human intelligence"?
What is "hedonium"?
What is "logical decision theory"?
What is "metaphilosophy" and how does it relate to AI safety?
What is "narrow AI"?
What is "superintelligence"?
What is "transformative AI"?
What is "whole brain emulation"?
What is Artificial General Intelligence and what will it look like?
What is Artificial General Intelligence safety/alignment?
What is GPT-3?
What is MIRI’s mission?
What is a "pivotal act"?
What is a "quantilizer"?
What is a "value handshake"?
What is a duplicate question on Stampy's Wiki?
What is a follow-up question on Stampy's Wiki?
What is a verified account on Stampy's Wiki?
What is an "agent"?
What is an "intelligence explosion"?
What is an "s-risk"?
What is causal decision theory?
What is meant by "AI takeoff"?
What is the "control problem"?
What is the "long reflection"?
What is the "orthogonality thesis"?
What is the "universal prior"?
What is the "windfall clause"?
What is the Stampy project?
What is the general nature of the concern about AI alignment?
What is the probability of extinction from misaligned superintelligence?
What kind of a challenge is solving AI alignment?
What milestones are there between us and AGI?
What plausibly happens five years before and after AGI?
What research is being done to align modern deep learning systems?
What safety problems are associated with whole brain emulation?
What should I read to learn about decision theory?
What should be marked as a "related" question on Stampy's Wiki?
What should be marked as a canonical answer on Stampy's Wiki?
What should the first AGI systems be aligned to do?
What sources of information can Stampy use?
What subjects should I study at university to prepare myself for alignment research?
What technical problems is MIRI working on?
What technological developments could speed up AI progress?
What would a "warning shot" look like?
What would a good future with AGI look like?
What would a good solution to AI alignment look like?
What would a world shortly before AGI look like?
What would be physically possible and desirable to have in an AI-built utopia?
What's especially worrisome about autonomous weapons?
What's meant by calling an AI "agenty" or "agentlike"?
When should I stamp an answer?
When will an intelligence explosion happen?
Where can I find mentorship and advice for becoming a researcher?
Where can I find people to talk to about AI alignment?
Where can I learn about AI alignment?
Where can I learn about interpretability?
Which country will AGI likely be created by, and does this matter?
Which military applications of AI are likely to be developed?
Which organizations are working on AI policy?
Which organizations are working on AI safety?
Which university should I study at if I want to best prepare for working on AI alignment?
Who is Nick Bostrom?
Why can't we just make a "child AI" and raise it?
Why can't we just turn the AI off if it starts to misbehave?
Why can't we simply stop developing AI?
Why can’t we just use Asimov’s Three Laws of Robotics?
Why can’t we just use natural language instructions?
Why can’t we just…
Why do some AI researchers not worry about alignment?
Why do we expect that a superintelligence would closely approximate a utility maximizer?
Why do you like stamps so much?
Why does AI need goals in the first place? Can’t it be intelligent without any agenda?
Why does AI takeoff speed matter?
Why does there seem to have been an explosion of activity in AI in recent years?
Why don't we just not build AGI if it's so dangerous?
Why is AGI dangerous?
Why is AGI safety a hard problem?
Why is AI safety important?
Why is safety important for smarter-than-human AI?
Why is the future of AI suddenly in the news? What has changed?
Why might a maximizing AI cause bad outcomes?
Why might a superintelligent AI be dangerous?
Why might an AI do something that we don’t want it to, if it’s really so intelligent?
Why might people try to build AGI rather than stronger and stronger narrow AIs?
Why might we expect a fast takeoff?
Why might we expect a moderate AI takeoff?
Why should I worry about superintelligence?
Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?
Why think that AI can outperform humans?
Why work on AI safety early?
Why would great intelligence produce great power?
Will AGI be agentic?
Will AI learn to be independent from people or will it always ask for our orders?
Will an aligned superintelligence care about animals other than humans?
Will superintelligence make a large part of humanity unemployable?
Will there be a discontinuity in AI capabilities? If so, at what stage?
Won’t AI be just like us?
Would "warning shots" make a difference and, if so, would they be helpful or harmful?
Would AI alignment be hard with deep learning?
Would an AI create or maintain suffering because some people want it?
Would donating small amounts to AI safety organizations make any significant difference?
Would it improve the safety of quantilizers to cut off the top few percent of the distribution?
Wouldn't a superintelligence be smart enough not to make silly mistakes in its comprehension of our instructions?
Wouldn't it be a good thing for humanity to die out?
Wouldn't it be safer to only build narrow AIs?

Canonical questions with alternate phrasings

AIs aren’t as smart as rats, let alone humans. Isn’t it far too early to be worrying about this kind of thing? (Alternate phrasings: AI is stupid, how is this a concern right now?; Current AI systems aren't very impressive, shouldn't we wait until AI is more capable?; Isn't it too soon to be working on this?)
Are Google, OpenAI, etc. aware of the risk? (Alternate phrasings: Are existing AI companies thinking about this?; To what extent is this on the radar for the industry?)
Are there types of advanced AI that would be safer than others? (Alternate phrasings: Are some AI designs less dangerous?; Are there any safe kinds of AI?)
Aren’t there some pretty easy ways to eliminate these potential problems? (Alternate phrasings: AI Safety doesn't seem that hard?; It seems like these risks should be simple to avoid?)
Can we constrain a goal-directed AI using specified rules? (Alternate phrasing: What about laws of robotics that AI is bound by?)
Can we program the superintelligence to maximize human pleasure or satisfaction of human desires? (Alternate phrasings: Is human happiness a good thing to maximise?; What's wrong with a goal like creating the greatest possible human satisfaction?)
Can we teach a superintelligence a moral code with machine learning? (Alternate phrasing: What if we use ML to learn morality from human data?)
Can you give an AI a goal which involves “minimally impacting the world”? (Alternate phrasings: What about minimising the system's effects on the world?; What if we include a term for "don't have too big an impact"?)
How can I contact the Stampy team? (Alternate phrasings: How can I submit suggestions?; How can I talk to you?)
How can I convince others and present the arguments well? (Alternate phrasing: How can I have healthy discussions about this?)
How do I format answers on Stampy? (Alternate phrasing: What's the markup for Stampy?)
How does the stamp eigenkarma system work? (Alternate phrasings: How do I get stamps?; What are stamps?)
How doomed is humanity? (Alternate phrasings: Are we screwed?; What are our chances of surviving?)
How might AGI kill people? (Alternate phrasing: What are concrete plausible stories for how an AI takes over the world?)
I'm interested in working on AI safety. What should I do? (Alternate phrasings: How can I get started working on AI Safety?; I want to do this as my job, how do I do that?)
If I only care about helping people alive today, does AI safety still matter? (Alternate phrasings: What if I discount morally? Does this still matter?; What if I have a person-affecting view on population ethics? Should I still care about alignment?)
Is AI alignment possible?
Is it already too late to work on AI alignment? (Alternate phrasing: Is there anything that can actually be done in the amount of time left?)
Is it possible to block an AI from doing certain things on the Internet? (Alternate phrasing: Is it possible to limit an AGI from full access to the internet?)
Isn’t AI just a tool like any other? Won’t it just do what we tell it to? (Alternate phrasing: Isn’t AI just a tool like any other?)
What about AI concerns other than misalignment? (Alternate phrasing: Shouldn't we work on things other than AI alignment?)
What are some objections to the importance of AI alignment? (Alternate phrasing: What are some common objections to the need for AI alignment, and brief responses to these?)
What are the style guidelines for writing for Stampy? (Alternate phrasing: What is Stampy Point Of View?)
What is a canonical question on Stampy's Wiki? (Alternate phrasing: What is a canonical version of a question on Stampy's Wiki?)
What kind of questions do we want on Stampy? (Alternate phrasings: What is Stampy about?; What questions can I ask Stampy?)
What’s a good AI alignment elevator pitch? (Alternate phrasing: How do I convince other people that AI safety is important?)
When will transformative AI be created? (Alternate phrasing: How long will it be until transformative AI is created?)
Where can I find all the features of Stampy's Wiki? (Alternate phrasing: How do I help out with organizational tasks on Stampy's Wiki?)
Where can I find questions to answer for Stampy? (Alternate phrasing: What questions should I answer on Stampy?)
Who created Stampy? (Alternate phrasings: What's the dev team?; Who helped create Stampy?; Who made you?)
Who is Stampy? (Alternate phrasing: Who are you?)
Why can’t we just “put the AI in a box” so that it can’t influence the outside world? (Alternate phrasing: Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?)
Why might contributing to Stampy be worth my time? (Alternate phrasings: How will Stampy be useful?; Why should I help with Stampy?)
Why might we expect a superintelligence to be hostile by default? (Alternate phrasing: Why would AI want to kill us?)
Will there be an AI-assisted "long reflection" and how might it look? (Alternate phrasing: How might an AI-enabled "long reflection" look?)
Will we ever build a superintelligence? (Alternate phrasings: Is it physically possible to make an AI smarter than humans?; Is superintelligence even possible?)
Wouldn't a superintelligence be smart enough to know right from wrong? (Alternate phrasing: If an AI system is smart, could it figure out the moral way to behave?)