Category:Unanswered questions
These questions have not yet been answered.
Pages in category "Unanswered questions"
The following 99 pages are in this category.
C
- Can we ever be sure that an AI is aligned?
- Can we get AGI by scaling up architectures similar to current ones, or are we missing key insights?
- Considering how hard it is to predict the future, why do we think we can say anything useful about AGI today?
- Could divesting from AI companies without a good safety culture be useful, or would it likely have a negligible impact?
- Could we build provably beneficial AI systems?
- Could we get significant biological intelligence enhancements long before AGI?
- Could we tell the AI to do what's morally right?
- Could weak AI systems help with alignment research?
D
H
- How can I be a more productive student/researcher?
- How can I contribute in the area of community building?
- How can I support alignment researchers to be more productive?
- How did the military think about and use AI in the past?
- How do I convince other people that AI safety is important?
- How do I know whether I'm a good fit for work on AI safety?
- How do I stay motivated and productive?
- How do organizations do adversarial training and red teaming?
- How does the current global microchip supply chain work, and who has political power over it?
- How hard is it for an AGI to develop powerful nanotechnology?
- How important are research closure and OPSEC for capabilities-synergistic ideas?
- How is metaethics relevant to AI alignment?
- How likely are AI organizations to respond appropriately to the risks of their creations?
- How likely is it that AGI will first be developed by a large established organization, rather than a small startup, an academic group, or a government?
- How likely is it that governments will play a significant role? What role would be desirable, if any?
- How might "acausal trade" affect alignment?
- How might a real-world AI system that receives orders in natural language and does what you mean look?
- How might AGI interface with cybersecurity?
- How might an AI-enabled "long reflection" look?
- How might we reduce the chance of an AI arms race?
- How might we reduce the diffusion of dangerous AI technology to insufficiently careful actors?
- How possible (and how desirable) is it to change which path humanity follows to get to AGI?
- How quickly would the AI capabilities ecosystem adopt promising new advances in AI alignment?
- How should I change my financial investments in response to the possibility of transformative AI?
- How should I personally prepare for when transformative AI arrives?
- How software- and/or hardware-bottlenecked are we on AGI?
- How tractable is it to get governments to play a good role (rather than a bad role) and/or to get them to play a role at all (rather than no role)?
- How would I know if AGI were imminent?
- How would we know if an AI were suffering?
I
- If AGI comes from a new paradigm, how likely is it to arise late in the paradigm when it is already deployed at scale, versus early on when only a few people are exploring the idea?
- If an AI system is smart, could it figure out the moral way to behave?
- Is merging with AI through brain-computer interfaces a potential solution to safety problems?
- Is the question of whether we're living in a simulation relevant to AI safety? If so, how?
- Is the UN concerned about existential risk from AI?
- Is there a Chinese AI safety community? Are there safety researchers working at leading Chinese AI labs?
- Is there something useful we can ask governments to do for AI alignment?
- Isn't it hard to make a significant difference as a person who isn't going to be a world-class researcher?
- Isn't the real concern autonomous weapons?
M
- Might an aligned superintelligence force people to "upload" themselves, so as to more efficiently use the matter of their bodies?
- Might an aligned superintelligence force people to have better lives and change more quickly than they want?
- Might an aligned superintelligence immediately kill everyone and then go on to create a "hedonium shockwave"?
- Might attempting to align AI cause a "near miss" which results in a much worse outcome?
- Might humanity create astronomical amounts of suffering when colonizing the universe after creating an aligned superintelligence?
- Might trying to build a hedonium-maximizing AI be easier and more likely to work than trying for eudaimonia?
W
- What actions can I take in under five minutes to contribute to the cause of AI safety?
- What alignment strategies are scalably safe and competitive?
- What are "coherence theorems" and what do they tell us about AI?
- What are "selection theorems" and can they tell us anything useful about the likely shape of AGI systems?
- What are likely to be the first transformative applications of AI?
- What are plausible candidates for "pivotal acts"?
- What are some helpful AI policy ideas?
- What are some open research questions in AI alignment?
- What are some practice or entry-level problems for getting into alignment research?
- What are some problems in philosophy that are related to AI safety?
- What are the "win conditions"/problems that need to be solved?
- What are the different versions of decision theory?
- What are the leading theories in moral philosophy and which of them might be technically the easiest to encode into an AI?
- What beneficial things would an aligned superintelligence be able to do?
- What could a superintelligent AI do, and what would be physically impossible even for it?
- What does a typical work day in the life of an AI safety researcher look like?
- What does alignment failure look like?
- What evidence do experts usually base their timeline predictions on?
- What if technological progress stagnates and we never achieve AGI?
- What is "agent foundations"?
- What is "Do What I Mean"?
- What is "logical decision theory"?
- What is "metaphilosophy" and how does it relate to AI safety?
- What is a "pivotal act"?
- What is the "universal prior"?
- What is the probability of extinction from misaligned superintelligence?
- What milestones are there between us and AGI?
- What plausibly happens five years before and after AGI?
- What research is being done to align modern deep learning systems?
- What safety problems are associated with whole brain emulation?
- What should the first AGI systems be aligned to do?
- What subjects should I study at university to prepare myself for alignment research?
- What technological developments could speed up AI progress?
- What would a world shortly before AGI look like?
- What would be physically possible and desirable to have in an AI-built utopia?
- What's a good AI alignment elevator pitch?
- Where can I find mentorship and advice for becoming a researcher?
- Which country will AGI likely be created by, and does this matter?
- Which military applications of AI are likely to be developed?
- Which organizations are working on AI policy?
- Which university should I study at if I want to best prepare for working on AI alignment?
- Why do some AI researchers not worry about alignment?
- Why do you like stamps so much?
- Will AGI be agentic?
- Will government intervention on AI through regulations and policies end up net negative or positive with regard to reducing x-risk?
- Would "warning shots" make a difference and, if so, would they be helpful or harmful?
- Would an AI create or maintain suffering because some people want it?