Stampy's Wiki focuses on answering common questions about AI existential safety. More technical questions are also in scope, but replying to every possible proposal is not feasible, so this is not a good place to submit detailed ideas for evaluation.
We are interested in:
- Questions which come up often when people are introduced to this collection of ideas and are strongly relevant to the field, e.g.:
  - "How long will it be until transformative AI arrives?"
  - "Why might advanced AI harm humans?"
- Technical questions related to the field, e.g.:
  - "What is Cooperative Inverse Reinforcement Learning?"
  - "What is Logical Induction useful for?"
- Questions about how to contribute to the field, e.g.:
  - "Should I get a PhD?"
  - "Where can I find relevant job opportunities?"
More good examples can be found in the canonical questions category.
We do not aim to cover:
- Aspects of AI safety or fairness which are not strongly relevant to existential safety, e.g.:
  - "How should self-driving cars weigh up moral dilemmas?"
  - "How can we minimize the risk of privacy problems caused by machine learning algorithms?"
- Extremely specific and detailed questions whose answers are unlikely to be of value to anyone beyond the person asking, e.g.:
  - "What if we did <multiple paragraphs of dense text>? Would that result in safe AI?"