If a simple search does not find what you're looking for, don't worry about accidentally adding duplicates; the team will process incoming questions and match them to existing ones.
Stampy is focused specifically on AI existential safety (both introductory and technical questions). It does not aim to cover general AI questions or other topics which don't interact strongly with the effects of AI on humanity's long-term future. More technical questions are also in our scope, though replying to every possible proposal is not feasible, so this is not a place to submit detailed ideas for evaluation.
We are interested in:
- Introductory questions closely related to the field, e.g.
  - "How long will it be until transformative AI arrives?"
  - "Why might advanced AI harm humans?"
- Technical questions related to the field, e.g.
  - "What is Cooperative Inverse Reinforcement Learning?"
  - "What is Logical Induction useful for?"
- Questions about how to contribute to the field, e.g.
  - "Should I get a PhD?"
  - "Where can I find relevant job opportunities?"
More good examples can be found at canonical questions.
We do not aim to cover:
- Aspects of AI safety or fairness which are not strongly relevant to existential safety, e.g.
  - "How should self-driving cars weigh up moral dilemmas?"
  - "How can we minimize the risk of privacy problems caused by machine learning algorithms?"
- Extremely specific and detailed questions whose answers are unlikely to be of value to anyone beyond the asker, e.g.
  - "What if we did <multiple paragraphs of dense text>? Would that result in safe AI?"