Stampy is focused specifically on AI existential safety (both introductory and technical questions). It does not aim to cover general AI questions or other topics that don't interact strongly with the effects of AI on humanity's long-term future.
Stampy is focused on answering common questions specifically about AI existential safety. More technical questions are also in scope, though replying to every possible proposal is not feasible, so this is not a good place to submit detailed ideas for evaluation.
We are interested in:
- Introductory questions closely related to the field, e.g.
  - "How long will it be until transformative AI arrives?"
  - "Why might advanced AI harm humans?"
- Technical questions related to the field, e.g.
  - "What is Cooperative Inverse Reinforcement Learning?"
  - "What is Logical Induction useful for?"
- Questions about how to contribute to the field, e.g.
  - "Should I get a PhD?"
  - "Where can I find relevant job opportunities?"
More good examples can be found at canonical questions.
We do not aim to cover:
- Aspects of AI safety or fairness which are not strongly relevant to existential safety, e.g.
  - "How should self-driving cars weigh up moral dilemmas?"
  - "How can we minimize the risk of privacy problems caused by machine learning algorithms?"
- Extremely specific and detailed questions whose answers are unlikely to be of value to more than a single person, e.g.
  - "What if we did <multiple paragraphs of dense text>? Would that result in safe AI?"
We will generally not delete out-of-scope content, but it will be treated as low priority to answer, will not be marked as a canonical question, and will not be served to readers by Stampy.