Stampy

From Stampy's Wiki

Main Question: What is the Stampy project?

Description

The Stampy project is a volunteer effort to create a comprehensive FAQ on Artificial Intelligence existential safety, and a bot (User:Stampy) capable of using the FAQ and other resources to educate people about AI alignment via an interactive natural language interface.

The goals of the project are to:

  • Offer answers which are regularly improved and reviewed by our community
    • Let people answer questions in a way which scales, freeing up the time of people who understand the field while allowing more people to learn from a reliable source
    • Between the stamp eigenkarma system and giving verified researchers and other proven people the power to promote or demote answers, we'll try to reliably surface only answers which have been checked by someone who knows what they're talking about (see the sketch after this list)
    • Make external resources easier to find by encouraging lots of links out
  • Provide a form of legitimate peripheral participation for the AI Safety community, as an on-boarding path for people who want to help
    • Encourage people to think and read about AI alignment while trying to answer questions
    • Create a community of co-learners who can give each other feedback and social reinforcement
  • Collect data about the kinds of questions people actually ask and how they respond, so we can better focus resources on answering them
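
The stamp eigenkarma system is only named here, not specified. The general idea behind eigenkarma is PageRank-style trust propagation: a stamp counts for more when it comes from someone whose own contributions have been stamped by trusted people. Below is a minimal sketch of how such a score could be computed, assuming a simple stamp matrix, a single trusted seed account, and a damping factor; these are illustrative assumptions, not the project's actual implementation.

```python
# Illustrative sketch only: the stamp graph, damping factor, and normalisation
# below are assumptions in the spirit of PageRank-style trust propagation, not
# a description of how Stampy actually computes stamp eigenkarma.
import numpy as np

def eigenkarma(stamps: np.ndarray, seed: np.ndarray, damping: float = 0.85,
               iters: int = 100) -> np.ndarray:
    """stamps[i, j]: stamps user i has given to user j's contributions.
    seed: trusted prior (e.g. all weight on a founding account)."""
    totals = stamps.sum(axis=1, keepdims=True)
    # Normalise each stamper's outgoing stamps so prolific stamping alone
    # doesn't grant outsized influence.
    transition = np.divide(stamps, totals,
                           out=np.zeros_like(stamps, dtype=float),
                           where=totals != 0)
    prior = seed / seed.sum()
    karma = prior.copy()
    for _ in range(iters):
        # Karma flows from stampers to the people they stamp, blended with
        # the trusted seed so the scores stay anchored.
        karma = damping * (transition.T @ karma) + (1 - damping) * prior
    return karma

# Toy example: user 0 is the trusted seed; they stamp user 1, who stamps user 2.
stamps = np.array([[0, 3, 0],
                   [0, 0, 1],
                   [0, 0, 0]], dtype=float)
seed = np.array([1.0, 0.0, 0.0])
print(eigenkarma(stamps, seed))  # karma decays along the chain 0 -> 1 -> 2
```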

Canonically answered

What is Stampy's scope?

Stampy is focused specifically on AI existential safety (both introductory and technical questions), but does not aim to cover general AI questions or other topics which don't interact strongly with the effects of AI on humanity's long-term future.

Stampy is focused on answering common questions people have which are specifically about AI existential safety. More technical questions are also in our scope, though replying to all possible proposals is not feasible, and this is not a great place to submit detailed ideas for evaluation.

We are interested in:

  • Questions which come up often when people are introduced to this collection of ideas and are strongly relevant to the field e.g.
    • "How long will it be until transformative AI arrives?"
    • "Why might advanced AI harm humans?"
  • Technical questions related to the field e.g.
    • "What is Cooperative Inverse Reinforcement Learning?"
    • "What is Logical Induction useful for?"
  • Questions about how to contribute to the field e.g.
    • "Should I get a PhD?"
    • "Where can I find relevant job opportunities?"

More good examples can be found in Category:Canonical_questions.

We do not aim to cover:

  • Aspects of AI Safety or fairness which are not strongly relevant to existential safety e.g.
    • "How should self-driving cars weigh up moral dilemmas"
    • "How can we minimize the risk of privacy problems caused by machine learning algorithms?"
  • Extremely specific and detailed questions whose answers are unlikely to be of value to more than a single person e.g.
    • "What if we did <multiple paragraphs of dense text>? Would that result in safe AI?"

We will generally not delete out-of-scope content, but it will be reviewed as low priority to answer (either "Meh" or "Rejected"), not be marked as a canonical question, and not be served to readers by User:Stampy.

Non-canonical answers

Canonical answers may be served to readers by Stampy, so only answers which have a reasonably high stamp score should be marked as canonical. All canonical answers are open to be collaboratively edited and updated, and they should represent a consensus response (written from the Stampy Point Of View) to a question which is within Stampy's scope.

Answers to non-canonical questions should not be marked as canonical, and will generally remain as originally written, since they contain details specific to an idiosyncratic question. Raw answers may be forked off of canonical answers in order to better respond to a particular question, in which case the raw question should have its canonical version field set to the new, more widely useful question.

See Browse FAQ for a full list.
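
To make the moving parts above concrete, here is a hypothetical sketch of how the review state, canonical flags, and a stamp-score threshold could fit together. The field names, the review labels beyond "Meh" and "Rejected", the threshold value, and the should_serve helper are illustrative assumptions, not the wiki's actual schema.

```python
# Illustrative sketch only: field names, review labels, and the stamp-score
# threshold are assumptions made for illustration, not the wiki's real schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Answer:
    text: str
    stamp_score: float = 0.0   # aggregate score from stamps given by trusted users
    canonical: bool = False    # should only be set once the stamp score is reasonably high

@dataclass
class Question:
    title: str
    review: str = "Unreviewed"               # e.g. "Meh" or "Rejected" for out-of-scope questions
    canonical: bool = False
    canonical_version: Optional[str] = None  # title of the more widely useful question, if forked

def should_serve(question: Question, answer: Answer, min_stamp_score: float = 2.0) -> bool:
    """Only canonical answers to canonical, in-scope questions get served to readers."""
    if question.review in ("Meh", "Rejected"):
        return False
    return question.canonical and answer.canonical and answer.stamp_score >= min_stamp_score
```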

How can I contribute to Stampy's Wiki?

The main way you can help is to answer questions or ask questions which will be used to power an interactive FAQ system. We're looking to cover everything in Stampy's scope. You could also consider joining the dev team if you have programming skills. If you want to help and you're not already invited to the Discord, ask plex#1874 on Discord (or User_talk:plex on wiki).

If you are a researcher or otherwise employed by an AI Safety focused organization, please contact us and we'll set you up with an account with extra privileges.

If you're a developer and want to help out on the project, great! If you're not already on the Rob Miles Discord ask plex for an invite. If you are, let us know you're interested in contributing in #bot-dev.

Progress and open tasks are tracked on the Stampy Trello.

Instead of discussing these topics in the default YouTube comments section, how about putting a link on each video to a page run by forum-specific software such as Disqus, Reddit, or even a wiki? YouTube comments are OK for posting quick reactions, but the format here strikes me as poorly suited for long back-and-forth discussion threads. Does anyone agree, and if so, what forum software do you recommend?

I just found this comment via the new Stampy wiki: https://stampy.ai/wiki/Main_Page, which is the interface we'll be using to construct an FAQ using questions on Rob's channel as a base. Good idea, though it took us a few years to get to it, and we did it in a slightly different form.

Verified accounts are given to people who have clearly demonstrated understanding of AI Safety outside of this project, such as by being employed and vouched for by a major AI Safety organization or by producing high-impact research. Verified accounts may freely mark answers as canonical or not, regardless of how many Stamps the person has, to determine whether those answers are used by Stampy.
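
Continuing the illustrative schema sketched earlier, a verified-account override on marking answers canonical could look roughly like this; the verified flag, the stamp threshold, and the helper name are assumptions made for illustration, not the wiki's actual permission model.

```python
# Illustrative sketch only: the verified flag and the stamp threshold are
# assumptions, not the wiki's actual permission model.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    stamp_count: int = 0     # stamps earned on the wiki (exact weighting not specified here)
    verified: bool = False   # vouched for by an AI Safety organisation or proven by their research

def can_mark_canonical(user: User, stamp_threshold: int = 10) -> bool:
    """Verified accounts can mark answers canonical regardless of their stamp count;
    the threshold for everyone else is a made-up illustrative number."""
    return user.verified or user.stamp_count >= stamp_threshold
```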

Unanswered questions