Main Page

Welcome to Stampy's Wiki, the editors' hub for an open effort to build a comprehensive FAQ about artificial intelligence existential safety: the field working to ensure that when we build superintelligent artificial systems, they are aligned with human values and act in ways compatible with our survival and flourishing.

We're also building a web UI (early prototype) and a bot interface, so you'll soon be able to browse the FAQ and other sources in a cleaner way than on the wiki. Feel free to get involved as an early contributor!


These are unanswered questions that we've reviewed and decided are within Stampy's scope. Feel free to answer them if you want to help out. Your answers will be reviewed, stamped, and possibly improved by others, so don't worry if they're not perfect :)

Is it hard like 'building a secure OS that works on the first try'? Hard like 'the engineering/logistics/implementation portion of the Manhattan Project'? Both? Some other option? Etc.

See more...

Details on how to use each feature are on the individual pages.
Get involved
  • Questions
  • Answers
  • Review answers
  • Improve answers
  • Recent activity
  • Pages to create

Content

External
  • Stampy's Public Discord - Ask there for an invite to the real one, until OpenAI approves our chatbot for a public Discord
  • Wiki stats - Graphs over time of active users, edits, pages, response time, etc.
  • Google Drive - Folder with Stampy-related documents
UI controls