Main Page

Welcome to Stampy's Wiki, the editors' hub for an open effort to build a comprehensive FAQ about artificial intelligence existential safety: the field working to make sure that when we build superintelligent AI systems, they are aligned with human values and act in ways compatible with our survival and flourishing.

We're also building a cleaner web UI for readers and a bot interface. Feel free to get involved as an early contributor!


These are unanswered questions that we've reviewed and decided are within Stampy's scope. Feel free to answer them if you want to help out. Your answers will be reviewed, stamped, and possibly improved by others, so don't worry about making them perfect :)
Many of the questions below already have a bullet-point sketch by Rob in this Google Doc; you are encouraged to turn those into full answers!

I'm not yet convinced that AI by itself poses a realistic existential risk, i.e. that capabilities will advance far enough for the danger to be real. I guess the most interesting way to be convinced is just to read what convinced other people, so that's what I ask about most subjects I have doubts about.

See more...

Details on how to use each feature are on the individual pages.
Get involved
Questions

Answers · Review answers · Improve answers · Recent activity · Pages to create · Content

External:
  • Stampy's Public Discord - Ask there for an invite to the real one, until OpenAI approves our chatbot for a public Discord
  • Wiki stats - Graphs over time of active users, edits, pages, response time, etc
  • Google Drive - Folder with Stampy-related documents
UI controls · To-do list

What are some specific open tasks on Stampy?


Other than the usual fare of writing, processing, and organizing questions and answers, here are some specific open tasks: