How does the Stampy chatbot work?
This website hosts a chatbot named Stampy that can answer your questions related to AI safety.
Stampy will serve a human-written answer directly from this website when one exists. When there is no matching answer, Stampy will attempt to generate one.
Like all LLM-based chatbots, it will sometimes hallucinate (i.e., confabulate, or make things up). However, it cites sources to support what it says, and when in doubt, you can check those sources to see whether what Stampy says is accurate.
At a high level, Stampy generates an answer by:
- Searching a curated dataset of AI safety sources (like articles and blog posts) for snippets relevant to the question;
- Feeding these snippets into the context window of an LLM (the space that holds all the text a language model can take into account when generating an output) and prompting the LLM to write a summary;
- Showing this summary on the site, along with links to the sources.
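The steps above can be sketched in code. This is an illustrative toy, not Stampy's actual implementation: real systems rank sources by embedding similarity and call an LLM API, whereas here retrieval is a simple word-overlap score and the "LLM" is a stub. All names (`SOURCES`, `retrieve`, `build_prompt`, `answer`) are hypothetical.

```python
import re

# Toy source dataset standing in for the curated AI safety corpus.
SOURCES = [
    {"title": "Intro to AI safety",
     "text": "AI safety studies how to make advanced AI systems behave as intended."},
    {"title": "Alignment basics",
     "text": "Alignment research aims to ensure AI goals match human values."},
    {"title": "Unrelated post",
     "text": "This post is about gardening tips for spring."},
]

def _words(text):
    """Lowercased word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, sources, k=2):
    """Step 1: rank sources by word overlap with the question
    (a stand-in for embedding-based semantic search) and keep the top k."""
    q = _words(question)
    ranked = sorted(sources,
                    key=lambda s: len(q & _words(s["text"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(question, snippets):
    """Step 2: place the retrieved snippets in the context window and
    ask the model for a summary that cites them as [n]."""
    context = "\n".join(f"[{i}] {s['title']}: {s['text']}"
                        for i, s in enumerate(snippets, start=1))
    return (f"Answer the question using only these sources, citing them as [n]:\n"
            f"{context}\n\nQuestion: {question}\nAnswer:")

def answer(question, llm=lambda prompt: "(LLM-generated summary with citations)"):
    """Step 3: return the generated summary plus the sources to link."""
    snippets = retrieve(question, SOURCES)
    return llm(build_prompt(question, snippets)), snippets

summary, cited = answer("What is AI safety?")
```

Here `cited` carries the source links shown alongside the answer, which is what lets a reader verify the summary against its sources.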
This approach is called "retrieval-augmented generation" (RAG). For a more detailed explanation of how the chatbot works, see this post.