Review answers


If you think an answer is good (i.e. accurate, helpful, and responsive to the question), then please give it a stamp of approval. If you're a more experienced editor and have a good feel for when there is consensus around an answer being accepted, feel free to mark answers as canonical, so that Stampy will start serving them to users.

Individual pages

These pages track 175 answers which need review (with some being counted multiple times).

Review answers

These 47 non-canonical answers are answering canonical questions.

Questions are: (i) contributed by online users or Stampy editors via the Stampy Wiki; or (ii) scraped from online content (various AI-alignment-related FAQs as well as the comments sections of certain AI Alignment YouTube videos).

The scraped content is currently a secondary concern, but this crude process of aggregation will eventually be streamlined into a reliable source of human-editable questions and answers.

Questions are reviewed by Stampy editors, who decide if: (i) they're duplicates of existing questions (the criterion being that the answer to the existing question would be fully satisfactory to the asker of the new question); (ii) they're sufficiently within the scope of the Stampy project.

We are working on using semantic search to suggest possible duplicates.

If they meet these two criteria, questions are added to a list of canonical questions.

A rating system allows editors to assign quality levels ("Meh"/"Unreviewed"/"Approved"/"Good"/"Excellent") to order the questions on Answer questions, so that the most important questions can be worked on first.

Answers to canonical questions can be contributed via the Stampy Wiki by online users or by Stampy editors directly, at which point the question is added to a list of "answered canonical questions".

Editors can attempt to improve a contributed answer, and/or can "stamp" it to indicate their approval, adding to its "stamp score".


Once the answer to a canonical question gets a sufficiently high stamp score, it gets added to a list of canonical answers (to canonical questions).
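As a rough sketch, this promotion step could look like the following (illustrative only: the threshold, names, and simple stamp counting below are assumptions, not Stampy's actual implementation):

```python
# Illustrative sketch of the stamp-score promotion flow described above.
# The threshold and field names are assumptions, not Stampy's actual code.
from dataclasses import dataclass, field

CANONICAL_THRESHOLD = 3  # assumed value, for illustration only

@dataclass
class Answer:
    text: str
    stamps: set = field(default_factory=set)  # editors who stamped this answer

    @property
    def stamp_score(self) -> int:
        return len(self.stamps)

def promote_if_ready(answer: Answer, canonical_answers: list) -> None:
    """Add an answer to the canonical list once its stamp score is high enough."""
    if answer.stamp_score >= CANONICAL_THRESHOLD and answer not in canonical_answers:
        canonical_answers.append(answer)
```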

These canonical question/answer pairs are then ready to be served to the user interface. In order for them to become visible there, though, they must be associated with existing canonical question/answer pairs in one of two ways: RELATED or FOLLOWUP. Any editor can improve these relationships, either based on tags or on their own understanding of what a reader might want to know. Questions should generally have 2-5 RELATED and FOLLOWUP links, although exceptions can be made.

If Question B is RELATED to Question A, it will slide in below Question A on the UI page when Question A is clicked on, provided it is not already present on the page.

If Question B is FOLLOWUP to Question A, it will always slide in below Question A when Question A is clicked on, even if it is already present on the UI page.

A and B being RELATED questions can be thought of as a kind of conceptual adjacency. If a user is interested in knowing the answer to A, they'll probably be interested in the answer to B too, and vice versa. Reading these in either order should make roughly the same amount of sense to the average user.

Question B being FOLLOWUP to Question A can be thought of in terms of progressive knowledge: the answer to B will only really make sense to the average user if they have read the answer to A first. This is also used for letting Stampy ask clarifying questions to direct readers to the right part of his knowledge graph.
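Putting the two link types together, the slide-in behaviour can be summarised in a few lines (a sketch under assumed names; the real logic lives in the TypeScript UI):

```python
# Illustrative sketch of the RELATED vs FOLLOWUP behaviour described above.
# `related` and `followups` hold question IDs; `on_page` is the set of
# question IDs already visible on the UI page. Names are hypothetical.

def questions_to_reveal(related, followups, on_page):
    """Return the questions that should slide in below a just-clicked question."""
    revealed = [q for q in related if q not in on_page]  # RELATED: only if absent
    revealed.extend(followups)                           # FOLLOWUP: always shown
    return revealed
```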


If you click on "Edit answer", then "[Show Advanced Options]", you'll be given the option to submit a brief version of your answer (this field will be automatically filled if the full answer exceeds 2000 characters).

Stamps: None

Tags: None

test ans

Stamps: Augustus Caesar

Tags: None

test ans

Stamps: Augustus Caesar

Tags: None

There are debates about how discontinuous an intelligence explosion would be, with Paul Christiano expecting the world to be transformed by less and less weak AGIs over a number of years, while Eliezer Yudkowsky expects a rapid jump in capabilities once generality is achieved and the self-improvement process is able to sustain itself.

Stamps: None


Vael Gates's project links to many example transcripts of conversations aimed at persuading senior AI capabilities researchers.

Stamps: None


Codex and GitHub Copilot are AI systems, derived from GPT-3, that write and edit code. When given some input code and comments describing the intended function, they will write output that extends the prompt as accurately as possible.
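For example, given a short comment and a function signature as the prompt, such a model will typically fill in the body (a hand-written illustration of the interaction, not actual model output):

```python
# Prompt supplied to the model:
# return the even numbers from `xs`, preserving their order
def evens(xs):
    # A completion in the style the model might produce:
    return [x for x in xs if x % 2 == 0]
```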

Stamps: None


"The real concern" isn't a particularly meaningful concept here. Deep learning has proven to be a very powerful technology, with far reaching implications across a number of aspects of human existence. There are significant benefits to be found if we manage the technology properly, but that management means addressing a broad range of concerns, one of which is the alignment problem.

Stamps: None


Whole Brain Emulation (WBE) or ‘mind uploading’ is a computer emulation of all the cells and connections in a human brain. So even if the underlying principles of general intelligence prove difficult to discover, we might still emulate an entire human brain and make it run at a million times its normal speed (computer circuits communicate much faster than neurons do). Such a WBE could do more thinking in one second than a normal human can in eleven days (a million seconds), and more in about a quarter of an hour than a normal human can in 31 years. So this would not lead immediately to smarter-than-human intelligence, but it would lead to faster-than-human intelligence. A WBE could be backed up (leading to a kind of immortality), and it could be copied so that hundreds or millions of WBEs could work on separate problems in parallel. If WBEs are created, they may therefore be able to solve scientific problems far more rapidly than ordinary humans, accelerating further technological progress.
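The speedup arithmetic behind those figures is easy to check:

```python
# Sanity check: at a 1,000,000x speedup, each wall-clock second buys a
# million seconds of subjective thinking time.
SPEEDUP = 1_000_000
print(SPEEDUP / 86_400)                      # ~11.6 subjective days per second
print(31 * 365.25 * 86_400 / SPEEDUP / 60)   # 31 subjective years ~ 16.3 minutes
```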


Stamps: None


As long as AI doesn't exceed human capabilities, we could do that.

But there is no reason to expect AI capabilities to stop at the human level. Systems more intelligent than us could think of many ways to outsmart us, so our best bet is to have them aligned with our values as closely as possible.

Stamps: None


The problem is that actions can be harmful in very non-obvious, indirect ways, so it's not at all obvious which actions should be stopped.

For example, if the system comes up with a very clever way to acquire resources, the safety of that action depends on what it intends to use those resources for.

Such supervision may buy us some safety, if we find a way to make the system's intentions very transparent.

Stamps: None


Verified accounts are given to people who have clearly demonstrated understanding of AI Safety outside of this project, such as by being employed and vouched for by a major AI Safety organization or by producing high-impact research. Verified accounts may freely mark answers as canonical or not, regardless of how many Stamps the person has, to determine whether those answers are used by Stampy.

Stamps: plex

Tags: stampy

This depends on how we program it. It can definitely be autonomous; even now we have autonomous vehicles, flight control systems, and many more.

Even though it's possible to build such systems, it may be better if they actively ask humans for supervision, for example in cases where they are uncertain what to do.

Stamps: plex


Nobody knows for sure when we will have ASI or if it is even possible. Predictions on AI timelines are notoriously variable, but recent surveys about the arrival of human-level AGI have median dates between 2040 and 2050, although the median for (optimistic) AGI researchers and futurists is in the early 2030s (source).

What will happen if/when we are able to build human-level AGI is a point of major contention among experts. One survey asked (mostly) experts to estimate the likelihood that it would take less than 2 or 30 years for a human-level AI to improve to greatly surpass all humans in most professions. Median answers were 10% for "within 2 years" and 75% for "within 30 years".

We know little about the limits of intelligence and whether increasing it will follow the law of accelerating or diminishing returns. Of particular interest to the control problem is the fast or hard takeoff scenario. It has been argued that the increase from a relatively harmless level of intelligence to a dangerous vastly superhuman level might be possible in a matter of seconds, minutes or hours: too fast for human controllers to stop it before they know what's happening.

Moving from human to superhuman level might be as simple as adding computational resources, and depending on the implementation the AI might be able to quickly absorb large amounts of internet knowledge. Once we have an AI that is better at AGI design than the team that made it, the system could improve itself or create the next generation of even more intelligent AIs (which could then self-improve further or create an even more intelligent generation, and so on). If each generation can improve upon itself by a fixed or increasing percentage per time unit, we would see an exponential increase in intelligence: an intelligence explosion.
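As a toy illustration of that last point (treating 'intelligence' as a single number, which it isn't), a fixed percentage improvement per generation compounds exponentially:

```python
# Toy model: each generation improves on the last by a fixed fraction,
# so capability grows exponentially with the number of generations.
intelligence = 1.0  # arbitrary starting level
rate = 0.10         # assumed 10% improvement per generation
for generation in range(100):
    intelligence *= 1 + rate
print(intelligence)  # ~13,780x the starting level after 100 generations
```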

Stamps: None


... further results

These 9 answers have been added in the last month.

The development team works on multiple projects in support of Stampy. Currently, these projects include:

  • Stampy UI, which is made mostly in TypeScript.
  • The Stampy Wiki, which is made mostly in PHP and JavaScript.
  • The Stampy Bot, which is made in Python.

However, even if you don’t specialize in any of these areas, do reach out if you would like to help.

To join, please contact our Project Manager, plex. You can reach him on Discord at plex#1874. He will be able to point your skills in the right direction to help in the most effective way possible.

Stamps: plex


  • All content produced on this wiki is released under the CC-BY-SA 4.0 license. Exceptions for unattributed use may be granted by admins; contact plex for inquiries.
  • Questions from YouTube or other sources are reproduced with the intent of fair use, as derivative and educational material.
  • Source code of https://stampy-ui.aprillion.workers.dev/ is released under the MIT license.
  • Logo and visual design copyright is owned by Rob Miles, all rights reserved.
Stamps: None

Tags: stampy


TODO: delete this page or create an actual answer

Stamps: Augustus Caesar

Tags: None

test ans

Stamps: Augustus Caesar

Tags: None

... further results

These 152 canonical answers have one or fewer stamps.

One possible way to ensure the safety of a powerful AI system is to keep it contained in a software environment. There is nothing intrinsically wrong with this procedure: keeping an AI system in a secure software environment would make it safer than letting it roam free. However, even AI systems inside software environments might not be safe enough.

Humans sometimes put dangerous humans inside boxes to limit their ability to influence the external world. Sometimes, these humans escape their boxes. The security of a prison depends on certain assumptions, which can be violated. Yoshie Shiratori reportedly escaped prison by weakening the door-frame with miso soup and dislocating his shoulders.

Human-written software has a high defect rate; we should expect a perfectly secure system to be difficult to create. If humans construct a software system they think is secure, it is possible that the security relies on a false assumption. A powerful AI system could potentially learn how its hardware works and manipulate bits to send radio signals. It could fake a malfunction and attempt social engineering when the engineers look at its code. As the saying goes: for someone to do something we had imagined was impossible, they need only a better imagination.

Experimentally, humans have convinced other humans to let them out of the box. Spooky.

Stamps: None

Tags: boxing

In principle it could (if you believe in functionalism), but it probably won't. One way to ensure that an AI has human-like emotions would be to copy the way the human brain works, but that's not what most AI researchers are trying to do.

It's similar to how some people once thought we would build mechanical horses to pull our vehicles, but it turned out to be much easier to build a car. AI probably doesn't need emotions, or maybe even consciousness, to be powerful, and the first AGIs that get built will be the ones that are easiest to build.

Stamps: None


Alternate phrasings are used to improve the semantic search which Stampy uses to serve people questions, by giving alternative ways of saying a question that might trigger a match when the main wording won't. They should generally only be added when the wording is significantly different, rather than for very minor changes.
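As a rough illustration of how such matching might work (a minimal sketch using the sentence-transformers library; the model name and threshold are assumptions, not Stampy's actual configuration):

```python
# Sketch of embedding-based matching between a user's question and the
# phrasings of a canonical question. Model and threshold are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

phrasings = [
    "What is the alignment problem?",              # canonical wording
    "Why is it hard to make AI do what we want?",  # alternate phrasing
]
user_question = "Why can't we just tell the AI what we want it to do?"

scores = util.cos_sim(model.encode(user_question), model.encode(phrasings))
if scores.max() > 0.6:  # assumed similarity threshold
    print("Likely duplicate of an existing canonical question")
```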

Stamps: None

Tags: stampy

A slow takeoff is one where AI capabilities improve gradually, giving us plenty of time to adapt. In a moderate takeoff we might see accelerating progress, but we still won’t be caught off guard by a dramatic change. In a fast or hard takeoff, by contrast, AI would go from being not very generally competent to being sufficiently superhuman to control the future too fast for humans to course-correct if something goes wrong.

The article Distinguishing definitions of takeoff goes into more detail on this.

Stamps: None


Intelligence is powerful. One might say that “Intelligence is no match for a gun, or for someone with lots of money,” but both guns and money were produced by intelligence. If not for our intelligence, humans would still be foraging the savannah for food.

Intelligence is what caused humans to dominate the planet in the blink of an eye (on evolutionary timescales). Intelligence is what allows us to eradicate diseases, and what gives us the potential to eradicate ourselves with nuclear war. Intelligence gives us superior strategic skills, superior social skills, superior economic productivity, and the power of invention.

A machine with superintelligence would be able to hack into vulnerable networks via the internet, commandeer those resources for additional computing power, take over mobile machines connected to networks connected to the internet, use them to build additional machines, perform scientific experiments to understand the world better than humans can, invent quantum computing and nanotechnology, manipulate the social world better than we can, and do whatever it can to give itself more power to achieve its goals — all at a speed much faster than humans can respond to.


Stamps: None