Recent answers

From Stampy's Wiki


These 10 answers have been added in the last month.

The problem of autonomous weapons is not directly related to the AI Safety problem, but both fit into the "be careful what you do with AI" category.

In the short term, autonomous weapons would enable worse totalitarianism, because automated security forces will never rebel. Despots need security forces to remain in power, and human security forces betraying a despot is a common way that despots lose power; this check would not exist with robots. Automation thus removes the moderating influence of human personnel, since convincing machines to do a horrible thing is easier than convincing humans.

Another consideration is that computer security is hard! Autonomous weapons could be hacked, initially by humans but eventually by an AGI. This is not good for humanity's chances of surviving the transition to AGI, although access to autonomous weapons is probably not necessary for this transition to go poorly.

See also Stop Killer Robots.

Stamps: None
Show your endorsement of this answer by giving it a stamp of approval!


Metaphilosophy is the philosophy of philosophy.

Bostrom has described the AI Safety problem as "philosophy with a deadline". Metaphilosophy could be used to steer an AGI towards e.g. our coherent extrapolated volition.

Stamps: None

Tags: definitions, metaphilosophy

There is currently no clear win condition that most or all researchers agree on. Many researchers have their own paradigms and view the problem from different angles.

Here are some of the sub-fields of AI Safety research. We need to solve the challenges in many of these fields to win.

See also Concrete Problems in AI Safety.

Stamps: None


In order of smallest commitment to largest:

  1. Link your friends to Stampy or Rob's videos
  2. Join or start a local AI Safety group at a university
  3. Get good at giving an elevator pitch
  4. Become a competent advocate by being convincing and having comprehensive knowledge, so you can answer follow-up questions
Stamps: None

Tags: contributing, community

Ajeya Cotra attempted to calculate this number in her Bio Anchors report.

[...]the total amount of computation done over the course of evolution from the first animals with neurons to humans was (~1e16 seconds) * (~1e25 FLOP/s) = ~1e41 FLOP

Nuño Sempere argues that this calculation of the computation done by neurons is insufficient as the environment would also need to be simulated, leading to a possibly much larger number.

Cotra posits that this amount of computation should be taken as an upper bound on the amount of computation needed to develop AGI. The actual amount needed is probably many orders of magnitude lower.
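The arithmetic in the quoted estimate can be checked directly. This is only a sketch of the back-of-the-envelope calculation; the two input numbers are the rough figures from the quote, not precise measurements:

```python
import math

# Rough inputs from the Bio Anchors quote:
seconds_of_evolution = 1e16  # ~time since the first animals with neurons
flop_per_second = 1e25       # ~aggregate compute of all nervous systems at once

# Total computation performed over the course of evolution
total_flop = seconds_of_evolution * flop_per_second

print(f"~1e{round(math.log10(total_flop))} FLOP")  # ~1e41 FLOP
```

Note how sensitive the result is to the inputs: each factor is itself uncertain by orders of magnitude, which is one reason the figure is treated as an upper bound rather than a prediction.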

Stamps: None

Tags: evolution

Whilst this is possible, AI technologies seem to be progressing much faster than cognitive enhancement.

Stamps: None

Tags: biological cognitive enhancement

Notes:

Link https://forum.effectivealtruism.org/posts/GvHPnzGJQJ7iAiJNr/on-presenting-the-case-for-

https://www.lesswrong.com/posts/8c8AZq5hgifmnHKSN/agi-safety-faq-all-dumb-questions-allowed-thread?commentId=2JiMsmu32EvzKv4yP

The future of the world will be dominated by these systems. We control the world because we are the most capable and coordinated entities on the planet; systems that surpass us in capability and coordination would inherit that position.

Stamps: None

Tags: communication

Currently, private AI labs are greatly ahead of any known governmental efforts in the production of advanced AI, so it looks unlikely that a government will be involved directly in the creation of the first AGI.

Nevertheless, governments are important drivers of policy, so in order to avoid unaligned AGI, here are some suggestions that they can enact:

  • Choose policies that slow down rather than speed up AGI development, such as not incentivizing AI capabilities research.
  • Attempt to foster cooperation and reduce the number of players competing to create the first AGI, in order to avoid race dynamics that lead to corners being cut on safety.
  • Generally foster a stable, peaceful, connected world. Also, don't start wars!
Stamps: None

Tags: government

There are two different reasons you might be looking for a five-minute contribution.

  1. You are only willing to spend five minutes total
  2. You want a simple call to action which will concretize your commitment. You are looking for a small action which can open the door for a larger action.

If you are looking to only spend five minutes total, you can:

  • Share an article with a friend so that they can learn more. One possible choice is the 80,000 Hours career profile
  • Share a link on social media; you never know who may be interested
  • Donate to an organization working on AI risk

If you are looking for a small action which will start things moving, you might consider:

  • Ordering a book (such as The Alignment Problem) and following up by reading it
  • Signing up for a newsletter
  • Applying for career coaching with AISS.
Stamps: None

Tags: personal action