Non-canonical answers

From Stampy's Wiki

Canonical answers may be served to readers by Stampy, so only answers with a reasonably high stamp score should be marked as canonical. All canonical answers are open to collaborative editing and updating, and they should represent a consensus response (written from the Stampy Point Of View) to a question within Stampy's scope.

Answers to questions from YouTube comments should not be marked as canonical, and will generally remain as originally written, since they contain details specific to an idiosyncratic question. A YouTube answer may be forked into a wiki answer in order to respond better to a particular question; in that case, the YouTube question should have its canonical version field set to the new, more widely useful question.

See also

414 non-canonical answers: 276 from YouTube, 137 from other sources.

Non-YouTube Non-Canonical Answers

Answer to How quickly would the AI capabilities ecosystem adopt promising new advances in AI alignment?
Answer to AIs aren’t as smart as rats, let alone humans. Isn’t it far too early to be worrying about this kind of thing?
Answer to Can we add "friendliness" to any artificial intelligence design?
Answer to Can we program the superintelligence to maximize human pleasure or satisfaction of human desires?
Answer to Can we teach a superintelligence a moral code with machine learning?
Answer to Can we tell an AI just to figure out what we want and then do that?
Answer to Can’t we just program the superintelligence not to harm us?
Answer to How could general intelligence be programmed into a machine?
Answer to How might a superintelligence technologically manipulate humans?
Answer to Is this about AI systems becoming malevolent or conscious and turning on us?
Answer to Isn’t it immoral to control and impose our values on AI?
Answer to What can we expect the motivations of a superintelligent machine to be?
Answer to What is "coherent extrapolated volition"?
Answer to What is "friendly AI"?
Answer to What is "superintelligence"?
Answer to What is "whole brain emulation"?
Answer to What is Artificial General Intelligence safety/alignment?
Answer to Who is Nick Bostrom?
Answer to Why can’t we just…
Answer to Why does AI need goals in the first place? Can’t it be intelligent without any agenda?
Answer to Why is AI safety important?
Answer to Why might we expect a fast takeoff?
Answer to Why might we expect a moderate AI takeoff?
Answer to Wouldn't a superintelligence be smart enough not to make silly mistakes in its comprehension of our instructions?
Answer to Wouldn't a superintelligence be smart enough to know right from wrong?
Aprillion's Answer to Can AI be creative?
Aprillion's Answer to If an AI became conscious, how would we ever know?
Aprillion's Answer to Is it possible to limit an AGI from full access to the internet?
Aprillion's Answer to Isn’t AI just a tool like any other?
Aprillion's Answer to Might trying to build a hedonium-maximizing AI be easier and more likely to work than trying for eudaimonia?
Aprillion's Answer to What's meant by calling an AI "agenty" or "agentlike"?
Aprillion's Answer to Wouldn't it be safer to only build narrow AIs?
Casejp's Answer to Should I engage in political or collective action like signing petitions or sending letters to politicians?
Casejp's Answer to What if we put the AI in a box and have a second, more powerful, AI with the goal of preventing the first one from escaping?
Chlorokin's Answer to Could emulated minds do AI alignment research?
Chlorokin's Answer to What are "coherence theorems" and what do they tell us about AI?
Chlorokin's Answer to What if we put the AI in a box and have a second, more powerful, AI with the goal of preventing the first one from escaping?
Chlorokin's Answer to What is "Do What I Mean"?
Chlorokin's Answer to What is a "pivotal act"?
Chlorokin's Answer to Will superintelligence make a large part of humanity unemployable?
CyberByte's Answer to How long will it be until superintelligent AI is created?
Dpaleka's Answer to Are there any courses on technical AI safety topics?
Dropinthesea's Answer to What actions can I take in under five minutes to contribute to the cause of AI safety?
Dropinthesea's Answer to What is "Do What I Mean"?
Filip's Answer to Are AI researchers trying to make conscious AI?
Filip's Answer to Do you need a PhD to work on AI Safety?
Filip's Answer to Isn't it too soon to be working on AGI safety?
Filip's Answer to We already have psychopaths who are "misaligned" with the rest of humanity, but somehow we deal with them. Can't we do something similar with AI?
Filip's Answer to What about having a human supervisor who must approve all the AI's decisions before executing them?
Filip's Answer to What are the differences between AGI, transformative AI and superintelligence?
Filip's Answer to Will AI learn to be independent from people or will it always ask for our orders?
Jeremyg's Answer to What milestones are there between us and AGI?
Linnea's Answer to What are OpenAI Codex and GitHub Copilot?
Linnea's Answer to What are the ethical challenges related to whole brain emulation?
Luke Muehlhauser's Answer to What is "superintelligence"?
MIRI's Answer to How long will it be until superintelligent AI is created?
MIRI's Answer to Why can’t we just “put the AI in a box” so that it can’t influence the outside world?
MIRI's Answer to Why work on AI safety early?
Magdalena's Answer to What does generative visualization look like in reinforcement learning?
Magdalena's Answer to What is artificial general intelligence safety / AI alignment?
Magdalena's Answer to What subjects should I study at university to prepare myself for alignment research?
Matthew1970's Answer to What are the editorial protocols for Stampy questions and answers?
Morpheus's Answer to Is it already too late to work on AI alignment?
Murphant's Answer to Could I contribute by offering coaching to alignment researchers? If so, how would I go about this?
Murphant's Answer to Could we tell the AI to do what's morally right?
Murphant's Answer to Do AIs suffer?
Murphant's Answer to How can I contribute in the area of community building?
Murphant's Answer to How likely is it that governments will play a significant role? What role would be desirable, if any?
Murphant's Answer to How much resources did the processes of biological evolution use to evolve intelligent creatures?
Murphant's Answer to Might an aligned superintelligence force people to have better lives and change more quickly than they want?
Murphant's Answer to What are some important examples of specialised terminology in AI alignment?
Murphant's Answer to What are the "win conditions"/problems that need to be solved?
Murphant's Answer to What is "metaphilosophy" and how does it relate to AI safety?
Murphant's Answer to What's especially worrisome about autonomous weapons?
Nattfrosten's Answer to What are language models?
Nattfrosten's Answer to What are mesa-optimizers?
Nico Hill2's Answer to Will we ever build a superintelligence?
NotaSentientAI's Answer to Why not just put it in a box?
Plex's Answer to Aren't robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world?
Plex's Answer to How can I convince others and present the arguments well?
Plex's Answer to How does the field of AI Safety want to accomplish its goal of preventing existential risk?
Plex's Answer to How long will it be until superintelligent AI is created?
Plex's Answer to How long will it be until transformative AI is created?
Plex's Answer to I'm not convinced AI would be a severe threat to humanity. Why are you so sure?
Plex's Answer to What is "agent foundations"?
Plex's Answer to What is a verified account on Stampy's Wiki?
Plex's Answer to What is everyone working on in AI alignment?
Plex's Answer to What’s a good AI alignment elevator pitch?
Plex's Answer to Will there be a discontinuity in AI capabilities? If so, at what stage?
QZ's Answer to Where can I find mentorship and advice for becoming a researcher?
QueenDaisy's Answer to Are any major politicians concerned about this?
QueenDaisy's Answer to Might an aligned superintelligence force people to "upload" themselves, so as to more efficiently use the matter of their bodies?
QueenDaisy's Answer to What could a superintelligent AI do, and what would be physically impossible even for it?
Quintin Pope's Answer to Will superintelligence make a large part of humanity unemployable?
Redshift's Answer to In "aligning AI with human values", which humans' values are we talking about?
Robertskmiles's Answer to Are expert surveys on AI safety available?
Robertskmiles's Answer to Is merging with AI through brain-computer interfaces a potential solution to safety problems?
RoseMcClelland's Answer to How do you figure out how model performance scales?
RoseMcClelland's Answer to How does MIRI communicate their view on alignment?
RoseMcClelland's Answer to How is Beth Barnes evaluating LM power seeking?
RoseMcClelland's Answer to How would we align an AGI whose learning algorithms / cognition look like human brains?
RoseMcClelland's Answer to What does Evan Hubinger think of Deception + Inner Alignment?
RoseMcClelland's Answer to What does the scheme Externalized Reasoning Oversight involve?
RoseMcClelland's Answer to What is Conjecture, and what is their team working on?
RoseMcClelland's Answer to What is FAR's theory of change?
RoseMcClelland's Answer to What is the Future of Humanity Institute working on?
RoseMcClelland's Answer to What is Refine?
RoseMcClelland's Answer to What is Truthful AI's approach to improving society?
RoseMcClelland's Answer to What is an adversarial oversight scheme?
RoseMcClelland's Answer to What is the Center for Human Compatible AI (CHAI)?
RoseMcClelland's Answer to What is the purpose of the Visible Thoughts Project?
RoseMcClelland's Answer to What language models are Anthropic working on?
RoseMcClelland's Answer to What other organizations are working on technical AI alignment?
RoseMcClelland's Answer to What projects are CAIS working on?
RoseMcClelland's Answer to What projects are Redwood Research working on?
RoseMcClelland's Answer to What work is Redwood doing on LLM interpretability?
RoseMcClelland's Answer to Who is Jacob Steinhardt and what is he working on?
RoseMcClelland's Answer to Who is Sam Bowman and what is he working on?
Severin's Answer to How can I be a more productive student/researcher?
Severin's Answer to Isn't the real concern AI being misused by terrorists or other bad actors?
Severin's Answer to What are the leading theories in moral philosophy and which of them might be technically the easiest to encode into an AI?
SlimeBunnyBat's Answer to Isn't the real concern technological unemployment?
Sudonym's Answer to What does alignment failure look like?
TJ6K's Answer to What beneficial things would an aligned superintelligence be able to do?
TapuZuko's Answer to Is the question of whether we're living in a simulation relevant to AI safety? If so, how?
TapuZuko's Answer to Isn't the real concern autonomous weapons?
TapuZuko's Answer to Might an aligned superintelligence immediately kill everyone and then go on to create a "hedonium shockwave"?
Tinytitan's Answer to Could we get significant biological intelligence enhancements long before AGI?
Yaakov's Answer to What are the different versions of decision theory?
Yaakov's Answer to Which organizations are working on AI alignment?
Zekava's Answer to Why does there seem to have been an explosion of activity in AI in recent years?

YouTube Non-Canonical Answers

Abram Demski's Answer to Maximizers and Satisficers on 2019-08-23T16:05:49 by Martin Verrisin
Aprillion's Answer to 10 Reasons to Ignore AI Safety on 2021-04-16T16:50:15 by cwjalex
Aprillion's Answer to 8Dbaybled8D's question on Intro to AI Safety
Aprillion's Answer to Agustin Doige's question on Avoiding Positive Side Effects
Aprillion's Answer to AkantorJojo's question on Intro to AI Safety
Aprillion's Answer to Alliotte Raphael's question on Intro to AI Safety
Aprillion's Answer to Daniel Buzovský's question on Where do we go now
Aprillion's Answer to Deutschebahn's question on Mesa-Optimizers 2
Aprillion's Answer to Dorda Giovex's question on Real Inner Misalignment
Aprillion's Answer to Jakub Mintal's question on Mesa-Optimizers 2
Aprillion's Answer to Math Magician's question on Are AI Risks like Nuclear Risks?
Aprillion's Answer to Mera Flynn's question on The Windfall Clause
Aprillion's Answer to Mesa-Optimizers on 2021-02-17T11:05:43 by Lepus Lunaris
Aprillion's Answer to Mesa-Optimizers on 2021-02-17T17:36:20 by Robert K
Aprillion's Answer to Mesa-Optimizers on 2021-02-18T14:51:23 by Alexander Harris
Aprillion's Answer to Mesa-Optimizers on 2021-03-06T00:27:29 by Loz Shamler
Aprillion's Answer to Nick Hounsome's question on What Can We Do About Reward Hacking?
Aprillion's Answer to Riccardo manfrin's question on Mesa-Optimizers
Aprillion's Answer to Samuel Sandeen's question on Intro to AI Safety
Aprillion's Answer to Smo1k's question on Mesa-Optimizers 2
Aprillion's Answer to Smrt fašizmu's question on Maximizers and Satisficers
Aprillion's Answer to The Other Guy's question on The Orthogonality Thesis
Aprillion's Answer to Traywor's question on Mesa-Optimizers 2
Aprillion's Answer to Wertyuiop's question on Intro to AI Safety
Augustus Caesar's Answer to George Michael Sherry's question on Pascal's Mugging
Augustus Caesar's Answer to Instrumental Convergence on 2021-02-24T05:56:14 by WILL D
Augustus Caesar's Answer to James Tenney's question on Intro to AI Safety
Augustus Caesar's Answer to Mesa-Optimizers on 2021-02-19T17:28:53 by No Google, I don't want to use my real name.
Augustus Caesar's Answer to Mesa-Optimizers on 2021-02-19T21:47:02 by milp
Augustus Caesar's Answer to Mesa-Optimizers on 2021-02-23T14:17:01 by androkguz
Augustus Caesar's Answer to Mesa-Optimizers on 2021-02-24T01:55:01 by frozenbagel16
Augustus Caesar's Answer to Mesa-Optimizers on 2021-02-24T11:12:04 by somename
Augustus Caesar's Answer to Mesa-Optimizers on 2021-04-12T11:45:03 by Fanny10000
Augustus Caesar's Answer to Quantilizers on 2020-12-13T22:43:03 by TheWhiteWolf
Augustus Caesar's Answer to Quantilizers on 2020-12-14T12:42:56 by Jenaf 37
Augustus Caesar's Answer to Quantilizers on 2020-12-14T14:52:51 by fiziwig
Augustus Caesar's Answer to Quantilizers on 2020-12-14T18:31:12 by Progressor 4ward
Augustus Caesar's Answer to Quantilizers on 2020-12-30T05:52:51 by Mark
Augustus Caesar's Answer to René's question on Reward Modeling
Augustus Caesar's Answer to Unknown User's question on Intro to AI Safety
CarlFeynman's Answer to Dismythed & JWA's question on The Orthogonality Thesis
ChaosAlpha's Answer to Toby Buckley's question on Mesa-Optimizers
Chriscanal's Answer to Mesa-Optimizers on 2021-02-22T12:19:22 by Bagd Biggerd
Command Master's Answer to M A's question on Real Inner Misalignment
Command Master's Answer to Seeker.87's question on Real Inner Misalignment
Damaged's Answer to Bootleg Jones's question on Intro to AI Safety
Damaged's Answer to Ceelvain's question on Intro to AI Safety
Damaged's Answer to Geoffry Gifari's question on Steven Pinker on AI
Damaged's Answer to Henry Goodman's question on Real Inner Misalignment
Damaged's Answer to Heysemberth Kingdom-Brunel's question on Mesa-Optimizers
Damaged's Answer to Luka Rapava's question on The Orthogonality Thesis
Damaged's Answer to M's question on Intro to AI Safety
Damaged's Answer to Maccollo's question on Video Title Unknown
Damaged's Answer to Matbmp's question on Intro to AI Safety
Damaged's Answer to Mesa-Optimizers on 2021-02-19T21:35:29 by fritt wastaken
Damaged's Answer to Michael Brown's question on Intro to AI Safety
Damaged's Answer to Milan Mašát's question on Video Title Unknown
Damaged's Answer to PalimpsestProd's question on Video Title Unknown
Damaged's Answer to Rob Stringer's question on Mesa-Optimizers
Damaged's Answer to Ryan Paton's question on Intro to AI Safety
Damaged's Answer to Sophrosynicle's question on Real Inner Misalignment
Damaged's Answer to Tim Peterson's question on AI learns to Create Cat Pictures
Damaged's Answer to What can AGI do? on 2021-03-04T08:30:55 by Luka Rapava
Damaged's Answer to Уэстерн Спай's question on Are AI Risks like Nuclear Risks?
Dude with computer's Answer to Geoffry Gifari's question on The Orthogonality Thesis
Evhub's Answer to Mesa-Optimizers on 2021-02-18T00:06:17 by poketopa1234
Frgtbhznjkhfs's Answer to The Orthogonality Thesis on 2020-05-13T03:18:31 by Tomáš Růžička
Gelisam's Answer to ( ͡° ͜ʖ ͡°)'s question on Real Inner Misalignment
Gelisam's Answer to Nomentir Alque Nomintio's question on Mesa-Optimizers
Gelisam's Answer to Quantilizers on 2020-12-13T22:13:57 by Richard Collins
Gelisam's Answer to Quantilizers on 2020-12-13T22:20:31 by octavio echeverria
Gelisam's Answer to Quantilizers on 2020-12-14T05:37:51 by Joshua Hillerup
Gelisam's Answer to Quantilizers on 2020-12-14T10:02:10 by Julia Henriques
Gelisam's Answer to Quantilizers on 2020-12-14T10:54:03 by Kolop315
Gelisam's Answer to Quantilizers on 2020-12-14T11:03:48 by Nicod3m0 Otimsis
Gelisam's Answer to Quantilizers on 2020-12-21T07:14:00 by Serenacula
Gelisam's Answer to Quantilizers on 2020-12-21T16:09:27 by MrLeoniu
Gelisam's Answer to Quantilizers on 2021-01-02T12:24:40 by jonseah
Gelisam's Answer to Quantilizers on 2021-02-09T18:00:54 by Jon Bray
Gelisam's Answer to Quantilizers on 2021-04-17T22:22:17.016139 by Unknown
JJ Hep's Answer to Quantilizers on 2020-12-24T19:36:39 by Luke Mills
Jamespetts's Answer to Maximizers and Satisficers on 2021-02-20T17:58:35 by Donald Engelmann
Morpheus's Answer to MattettaM's question on Maximizers and Satisficers
Plex's Answer to 10 Reasons to Ignore AI Safety on 2021-02-23T23:16:35 by Bailey Jorgensen
Plex's Answer to 10 Reasons to Ignore AI Safety on 2021-03-08T13:55:51 by james sc
Plex's Answer to AkantorJojo's question on Intro to AI Safety
Plex's Answer to Chris's question on Intro to AI Safety
Plex's Answer to Gedelijan's question on Real Inner Misalignment
Plex's Answer to Instrumental Convergence on 2019-09-07T18:05:31 by Tyler Gust
Plex's Answer to Jay Ayerson's question on Intro to AI Safety
Plex's Answer to Killer Robot Arms Race on 2021-02-20T14:59:31 by Chiron
Plex's Answer to Marc Bollinger's question on Mesa-Optimizers
Plex's Answer to Mesa-Optimizers on 2021-02-21T11:55:30 by Piñata Oblongata
Plex's Answer to Mesa-Optimizers on 2021-02-22T04:26:43 by Jorel Fermin
Plex's Answer to Mesa-Optimizers on 2021-02-22T11:35:00 by Damien Asmodeus
Plex's Answer to Mesa-Optimizers on 2021-02-23T07:23:02 by Chrysippus
Plex's Answer to Mesa-Optimizers on 2021-02-23T17:49:04 by Will Holmes
Plex's Answer to Mesa-Optimizers on 2021-02-23T19:00:23 by aforcemorepowerful
Plex's Answer to Mesa-Optimizers on 2021-02-24T04:50:30 by Jonathon Chambers
Plex's Answer to Mesa-Optimizers on 2021-02-24T19:05:36 by Solomon Ucko
Plex's Answer to Mesa-Optimizers on 2021-03-01T21:21:57 by Steen Eugen Poulsen
Plex's Answer to Mesa-Optimizers on 2021-03-01T22:38:48 by Iagoba Apellaniz
Plex's Answer to Mesa-Optimizers on 2021-03-02T04:09:47 by Harsh Deshpande
Plex's Answer to Mesa-Optimizers on 2021-03-06T18:33:28 by stealthguard
Plex's Answer to Mesa-Optimizers on 2021-03-11T09:34:01 by HTIDtricky
Plex's Answer to Nachis04's question on Intro to AI Safety
Plex's Answer to Niels Peppelaar's question on 10 Reasons to Ignore AI Safety
Plex's Answer to Predicting AI on 2021-01-12T13:50:35 by glitch gamer
Plex's Answer to Quantilizers on 2020-12-13T20:52:21 by owen heckmann
Plex's Answer to Quantilizers on 2020-12-13T21:45:39 by DragonSheep
Plex's Answer to Quantilizers on 2020-12-13T21:53:33 by Bastiaan Cnossen
Plex's Answer to Quantilizers on 2020-12-13T21:53:49 by cmilkau
Plex's Answer to Quantilizers on 2020-12-13T21:55:08 by loligesgame
Plex's Answer to Quantilizers on 2020-12-13T21:58:44 by Vincent Grange
Plex's Answer to Quantilizers on 2020-12-13T22:29:50 by Qwerty and Azerty
Plex's Answer to Quantilizers on 2020-12-13T23:11:49 by DragonSheep
Plex's Answer to Quantilizers on 2020-12-13T23:59:24 by Nixitur
Plex's Answer to Quantilizers on 2020-12-14T00:59:38 by Ricardas Ricardas
Plex's Answer to Quantilizers on 2020-12-14T02:46:17 by M Kelly
Plex's Answer to Quantilizers on 2020-12-14T03:52:16 by James Barclay
Plex's Answer to Quantilizers on 2020-12-14T04:47:48 by Recoded Zaphod
Plex's Answer to Quantilizers on 2020-12-14T05:12:00 by Jeremy Hoffman
Plex's Answer to Quantilizers on 2020-12-14T06:05:58 by Paulo Van Huffel
Plex's Answer to Quantilizers on 2020-12-14T07:32:09 by Taras Pylypenko
Plex's Answer to Quantilizers on 2020-12-14T22:43:20 by Matthew Campbell
Plex's Answer to Quantilizers on 2020-12-14T22:59:02 by Panzerkampfwagen
Plex's Answer to Quantilizers on 2020-12-15T01:44:10 by AdibasWakfu
Plex's Answer to Quantilizers on 2020-12-15T12:08:55 by Life Happens
Plex's Answer to Quantilizers on 2020-12-15T23:08:13
Plex's Answer to Quantilizers on 2020-12-16T06:52:01 by Kyra Zimmer
Plex's Answer to Quantilizers on 2020-12-19T13:13:33 by SocialDownclimber
Plex's Answer to Quantilizers on 2020-12-19T13:37:50 by Alex Webb
Plex's Answer to Quantilizers on 2020-12-19T18:22:43 by Yezpahr
Plex's Answer to Quantilizers on 2020-12-25T18:03:37 by Timothy Hansen
Plex's Answer to Quantilizers on 2020-12-30T17:26:14 by Sean Pedersen
Plex's Answer to Quantilizers on 2021-01-11T09:32:08 by Underrated1
Plex's Answer to Quantilizers on 2021-02-18T15:39:07 by Spoon Of Doom
Plex's Answer to Quantilizers on 2021-02-19T12:19:50 by Shantanu Ojha
Plex's Answer to Quantilizers on 2021-02-20T12:55:45 by Marcus Antonius
Plex's Answer to Quantilizers on 2021-02-22T21:54:31 by James Petts
Plex's Answer to Quantilizers on 2021-04-17T22:21:29.271057 by Unknown
Plex's Answer to Ranibow Sprimkle's question on Instrumental Convergence
Plex's Answer to Reitze Jansen's question on Mesa-Optimizers
Plex's Answer to Safe Exploration on 2020-12-06T21:48:38 by Rares Rotar
Plex's Answer to Samuel Hvidager's question on Intro to AI Safety
Plex's Answer to TackerTacker's question on Mesa-Optimizers 2
Plex's Answer to The Orthogonality Thesis on 2019-04-20T04:57:23 by Enciphered
Plex's Answer to The Windfall Clause on 2020-07-07T07:10:36 by boarattackboar
Plex's Answer to The Windfall Clause on 2020-12-13T14:49:52 by the Decoy
Plex's Answer to Tommy karrick's question on Mesa-Optimizers
Plex's Answer to Use of Utility Functions on 2017-04-27T20:50:28 by William Dye
Plex's Answer to Use of Utility Functions on 2021-02-26T11:44:06 by Michael Moran
Plex's Answer to WNJ: Raise AI Like Kids? on 2021-02-26T17:02:41 by peterbrehmj
Plex's Answer to jhjkhgjhfgjg jgjyfhdhbfjhg's question on Mesa-Optimizers 2
Robert hildebrandt's Answer to Quantilizers on 2020-12-14T01:27:48 by Moleo
Robert hildebrandt's Answer to Quantilizers on 2020-12-14T03:13:46 by SlimThrull
Robert hildebrandt's Answer to Quantilizers on 2020-12-14T03:47:06 by snigwithasword
Robert hildebrandt's Answer to Quantilizers on 2020-12-14T04:04:30 by Noah McCann
Robert hildebrandt's Answer to Quantilizers on 2020-12-14T11:23:40 by illesizs
Robert hildebrandt's Answer to Quantilizers on 2020-12-15T17:14:50 by c99kfm
Robert hildebrandt's Answer to Quantilizers on 2020-12-16T06:22:32 by Chrysippus
Robert.hildebrandt's Answer to Mesa-Optimizers on 2021-02-17T18:39:38 by Dennis Haupt
Robert.hildebrandt's Answer to Mesa-Optimizers on 2021-02-18T00:28:56 by LoliShocks
Robert.hildebrandt's Answer to Mesa-Optimizers on 2021-02-18T05:37:03 by Irun S
Robert.hildebrandt's Answer to Mesa-Optimizers on 2021-02-18T13:35:33 by Peter Smythe
Robert.hildebrandt's Answer to Mesa-Optimizers on 2021-02-18T19:47:40 by Mvskoke Hunter
Robert.hildebrandt's Answer to WNJ: Think of AGI like a Corporation? on 2021-02-21T09:03:12 by Chedim
Robertskmiles's Answer to A Commenter's question on What can AGI do?
Robertskmiles's Answer to AI Safety Gridworlds 2 on 2020-06-02T00:45:31 by Wylliam Judd
Robertskmiles's Answer to Alan W's question on Intro to AI Safety
Robertskmiles's Answer to Alessandrə Rustichelli's question on Intro to AI Safety
Robertskmiles's Answer to Avoiding Negative Side Effects on 2020-11-17T01:48:43 by Neological Gamer
Robertskmiles's Answer to Channel Introduction on 2021-04-07T23:33:04 by Robert Miles
Robertskmiles's Answer to Experts on the Future of AI on 2020-11-09T06:23:59 by Sara L
Robertskmiles's Answer to Ian's question on Maximizers and Satisficers
Robertskmiles's Answer to Instrumental Convergence on 2020-05-18T23:25:44 by phil guer
Robertskmiles's Answer to Killer Robot Arms Race on 2020-06-06T11:20:21 by DaVince21
Robertskmiles's Answer to Maor Eitan's question on Intro to AI Safety
Robertskmiles's Answer to Maximizers and Satisficers on 2019-09-01T08:11:48 by Paper Benni
Robertskmiles's Answer to Mesa-Optimizers on 2021-02-18T17:43:34 by Michael R-A
Robertskmiles's Answer to Mesa-Optimizers on 2021-02-19T08:59:09 by valberm
Robertskmiles's Answer to Mesa-Optimizers on 2021-02-25T10:23:18 by RaukGorth
Robertskmiles's Answer to Mesa-Optimizers on 2021-02-25T13:40:47 by MrAngry27
Robertskmiles's Answer to Mesa-Optimizers on 2021-02-27T11:59:46 by Arnau Adell
Robertskmiles's Answer to NINJA NAJM's question on Avoiding Negative Side Effects
Robertskmiles's Answer to Quantilizers on 2020-12-13T22:12:28 by Markus Johansson
Robertskmiles's Answer to Quantilizers on 2020-12-14T18:12:54 by Blah Blah
Robertskmiles's Answer to Quantilizers on 2020-12-15T18:18:01 by mrsuperguy2073
Robertskmiles's Answer to Quantilizers on 2020-12-25T23:07:12 by Peter Franz
Robertskmiles's Answer to Quantilizers on 2021-02-19T12:19:50 by Shantanu Ojha
Robertskmiles's Answer to Quantilizers on 2021-02-24T23:54:57 by Nathan Kouvalis
Robertskmiles's Answer to Rob Sokolowski's question on AI Safety Gridworlds 2
Robertskmiles's Answer to Steven Pinker on AI on 2020-08-23T02:40:09 by Xystem 4
Robertskmiles's Answer to Superintelligence Mod for Civilization V on 2019-04-11T22:16:10 by Mateja Petrovic
Robertskmiles's Answer to The Orthogonality Thesis on 2019-04-18T19:04:56 by Jan Bam
Robertskmiles's Answer to The Orthogonality Thesis on 2020-10-13T22:10:08 by Juan Pablo Garibotti Arias
Robertskmiles's Answer to The Orthogonality Thesis on 2021-02-21T04:49:23 by peterbrehmj
Robertskmiles's Answer to Uiytt's question on Intro to AI Safety
Robertskmiles's Answer to Use of Utility Functions on 2020-09-23T08:51:14 by Amaar Quadri
Robertskmiles's Answer to What can AGI do? on 2020-03-30T09:18:52 by Jade Gorton
Robertskmiles's Answer to What can AGI do? on 2020-12-19T06:19:12 by Firaro
Robertskmiles's Answer to Where do we go now on 2020-05-12T20:06:14 by Musthegreat 94
Robertskmiles's Answer to ZT1ST's question on Instrumental Convergence
Self-modification and wireheading
SlimeBunnyBat's Answer to 5astelija's question on Mesa-Optimizers 2
SlimeBunnyBat's Answer to Andy Gee's question on Mesa-Optimizers 2
SlimeBunnyBat's Answer to Ansatz66's question on Intro to AI Safety
SlimeBunnyBat's Answer to Arthur Wittmann's question on Killer Robot Arms Race
SlimeBunnyBat's Answer to Ben Crulis's question on Mesa-Optimizers
SlimeBunnyBat's Answer to Etienne Maheu's question on Intro to AI Safety
SlimeBunnyBat's Answer to IkarusKK's question on Real Inner Misalignment
SlimeBunnyBat's Answer to Marcelo Pinheiro's question on Real Inner Misalignment
SlimeBunnyBat's Answer to Marie Rentergent's question on Mesa-Optimizers 2
SlimeBunnyBat's Answer to Mesa-Optimizers on 2021-02-17T08:00:03 by James Crewdson
SlimeBunnyBat's Answer to Mesa-Optimizers on 2021-02-17T10:26:59 by Koro
SlimeBunnyBat's Answer to Midhunraj R's question on Quantilizers
SlimeBunnyBat's Answer to Nerd Herd's question on Real Inner Misalignment
SlimeBunnyBat's Answer to Oliver Bergau's question on Quantilizers
SlimeBunnyBat's Answer to Robert Tuttle's question on Mesa-Optimizers 2
SlimeBunnyBat's Answer to Sigmata0's question on Intro to AI Safety
SlimeBunnyBat's Answer to Drekpaprika's question on Intro to AI Safety
SlimeBunnyBat's Answer to Son of a Beech's question on The Orthogonality Thesis
SlimeBunnyBat's Answer to Stellar Lake System's question on The Orthogonality Thesis
Social Christancing's Answer to G T's question on Mesa-Optimizers
Social Christancing's Answer to Jason Burbank's question on Mesa-Optimizers
Social Christancing's Answer to Mesa-Optimizers on 2021-02-17T12:57:20 by X3 KJ
Social Christancing's Answer to Mesa-Optimizers on 2021-02-23T01:55:12 by Sebastian Gramsz
Social Christancing's Answer to Mesa-Optimizers on 2021-02-28T20:30:04 by DodoDojo
Social Christancing's Answer to Quantilizers on 2021-02-22T13:59:07 by Marshall White
Social Christancing's Answer to Siranut usawasutsakorn's question on Mesa-Optimizers 2
Social Christancing's Answer to Socially unacceptable's question on The Orthogonality Thesis
Stargate9000's Answer to Mesa-Optimizers on 2021-02-17T00:01:16 by Asdayasman アズデイ
Stargate9000's Answer to Mesa-Optimizers on 2021-02-17T06:03:30 by ХОРОШО
Stargate9000's Answer to Mesa-Optimizers on 2021-02-28T14:11:32 by andybaldman
Stargate9000's Answer to Mesa-Optimizers on 2021-03-13T23:08:08 by Tomasz Rogala
Stargate9000's Answer to Mesa-Optimizers on 2021-03-14T21:34:35 by Kyle Merritt
Stargate9000's Answer to Mesa-Optimizers on 2021-04-17T22:20:41.776124 by Unknown
Stargate9000's Answer to Quantilizers on 2021-02-18T15:39:07 by Spoon Of Doom
Stargate9000's Answer to Quantilizers on 2021-03-09T18:02:04 by Blackmage89
Stargate9000's Answer to The Orthogonality Thesis on 2021-02-27T23:46:44 by Stellar Lake System
Stargate9000's Answer to The Orthogonality Thesis on 2021-03-13T12:39:34 by Linus Behrbohm
Sudonym's Answer to Famitory's question on Intro to AI Safety
Sudonym's Answer to Instrumental Convergence on 2020-06-09T18:06:22 by Yuval
Sudonym's Answer to Iterated Distillation and Amplification on 2021-01-07T19:03:28 by Keenan Pepper
Sudonym's Answer to Mesa-Optimizers on 2021-02-18T16:37:12 by ɥɐou
Sudonym's Answer to Mesa-Optimizers on 2021-03-09T20:02:31 by Corman
Sudonym's Answer to Mesa-Optimizers on 2021-03-09T21:29:31 by Metsuryu
Sudonym's Answer to Quantilizers on 2020-12-13T22:53:59 by J M
Sudonym's Answer to Quantilizers on 2020-12-14T01:34:53 by boobshart
Sudonym's Answer to Quantilizers on 2020-12-15T05:08:10 by DarkestMirrored
Sudonym's Answer to Quantilizers on 2020-12-15T09:14:48 by Samuel Woods
Sudonym's Answer to Quantilizers on 2020-12-15T16:27:49 by Nutwit
Sudonym's Answer to Quantilizers on 2020-12-16T05:57:09 by Adrian Regenfuß
Sudonym's Answer to Quantilizers on 2020-12-16T19:38:29 by Martin Verrisin
Sudonym's Answer to Quantilizers on 2020-12-18T00:46:12 by Wilco Verhoef
Sudonym's Answer to Quantilizers on 2020-12-18T23:53:02 by Ent229
Sudonym's Answer to Quantilizers on 2020-12-25T04:37:20 by Daniel MK
Sudonym's Answer to Quantilizers on 2020-12-25T11:41:42 by PianoShow
Sudonym's Answer to Quantilizers on 2020-12-26T16:22:16 by Songbird
Sudonym's Answer to Quantilizers on 2021-01-02T15:40:56 by wertyuiop
Sudonym's Answer to Quantilizers on 2021-01-05T19:27:35 by kade99TV
Sudonym's Answer to Quantilizers on 2021-01-09T16:33:43 by Stephen
Sudonym's Answer to Reward Hacking Reloaded on 2020-10-26T01:46:26 by Julian Danzer
Sudonym's Answer to Steven Pinker on AI on 2020-05-13T19:07:46 by kilroy1964
Sudonym's Answer to The Orthogonality Thesis on 2019-04-15T17:07:34 by echoes
Sudonym's Answer to The Orthogonality Thesis on 2020-12-27T23:05:56 by Miguel Borromeo
Sudonym's Answer to Uberchops's question on Quantilizers
Sudonym's Answer to WNJ: Think of AGI like a Corporation? on 2020-06-05T00:00:24 by Clayton Voges
U8k's Answer to onje berdy's question on Real Inner Misalignment
Yevgeniy Andreyevich's Answer to Lapis Salamander's question on Intro to AI Safety
Yevgeniy Andreyevich's Answer to Rich Traube's question on WNJ: Think of AGI like a Corporation?
Yevgeniy Andreyevich's Answer to afla light's question on 10 Reasons to Ignore AI Safety
Yevgeniy's Answer to Ted Archer's question on Maximizers and Satisficers