Questions from YouTube

From Stampy's Wiki

These questions are from YouTube.

There are 2940 YouTube questions; 1068 of them have been asked on Discord, 269 have been answered, and 1 is prioritized and ready to ask on Discord!

All YouTube questions (best first)

Anderson 63 Scooper's question on WNJ: Think of AGI like a Corporation?
Uberchops's question on Quantilizers
Daniel Buzovský's question on Where do we go now
Gus Kelty's question on 10 Reasons to Ignore AI Safety
Martin Verrisin's question on Maximizers and Satisficers
Ben Crulis's question on Mesa-Optimizers
TackerTacker's question on Mesa-Optimizers 2
Agustin Doige's question on Avoiding Positive Side Effects
Toby Buckley's question on Mesa-Optimizers
MattettaM's question on Maximizers and Satisficers
famitory's question on Intro to AI Safety
Raphaël Weuts's question on The Orthogonality Thesis
Lemon Party's question on What Can We Do About Reward Hacking?
Chaz Allen's question on 10 Reasons to Ignore AI Safety
Levi Poon's question on Mesa-Optimizers
Circuitrinos's question on Maximizers and Satisficers
Smaster7772's question on What can AGI do?
Question on Use of Utility Functions, asked 2017-05-23T03:32:40 by
Simon G.'s question on The Orthogonality Thesis
Aaron Rotenberg's question on Quantilizers
Loweren's question on Mesa-Optimizers
Snaileri's question on Avoiding Negative Side Effects
Fraser's question on Mesa-Optimizers
Tetraedri's question on What Can We Do About Reward Hacking?
Just nobody's question on Quantilizers
Jqerty's question on Killer Robot Arms Race
Firefox Metzger's question on Reward Modeling
Rob Sokolowski's question on AI Safety Gridworlds 2
Penny Lane's question on Empowerment
Huntracony's question on Quantilizers
Quitch's question on Predicting AI
Ardent Drops's question on Quantilizers
Ben Crulis's question on Quantilizers
Maverician's question on Predicting AI
J.J. Shank's question on Reward Hacking Reloaded
Szarvasmarha's question on 10 Reasons to Ignore AI Safety
Reckless Roges's question on The Orthogonality Thesis
Brindlebriar's question on Maximizers and Satisficers
ThoperSought's question on Mesa-Optimizers
Jonathan Zea's question on Intro to AI Safety
Michael Spence's question on Instrumental Convergence
12tone's question on Avoiding Negative Side Effects
Kataquax's question on Reward Hacking
Adelar Scheidt's question on Avoiding Negative Side Effects
jhjkhgjhfgjg jgjyfhdhbfjhg's question on Mesa-Optimizers 2
Misium's question on Reward Hacking
DiabloMinero's question on Are AI Risks like Nuclear Risks?
SbotTV's question on The Orthogonality Thesis
The Great of Beam's question on The Windfall Clause
Leonefoscolo's question on The Windfall Clause
Virzen Virzen's question on Avoiding Negative Side Effects
Albert Perrien's question on Maximizers and Satisficers
Sdtok's question on Avoiding Positive Side Effects
Tobias Görgen's question on Maximizers and Satisficers
Milorad Menjic's question on Real Inner Misalignment
Polemikful's question on Pascal's Mugging
Morgan Rogers's question on Real Inner Misalignment
Mothuzad's question on Real Inner Misalignment
MsJaye0001's question on Real Inner Misalignment
Nando Alves's question on Real Inner Misalignment
Naomi Harding's question on Real Inner Misalignment
Nikolay Tonev's question on Real Inner Misalignment
Noah McCann's question on Real Inner Misalignment
PsychoticusRex's question on Real Inner Misalignment
Pyrohawk's question on Real Inner Misalignment
Rhannmah's question on Real Inner Misalignment
Steven13131123's question on Real Inner Misalignment
Ten Rings's question on Real Inner Misalignment
Themrus's question on Real Inner Misalignment
Tim Peterson's question on Real Inner Misalignment
Vladislav Kalmyikov's question on Real Inner Misalignment
Vulkanodox's question on Real Inner Misalignment
Wren Charmratchet's question on Real Inner Misalignment
dusparr's question on Real Inner Misalignment
jacejunk's question on Real Inner Misalignment
kasuha's question on Real Inner Misalignment
maccollo's question on Real Inner Misalignment
misterjeckyll's question on Real Inner Misalignment
scarabbi's question on Real Inner Misalignment
svnhddbst's question on Real Inner Misalignment
theApeShow's question on Real Inner Misalignment
Георги Георгиев's question on Real Inner Misalignment
Marcus Antonius's question on Quantilizers
The Other Guy's question on The Orthogonality Thesis
Niels Peppelaar's question on 10 Reasons to Ignore AI Safety
Spoon Of Doom's question on Quantilizers
Ian's question on Maximizers and Satisficers
Kade99TV's question on Quantilizers
ХОРОШО's question on Mesa-Optimizers
Mvskoke Hunter's question on Mesa-Optimizers
Corman's question on Mesa-Optimizers
Alexander Harris's question on Mesa-Optimizers
Reitze Jansen's question on Mesa-Optimizers
Lapis Salamander's question on Intro to AI Safety
Yuval's question on Instrumental Convergence
TheWhiteWolf's question on Quantilizers
Jenaf 37's question on Quantilizers
Linus Behrbohm's question on The Orthogonality Thesis
Fanny10000's question on Mesa-Optimizers
AkantorJojo's question on Intro to AI Safety
A Commenter's question on What can AGI do?
Ansatz66's question on Intro to AI Safety
Snigwithasword's question on Quantilizers
Moleo's question on Quantilizers
Jonseah's question on Quantilizers
Ted Archer's question on Maximizers and Satisficers
Piñata Oblongata's question on Mesa-Optimizers
Deutschebahn's question on Mesa-Optimizers 2
AdibasWakfu's question on Quantilizers
M's question on Intro to AI Safety
Jon Bray's question on Quantilizers
matbmp's question on Intro to AI Safety
Mera Flynn's question on The Windfall Clause
Paper Benni's question on Maximizers and Satisficers
Phil guer's question on Instrumental Convergence
NINJA NAJM's question on Avoiding Negative Side Effects
Marshall White's question on Quantilizers
Musthegreat 94's question on Where do we go now
Poketopa1234's question on Mesa-Optimizers
Ryan Paton's question on Intro to AI Safety
Nathan Kouvalis's question on Quantilizers
Wylliam Judd's question on AI Safety Gridworlds 2
No Google, I don't want to use my real name.'s question on Mesa-Optimizers
drekpaprika's question on Intro to AI Safety
Robert Tuttle's question on Mesa-Optimizers 2
X3 KJ's question on Mesa-Optimizers
Tomáš Růžička's question on The Orthogonality Thesis
Smrt fašizmu's question on Maximizers and Satisficers
Nick Hounsome's question on What Can We Do About Reward Hacking?
Rich Traube's question on WNJ: Think of AGI like a Corporation?
Loz Shamler's question on Mesa-Optimizers
Michael Brown's question on Intro to AI Safety
Nerd Herd's question on Real Inner Misalignment
IkarusKK's question on Real Inner Misalignment
Wilco Verhoef's question on Quantilizers
Glitch gamer's question on Predicting AI
René's question on Reward Modeling
Recoded Zaphod's question on Quantilizers
Loligesgame's question on Quantilizers
Nicod3m0 Otimsis's question on Quantilizers
Peterbrehmj's question on The Orthogonality Thesis
Damien Asmodeus's question on Mesa-Optimizers
Lepus Lunaris's question on Mesa-Optimizers
maccollo's question on Real Inner Misalignment
Dow DayJing's question on Where do we go now
Springlumpy's question on Superintelligence Mod for Civilization V
Matthew Whiteside's question on Specification Gaming
Colopty's question on 10 Reasons to Ignore AI Safety
Curtis brown's question on Maximizers and Satisficers
EastBurningRed's question on Avoiding Positive Side Effects
Chris Canal's question on Yann LeCun on Facebook
John Rutledge's question on Pascal's Mugging
RUBBER BULLET's question on Safe Exploration
Alexey's question on Safe Exploration
Jon H's question on The Windfall Clause
Patricio Martínez's question on The Orthogonality Thesis
Yohann Last's question on Pascal's Mugging
Calen Crawford's question on The Orthogonality Thesis
Crubs's question on 10 Reasons to Ignore AI Safety
S's question on Pascal's Mugging
Maciek300's question on Experts on the Future of AI
Bob Ross's question on The Windfall Clause
The Stupidest Bitch's question on Specification Gaming
Cogwheel42's question on WNJ: Raise AI Like Kids?
Kaze Hikarinaka's question on Safe Exploration
David Valouch's question on The Orthogonality Thesis
Double Orts's question on Avoiding Negative Side Effects
Ballom29's question on Reward Modeling
Ataarono's question on WNJ: Raise AI Like Kids?
Mogul DaMongrel's question on Instrumental Convergence
Tamás Prileszky's question on Empowerment
Timothy Bell's question on Instrumental Convergence
Jürgen Hans's question on AI Safety Gridworlds
Jaskarvin makal's question on Pascal's Mugging
Jason Olshefsky's question on Steven Pinker on AI
Fadi Abu jiries's question on Steven Pinker on AI
Marvin Purtorab's question on Pascal's Mugging
Straxxxxxx's question on Maximizers and Satisficers
Alexey's question on 10 Reasons to Ignore AI Safety
Katie Byrne's question on Use of Utility Functions
Cafe liu's question on Reward Modeling
Floppsy bunny's question on Reward Modeling
Reimu and Cirno's question on Steven Pinker on AI
Renan Cunha's question on The Windfall Clause
Zachary Barbanell's question on Maximizers and Satisficers
Kevin George's question on Pascal's Mugging
SlimThrull's question on Maximizers and Satisficers
Andrey Medina's question on The Orthogonality Thesis
Luciano Fabio's question on Maximizers and Satisficers
Sean Clarke's question on The Windfall Clause
Moopsish's question on Maximizers and Satisficers
Benjamin Nelson's question on Pascal's Mugging
Logan graham's question on Maximizers and Satisficers
Joshua Lyons's question on WNJ: Raise AI Like Kids?
Adam Filinovich's question on The Orthogonality Thesis
SafetySkull's question on Instrumental Convergence
Paulo Jose Castro's question on Specification Gaming
Slikrx's question on Avoiding Positive Side Effects
Carl Lewis's question on Experts on the Future of AI
Matt T's question on What can AGI do?
Bill Bainbridge's question on Pascal's Mugging
Ryan W's question on Iterated Distillation and Amplification
NeatNit's question on Reward Hacking
Barry Mitchell's question on Maximizers and Satisficers
Intet Mane's question on Use of Utility Functions
Ethan Greenhaw's question on Safe Exploration
Harsh Deshpande's question on Pascal's Mugging
Saalthor Jrundelius's question on Specification Gaming
Gary Teano's question on Maximizers and Satisficers
Prabhjeet Singh Arora's question on The Orthogonality Thesis
Terence Alderson's question on Pascal's Mugging
JimPlaysGames's question on The Orthogonality Thesis
Abenezer Tassew's question on 10 Reasons to Ignore AI Safety
Géraud Henrion's question on Reward Modeling
Alexander Kennedy's question on Maximizers and Satisficers
TanKer BloodBrothers's question on Predicting AI
Manny Manito's question on What Can We Do About Reward Hacking?
Jlrinc's question on Steven Pinker on AI
Daniel Lancet's question on The Orthogonality Thesis
Kieron George's question on Maximizers and Satisficers
Newmaidumosa's question on Experts on the Future of AI
Fred Eisele's question on The Orthogonality Thesis
Nuada the silver hand's question on What Can We Do About Reward Hacking?
Luke Fabis's question on Maximizers and Satisficers
Max Mouse's question on Pascal's Mugging
Robot 1g5's question on Maximizers and Satisficers
Ryanofottawa's question on The Orthogonality Thesis
You tou's question on The Orthogonality Thesis
Remi Caron's question on Superintelligence Mod for Civilization V
Ian Edmonds's question on WNJ: Think of AGI like a Corporation?
Joshuawhere's question on Experts on the Future of AI
Sigma Reaver's question on Maximizers and Satisficers
Peter Smythe's question on The Orthogonality Thesis
L W's question on Where do we go now
Jesus Holland Christ's question on Instrumental Convergence
Cc nj's question on The Orthogonality Thesis
Mattew Lefty's question on Pascal's Mugging
Poseclop q's question on Steven Pinker on AI
Saxbend's question on Safe Exploration
Rex Kenny's question on Specification Gaming
What'a'nerd's question on Maximizers and Satisficers
Corey Copeland's question on Specification Gaming
Dan's question on Avoiding Negative Side Effects
Okay's question on The Orthogonality Thesis
Richard Collins's question on What Can We Do About Reward Hacking?
Thisnicklldo's question on The Orthogonality Thesis
Hammad Sheikh's question on Steven Pinker on AI
Matt V's question on The Orthogonality Thesis
TheNoodlyAppendage's question on WNJ: Think of AGI like a Corporation?
Vladhin's question on Reward Hacking Reloaded
ClockworkGearhead's question on Pascal's Mugging
Felix Merz's question on Empowerment
Neasiac's question on Maximizers and Satisficers
Salec's question on The Orthogonality Thesis
Tomaten salat's question on Avoiding Negative Side Effects
Hang da clown's question on What can AGI do?
Smiley P's question on 10 Reasons to Ignore AI Safety
Jop Mens's question on Iterated Distillation and Amplification
Aditya Shankarling's question on What Can We Do About Reward Hacking?
Firefox Metzger's question on Are AI Risks like Nuclear Risks?
David Brosnahan's question on Experts on the Future of AI
Elliot Prescott's question on WNJ: Raise AI Like Kids?
Naþan Ø's question on WNJ: Think of AGI like a Corporation?
Roman R's question on 10 Reasons to Ignore AI Safety
Leutrim D's question on WNJ: Think of AGI like a Corporation?
Robert nantze's question on Pascal's Mugging
14OF12's question on AI learns to Create Cat Pictures
Peter Smythe's question on The Orthogonality Thesis
Tim Haldane's question on Pascal's Mugging
Dixie Whiskey's question on Iterated Distillation and Amplification
Srelma's question on The Windfall Clause
AV3NG3R00's question on The Orthogonality Thesis
Chrysippus's question on Reward Modeling
Ciroluiro's question on WNJ: Raise AI Like Kids?
Veggiet2009's question on Avoiding Negative Side Effects
Reckless Roges's question on Avoiding Negative Side Effects
A CLOSED ECONOMY DOESN'T LEAD TO SUFFERING & DEATH?'s question on Pascal's Mugging
Andew Tarjanyi's question on WNJ: Think of AGI like a Corporation?
Wingedalpha's question on The Windfall Clause
Charly Krahmer's question on Pascal's Mugging
Lordious's question on WNJ: Raise AI Like Kids?
Дмитрий Лжетцов's question on The Windfall Clause
Sebastjans Slavitis's question on The Orthogonality Thesis
Vladhin's question on Use of Utility Functions
Harry Aristodemou's question on Steven Pinker on AI
4729 Zex's question on Reward Hacking
Dan Green's question on Reward Modeling
Krymson kyng's question on WNJ: Raise AI Like Kids?
Austin Glugla's question on Experts on the Future of AI
Jason Sargent's question on Instrumental Convergence
Dmitrii Sapelkin's question on The Orthogonality Thesis
Keanu coetzee's question on WNJ: Raise AI Like Kids?
Kalebomb's question on Pascal's Mugging
Adam Richard's question on The Orthogonality Thesis
Flake28's question on 10 Reasons to Ignore AI Safety
Nulono's question on Avoiding Negative Side Effects
Acerba's question on Pascal's Mugging
Lamb Of Demyelination's question on Where do we go now
Davesoft's question on Reward Modeling
THE Mithrandir09's question on The Orthogonality Thesis
Liveaboard's question on AI Safety Gridworlds 2
Wiktor Migaszewski's question on Empowerment
First last's question on Pascal's Mugging
Frank Anzalone's question on Reward Hacking Reloaded
Iwer Sonsch's question on 10 Reasons to Ignore AI Safety
Sluppie's question on The Orthogonality Thesis
Sergio's question on Scalable Supervision
Julian Danzer's question on Maximizers and Satisficers
VitruvianSasquatch's question on The Windfall Clause
MatrixStuff's question on Reward Modeling
Lexter Victorio's question on The Orthogonality Thesis
Alexander The Magnifcent's question on Specification Gaming
Linux Gaming in FullHD 60FPS's question on Pascal's Mugging
Jeffrooow's question on Avoiding Negative Side Effects
CybershamanX's question on Steven Pinker on AI
Jqerty's question on Steven Pinker on AI
Ronald Jensen's question on 10 Reasons to Ignore AI Safety
MyOther Soul's question on Instrumental Convergence
Garrett Howell's question on WNJ: Raise AI Like Kids?
Владимир Кузнецов Vovacat17's question on What Can We Do About Reward Hacking?
Josh mizzi's question on Avoiding Negative Side Effects
Brabham Freaman's question on What Can We Do About Reward Hacking?
Tarek Saati's question on WNJ: Raise AI Like Kids?
Melbournaut's question on The Orthogonality Thesis
YtterbiJum's question on Maximizers and Satisficers
9alexua9's question on AI Safety Gridworlds
Zacharie Chiron's question on Status Report
Upcycle Electronics's question on The Windfall Clause
TheDrachlyznardh's question on Reward Modeling
Bosstown Dynamics's question on Maximizers and Satisficers
CommandoDude's question on Pascal's Mugging
Rotem levi's question on 10 Reasons to Ignore AI Safety
Ryan Nowicki's question on The Windfall Clause
Noel Pickering's question on The Orthogonality Thesis
Definitelynot Zyra's question on The Orthogonality Thesis
BobC's question on Avoiding Negative Side Effects
Dark Knight's question on Where do we go now
Tunya's question on The Orthogonality Thesis
Nicolas Cato Strode's question on The Orthogonality Thesis
Androkguz's question on Iterated Distillation and Amplification
Mike Ross's question on Channel Introduction
UsenameTakenWasTaken's question on WNJ: Think of AGI like a Corporation?
Илья Шаров's question on The Orthogonality Thesis
Boblymon's question on The Orthogonality Thesis
Unnamed channel's question on Iterated Distillation and Amplification
Paul A's question on 10 Reasons to Ignore AI Safety
Remilia Scarlet's question on The Orthogonality Thesis
Cory Mck's question on Instrumental Convergence
Woah Dude's question on WNJ: Raise AI Like Kids?
Donald Hobson's question on Empowerment
Grzegorz Kowalik's question on Maximizers and Satisficers
Thordan Ssoa's question on Maximizers and Satisficers
Craftedlavaistrue's question on 10 Reasons to Ignore AI Safety
Steven Victor Neiman's question on WNJ: Raise AI Like Kids?
Ionescu Emi-Marian's question on WNJ: Raise AI Like Kids?
Bloginton Blakley's question on Experts on the Future of AI
Niels Kloppenburg's question on Are AI Risks like Nuclear Risks?
400cc MIRUKU's question on Reward Hacking
Rat Utoplan's question on 10 Reasons to Ignore AI Safety
Nnotm's question on Are AI Risks like Nuclear Risks?
Joshua Martin's question on WNJ: Think of AGI like a Corporation?
Ividboy's question on Avoiding Negative Side Effects
Almost, but not entirely, Unreasonable's question on Safe Exploration
Marscrasher's question on Steven Pinker on AI
Nurali Medew's question on Iterated Distillation and Amplification
David Brosnahan's question on Scalable Supervision
TheJaredtheJaredlong's question on WNJ: Raise AI Like Kids?
J M's question on Killer Robot Arms Race
Marine3D's question on 10 Reasons to Ignore AI Safety
Bp56789's question on Maximizers and Satisficers
Joshua Weihe's question on The Orthogonality Thesis
Comebackata2's question on The Orthogonality Thesis
Fat Basterd's question on 10 Reasons to Ignore AI Safety
ElCapitanoBeige's question on Specification Gaming
Pouty MacPotatohead's question on What Can We Do About Reward Hacking?
I's question on Use of Utility Functions
Kmden Rt's question on Instrumental Convergence
Trius's question on The Orthogonality Thesis
Mogul DaMongrel's question on WNJ: Raise AI Like Kids?
Superluminal098's question on The Orthogonality Thesis
Joe C's question on Specification Gaming
RobertsMrtn's question on Reward Hacking
Dave Jacob's question on WNJ: Raise AI Like Kids?
Spacedoohicky's question on WNJ: Raise AI Like Kids?
Robert Glass's question on Instrumental Convergence
Nillie's question on Killer Robot Arms Race
2ndviolin's question on Pascal's Mugging
Joel Cresswell's question on WNJ: Raise AI Like Kids?
Rnbpl's question on Pascal's Mugging
Dominik Tabisz's question on WNJ: Raise AI Like Kids?
Sharad Richardet's question on Reward Hacking
Jack's question on Steven Pinker on AI
Main A's question on Instrumental Convergence
Fejfo's games's question on Scalable Supervision
Sumner Stuart's question on The Orthogonality Thesis
Gertrude Toucheatonq's question on The Orthogonality Thesis
Ben Allison's question on Pascal's Mugging
Just Joey's question on 10 Reasons to Ignore AI Safety
Sean Kelly's question on Instrumental Convergence
Dave Jacob's question on 10 Reasons to Ignore AI Safety
Solrex the Sun King's question on Reward Modeling
Oliver B.'s question on Instrumental Convergence
Finnley Connellan's question on Reward Hacking Reloaded
Martin Verrisin's question on 10 Reasons to Ignore AI Safety
THE BIG BLACK GUN's question on WNJ: Raise AI Like Kids?
Daniel Kilby's question on The Orthogonality Thesis
Eluwien Halla's question on Where do we go now
Chronokun's question on 10 Reasons to Ignore AI Safety
Diphyllum's question on Maximizers and Satisficers
Bob Salita's question on WNJ: Think of AGI like a Corporation?
Haiku Shogi's question on The Orthogonality Thesis
Breaneainn's question on Maximizers and Satisficers
Legotechnic27's question on Steven Pinker on AI
LuciD's question on Steven Pinker on AI
Chaincat33's question on Pascal's Mugging
Jan Samohýl's question on Iterated Distillation and Amplification
Riaan Schoeman's question on Pascal's Mugging
Arthur Guerra's question on Empowerment
Alexito's World's question on Reward Hacking
Griest's question on Reward Modeling
Iceman5613's question on Instrumental Convergence
The Great Steve's question on Experts on the Future of AI
Gamesaucer's question on AI Safety Gridworlds
Double Dragon's question on Safe Exploration
Greniza *'s question on Maximizers and Satisficers
Florian Matel's question on Maximizers and Satisficers
The Happy Greek's question on Reward Modeling
James Williston's question on The Windfall Clause
Racon Vid's question on Status Report
FirstRisingSouI's question on Safe Exploration
Bruh dude's question on The Orthogonality Thesis
Valentin Poussou's question on The Windfall Clause
Roul Duke's question on Reward Modeling
Philipp T's question on The Orthogonality Thesis
Steven Victor Neiman's question on Reward Hacking
HairlessHare's question on Killer Robot Arms Race
Lucas Kook's question on What can AGI do?
Richard Siano's question on Steven Pinker on AI
Wajih bec's question on The Orthogonality Thesis
Jakub Mike's question on 10 Reasons to Ignore AI Safety
Guy Numbers's question on Maximizers and Satisficers
Sagacious03's question on 10 Reasons to Ignore AI Safety
MrSlowestD16's question on Reward Hacking Reloaded
Battery Exhausted's question on What can AGI do?
Daniel Parks's question on Killer Robot Arms Race
Tom Hanlon's question on AI Safety Gridworlds
Brotle1000's question on The Windfall Clause
E b's question on Superintelligence Mod for Civilization V
Edoardo Schnell's question on AI learns to Create Cat Pictures
Tertrih's question on Maximizers and Satisficers
Jason Olshefsky's question on Iterated Distillation and Amplification
Phillip J's question on WNJ: Raise AI Like Kids?
TimeMachine Bikes's question on Experts on the Future of AI
Jiggy Potamus's question on Reward Hacking
Mr. Peanut's question on Avoiding Negative Side Effects
Fisyr's question on Maximizers and Satisficers
Stribika's question on Avoiding Negative Side Effects
Wellington Boobs's question on WNJ: Raise AI Like Kids?
Kekscoreunlimited's question on Reward Hacking
Guest Informant's question on Reward Hacking Reloaded
Bryce Hunter's question on The Orthogonality Thesis
Owen Burns's question on WNJ: Raise AI Like Kids?
SJNaka101's question on Respectability
Pablothe's question on Killer Robot Arms Race
Anthony Chiu's question on Reward Modeling
Hazmat's question on The Windfall Clause
Daniel Cassell's question on Maximizers and Satisficers
Paul Steven Conyngham's question on Killer Robot Arms Race
Polares's question on Steven Pinker on AI
Oliver D7's question on 10 Reasons to Ignore AI Safety
Zebobez's question on Maximizers and Satisficers
Elliot Nolan's question on WNJ: Raise AI Like Kids?
Mark Nassenstein's question on Steven Pinker on AI
Morthim's question on The Orthogonality Thesis
Daniel Adelodun's question on Reward Modeling
Valts Sondors's question on The Windfall Clause
GAPIntoTheGame's question on Maximizers and Satisficers
Thomas Curtis's question on Where do we go now
Andew Tarjanyi's question on Iterated Distillation and Amplification
Rat boii's question on Specification Gaming
Shirley Munro's question on Pascal's Mugging
Jordan McMeow's question on Pascal's Mugging
A8lg6p's question on The Orthogonality Thesis
JumperCzech's question on Maximizers and Satisficers
SlackwareNVM's question on Avoiding Negative Side Effects
Jordan Anderson's question on Specification Gaming
Bøh Mand's question on Maximizers and Satisficers
Beeble2003's question on Specification Gaming
Håkon Egset Harnes's question on Maximizers and Satisficers
Nelson Mir's question on The Orthogonality Thesis
Biel Bestué de Luna's question on Steven Pinker on AI
Matrixar's music workshop's question on Maximizers and Satisficers
Paulo Bardes's question on AI Safety Gridworlds
Mzma's question on The Orthogonality Thesis
Connor Mosley's question on Avoiding Negative Side Effects
Story-Powered Sales's question on Scalable Supervision
SaintCergue's question on The Orthogonality Thesis
Midhunraj R's question on Reward Modeling
Ferenc Gazdag's question on WNJ: Raise AI Like Kids?
Firaro's question on Respectability