Robert Miles

Description: Videos about Artificial Intelligence Safety Research, for everyone.

AI is leaping forward right now; it's only a matter of time before we develop true Artificial General Intelligence, and there are many ways this could go badly wrong for us. Putting aside the science fiction, this channel is about AI Safety research - humanity's best attempt to foresee the problems AI might pose and work out ways to ensure that our AI developments are safe and beneficial.

Subscribers: 89,400
Platform: YouTube
Relevant videos: 46
Video name | Views | Published
"Don't Fear The Terminator" - Yann LeCun on Facebook | 133 | 14 October 2019 17:27:16
10 Reasons to Ignore AI Safety | 147,009 | 4 June 2020 15:28:37
9 Examples of Specification Gaming | 209,406 | 29 April 2020 16:41:20
A Response to Steven Pinker on AI | 146,354 | 31 March 2019 13:39:12
AI Safety Gridworlds | 67,344 | 25 May 2018 16:20:46
AI Safety at EAGlobal2017 Conference | 15,703 | 16 November 2017 19:21:00
AI That Doesn't Try Too Hard - Maximizers and Satisficers | 145,589 | 23 August 2019 15:05:26
AI learns to Create ̵K̵Z̵F̵ ̵V̵i̵d̵e̵o̵s̵ Cat Pictures: Papers in Two Minutes | 43,191 | 29 October 2017 11:49:20
Are AI Risks like Nuclear Risks? | 65,671 | 10 June 2017 15:22:46
Avoiding Negative Side Effects: Concrete Problems in AI Safety part 1 | 129,016 | 18 June 2017 11:02:16
Avoiding Positive Side Effects: Concrete Problems in AI Safety part 1.5 | 39,765 | 25 June 2017 09:29:27
Channel Introduction | 47,243 | 28 February 2017 20:14:23
Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think... | 31,924 | 23 May 2021 20:15:57
Empowerment: Concrete Problems in AI Safety part 2 | 52,820 | 9 July 2017 09:24:11
Experts' Predictions about the Future of AI | 63,797 | 31 March 2018 12:12:37
Friend or Foe? AI Safety Gridworlds extra bit | 29,363 | 24 June 2018 23:31:07
How to Keep Improving When You're Better Than Any Teacher - Iterated Distillation and Amplification | 115,472 | 11 March 2019 12:14:21
Intelligence and Stupidity: The Orthogonality Thesis | 269,272 | 11 January 2018 19:53:43
Intro to AI Safety, Remastered | 18,214 | 24 June 2021 15:25:40
Is AI Safety a Pascal's Mugging? | 223,270 | 16 May 2019 14:11:07
MAXIMUM OVERGEORGIA | 8,158 | 4 June 2016 00:04:12
Much Better Stampy Test Video | 70 | 15 March 2021 20:25:14
PC Build Video! | 724 | 28 April 2017 13:56:19
Predicting AI: RIP Prof. Hubert Dreyfus | 47,212 | 18 May 2017 12:25:34
Quantilizers: AI That Doesn't Try Too Hard | 44,016 | 13 December 2020 20:46:21
Reading and Commenting On Pinker's Article | 180 | 28 March 2019 18:10:03
Respectability | 51,938 | 27 May 2017 14:06:29
Reward Hacking Reloaded: Concrete Problems in AI Safety Part 3.5 | 68,080 | 29 August 2017 10:08:41
Reward Hacking: Concrete Problems in AI Safety Part 3 | 76,080 | 12 August 2017 19:24:08
Safe Exploration: Concrete Problems in AI Safety Part 6 | 65,707 | 21 September 2018 11:20:53
Scalable Supervision: Concrete Problems in AI Safety Part 5 | 37,378 | 29 November 2017 21:47:29
Sharing the Benefits of AI: The Windfall Clause | 59,466 | 6 July 2020 16:53:53
Status Report | 15,053 | 18 March 2017 11:40:43
Superintelligence Mod for Civilization V | 57,488 | 13 February 2018 17:17:58
The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment | 73,088 | 16 February 2021 18:37:00
The other "Killer Robot Arms Race" Elon Musk should worry about | 78,965 | 22 August 2017 11:19:33
Training AI Without Writing A Reward Function, with Reward Modelling | 154,430 | 13 December 2019 16:39:11
Untitled | 25,976 | 27 August 2017 22:12:27
We Were Right! Real Inner Misalignment | 42,844 | 10 October 2021
What Can We Do About Reward Hacking?: Concrete Problems in AI Safety Part 4 | 82,269 | 24 September 2017 12:09:54
What can AGI do? I/O and Speed | 82,772 | 17 October 2017 10:20:35
What's the Use of Utility Functions? | 50,424 | 27 April 2017 19:35:30
Where do we go now? | 62,011 | 31 March 2017 20:16:27
Why Not Just: Raise AI Like Kids? | 134,353 | 22 July 2017 13:58:34
Why Not Just: Think of AGI Like a Corporation? | 113,207 | 23 December 2018 20:01:39
Why Would AI Want to do Bad Things? Instrumental Convergence | 157,051 | 24 March 2018 19:51:39