Recent changes

From Stampy's Wiki

Track the most recent changes to the wiki on this page.

Show new changes starting from 19:20, 2 December 2021

List of abbreviations:
N: This edit created a new page (also see the list of new pages)
m: This is a minor edit
b: This edit was performed by a bot
(±123): The page size changed by this number of bytes

2 December 2021

N    15:22  Antryg Revok's question on Steven Pinker on AI diffhist +3,438 Stampy talk contribs Created page with "{{Question |question=Ah! The fundamental reason that the pessimism is justified is: 1. human nature has not changed since the old-testament, AND 2. the cyclical nature of hum..."
 m   00:35  Alternate phrasings diffhist -3 756254556811165756 talk contribs

1 December 2021

N    18:51  What are the style guidelines for writing for Stampy? 3 changes history +210 [756254556811165756 (3×)]
 m   18:51 (cur | prev) +1 756254556811165756 talk contribs Marked question as canonical
 m   18:51 (cur | prev) +88 756254556811165756 talk contribs Set the canonical answer of ‘What are the style guidelines for writing for Stampy?’ to ‘Plex's Answer to What are the style guidelines for writing for Stampy?’.
N    18:46 (cur | prev) +121 756254556811165756 talk contribs Created page with "{{Question |tags=Stampy |asked=No |canonical=No |forrob=No |notquestion=No |outofscope=No |asker=plex |date=2021/12/01 }}"
N    18:50  Answer to What are the style guidelines for writing for Stampy? diffhist +573 756254556811165756 talk contribs Created page with "{{Answer |answer=Try to avoid directly referencing the wording of the question in the answer, in order to make the answer more robust to alternate phrasings. For example, that..."
 m   18:50  Form:Answer 3 changes history -43 [756254556811165756 (3×)]
 m   18:50 (cur | prev) -11 756254556811165756 talk contribs
 m   16:33 (cur | prev) -1 756254556811165756 talk contribs
 m   16:33 (cur | prev) -31 756254556811165756 talk contribs
     17:19  Can you give an AI a goal of “minimally impact the world”? diffhist +140 Robertskmiles talk contribs
     17:13  Can we teach a superintelligence a moral code with machine learning? diffhist +73 Robertskmiles talk contribs
     17:12  Can we specify a code of rules that the AI has to follow? diffhist +69 Robertskmiles talk contribs
     17:10  Can we program the superintelligence to maximize human pleasure or desire satisfaction? diffhist +147 Robertskmiles talk contribs
     17:09  Aren’t there some pretty easy ways to eliminate these potential problems? diffhist +108 Robertskmiles talk contribs
     17:08  Are there types of advanced AI that would be safer than others? diffhist +89 Robertskmiles talk contribs
     17:07  Answer to Are Google, OpenAI etc. aware of the risk? diffhist +43 Robertskmiles talk contribs
     17:06  Are Google, OpenAI etc. aware of the risk? diffhist +123 Robertskmiles talk contribs
N    16:36  elriggs diffhist +37 756254556811165756 talk contribs Redirected page to elriggs Tag: New redirect
 m   16:35  Form:Question 2 changes history +4 [756254556811165756 (2×)]
 m   16:35 (cur | prev) -4 756254556811165756 talk contribs
 m   16:35 (cur | prev) +8 756254556811165756 talk contribs
N    16:24  elriggs diffhist +36 756254556811165756 talk contribs Created page with "{{userstats}} {{AnsweredQuestions}}"

30 November 2021

N    23:37  Robert Tuttle's question on Mesa-Optimizers 2 diffhist +566 Stampy talk contribs Created page with "{{Question |question=Surely in order to know that deception in the optimal goal maximising strategy, the agent would need prior knowledge of the conditions it would encounter..."
N    22:35  Anai Barangan's question on The Orthogonality Thesis diffhist +735 Stampy talk contribs Created page with "{{Question |question=Does this change anything in the world? Intelligence clashes with psychological conditions all the time. Could actually create a God theory about that one..."
 m   20:35  Template:Userstats diffhist +34 756254556811165756 talk contribs
N    20:12  Answer to Would AI alignment be hard with deep learning? 4 changes history +328 [484672482016493568 (4×)]
     20:12 (cur | prev) +46 484672482016493568 talk contribs
     20:11 (cur | prev) -85 484672482016493568 talk contribs
     20:11 (cur | prev) +96 484672482016493568 talk contribs
N    20:10 (cur | prev) +271 484672482016493568 talk contribs Created page with "{{Answer |answer=Ajeya Cotra has written an excellent article over at https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/ on this question. |a..."
N    20:11  Would AI alignment be hard with deep learning? 3 changes history +202 [484672482016493568; 756254556811165756 (2×)]
 m   20:11 (cur | prev) +87 756254556811165756 talk contribs Set the canonical answer of ‘Would AI alignment be hard with deep learning?’ to ‘Nico Hill2's Answer to Would AI alignment be hard with deep learning?’.
 m   20:08 (cur | prev) +1 756254556811165756 talk contribs Marked question as canonical
N    20:07 (cur | prev) +114 484672482016493568 talk contribs Created page with "{{Question |asked=No |canonical=No |forrob=No |notquestion=No |outofscope=No |asker=Nico Hill2 |date=2021/11/30 }}"
N    20:05  JRX diffhist +36 756254556811165756 talk contribs Created page with "{{userstats}} {{AnsweredQuestions}}"
 m   17:54  Davy Jones's question on The Orthogonality Thesis diffhist +12 756254556811165756 talk contribs Marked question as Meh
 m   17:54  Randy Carvalho's question on Pascal's Mugging 2 changes history +17 [756254556811165756 (2×)]
 m   17:54 (cur | prev) +16 756254556811165756 talk contribs Marked question as out of scope
 m   17:54 (cur | prev) +1 756254556811165756 talk contribs Marked as not a question
 m   17:53  Misium's question on Reward Hacking diffhist +12 756254556811165756 talk contribs Marked question as Approved
 m   17:53  Szarvasmarha's question on 10 Reasons to Ignore AI Safety diffhist +12 756254556811165756 talk contribs Marked question as Approved
 m   17:53  Gus Kelty's question on 10 Reasons to Ignore AI Safety 2 changes history +12 [756254556811165756 (2×)]
 m   17:53 (cur | prev) 0 756254556811165756 talk contribs Marked question as Excellent
 m   17:53 (cur | prev) +12 756254556811165756 talk contribs Marked question as Good
 m   17:52  Matthew Whiteside's question on Specification Gaming diffhist +1 756254556811165756 talk contribs Marked question as for Rob
 m   17:52  David Gustavsson's question on What can AGI do? diffhist +12 756254556811165756 talk contribs Marked question as Meh
 m   17:52  Tobias Görgen's question on Maximizers and Satisficers diffhist +12 756254556811165756 talk contribs Marked question as Approved
 m   17:52  Erik Engelhardt's question on Where do we go now diffhist +1 756254556811165756 talk contribs Marked as not a question
 m   17:52  Penny Lane's question on Real Inner Misalignment diffhist +2 756254556811165756 talk contribs Marked as not a question
 m   17:50  Prioritize YouTube questions diffhist -1 756254556811165756 talk contribs
 m   17:50  Questions from YouTube diffhist +33 756254556811165756 talk contribs
 m   17:49  Canonical questions diffhist +84 756254556811165756 talk contribs
     15:49  Answer to AIs aren’t as smart as rats, let alone humans. Isn’t it sort of early to be worrying about this kind of thing? diffhist +52 Robertskmiles talk contribs
     15:48  AIs aren’t as smart as rats, let alone humans. Isn’t it sort of early to be worrying about this kind of thing? diffhist +197 Robertskmiles talk contribs
 m   12:18  Template:AlternatePhrasingList diffhist +4 756254556811165756 talk contribs