Recent changes

From Stampy's Wiki

Track the most recent changes to the wiki on this page.

List of abbreviations:
N: This edit created a new page (also see the list of new pages)
m: This is a minor edit
b: This edit was performed by a bot
(±123): The page size changed by this number of bytes

20 October 2021

N    02:34  David G. Horsman's question on Respectability diffhist +443 Stampy talk contribs Created page with "{{Question |question=No way Miles. You are the go to guy for coherent, concise explanation of the core issues. Did these other folks make an AGI? No. |notquestion=No |canonica..."
N    01:53  Jonathan Tanner's question on Real Inner Misalignment diffhist +973 Stampy talk contribs Created page with "{{Question |question=I may be wrong, but there seem to be 3 layers (chance, skill, strategy) to this game that are being trained simultaneously: procedural generation (chance)..."

19 October 2021

N    23:08  Siris The Dragon's question on Real Inner Misalignment diffhist +774 Stampy talk contribs Created page with "{{Question |question=Researcher: "Ok, so what do you want?" AI: "The coin at the end!" Researcher: "Ok, good!" *Puts the coin at the beginning.* "Ok, now go!" AI: *Still wa..."
N    22:48  Oliver Bergau's question on Quantilizers diffhist +936 Stampy talk contribs Created page with "{{Question |question=In the preevious video a concept was proposed which would give 100 utility for exactly 100 stamps and 0 for anything else which turned the world into cou..."
N    21:05  Jason Yesmarc's question on Real Inner Misalignment diffhist +898 Stampy talk contribs Created page with "{{Question |question=Now I may just be a simple software engineer, but if I had to take a guess, I'd say that the agent only values coins that are to the right of its characte..."
N    17:00  bpansky's question on Real Inner Misalignment diffhist +567 Stampy talk contribs Created page with "{{Question |question=i look at this and i immediately think of all of the ways it can be applied immediately, today, to numerous real world (human) phenomena that are highly r..."
     16:35  Dan Wylie-Sears's question on The Orthogonality Thesis diffhist +11 Stampy talk contribs
N    13:17  Answer to Can we add friendliness to any artificial intelligence design? diffhist +1,119 Plex talk contribs Created page with "{{Answer |answerto=Can we add friendliness to any artificial intelligence design? |date=2021-10-19 |answer=Many AI designs that would generate an intelligence explosion would..."
N    13:16  Can we add friendliness to any artificial intelligence design?‎‎ 2 changes history +172 [Plex‎ (2×)]
     13:16 (cur | prev) +18 Plex talk contribs
N    13:15 (cur | prev) +154 Plex talk contribs Created page with "{{Question |date=2021-10-19 |canonical=No |asker=Luke Muehlhauser |origin=MIRI's Intelligence Explosion FAQ |commeturl=https://intelligence.org/ie-faq/ }}"
N    13:15  Answer to What is Coherent Extrapolated Volition? diffhist +2,994 Plex talk contribs Created page with "{{Answer |answerto=What is Coherent Extrapolated Volition? |date=2021-10-19 |answer=Eliezer Yudkowsky has [https://intelligence.org/files/CEV.pdf proposed] Coherent Extrapolat..."
N    13:10  coherent extrapolated volition diffhist +207 Plex talk contribs Created page with "{{Tag |related=What is Coherent Extrapolated Volition? |AlignmentForum=coherent-extrapolated-volition |LessWrong=Yes |Arbital=cev |Wikipedia=Friendly_artificial_intelligence#C..."
 m   13:09  Form:Tag‎‎ 2 changes history +31 [Plex‎ (2×)]
 m   13:09 (cur | prev) +21 Plex talk contribs
 m   13:07 (cur | prev) +10 Plex talk contribs
N    13:02  What is Coherent Extrapolated Volition?‎‎ 3 changes history +217 [Plex‎ (3×)]
     13:02 (cur | prev) +50 Plex talk contribs
 m   13:02 (cur | prev) +12 Plex talk contribs Marked question as Good
N    13:00 (cur | prev) +155 Plex talk contribs Created page with "{{Question |date=2021-10-19 |canonical=Yes |asker=Luke Muehlhauser |origin=MIRI's Intelligence Explosion FAQ |commeturl=https://intelligence.org/ie-faq/ }}"
N    12:56  Answer to Can we teach a superintelligence a moral code with machine learning? diffhist +3,141 Plex talk contribs Created page with "{{Answer |answerto=Can we teach a superintelligence a moral code with machine learning? |date=2021-10-19 |answer=Some have proposed https://ieeexplore.ieee.org/document/16679..."
N    08:27  Sapphic Hivemind's question on The Windfall Clause‎‎ 3 changes history +1,724 [Stampy‎ (3×)]
     08:27 (cur | prev) +1 Stampy talk contribs
     04:34 (cur | prev) +11 Stampy talk contribs
N    02:19 (cur | prev) +1,712 Stampy talk contribs Created page with "{{Question |question=1. Why would I trust companies more than I trust democratic governments with distributing that money fairly? A lot of charities and nonprofits are funnels..."

18 October 2021

N    22:12  Dan Wylie-Sears's question on The Orthogonality Thesis diffhist +2,451 Stampy talk contribs Created page with "{{Question |question=You're (deliberately?) eliding the difference between general intelligence and task-specific intelligence. Even I can probably write a program that's a w..."
N    21:52  Monk Doppelschwanz Siamese's question on 10 Reasons to Ignore AI Safety diffhist +574 Stampy talk contribs Created page with "{{Question |question=According to game theory we need a mechanism that destroys the AI without humans being able to stop the process. Same happend in the cold war basically. B..."
N    21:31  Monk Doppelschwanz Siamese's question on Mesa-Optimizers diffhist +481 Stampy talk contribs Created page with "{{Question |question=Problem is: The AI thinks like a human. We dont understand Humans. Why would we trust an AI? |notquestion=No |canonical=No |forrob=No |asked=No |asker=Mon..."
N    18:48  Ryan Richters's question on Reward Hacking Reloaded diffhist +559 Stampy talk contribs Created page with "{{Question |question=The bit about altering it's reward function seems to run counter to the whole idea of goal preservation though. How do you reconcile this? Is it just two..."
     16:33  ZappelFly's question on Killer Robot Arms Race diffhist +11 Stampy talk contribs
N    16:03  links to include on Talk:How can I get hired by an organization working on AI alignment?‎‎ 2 changes history +186 [Plex‎ (2×)]
     16:03 +1 Plex talk contribs
N    16:03 +185 Plex talk contribs
N    16:02  How can I get hired by an organization working on AI alignment?‎‎ 2 changes history +214 [Plex‎ (2×)]
 m   16:02 (cur | prev) +12 Plex talk contribs Marked question as Excellent
N    16:02 (cur | prev) +202 Plex talk contribs Created page with "{{Question |tags=Careers, Organizations |related=I want to work on AI alignment. How can I get funding? |asked=No |canonical=Yes |forrob=No |notquestion=No |outofscope=No |ask..."
 m   16:01  Answer questions diffhist +42 Plex talk contribs
N    15:55  Answer to I want to work on AI alignment. How can I get funding?‎‎ 6 changes history +1,801 [Plex‎ (6×)]
 m   15:55 (cur | prev) -7 Plex talk contribs
 m   15:54 (cur | prev) -11 Plex talk contribs
 m   15:53 (cur | prev) +75 Plex talk contribs
     15:45 (cur | prev) +13 Plex talk contribs
 m   15:45 (cur | prev) 0 Plex talk contribs
N    15:36 (cur | prev) +1,731 Plex talk contribs Created page with "{{Answer |answer=The organizations which most regularly give grants to individuals working towards AI alignment are the [https://funds.effectivealtruism.org/funds/far-future L..."
N    15:51  ai safety support diffhist +22 Plex talk contribs Created page with "{{Tag |LessWrong=No }}"
 m   15:36  I want to work on AI alignment. How can I get funding? diffhist +89 Plex talk contribs Set the canonical answer of ‘I want to work on AI alignment. How can I get funding?’ to ‘Plex's Answer to I want to work on AI alignment. How can I get funding? ’.
N    13:57  Can we teach a superintelligence a moral code with machine learning? diffhist +194 Plex talk contribs Created page with "{{Question |date=2021-10-18 |tags=value learning, Machine learning |canonical=Yes |asker=Luke Muehlhauser |origin=MIRI's Intelligence Explosion FAQ |commeturl=https://intellig..."
N    13:56  Answer to Can we program the superintelligence to maximize human pleasure or desire satisfaction?‎‎ 2 changes history +2,352 [Plex‎ (2×)]
 m   13:56 (cur | prev) +4 Plex talk contribs
N    13:55 (cur | prev) +2,348 Plex talk contribs Created page with "{{Answer |answerto=Can we program the superintelligence to maximize human pleasure or desire satisfaction? |date=2021-10-18 |answer=Let’s consider the likely consequences of..."
N    13:51  Can we program the superintelligence to maximize human pleasure or desire satisfaction? diffhist +174 Plex talk contribs Created page with "{{Question |date=2021-10-18 |tags=Why not just |canonical=Yes |asker=Luke Muehlhauser |origin=MIRI's Intelligence Explosion FAQ |commeturl=https://intelligence.org/ie-faq/ }}"
N    13:51  Answer to Can’t we just program the superintelligence not to harm us? diffhist +3,282 Plex talk contribs Created page with "{{Answer |answerto=Can’t we just program the superintelligence not to harm us? |date=2021-10-18 |answer=Science fiction author Isaac Asimov told stories about robots program..."
N    13:49  Can’t we just program the superintelligence not to harm us? diffhist +193 Plex talk contribs Created page with "{{Question |date=2021-10-18 |tags=Superintelligence, Why not just |canonical=Yes |asker=Luke Muehlhauser |origin=MIRI's Intelligence Explosion FAQ |commeturl=https://intellige..."
     13:48  Answer to Why can't we turn the computers off? diffhist +14 Plex talk contribs
     13:48  Why can't we turn the computers off? diffhist +19 Plex talk contribs
N    13:42  MIRI's Answer to Why can’t we just “put the AI in a box” so it can’t influence the outside world?‎‎ 2 changes history +1,430 [Plex‎ (2×)]
 m   13:42 (cur | prev) +100 Plex talk contribs
N    13:41 (cur | prev) +1,330 Plex talk contribs Created page with "{{Answer |answer=‘AI-boxing’ is a common suggestion: why not use a superintelligent machine as a kind of question-answering oracle, and never give it access to the interne..."