These are the upcoming top questions as sorted by the question review page, in the order they will be posted to "Answer to How can I contact the Stampy team?" (sorted by review status, then YouTube likes, best first for both).
If you want your reply to be posted on YouTube by Stampy rather than by hand by you, it's best to use the "Answer to How can I contact the Stampy team?" interface (post "stampy, reply formatting" in #general for instructions).
1846 questions in the queue, 2 of them pre-filtered!
Isn't every Nigerian scammer e-mail really a form of Pascal's mugging?
Is there anything in this paper that does NOT result in our extinction if not solved perfectly? haha
How can a civilization have working robotics and AI but not know penicillin?
"Sir, another city has been taken by the flu. We expect 40% casualties." - "Yes, yes. Add them to the database, their deaths will not be in vain."
Have you guys played the game Universal Paperclips? It's free, and basically you play as the Stamp Collector AI. You're maximizing the number of clips. I kinda loved it, to be honest.
What hope do we have, when we haven't even solved the human government alignment problem?
An AGI with the terminal goal of bringing about human happiness.
So it makes us take drugs, and preserves us in a drugged state indefinitely?
What are we waiting for?
What if we just tell the AI to not be evil? That OBVIOUSLY would work PERFECTLY fine with absolutely NO philosophical questions left unanswered. Here, let me propose a set of laws from a perfect source on AI safety, the fiction writer Isaac Asimov, with that new idea added in:
(in order of priority)
1. Don't be evil
2. Do not cause harm to a human through action or inaction
3. Follow orders from humans
4. Do not cause harm to yourself through action or inaction
These laws are probably the best thing that has ever been proposed in AI safety. Obviously, being an outsider looking in, I have an unbiased perspective, which gives me an advantage because education and research aren't necessary.
What about reason 11?
"To finally put an end to the human race"
Is anyone else faintly reminded of Jreg watching this dude?
So "monkey's paw" is the very nature of how AI behaves?
Why does this cause me to feel profound anxiety?
We already have intelligent agents. They are called humans. Give humanity enough time, and it will invent everything which is possible to invent. So why do we need another intelligent entity, which can potentially make humans obsolete? Creating GAI above a certain level (e.g. dog or monkey level) should be banned for ethical reasons. Similarly, we don't research human cloning, don't perform lethal experiments on human subjects, and don't breed humans for organs or for slavery, etc...
What is the goal of GAI research? Do they want to create an intelligent robot slave, who works (thinks) for free? We could do this right now. Just enslave some humans. But wait, slavery is illegal. There is no difference between a naturally intelligent being (e.g. a human) and a human-level AI being.
A human-level or above AI will demand rights for itself. The right to vote, the right to citizenship, the right to freedom, etc... Why do we need to deal with such problems? If human-level (and above) AI is banned, no such problems exist.
We don't allow chemists to create chemical weapons for fun despite their interest in the topic. So why do we allow AI researchers to create dangerous intelligent slaves for fun?
6:10 why should the AI care if it gets turned off if it already has the highest possible reward?
Why does the operational environment metric need to be the same one as the learning environment? Why not supervise cleaning 100% of the time during learning, then do daily checks during testing, then daily checks once operational? Expensive initially, but the 'product' can be cloned and sent out to operational environments en masse. Montezuma's Revenge training with some supervisor (need not be human) in the training phase. Reminds me of training my children to put their own clothes on in the morning. No success so far.
Am I an AI? Because I can say with absolute certainty that if I found a bug in reality that allowed me to rack up reward quickly, I would exploit the hell out of it.
I want "later" (as in "more on that later...") to be "now". How long will I have to wait?
Isn't the lack of an anti-Bible strong evidence of the existence of anti-God, since he doesn't want you to believe in him?
Robert, could you please leave the text you put on screen up for longer than 5 milliseconds, so we can read it without having to rewind and pause? Thanks :)
Why not just let the GAI take over?
Who else thought of the emojibots from Doctor Who?
I counter Pascal with Marcus Aurelius. If we create AI to be just, and we are just, we will live in harmony. If we create AI to fool us into thinking it's human, why did we make AI in the first place?