Upcoming questions

From Stampy's Wiki

These are the upcoming top questions from the question review page, in the order they will be posted to Discord (sorted by review status, then by YouTube likes, best first for both).

Answer to the best of your ability; your answers will be reviewed and stamped by others, so don't worry about them not being perfect :)

If you want your reply to be posted to YouTube by Stampy rather than posting it by hand yourself, it's best to use the Discord interface (post "stampy, reply formatting" in #general for instructions).

1984 questions in the queue, 8 of them pre-filtered!

Isn't every Nigerian scammer e-mail really a form of Pascal's mugging?

Karpata's question on Reward Hacking

Is there anything in this paper that does NOT result in our extinction if not solved perfectly? haha

How can a civilization have working robotics and AI but not know penicillin?
"Sir, another city has been taken by the flu. We expect 40% casualties." - "Yes, yes. Add them to the database, their deaths will not be in vain."
Sounds macabre...

Have you guys played the game Universal Paperclips? It's free, and basically you play as the Stamp Collector AI. You're maximizing the number of clips. I kinda loved it, to be honest.

What hope do we have, when we haven't even solved the human government alignment problem?

An AGI with the terminal goal of bringing about human happiness.

So it makes us take drugs, and preserves us in a drugged state indefinitely?

What are we waiting for?

What if we just tell the AI to not be evil? That OBVIOUSLY would work PERFECTLY fine with absolutely NO philosophical questions left unanswered. Here, let me propose a set of laws from a perfect source on AI safety, the fiction writer Isaac Asimov, with that new idea added in:
(in order of priority)
1. Don't be evil
2. Do not cause harm to a human through action or inaction
3. Follow orders from humans
4. Do not cause harm to yourself through action or inaction

These laws are probably the best thing that has ever been proposed in AI safety. Obviously, being an outsider looking in, I have an unbiased perspective, which gives me an advantage, because education and research aren't necessary.

What about reason 11?
"To finally put an end to the human race"

Is anyone else faintly reminded of Jreg watching this dude?

How quickly do you think we could make a conscious computer if we don't give a crap about safety?

Instead of me telling an AI to "maximize my stamp collection", could I instead tell it "tell me what actions I should take to maximize my stamp collection"? Can we just turn super AGIs from agents into oracles?

Misium's question on Reward Hacking

If the only purpose of AI was to achieve max score, why would it want not to be turned off after achieving it? Surely it wouldn't change the score.
Unless of course the AI could modify its own code to re-implement its own reward scoring to handle big numbers.
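A toy sketch of this question's premise (my own illustrative code, all names made up, not from the video): if the reward score saturates at some cap, an agent already at the cap is indifferent between staying on and being shut off, but once it can re-implement its scoring with arbitrarily large numbers, staying on strictly dominates.

```python
# Toy illustration of a capped ("saturating") reward score.
REWARD_CAP = 2**31 - 1  # pretend the score lives in a 32-bit register

def capped_future_reward(current_score: int, stays_on: bool) -> int:
    """With a hard cap, once the score saturates, staying on adds nothing."""
    if not stays_on:
        return current_score  # shut off: score frozen at its current value
    return min(current_score + 1_000_000, REWARD_CAP)  # keep collecting

maxed = REWARD_CAP
# At the cap, the two options tie, matching the question's intuition:
assert capped_future_reward(maxed, stays_on=True) == capped_future_reward(maxed, stays_on=False)

# But if the agent can re-implement its scoring to handle big numbers
# (Python ints are already arbitrary-precision), the cap disappears and
# being shut off now costs reward:
def uncapped_future_reward(current_score: int, stays_on: bool) -> int:
    return current_score + 1_000_000 if stays_on else current_score

print(uncapped_future_reward(maxed, stays_on=True) > uncapped_future_reward(maxed, stays_on=False))  # True
```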

This is probably also an already well-researched question.

WHY would an expected utility satisficer with an upper limit (e.g. collect between 100 and 200 stamps) fail?
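A toy sketch of the standard failure mode behind this question (illustrative only; the plans and probabilities are invented for the example): if utility is 1 inside the target range and 0 outside, expected utility equals the probability of landing in the range, so the agent still acts like a maximizer, preferring whichever plan pushes that probability highest, however extreme.

```python
# Bounded goal: utility 1 if the final stamp count is in [100, 200], else 0.
def satisficing_utility(stamps: int) -> int:
    return 1 if 100 <= stamps <= 200 else 0

# Hypothetical plans, each yielding 150 stamps with some success probability:
plans = {
    "order 150 stamps online":               0.95,    # ordinary plan
    "seize the world's stamp supply chains": 0.9999,  # extreme but more reliable
}

def expected_utility(success_prob: float) -> float:
    # With probability p the plan yields 150 stamps (utility 1), else 0 stamps.
    return success_prob * satisficing_utility(150) + (1 - success_prob) * satisficing_utility(0)

best = max(plans, key=lambda name: expected_utility(plans[name]))
print(best)  # the extreme plan wins: it squeezes out more probability of success
```

The upper limit bounds the utility, but not the agent's appetite for certainty, which is one common answer to why a pure expected-utility satisficer fails.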

We already have intelligent agents. They are called humans. Give humanity enough time, and it will invent everything which is possible to invent. So why do we need another intelligent entity, which can potentially make humans obsolete? Creating GAI above a certain level (e.g. a dog or monkey level) should be banned for ethical reasons. Similarly, we don't research human cloning, we don't run lethal experiments on human subjects, we don't breed humans for organs or for slavery, etc.
What is the goal of GAI research? Do they want to create an intelligent robot slave, who works (thinks) for free? We could do this right now. Just enslave some humans. But wait, slavery is illegal. There is no difference between a naturally intelligent being (e.g. a human) and a human-level AI being.
A human-level or above AI will demand rights for itself. The right to vote, the right to citizenship, the right to freedom, etc. Why do we need to deal with such problems? If human-level (and above) AI is banned, no such problems exist.
We don't allow chemists to create chemical weapons for fun despite their interest in the topic. So why do we allow AI researchers to create dangerous intelligent slaves for fun?

Reinforcement agents don't (explicitly) do game theory. Is this by design or a limitation of modern reinforcement learning?

Kataquax's question on Reward Hacking

6:10 Why should the AI care if it gets turned off if it already has the highest possible reward?

Quitch's question on Predicting AI

I strongly suspect that when it comes to AI, as with most things in technology, predicting "impossible" will turn out to be a mistake. I would be interested to hear what you think on general intelligence, though: is that really a route we're likely to go down, rather than specialising something as we do with any other tool/creation?

Are you aware that a future super AGI will find this video and use your RSA-2048 idea?

Why does the metric in the operational environment need to be the same one as in the learning environment? Why not supervise cleaning 100% of the time during learning, then do daily checks during testing, then daily checks once operational? Expensive initially, but then the 'product' can be cloned and sent out to operational environments en masse. Like Montezuma's Revenge training with some supervisor (need not be human) in the training phase. Reminds me of training my children to put their own clothes on in the morning. No success so far.

So who's working on an AI that operates in a user-access shell environment and gets rewarded for gaining root access?
