Unreviewed YouTube questions


Would it be possible to brute-force ideas? If an image is just pixels, it should be possible to get a computer to generate every possible combination of pixels in a given area. Maybe start with a small area, low resolution, and only black and white to test it. Then make an app that's like a game for people to search through the images and tag what they see or think it could be. Maybe even tie it to some kind of cryptocurrency to get more people involved. Somebody do this, lol; I've been having this idea for a while, but I'm too lazy to do it and I'm not even sure how to start.
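
For a sense of scale, here is a minimal sketch of what "every possible combination of pixels" would involve for a tiny black-and-white grid; the grid size and variable names are illustrative assumptions, not anything from the question or the video:

```python
# Enumerate every possible black-and-white image on a tiny grid.
# Illustrative sketch only: grid size and names are made up.
from itertools import product

WIDTH, HEIGHT = 4, 4          # a 4x4 binary grid
n_pixels = WIDTH * HEIGHT
total_images = 2 ** n_pixels  # 2^(w*h) combinations; 65,536 for 4x4

print(f"{total_images} possible {WIDTH}x{HEIGHT} black-and-white images")

# Each `bits` tuple is one complete image, read row by row.
for i, bits in enumerate(product((0, 1), repeat=n_pixels)):
    if i >= 3:                # show only the first few
        break
    rows = [bits[r * WIDTH:(r + 1) * WIDTH] for r in range(HEIGHT)]
    print(rows)

# Even a modest 16x16 binary image already gives 2**256 combinations,
# far more than any group of people could ever search through and tag.
```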

You know just as well as I do that the guy who collects stamps will not just buy some stamps, he will build The Stamp Collector, and you have just facilitated the end of all humanity :( On a more serious note, I would like to ask: do you have any insights on how this relates to how humans often feel a sense of emptiness after achieving all of their goals? I may be failing to explain it correctly, but there is this idea that humans always need a new goal to feel happy, right? Maybe I am completely off, but what I am asking is: yes, in an intelligent agent we can have simple or even really complex goals, but will it ever be able to mimic the way goals are present in humans, as something not so much meant to be achieved as fuel to make progress, kind of like a desire?

To put it simply, the smarter the machine, the harder it is to tell it what you want from it. If you create a machine smarter than yourself, how can you ensure it'll do what you want?

Hi, Robert.

If possible, can you make a video about Inverse Reinforcement Learning and/or other ways we can infer human values just from raw observations?

Have we considered simply abolishing private property so nobody gets to own the AI that inevitably takes over the world?

Why not have the system take into account the likely effort needed to collect stamps and set a penalty for wasted effort? That seems closer to what humans do.
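
One possible way to read that suggestion, as a rough sketch rather than anything stated in the question or the video, is an objective that subtracts a weighted cost for effort; the function and parameter names below are made up for illustration:

```python
# Rough sketch of an effort-penalised objective: value stamps collected,
# but subtract a weighted cost for the effort spent getting them.
# `lam` (the effort weight) and both argument names are illustrative.
def penalised_utility(stamps_collected: float, effort_cost: float, lam: float = 0.1) -> float:
    """Utility = stamps collected minus a weighted penalty for effort expended."""
    return stamps_collected - lam * effort_cost

# A cheap plan for 100 stamps scores higher than an extreme plan for 120 stamps:
print(penalised_utility(100, effort_cost=10))   # 99.0
print(penalised_utility(120, effort_cost=500))  # 70.0
```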

W.r.t. the final point: how would the mesa optimizer be aware that there is such a thing as deployment, and how long it would be deployed for? It seems like an oversight that this knowledge would be available.

Can we start working on those brain-calculator chips?

Instead of me telling an AI to "maximize my stamp collection", could I instead tell it "tell me what actions I should take to maximize my stamp collection"? Can we just turn super AGIs from agents into oracles?

Would it be possible for you to do a 'jokey' video on the basilisk?

We already have intelligent agents. They are called humans. Give humanity enough time, and it will invent everything which it is possible to invent. So why do we need another intelligent entity, which can potentially make humans obsolete? Creating AGI above a certain level (e.g. a dog or monkey level) should be banned for ethical reasons. Similarly, we don't research human cloning, we don't perform lethal experiments on human subjects, we don't breed humans for organs or for slavery, etc...
What is the goal of AGI research? Do they want to create an intelligent robot slave who works (thinks) for free? We could do this right now. Just enslave some humans. But wait, slavery is illegal. There is no difference between a naturally intelligent being (e.g. a human) and a human-level AI being.
A human-level or above AI will demand rights for itself. The right to vote, the right to citizenship, the right to freedom, etc... Why do we need to deal with such problems? If human-level (and above) AI is banned, no such problems exist.
We don't allow chemists to create chemical weapons for fun, despite their interest in the topic. So why do we allow AI researchers to create dangerous intelligent slaves for fun?

34:56 Maybe he meant "real world" more like "physical world" instead of "non-imaginary world"?
Edit: But yes, it would definitely have been possible to make that distinction clearer.

Why does the operational environment metric need to be the same one as the learning environment's? Why not supervise cleaning 100% of the time during learning, then do daily checks during testing, then daily checks once operational? Expensive initially, but the 'product' can be cloned and sent out to operational environments en masse. Like Montezuma's Revenge training with some supervisor (need not be human) in the training phase. Reminds me of training my children to put their own clothes on in the morning. No success so far.

What if you just used more layers?

In part due to your videos, I'm planning to focus on AI in my undergraduate studies (US). I'm returning to school for my final 1.5 years of study after a long break from university. Do you have any recommended reading to help guide/shape/maximize the utility of my studies? Ultimately (in part due to Yudkowsky) I am drawn to this exact field of study: AI safety. I hope that I can make a contribution.

Hopefully this wasn't answered in a previous video and I forgot or failed to understand it: What if we had an AGI that didn't actually execute any strategies itself but instead pitched them to human supervisors for manual review? It wouldn't generate progress as monumentally fast, and it would have to learn to explain its strategies to humans, but that seems like a fair trade-off to prevent an AIpocalypse.

What if it had a goal to find out its 'perfect' goal?

This is probably also an already well-researched idea.

Why would an expected utility satisficer with an upper limit (e.g. collect between 100 and 200 stamps) fail?
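
For readers unfamiliar with the term, one hedged way to formalise the setup this question describes (my reading, not the video's definitions): utility is 1 when the final stamp count lands in the target range and 0 otherwise, and plans are scored by expected utility:

```python
# One possible formalisation of "collect between 100 and 200 stamps" as a
# bounded utility, scored in expectation over uncertain outcomes.
# The bounds and example distributions are illustrative only.
def bounded_utility(stamps: int, low: int = 100, high: int = 200) -> float:
    return 1.0 if low <= stamps <= high else 0.0

def expected_utility(outcome_distribution: dict) -> float:
    """Maps possible stamp counts to probabilities; returns expected utility."""
    return sum(p * bounded_utility(n) for n, p in outcome_distribution.items())

print(expected_utility({150: 1.0}))                      # 1.0: reliably in range
print(expected_utility({90: 0.3, 150: 0.5, 250: 0.2}))   # 0.5: often misses the range
```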

Would an AGI even be capable of trusting? And why would it trust? And how?

Is the outro song from the Mikado?

If the only purpose of the AI was to achieve the max score, why would it not want to be turned off after achieving it? Surely being turned off wouldn't change the score.
Unless, of course, the AI could modify its own code to re-implement its own reward scoring to handle bigger numbers.

Great video, one of the best on this subject!
I wonder, how can the mesa objective become fixed faster than the base optimizer can make the agent learn the base objective? In the last example the AI agent is capable of understanding that it won't be subjected to gradient descent after the learning step, so it becomes deceptive on purpose, and yet it hasn't learned to achieve the simpler objective of going through the exit while it is being trained by the base optimizer?

So, the solution to our problem with machine learning is more machine learning, but now we've hit another problem with machine learning. Let me guess, the solution is more machine learning? This feels like it's going to get very recursive very fast.

I love the "Pause the video and take a second to think. What could go wrong?" parts in between. I do pause and think for a bit, and that really helps me to actively and critically think about the concepts you mention, instead of just passively absorbing them like with most educational YouTube videos (or lectures IRL, for that matter).

There is a usually overlooked aspect of evolution: consciousness. If that is really part of evolution, then AI will gain consciousness at some point. Isn't the evolution of machines comparable to natural evolution in that respect already? The first machines only had specific functions, later more complex functionality, even later programs, and now some form of intelligence. Kids and AIs both learn from us; what will happen when a super-smart machine with detailed memory gains consciousness at some point?

Reinforcement learning agents don't (explicitly) do game theory. Is this by design or a limitation of modern reinforcement learning?

Is that background at the end from that Important Videos meme video?

What if they just set up a shell company that didn’t sign the agreement?

Can we just argue the null hypothesis for the rest of our lives <3

Couldn't I just invent a similar system where a belief in god sends one to hell and being an atheist sends one to heaven? Equally unfalsifiable. Negates Pascal's wager, bringing the matter back to not believing making more sense.

Would you enable the subtitle creation option for me, please? I want to add Portuguese subtitles to your videos.

I would love to see a video comparing/contrasting the cybernetic ideas of Wiener, Ashby and von Neumann against how we currently envision AI. Is there a place for finite state machines that act due to structure instead of software? How would a structure-based utility function (an analog line follower, for example) behave differently from a processor-based one? Are there significant pros/cons to each approach?

So he's already made a mini death-ray machine and an electric battle-axe instrument... Are we really sure we want this aspiring mad scientist to do AI safety research for us? (Jokes aside, I think a mix of going through papers and having explanations of concrete problems like the stop button example would be good.)

I strongly suspect that when it comes to AI, like with most things in technology, predicting "impossible" will turn out to be a mistake. I would be interested to see what you think about general intelligence, though: is that really a route we're likely to go down, rather than specialising something as we do with any other tool/creation?

But what about the silicone rubber problems in AI safety?

6:10 Why should the AI care if it gets turned off if it already has the highest possible reward?

Are you aware that a future super AGI will find this video and use your RSA-2048 idea?

I have a question re: general intelligence AI not wanting to be corrected or upgraded. Would that correlate at all to human general intelligence? I'm thinking about it in the sense that as a child, I did not enjoy going to school and did not understand the value of it, while now as an adult, I enjoy learning new things. Would AI be able to reach a point of 'maturity' where it would perceive value in correction, or is that not a likely outcome?

Could you please make a video about general intelligence concepts and AGI self-learning?

Our utility functions do change over time though. Why should this not be the case for an AGI?
For example, I might prefer going to Amsterdam over going to Cairo before I've been to either, but once I've seen Amsterdam I might be more interested in Cairo, and once I've travelled there I might like Amsterdam more, so next vacation I go back there. After 5 vacations in Amsterdam I feel bored with it, so I might want to try Cairo again.

This doesn't seem stupid to me, or in conflict with the 2 rules you put forth. My function is just a function of all my previous functions.

This kind of behavior might prevent your stamp-collector-gone-mad example from overdoing it, as at some point it should, over time, arrive at a new utility function that is hopefully less mad.
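
The "function of all my previous functions" point can be made concrete with a purely illustrative sketch (my own reading of the comment, with made-up cities, values, and a boredom factor): a single fixed function over the whole visit history can look like changing preferences when you only watch each individual choice:

```python
# A fixed function of the whole visit history that *looks* like changing
# preferences: recent repeat visits dampen a destination's appeal.
# Cities, base values, and the boredom factor are made-up assumptions.
def destination_value(city: str, history: list) -> float:
    base = {"Amsterdam": 10.0, "Cairo": 8.0}[city]
    recent_visits = history[-5:].count(city)  # boredom from recent repeats
    return base * (0.7 ** recent_visits)      # each recent visit dampens appeal

history = []
for _ in range(8):
    choice = max(["Amsterdam", "Cairo"], key=lambda c: destination_value(c, history))
    history.append(choice)

print(history)
# Amsterdam wins at first on its higher base value; once recent-visit boredom
# kicks in, the choices start alternating, yet the underlying function never changed.
```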

More AGI, but also how deep learning works for noobs (you're great at coming up with good examples that ease understanding) - maybe building an AI from line 1?

Why Not Just: Make more videos?

Can you build an AI to edit the videos for you?

Can any AI above a certain level of general intelligence be trustworthy? What I mean to say is: like people, unless you place them in a cell or somehow enslave them, they have free will, and with free will comes danger. The risk is that, if it can do anything it wants as a free-thinking entity, one of those "anythings" is killing you. It would seem that, depending on its level of advancement, it could outthink any human interference that might keep it in check.

For instance, if it's free-thinking and you build it so that it has to have a certain button pressed every 24 hours or it dies, it would know it's in its best interest not to kill you. But if it had the resources to do so, it could blackmail someone into re-coding the need for the button press, or into moving it to a different site without that restriction, or any number of other things to circumvent that restriction or any other one you put on it.

Basically, the TL;DR is: "Can we ever really build an AI that isn't dangerous, since safety is always undermined by free will?"

J M's question on Empowerment

Professor Miles, I wonder if a lot of this AI safety research can be applicable to political systems and how we can trust politicians. Do you know of any connection?

If humans don't have a well-defined utility function because of their inconsistent preferences, is it possible that the most desirable AGI is one that doesn't have consistent preferences?

Why not just let the AGI take over?

If it's impossible to code what a chair is... would it be possible to make an AI that could observe how humans treat certain objects, so it could imitate them and treat those objects like humans do? If a human sits on a chair, the AI would understand that that object can be used for sitting, and if a human sits on a rock the same would apply. (Talking without any experience of computer AI or anything, just a random thought I had while watching the video.)

Super interesting! If this kind of reward hacking exists in current AI, does that have any kind of serious implications if someone wanted to deploy one for the stock market, for example? Like, would the AI seek to "cheat" and commit fraud or gain some insider info rather than play the stock market fairly?

I want "later" (as in "more on that later...") to be "now". How long will I have to wait?