Prioritize YouTube questions

From Stampy's Wiki

This page lets you prioritize questions from YouTube in the question queue so that when someone asks for a question on Discord it's a reasonably good one.

Hover over review levels to get a description of each. If you are unsure about what to categorize questions as drop by #general and ask for a second opinion. Once a page has been given a reviewed status it will no longer appear here.

Prioritize YouTube questions

8 questions prioritized and ready to be asked, 1976 questions which could be sorted! 904 asked on Discord, 229 answered.

Highly upvoted YouTube questions

Why does the operational environment metric need to be the same one as the learning environment? Why not supervise cleaning 100% of the time during learning, then do daily checks during testing, then daily checks once operational. Expensive initially but the the 'product' can be cloned and sent out to operational environments en-mass. Motezuma (sp?) training with some supervisor (need not be human) in the training phase. Rings of training my children to put their own clothes on in the morning. No success so far.


Tags: None (add tags)

Daniel V's question on Reward Modeling

8:38 Is there a joke I'm not getting? How come he says "We need demonstration" without talking


Tags: None (add tags)

WHERE DID ROBERT'S HAIR GO IN THE LAST SHOT?


Tags: None (add tags)

Does this not mean that it's actually *impossible* to create a well-aligned general AI through an iterative training process because at any point that it starts to become smart enough to reason about its environment, it will start deceptively optimising for the base objective to hide the fact that it has a mesa objective and not get destroyed when moving to the next training episode given that mesa objectives are, by default, not the same as base objectives? It will always gain the 'powers of deception' before being able to sufficiently align its mesa objective with the base objective to be considered well-aligned.


Tags: None (add tags)

CaesarsSalad's question on Mesa-Optimizers

So the idea is that the mesa optimizer is smart enough to understand that it's being trained and will know when it is deployed? In the toy example, how would the mesa optimizer learn that it is safe to eat apples #3 to #5?


Tags: None (add tags)

I counter Pascal with Marcus Aurelius. If we create AI to be just, and we are just, we will live in harmony. If we create AI to fool us into thinking it's human, why did we make AI in the first place?


Tags: None (add tags)

Creatotron's question on Safe Exploration

0:01
How do you know this will always be the latest video?


Tags: None (add tags)

Krebul's question on Pascal's Mugging

Why are all the smartest people atheists? Food for thought.... ;)


Tags: None (add tags)

If everyone is unemployed, who is going to buy gods and services ?


Tags: None (add tags)

Joe C's question on Specification Gaming

Am I an AI? BecauseI can say with absolute certainty that if I found a bug in reality that allowed me to rack up reward quickly, I would exploit the hell out of it.


Tags: None (add tags)

This turns out to be an exploration of the human mind, of thinking processes and language usage. I think there shall be at some point a universally accurate language that expresses exactly what we mean, not what we say. Oh, wait, wouldn't that be math? So why not specify exactly the intended position of the red lego brick in relation to the black one? Just one example...


Tags: None (add tags)

Perhaps I'm misunderstanding something -- apart from learning that the game of telephone applies to AI alignment, is there any unique issue not present in the human-optimizer alignment issues? It seems to be just that, but with more steps, from what I can understand here. It seems an odd hypothetical to presume human-optimizer perfect alignment (which seems like fantasy but ok) that wouldn't also be able to be applied from optimizer to model.

Also, does anyone else get existential thoughts when watching these videos?


Tags: None (add tags)

Love this stuff, especially anything that explorers philosophical depth at any angle.
 
   Please be aware of the implications of sociology on this platform though. Specifically, what are your goals, to inform and educate or to entertain and perform. I don't hold it against anyone to try to make money with this, but keep in mind, there are some marginalized people who have limited means and are interested in this content.
   I was partially disabled by a car breaking my back on a bicycle commute to work in 2014. As much as it sucks to admit, I am reliant on family, and government support. I self educate as much as I can in hopes of overcoming this. Someone like myself has access to the net, and am at least intelligent enough to follow along here. It is disappointing to have interesting information hidden behind a pay wall.
  While I'm not a card holding member of the cult of Richard Stallman, I believe there is a lot of value in the philosophy 'all human knowledge should be freely available for everyone.'
  If my circumstances were different, I'd love to be able to support stuff like this channel.
   I have damaged muscles that hold posture. Sitting up or standing for more than a few minutes at a time is hard. I can't even lay down and hold up a heavy book for long. I can use a laptop bed stand for a few hours a day, and for the rest I can usually manipulate a light weight phone like I am right now. I'm certainly not representative of most people here. I can't expect to fit into a stereotypical business model, or expect someone else's model to flex for me. I'm just saying, I'm here too, and am interested in every aspect of the subject of AI.
Thanks for the upload.
-Jake


Tags: None (add tags)

Isn't the lack of an anti-bible strong evidence of the existence of anti-god since he doesnt want you to believe in him?


Tags: None (add tags)

So which month have you all pinned "AGI uprising" at on the 2020 apocalypse bingo?


Tags: None (add tags)
See more...

Recent unreviewed YouTube questions

Ah!

The fundamental reason that the pessimism is justified is:
1. human nature has not changed since the old-testament,
AND
2. the cyclical nature of human *circumstance* is real,
AND
3. the irreversible highjacking of rights & means from the majority of humankind may have been delayed by a temporary overlay of democracy,
but the relentless concentration of privilege & rights is an actual force in our world,
and there is an inescapable consequence of trashing journalism,
just as there is an inescapable consequence of psychopathic/sociopathic exploitation of natural & human world for the 1%:

Read the mis-labeled, crucially-important "Tribal Leadership",
by researchers Logan, King, & Fischer-Wright,
and understand that conspiricism is a product of mode-2 sentience
( they mislabel it stage-2,
but stages are like caterpillar->butterfly,
not modes that can be left & returned-to ),
and any population grown in mode-2 *and then pushed down into mode-1 sentience*
which is the mode of inmates and gangs, that produces mass-shootings...

Do you see that making more & more of the human population be helplessly damaged through the removal of healthy family
( enforcing attachment-disorder to saturate a population,
producing a narcissism-epidemic ),
while simultaneously producing a conspiracist-population who then gets pushed into the lashing-out mode-1,
has consequences?

Especially when economically-enforced destitution rampages entire populations ( they criminalized homelessness in L.A. to get rid of rights from the destitute citizens, there, a couple months ago ).

The admission by Evergrande that it is bankrupt ( which will take down the economy, as it takes-out others, who take out others, etc ) hasn't happened yet, & Dr. Metzler is suing 'em to force 'em to admit they're done, so an entire ocean of bankruptcies is coming..

Once that gets going, then the desparation should multiply, exponentially...

Pinker made the mistake of looking around his upper-middle-class friends,
saw that things are sustainably-good,
ignored the crushing *and compounding* damage accumulating in the lower classes,
and ignoring also the more-and-more-complete segregation of wealth from the people,
by institutional-leveraged super-privilege.

He ignored that the civility is only a veneer.

He's blind as a drunk bat, iow.

I didn't see, before, how the sentience-phases cascade
is a tsunami that cannot be stopped, at this point.

Trump's order for his followers to NOT vote, btw, was strategic:
it produces the tsunami of lashing-out violence he wants to smash the US with,
and will.

2024-2025 the conflagration should be completely unstoppable.

Sad, depressing, but earned:
one needs to remove the *basis* for disease, to prevent disease, right?

Pretence never prevented cancer, and it never will.

Pretence is all we improved, given the corrosion of childhood, family, earnings, economy, etc...

How comical: reaping what we have sown will smash our world:
karma we won't admit until it kicks us into acknowledging it.

An excellent video of yours, as usual!

Salut, Namaste, & Kaizen, eh?

( :


Tags: None (add tags)

Surely in order to know that deception in the optimal goal maximising strategy, the agent would need prior knowledge of the conditions it would encounter during deployment? Or did I misunderstand?


Tags: None (add tags)

Does this change anything in the world? Intelligence clashes with psychological conditions all the time. Could actually create a God theory about that one. A much more advanced civilization than human civilization, how became an advanced civilization? Definitely going in the opposite direction. Run that through the type of computer that here mentioning. What needed to become?


Tags: None (add tags)

Thoughts on "terminal goals can't be stupid" from a poorly-read philosophy major:

One might argue that some, most, or even all terminal goals are not actually terminal goals, because the root terminal goal is pleasure or satisfaction - or, in the computer's case, the terminal goal of collecting stamps isn't really terminal because it is completing the task set by the programmer, and is hence fulfilling the terminal goal of "complete the prescribed operation." Along these lines, there might only be one real terminal goal for a person and that might be something like "fulfil my values to the best of my ability." That of course begs the question as to how those values, and hence the intermediary goals relating to them, are determined.* This understanding allows the AI's goal of stamp collecting to be stupid because it is not actually terminal. Perhaps there is an error in the system that relates what the AI's goal is to the system that encodes/processes it. This would allow the goal to be stupid. But what then qualifies as a terminal goal? And what is the meaning or purpose of anything without conscious experience to interpret it?

books are not AI's but hear me out: a book's goal, being an arrangement of symbolic representation, is not terminal as it is instrumental to those of the author. The instructions (language) of the book are processed (interpreted) out by the reader. A computer program's goals, being another arrangement of symbolic representation (in its code), may also be said to be instrumental to the author's. But the computer program may not operate or as intended! This means the intentions of computer programs, and hence AI, can be stupid.

If AI are so-called because they code themselves / form their own goals I still think these goals are instrumental to their determinants, or, the previous level of determination/goal setting and this would relate to the complex process of their creation.

I mentioned symbolic representation/computation but I think this also extends to non-symbolic (analogue) representation/computation too (especially given this seems to be how brains work), symbolic computation is just a simpler example.

  • An important, relatively fundamental goal for humans is to determine what their goals are, in order to live well (whatever that means for them). So more realistically people have complex multidirectional systems of goals and the direction of instrumentality/terminality isn't in one simple direction.

Like computers / AI, Humans are also hard (and 'soft') wired - by a complex combination of genetics and experience.

I think these details make the framework of instrumental and terminal goals somewhat problematic (mostly for people) or at the very least much more complicated than presented. It might not be as useful to label goals as being either terminal or instrumental in a binary sense, but to describe their relative instrumentality to other goals and conditions.

BUT for the purpose of Robert's argument I think it makes sense - with relation to AI it is certainly reasonable for it to have a "stupid" goal such as stamp collecting and deem it intelligent with respect to that goal.

This stuff also relates to intentionality and "intensionality" (among many other things, like computational theory of mind) in their relation to the subjects of mind/consciousness/intelligence as described by Searle ... but he's a sex offender so maybe the appropriate action is to pirate his book/s rather than read it. I hope this comment was interesting. Feel free to argue or reject my statements.

Since we're here I may as well say that the video was very well made. :)


Tags: None (add tags)

4:22 It occurs to me rewatching this that this isn't really a very good example of self-modification in the true sense. Essentially, the agent is picking up a Mario-style powerup (powerdown?) that makes the character the AI controls unresponsive to the AI's commands.

A better example would be if the whiskey randomly mutated the action policy the AI was refining. That would be much more difficult for a reinforcement learning agent to handle, but I don't think it's impossible in principle.


Tags: None (add tags)
See more...