Prioritize YouTube questions
Hover over the review levels to get a description of each. If you are unsure how to categorize a question, drop by #general and ask for a second opinion. Once a page has been given a reviewed status it will no longer appear here.
2 questions prioritized and ready to be asked, 1,844 questions still to be sorted! 1,010 asked on Discord, 248 answered.
Highly upvoted YouTube questions
Why does the operational environment metric need to be the same one as the learning environment? Why not supervise cleaning 100% of the time during learning, then do daily checks during testing, then daily checks once operational? Expensive initially, but the 'product' can be cloned and sent out to operational environments en masse. Like Montezuma's Revenge training with some supervisor (need not be human) in the training phase. Reminds me of training my children to put their own clothes on in the morning. No success so far.
More AGI, but also how deep learning works, for the noobs (you're great at coming up with good examples that ease understanding) - maybe building an AI from line 1?
Rob, could you make a video about those philosophical problems? (I get this is not your area, but just a quick video enumerating them, for example)
Could influence be quantified as an SI unit? For instance, the ability to impart or extract 1 joule of energy with regards to 1 gram of matter in 1 second constitutes 1 influence unit.
Can't a (or perhaps THE) human utility function be to determine their utility function?
8:38 Is there a joke I'm not getting? How come he says "We need demonstration" without talking?
I counter Pascal with Marcus Aurelius. If we create AI to be just, and we are just, we will live in harmony. If we create AI to fool us into thinking it's human, why did we make AI in the first place?
Do you sell those blinding laser robots? I need it for very legitimate and kitten friendly reasons.
The tone Pinker uses in his article is pretty disrespectful. If you don't think AI is dangerous, fine, but why would he so quickly assume that people who do think AI is dangerous are ridiculous morons?
"It takes...a mind debauched by learning to carry the process of making the natural seem strange, so far as to ask for the why of any instinctive human act. To the metaphysician alone can such questions occur as: Why do we smile, when pleased, and not scowl? Why are we unable to talk to a crowd as we talk to a single friend? Why does a particular maiden turn our wits so upside-down? The common man can only say, Of course we smile, of course our heart palpitates at the sight of the crowd, of course we love the maiden, that beautiful soul clad in that perfect form, so palpably and flagrantly made for all eternity to be loved!
And so, probably, does each animal feel about the particular things it tends to do in the presence of particular objects. ... To the lion it is the lioness which is made to be loved; to the bear, the she-bear. To the broody hen the notion would probably seem monstrous that there should be a creature in the world to whom a nestful of eggs was not the utterly fascinating and precious and never-to-be-too-much-sat-upon object which it is to her.
Thus we may be sure that, however mysterious some animals' instincts may appear to us, our instincts will appear no less mysterious to them." (William James, 1890)
Why Not Just: Make more videos?
What would happen if you used a standard distribution for value and then used another standard distribution for probability of choice, so the agent attempts to do the thing, but not aggressively so?
Am I an AI? Because I can say with absolute certainty that if I found a bug in reality that allowed me to rack up reward quickly, I would exploit the hell out of it.
If everyone is unemployed, who is going to buy goods and services?
Why not just let the AGI take over?
Recent unreviewed YouTube questions
How do humans change their terminal goals while also having goal preservation as a convergent instrumental goal? Perhaps the first question is: can humans change their terminal goals?
The cancer researcher presumably wants to benefit mankind, and if they found a more effective research area to achieve that, then they would change their goal. But then the terminal goal is benefiting mankind, not cancer research, so still no change of terminal goal.
Could the cancer researcher instead decide to make it her life's work to seek vengeance on someone who took the credit for her research? Is this a reprogramming event? Has she changed her terminal goal permanently?
Perhaps her terminal goal was to be recognised for benefiting mankind, so again, no change.
What is the pizzicato music at the end?
It seems like the only safe general artificial intelligence is one with the mindset of a mildly apathetic 7-Eleven clerk.
Edit: wtf, there is already a video about this. This is the first video I've watched here, but is this channel produced by some kind of bizarre predictive algorithm?
Why are they called *mesa* optimisers?
Is anyone else terrified that politicians will, without question, look at things like this and think about how they can use it to create AI that automatically agrees with them?
"This is an interesting debate, so let's see what the AI says. Oh wow, the AI agrees with me... again."