Review questions


Questions are reviewed to prioritize them in the question queue and to mark unhelpful ones with warnings. There are six tiers of review:

  • 0. Rejected - This is terrible or not even a question
  • 1. Meh - This question is okay, but should not be prioritized over unreviewed questions
  • 2. Unreviewed - This question has not been reviewed
  • 3. Approved - This question is not obviously terrible
  • 4. Good - This question is reasonably high quality
  • 5. Excellent - This question is very high quality

If you are unsure how to categorize a question, drop by #general and ask for a second opinion. Once a page has been given a reviewed status, it will no longer appear here.

Unreviewed YouTube questions

Daniel Buzovský's question on Where do we go now

Is AGI avoidable? Is there a way to advance in technology and evolve as humanity in general without ever coming to the point where we turn that thing on? More of a philosophical one.

Melon Collie's question on The Windfall Clause

Well, if I ended up with an AGI, or more likely an ASI, that happened to be hard-coded to do what I want (and it actually listens), what's to stop me from just not paying? I mean, with an ASI I could very easily take over the world and nobody could do anything about it, since I have an ASI and they don't.

Of course I wouldn't actually do that, I'm not a psychopath, but I would probably use it to teach certain people a lesson or two.

Loweren's question on Mesa-Optimizers

Great explanation! I had heard about these concepts before, but never really grasped them. So at 19:45, is this kind of scenario a realistic concern for a superintelligent AI? How would a superintelligent AI know that it's still in training? How can it distinguish between training data and real data if it has never seen real data? I assume the programmers won't just freely reveal the fact that the AI is still being trained.
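
One toy illustration of the worry behind this question (not from the video, and all data here is made up): a model never has to be told it is still in training; in principle it could guess from statistical cues, for example by noticing inputs that look unlike anything it saw during training.

    # Hypothetical sketch: flag inputs that are statistically unlike the
    # training data and use that as a (fallible) "am I deployed yet?" cue.
    import numpy as np

    rng = np.random.default_rng(0)
    train_inputs = rng.normal(0.0, 1.0, size=(10_000, 4))   # stand-in training data
    mean, std = train_inputs.mean(axis=0), train_inputs.std(axis=0)

    def looks_like_training(x, threshold=4.0):
        """Crude check: is every feature of x within ~4 sigma of the training data?"""
        return bool(np.all(np.abs((x - mean) / std) < threshold))

    print(looks_like_training(rng.normal(0.0, 1.0, size=4)))     # True: in-distribution
    print(looks_like_training(np.array([9.0, 0.0, 0.0, 0.0])))   # False: off-distribution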

Ethan Alfonso's question on Safe Exploration

Why does this academic paper on AI safety apply so much to my life?

Martin Verrisin's question on WNJ: Think of AGI like a Corporation?

How did he know the video is 14.5 minutes long???
- Is he shooting parts as he's editing? O.O

Bloergk's question on Mesa-Optimizers

At the end you write that, when reading the article, this was a "new class of problems" to you... But it just seems like an instance of the "sub-agent stability problem" (not sure of the proper terminology) you've explained before on Computerphile: https://www.youtube.com/watch?v=3TYT1QfdfsM
The only difference is that in this case, we are dumb enough to build the AI in a way that forces it to ALWAYS create a sub-agent.

Peter Bonnema's question on The Windfall Clause

Why would a company that develops AGI try to align its goals with those of the world? Why not align it with just their own goals? They are sociopaths after all.

Mera Flynn's question on The Windfall Clause

Question: wouldn't this contract be basically useless in the situation where a company creates a superintelligent AI whose interests are aligned with theirs? Wouldn't it very likely try, and succeed at, getting them out of this contract?

Matt's question on WNJ: Think of AGI like a Corporation?

Would it be possible to brute force ideas? If an image is just pixels, it should be possible to get a computer to make every possible combination of pixels in a given area. Maybe start with a small area, low resolution and only black and white to test it. Then make an app that's like a game for people to search through the images and tag what they see or think it could be. Maybe even tie it to some kind of cryptocurrency to get more people involved. Somebody do this, lol. I've been having this idea for a while, but I'm too lazy to do it and I'm not even sure how to start.
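
For a sense of scale on the brute-force idea, here is a back-of-the-envelope count of how many images even a tiny black-and-white grid allows (the 16x16 grid is an arbitrary stand-in, not a size from the question):

    # Counting every possible black-and-white image on a small grid.
    width, height = 16, 16          # hypothetical "small, low resolution" test area
    pixels = width * height         # 256 pixels
    combinations = 2 ** pixels      # each pixel is either black or white

    print(f"{pixels} pixels -> {combinations:.3e} possible images")
    # 256 pixels -> 1.158e+77 possible images, within a few orders of magnitude
    # of the number of atoms in the observable universe (~10^80), so exhaustive
    # enumeration and human tagging are hopeless even at this tiny size.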

SbotTV's question on The Orthogonality Thesis

Actually, discussing goals brings up an interesting question in the ethics of AI design. If we're going to have all of these highly intelligent machines running around, is it ethical to give them goals exclusively corresponding to work given to them by humans? Is slavery still wrong if the slaves like it? If you assume that intelligence necessarily implies a consciousness (and, really, things become a bit arbitrary if you don't), do we have a responsibility to grant AIs individuality?

What do you think?


Recent unreviewed YouTube questions

MolochDE's question on The Windfall Clause

Can't DeepMind, OpenAI and others put fine print on their research, the way open-source software sometimes does, so that if you want to use their findings you also have to sign on to the windfall clause? That way most companies would sign the clause just to be safe from lawsuits when it comes to their creation being built on the work of others.

some guy's question on Use of Utility Functions

"our inconsistencies don't make us better people" idk.... maybe they actually do? A little randomness sprinkled in here and there can lead to interesting new discoveries. Nevertheless, I guess a utility function could also take that randomness into account.

8Dbaybled8D's question on Intro to AI Safety

Is there any way to teach AI kindness based on George R. Price's equation for altruism in a system?
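
For reference, the equation being alluded to is George R. Price's formula for how the population average of a trait z (here, something like altruistic behaviour) changes between generations, where w_i is the fitness of individual or group i and \bar{w} is the mean fitness; whether it can be turned into a training signal for machine kindness is exactly what the question asks:

    \Delta\bar{z} \;=\; \frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}
                  \;+\; \frac{\operatorname{E}\!\left(w_i\,\Delta z_i\right)}{\bar{w}}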

ImpHax0r's question on Superintelligence Mod for Civilization V

So what happens when the rogue AI comes into play?

Jay Ayerson's question on Intro to AI Safety

Children.

A possible solution may be to give AIs under development the same capacity constraints as children: they are social, and therefore dependent on their creators for sources of information, so they have a reason to be honest; and they are not terribly powerful in most respects, especially in that they depend on others for new methods with which to become more powerful, which is again tied to their social nature.

So, what happens when we place a value on new ideas, and a value on humans as a potential source of new ideas?

Rashid Mostafa's question on 10 Reasons to Ignore AI Safety

You think that human genetic manipulation has been stopped? I think that there are probably many CRISPR machines chugging away in mainstream scientific organisations, doing work within a covering project.

But the rewards are much higher for quickly developing AI. If Mr Putin thinks that the person who gets to AGI first "rules the world", which government will allow another to get there before them? Maybe Australia, because it can't run IT effectively in any government department. Oh, and the UK. And Sudan. But the rest of the world?

Chris's question on Intro to AI Safety

Why can't we model it after the human brain, so that it does what a human would do if they had that power? If we could model it after a benevolent human, then it's going to act like that human would. There's still a chance to mess up, but if we model it precisely then it is going to do the same things a benevolent human would do. Maybe this means a disaster, because maybe even the most benevolent human would abuse that kind of power in their hands.

Midhunraj R's question on Quantilizers

I don't exactly get the idea of 'imitating a human'. How do you get that normal curve exactly (for specific situations like collecting stamps)? It seems a harder job than making an AI.
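
A rough sketch of the mechanism the video describes may help: a q-quantilizer samples an action from the better-scoring fraction of a human-like base distribution instead of maximizing outright. Everything below (the base samples, the utility function, q = 0.1) is a made-up stand-in; obtaining a realistic human distribution is, as the question notes, itself a hard modelling problem.

    # Toy q-quantilizer: pick randomly from the top q fraction of actions
    # drawn from a "human-like" base distribution, rather than taking the
    # single utility-maximizing action.
    import random

    def quantilize(base_samples, utility, q=0.1):
        ranked = sorted(base_samples, key=utility, reverse=True)
        top = ranked[: max(1, int(len(ranked) * q))]
        return random.choice(top)

    # Hypothetical example: actions are numbers, "utility" prefers larger ones,
    # and the base distribution imitates typical human choices centred on 5.
    human_like_actions = [random.gauss(5, 2) for _ in range(1000)]
    print(quantilize(human_like_actions, utility=lambda a: a))
    # prints something good-but-typical rather than an extreme outlier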

Ceelvain's question on Intro to AI Safety

All the arguments we make about AGI screwing us over could also be made about humans. After all, we *are* an example of AGI.
We are getting there with self-improvement, we do care a lot about self-preservation, and we hate overt goal tampering with a passion. We could understand "making AIs" as a kind of mix between "resource acquisition" (acquiring tools) and "self-improvement" (they enhance us).
I think one major thing that prevents us from screwing everything up super fast is laziness. It acts as a regularizer on our actions, preventing individuals from going into overdrive. But our tools get better every day at doing stuff without us feeling the energy spent. Basically, we're bypassing our internal safety mechanism.
So... is it really AGI we should fear? Or humans?

Sinom Irneja's question on WNJ: Think of AGI like a Corporation?

Question: do inherently serial tasks exist?

As funny as it is, making a baby is not an inherently serial task. Through pregnancy it might be, but you could put the baby together atom by atom, and that makes it parallelizable!

I assume there are proofs of their existence, I just can't really think of one. And the wiki is dumb! The claim that X is inherently serial is a universal claim that for all parallelization attempts P, each segment would need to arrive in a specific order, which cannot be proven by example. They try to cite Newton's method, but they seem to assume a specific division of labor P that cannot be parallelized.
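
The textbook example of a task that resists parallelization is an iterated computation in which every step consumes the previous step's output; the sketch below shows that dependency chain for Newton's method, as an illustration of the structure rather than a proof that no clever reformulation could parallelize it.

    # Newton's method for sqrt(2): x_{n+1} depends on x_n, so the iterations
    # form a dependency chain and cannot simply be handed out to parallel
    # workers the way independent pixels of an image could be.
    def newton_sqrt2(steps=10):
        x = 1.0
        for _ in range(steps):
            x = x - (x * x - 2) / (2 * x)   # each update needs the previous x
        return x

    print(newton_sqrt2())   # ~1.4142135623730951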
