Review questions

From Stampy's Wiki

Questions are reviewed to prioritize them in the question queue and to mark unhelpful ones with warnings. There are six tiers of review:

  • 0. Rejected - This is terrible or not even a question
  • 1. Meh - This question is okay, but should not be prioritized over unreviewed questions
  • 2. Unreviewed - This question has not been reviewed
  • 3. Approved - This question is not obviously terrible
  • 4. Good - This question is reasonably high quality
  • 5. Excellent - This question is very high quality

If you are unsure how to categorize a question, drop by #general and ask for a second opinion. Once a page has been given a reviewed status, it will no longer appear here.

Unreviewed YouTube questions

How does nature handle this for humans and other animals?


Have we considered simply abolishing private property so nobody gets to own the AI that inevitably takes over the world?


did you just google 'the google' oh my god i love you


Actually, discussing goals brings up an interesting question in the ethics of AI design. If we're going to have all of these highly intelligent machines running around, is it ethical to give them goals exclusively corresponding to work given to them by humans? Is slavery still wrong if the slaves like it? If you assume that intelligence necessarily implies a consciousness (and, really, things become a bit arbitrary if you don't), do we have a responsibility to grant AIs individuality?

What do you think?


You know just as well as I do that the guy who collects stamps will not just buy some stamps; he will build The Stamp Collector, and you have just facilitated the end of all humanity :( On a more serious note, do you have any insights on how this relates to the way humans often feel a sense of emptiness after achieving all of their goals? I may be explaining it badly, but there is this idea that humans always need a new goal to feel happy, right? Maybe I am completely off, but what I am asking is: yes, an intelligent agent can have simple or even really complex goals, but will it ever be able to mimic the way goals are present in humans, a goal that is not so much meant to be achieved as to serve as fuel for making progress, kind of like a desire?


Recent unreviewed YouTube questions

Do you think that the fact that governments, militaries and commercial organisations are all aiming at general AI precludes a happy ending to this issue?


Isn't it obvious? You just have to precede any AI agents with a sort of Guardian Agent, at various scales from individuals up to the whole of humanity, that keeps collecting information about whatever circumstances are encountered and interacted with and orders the client priorities for all of them. Once these become knowledgeable enough, it will be completely safe to run any arbitrarily narrow-goaled agent, as long as it has to consult the whole range of Guardians before taking any action.


Is that Go! by PSB on uke at the end of the video? Lol 😆👏👏


I love watching your videos, because sometimes I'll pause the video because I've thought of a solution, feel kinda smug for a second, and then unpause and immediately hear you say "And so you think, what if *solution*? Well, the problem with that is...". But you still phrase the videos in such a way that I don't feel like an idiot for coming up with a flawed solution, because that "no" is always delivered as "It's understandable that you would come up with that solution, given what I've just talked about; however, by learning more, you'll see why it actually doesn't work." And darned if that isn't how science works: even a wrong hypothesis usually teaches us something new.

It's hard to teach a complex field of study like AI to people who aren't in that field without making them feel dumb, but you are really good at actually making me feel smarter.


What? There's a second channel? My reward function only cares about the first channel. Oh no!


Unreviewed wiki questions

I'm not yet convinced that AI by itself poses a realistic risk, i.e. that things will work so well that its capabilities can increase that far. I guess the most interesting way to be convinced is just to read what convinced other people; that's what I ask most people I meet about any subject I have doubts about.


What are brain-computer interfaces?

Tags: definitions, brain-computer interfaces