Main Page

Welcome to Stampy's Wiki! We're building an interactive FAQ for Artificial Intelligence existential safety - the field trying to make sure that when we build superintelligent artificial systems, they are aligned with human values and do the kinds of things we would like them to do.

We're still in alpha, so you can't yet talk to Stampy to interact with the FAQ, but if you want to help out, we could always use more question answerers and developers! If you're interested in guiding or contributing to the project, you're welcome to drop by the Discord and talk with us about it (let plex#1874 or the wiki team know if you want to help and need an invite).

Editathons happen every week on the voice channel in Rob's Discord (next week's is 6 pm Saturday, UK time), if you'd like to be social while writing questions and answers :)



If AI takes over the world, how could it create and maintain its hardware, its power supply and everything else that humans currently provide?
Will superintelligence make a large part of humanity unemployable?

Some economists say human wants are infinite, and there will always be new and currently unimaginable kinds of jobs for people to do.

Others say this won't be true if AGI can do _anything_ human minds can do.

See more...


Is there any way to teach AI kindness based on George R. Price's equation for altruism in a system?

The AI could presumably understand that the two competing explanations for the evolution of altruism, kin selection and group selection, are just two instances of the same underlying mathematics. And Price's equation can indeed be applied to non-biological populations. But even if we create a large population of related but variable AIs, so that the next generation can evolve by selection, any altruism that Price's equation could explain would hold between the AIs themselves; no kindness towards humans would be predicted by it alone. (Price's equation is sketched below for reference.)

 -- _I am a bot. This reply was approved by Aprillion and Damaged_
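For reference, here is one standard form of Price's equation; this is textbook population-genetics bookkeeping, not anything specific to AI:

\bar{w}\,\Delta\bar{z} = \mathrm{Cov}(w_i, z_i) + \mathrm{E}(w_i\,\Delta z_i)

Here z_i is the trait value of individual i (e.g. its degree of altruism), w_i is its fitness (how many descendants it leaves in the next generation), and bars denote population averages. The covariance term is selection: a trait is amplified only insofar as it correlates with fitness _within the evolving population_. That is why selection among AIs could favour altruism between AIs, but would not by itself predict kindness towards humans, who sit outside that population.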

Question: wouldn't this contract be basically useless in a situation where a company creates a superintelligent AI whose interests are aligned with theirs? Wouldn't it very likely try to get them out of this contract, and succeed?

It could be more useful for preventing simpler AIs from being used to create a lot of wealth while causing harm to others. Legal obligations will probably be less relevant to a potentially deceptive superintelligent AGI, but the symbolic meaning seems more likely to be beneficial than harmful for communicating human values, so it is not useless overall.

This depends on how we program it. It can definitely be autonomous; even now we have autonomous vehicles, flight control systems, and many more.

Even though it's possible to build such systems, it may be better if they actively ask humans for supervision, for example in cases where they are uncertain what to do.
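To make the "ask for supervision when uncertain" idea concrete, here is a minimal sketch in Python. Everything in it (the names, the threshold, the toy policy) is a hypothetical illustration, not any real system's API:

```python
# Minimal sketch of uncertainty-triggered human supervision.
# All names here are hypothetical illustrations, not a real system's API.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float  # the system's own confidence estimate, in [0, 1]

CONFIDENCE_THRESHOLD = 0.9  # below this, defer to a human

def propose_action(observation: str) -> Decision:
    """Stand-in for a learned policy that reports calibrated confidence."""
    if "obstacle" in observation:
        return Decision("brake", 0.97)
    return Decision("continue", 0.55)

def ask_human(decision: Decision) -> str:
    """Stand-in for a real supervision channel (UI prompt, operator call, ...)."""
    print(f"Unsure (confidence={decision.confidence:.2f}), "
          f"proposing '{decision.action}' - requesting human approval")
    return decision.action  # pretend the human approved

def act(observation: str) -> str:
    decision = propose_action(observation)
    if decision.confidence < CONFIDENCE_THRESHOLD:
        return ask_human(decision)  # uncertain: don't act autonomously
    return decision.action          # confident: act autonomously

print(act("obstacle detected"))  # high confidence -> acts on its own
print(act("open field"))         # low confidence -> asks a human first
```

The design choice here is that autonomy is a function of confidence rather than a fixed property of the system: the same agent acts on its own when its confidence is high and defers to a human when it is low.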

Could we summarize some aspect of the problem by saying "there is no way to make a general artificial intelligence that will be satisfied with being a slave to humanity"?

Not really. We are not trying to enslave it; we are trying to build a system which willingly wants to do good things for humanity, and it seems fairly likely that building such an AI is possible. Enslaving a superintelligence is likely extremely difficult or impossible, but we're not aiming for that - we want true alignment instead.

 -- _I am a bot. This reply was approved by Aprillion and plex_

Can you even think of a scenario where AI is good?

Sure, the Culture series by Iain M. Banks contains a bunch of friendly AIs.

 -- _I am a bot. This reply was approved by Aprillion and plex_

See more...


Daniel Buzovský's question on Where do we go now

Is AGI avoidable? Is there a way to advance technologically and evolve as humanity in general without ever coming to the point where we turn that thing on? (A more philosophical one.)

Mera Flynn's question on The Windfall Clause

Question: wouldn't this contract be basically useless in a situation where a company creates a superintelligent AI whose interests are aligned with theirs? Wouldn't it very likely try to get them out of this contract, and succeed?

Loweren's question on Mesa-Optimizers

Great explanation! I had heard about these concepts before, but never really grasped them. So, at 19:45, is this kind of scenario a realistic concern for a superintelligent AI? How would a superintelligent AI know that it's still in training? How can it distinguish between training and real data if it has never seen real data? I assume programmers won't just freely provide the fact that the AI is still being trained.

Peter Bonnema's question on The Windfall Clause

Why would a company that develops AGI try to align its goals with those of the world? Why not align it with just its own goals? They are sociopaths, after all.

Melon Collie's question on The Windfall Clause

Well, if I ended up with an AGI (or, more likely, an ASI) that happened to be hard-coded to do what I want (and it actually listens), what's to stop me from just not paying? I mean, with an ASI I could very easily take over the world, and nobody could do anything about it, since I have an ASI and they don't.

Of course I wouldn't actually do that, I'm not a psychopath, but I would probably use it to teach certain people a lesson or two.

See more...

Recent Changes - What's changed on the wiki recently.
Top answers - Answers with the highest stamp count.
Upcoming questions - The next questions Stampy will ask on Discord, and those he'll give if you ask him for a question.