Curate questions

From Stampy's Wiki

Here are the questions which have been added to the wiki and have not yet been answered; they can be re-ordered as part of curation.

There are 109 unanswered questions to curate!

Incoming Questions

Will AGI be agentic?

Tags: None

There is a general consensus that any AGI would be very dangerous, because it would not necessarily be aligned. But if the AGI does not have any reward function and is a pattern matcher like GPT, how would it go about causing X-risks, resisting being put into a box, or avoiding shutdown?
I can definitely imagine it being dangerous, or the continuity in its answers being problematic, but the whole "going exponential and valuing its own survival" scenario does not seem to necessarily apply?

Tags: agi fire alarm

I'm not yet convinced that AI risk by itself is realistic; that things will work so well that AI capabilities can keep increasing. I guess the most interesting way to be convinced is just to read what convinced other people, so that's what I ask most people I meet about any subject I have doubts about.

Tags: outreach

Why do you like stamps so much?

Tags: None

Is it hard like 'building a secure OS that works on the first try'? Hard like 'the engineering/logistics/implementation portion of the Manhattan Project'? Both? Some other option? Etc.

Tags: agi, differential technological development

A friend who *really* doesn't believe in vast global conspiracies was recently suggesting to me that, at the moment, it very much *looks* to her like there's some kind of vast act of global coordination going on. This led her to propose that someone may have already created an AGI, which is in its early stages of putting things into place for whatever it's going to do next. I.e., a lot of humans sincerely believe themselves to be acting in various ways for their own ends, but this is being subtly coordinated for some non-human purposes. Can we rule this out?

Tags: None