These 4 answers have been added in the last month.
Vael Gates's project links to many example transcripts of conversations aimed at persuading senior AI capabilities researchers.
You can include a live-updating version of many definitions from LW by using the syntax described on Template:TagDesc in the Answer field and Template:TagDescBrief in the Brief Answer field. Similarly, calling Template:TagDescEAF and Template:TagDescEAFBrief will pull from the EAF tag wiki.
When available, these templates should be used, as they reduce duplication of effort and direct all editors toward improving a single high-quality source.
Several surveys and opinion polls have been conducted. The most comprehensive was carried out by the Future of Humanity Institute, which surveyed 550 of the top experts in AI research. When asked "In which year do you think the chance of human-level artificial intelligence reaches 50%?", the mean response was 2081 and the median response was 2040.
Unless there is a way to cryptographically ensure otherwise, whoever runs the emulation has essentially complete control over its environment and can reset it to any state it was previously in. This opens up the possibility of powerful interrogation and torture of digital people.
Imperfect uploading might lead to damage that causes the EM to suffer while still remaining useful enough to be run, for example as a test subject for research. We would also have a far greater ability to modify digital brains, and edits made for research or economic purposes might cause suffering. See this fictional piece for an exploration of what a world with widespread EM suffering might look like.
These problems are exacerbated by the likelihood that digital people could be run much faster than biological humans: an EM could plausibly experience hundreds of subjective years in minutes or hours of real time, without any checks on its wellbeing in the meantime.
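To make the "hundreds of subjective years in hours" claim concrete, here is a minimal arithmetic sketch. The speedup factor is an assumption for illustration (the text does not specify one), but it shows the scale involved:

```python
# Hypothetical illustration: subjective time experienced by an emulation
# running faster than a biological brain. The million-fold speedup below
# is an assumed figure, not one from the text.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def subjective_years(real_hours: float, speedup: float) -> float:
    """Subjective years experienced during `real_hours` of wall-clock time."""
    return real_hours * 3600 * speedup / SECONDS_PER_YEAR

# At an assumed million-fold speedup, two real-time hours correspond to
# centuries of subjective experience.
print(round(subjective_years(2, 1_000_000)))  # → 228
```

Even at much more modest speedups, the gap between real and subjective time makes external oversight of an EM's wellbeing difficult.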