I put together a survey to explore what people think will happen after the emergence of Artificial Superintelligence (ASI), assuming that humans survive and something resembling human society exists afterwards.

Take the survey here. Thanks!

Twenty-four people from /r/singularity have already taken the survey. These people are essentially Kurzweilians: they are extremely optimistic about AI and never mention the alignment problem. I'm hoping that respondents from Less Wrong will balance out the results with more realism and an appropriate amount of pessimism. I might also share the survey in a few other places. Afterwards I'll share charts and graphs of the results with everyone.

In the survey, I construct one future in which particular things happen and ask you to fill in the details. A limitation of this format is that you might feel technology will progress much faster or slower than I suggest, or that the consequences for society will be dramatically different. For example, one /r/singularity respondent believes that by 2122 the Earth will have been transformed into a giant swarm of nanobots (I know), so the survey couldn't capture that person's true expectations.

(By the way, due to a limitation of Google Forms, I can't change any of the answer options now that people have started taking the survey, so I can't fix typos or make corrections.)

My hope is that you'll accept the premise long enough to complete the survey, and that you'll use the response boxes I provide if the multiple-choice answers don't capture your sense of what would happen in this scenario. The goal is to elucidate what kinds of problems a post-ASI society would still have to deal with even if we get a good-ish outcome. Several people said the survey gave them new insights, so I expect you'll at least have fun.

Here's the link to the survey again. Thanks!
