Related to: lesswrong.com/lw/fk/survey_results/
I am currently emailing experts in order to raise academic awareness of risks from AI, to estimate how those risks are perceived, and to ask for permission to publish and discuss their responses. User:Thomas suggested also asking you, everyone who is reading lesswrong.com, and I thought this was a great idea. If I ask experts to answer questions publicly, and to have their answers published and discussed here on LW, I think it is only fair to do the same myself.
Answering the questions below will help the SIAI, and everyone else interested in mitigating risks from AI, to estimate how effectively those risks are being communicated.
Questions:
1.) Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer 'never' if you believe such a milestone will never be reached.
2.) What probability do you assign to the possibility of a negative/extremely negative Singularity as a result of badly done AI?
3.) What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours/days/less than 5 years?
4.) Does friendly AI research, as conducted by the SIAI, currently require less/no more/little more/much more/vastly more support?
5.) Do risks from AI outweigh other existential risks, e.g. those from advanced nanotechnology? Please answer with yes/no/don't know.
6.) Can you think of any milestone such that, if it were ever reached, you would expect human-level machine intelligence to be developed within five years thereafter?
Note: Please do not downvote comments that are solely answering the above questions.
Some annotations:
2.) I assign a lower probability to an extremely negative outcome because I believe it more likely that we will simply die rather than survive and suffer. And in the case that someone gets their AI only partly right, I don't think the outcome will be extremely negative. All in all, an extremely negative outcome seems rather unlikely. But negative (we're all dead) is already pretty negative.
4.) I believe that the SIAI currently only needs a little more support, because they haven't said what they would do with a lot more support (money...) right now.