I'll have a bash at these questions, for reference purposes. Others may want to as well.
1) What probability do you assign to the possibility of us being wiped out by badly done AI?
All humans? Less than 1%. Some due to faith in engineers. Some due to thinking that preserving some humans has substantial Universal Instrumental Value.
2) What probability do you assign to the possibility of a human level AI, respectively sub-human level AI, to self-modify its way up to massive superhuman intelligence within a matter of hours or days?
Less than 1%.
3) Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?
We should put some energy into this area - though the world won't end if we don't. Machine intelligence is an enormous and important task, so the more foresight the better. I don't like this question much - the bit about being "provably friendly" frames the actual issues in this area pretty poorly.
4) What is the current level of awareness of possible risks from AI within the artificial intelligence community, relative to the ideal level?
That's mostly public information. Opinions range from blasé lack of concern, through indifference (usually because it seems too far off), to powerful paranoia (from the END OF THE WORLD merchants). I'm not sure there is such a thing as an ideal level of paranoia - a spread probably provides some healthy diversity. Plus, optimal paranoia levels are value-dependent.
5) How do risks from AI compare to other existential risks, e.g. advanced nanotechnology?
Machine intelligence and nanotechnology will probably spiral together - due to G-N-R "convergence". However, machine intelligence will probably lead to nanotechnology more than the other way around. So, these risks are pretty closely linked. That said, machine intelligence is generally the biggest issue we face - it should get the most attention, and it could potentially cause the biggest problems if it does not go well.
I want to raise awareness of risks from AI, and of the challenges of mitigating those risks, by writing to experts and asking them questions. The email below is a template. Please help me improve it and devise more or better questions.
Dear Mr Minsky,
I am currently trying to learn more about risks from artificial intelligence [1]. In the course of this undertaking I plan to ask various experts and influencers for their opinions. Consequently, I am curious about your opinion as a noted author and cognitive scientist in the field of artificial intelligence. First, however, I want to apologize if I am intruding on your privacy; it is not my intention to offend you or to waste your time. If so, please just ignore the rest of this email.
One of the leading textbooks in artificial intelligence, 'AI: A Modern Approach' [2], states:
In this regard I would like to draw your attention to the Singularity Institute for Artificial Intelligence (SIAI) [3] and its mission to solve the problem of Friendly AI [4]. One example of the research interests of the SIAI is a reflective decision theory [5] of self-modifying decision systems. The SIAI believes that "it is one of the many fundamental open problems required to build a recursively self-improving [6] Artificial Intelligence with a stable motivational system." [7]
With this in mind, I would like to ask you the following questions:
Furthermore, I would like to ask your permission to publish and discuss your answers on LessWrong.com [8], in order to gauge the public and academic awareness and perception of risks from AI, and how effectively those risks are communicated. This is, however, entirely optional; I will respect your decision under any circumstances and keep your opinion private if you wish. Likewise, instead of, or in addition to, replying to this email, I would be pleased to see a treatment of the above questions on your homepage, your personal blog, or elsewhere. You have my permission to publish my name and this email, in part or in full.
Full disclosure:
I am not associated with the SIAI or any organisation concerned with research on artificial intelligence, nor do I hold a formal academic position. Should you permit me to publish your answers, they will under no circumstances be used to cast a damning light on you or your interests, but will be presented neutrally as the personal opinion of an expert.
References:
[1] "Reducing long-term catastrophic risks from artificial intelligence" http://singinst.org/riskintro/index.html
[2] "AI: A Modern Approach", Chapter 26, section 26.3, (6) "The Success of AI might mean the end of the human race." http://aima.cs.berkeley.edu/
[3] "Singularity Institute for Artificial Intelligence" http://singinst.org/
[4] "Artificial Intelligence as a Positive and Negative Factor in Global Risk." http://yudkowsky.net/singularity/ai-risk
[5] Yudkowsky, Eliezer, "Timeless Decision Theory" http://singinst.org/upload/TDT-v01o.pdf
[6] "Recursive Self-Improvement" http://lesswrong.com/lw/we/recursive_selfimprovement/
[7] "An interview with Eliezer Yudkowsky", parts 1, 2 and 3
[8] "A community blog devoted to refining the art of human rationality." http://lesswrong.com/
[9] http://wiki.lesswrong.com/wiki/Paperclip_maximizer
[10] http://wiki.lesswrong.com/wiki/Intelligence_explosion
Yours sincerely,
NAME
ADDRESS
Revised Version
Dear Professor Minsky,
I am currently trying to learn more about risks from artificial intelligence. Consequently I am curious about your opinion as a noted author and cognitive scientist in the field of artificial intelligence.
I would like to ask you the following questions:
Furthermore, I would like to ask your permission to publish and discuss your answers, in order to gauge the academic awareness and perception of risks from AI. Instead of, or in addition to, replying to this email, I would also be pleased to see a treatment of the above questions on your homepage, your personal blog, or elsewhere.
You have my permission to publish my name and this email, in part or in full.
References:
Please let me know if you are interested in more material related to my questions.
Yours sincerely,
NAME
ADDRESS
Second Revision
Dear Professor Minsky,
I am currently trying to learn more about possible risks from artificial intelligence. Consequently I am curious about your opinion as a noted author and cognitive scientist in the field of artificial intelligence.
I would like to ask you the following questions:
Furthermore, I would like to ask your permission to publish and discuss your answers, in order to gauge the academic awareness and perception of risks from AI.
Please let me know if you are interested in third-party material that expands on various aspects of my questions.
Yours sincerely,
NAME
ADDRESS