I want to raise awareness of risks from AI, and of the challenges of mitigating those risks, by writing to experts and asking them questions. The email below is a template. Please help me improve it and devise more or better questions.
Dear Mr Minsky,
I am currently trying to learn more about risks from artificial intelligence [1]. In the course of this undertaking I plan to ask various experts and influencers for their opinions. Consequently, I am curious about your opinion as a noted author and cognitive scientist in the field of artificial intelligence. But first I want to apologize if I am intruding on your privacy; it is not my intention to offend you or to waste your time. If that is the case, please just ignore the rest of this email.
One of the leading textbooks in artificial intelligence, 'AI: A Modern Approach' [2], states:
Omohundro (2008) hypothesizes that even an innocuous chess program could pose a risk to society. Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal. The moral is that even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards.
In this regard I would like to draw your attention to the Singularity Institute for Artificial Intelligence (SIAI) [3] and its mission to solve the problem of Friendly AI [4]. One example of the SIAI's research interests is a reflective decision theory [5] of self-modifying decision systems. The SIAI believes that "it is one of the many fundamental open problems required to build a recursively self-improving [6] Artificial Intelligence with a stable motivational system." [7]
With this in mind, I would like to ask you the following questions:
- Do you agree that risks from artificial intelligence have to be taken very seriously?
- Is it important to raise awareness of those risks within the artificial intelligence community?
- Should we figure out how to make AI provably friendly (non-dangerous [9]) before attempting to solve artificial general intelligence?
- How do risks from AI compare to other existential risks, e.g. advanced nanotechnology?
- What probability do you assign to the possibility of us being wiped out by badly done AI?
- What probability do you assign to the possibility of an intelligence explosion [10]?
- What probability do you assign to the possibility that a human-level, or even sub-human-level, AGI could self-modify its way up to massive superhuman intelligence within a matter of hours or days?
- ...
Further, I would like to ask your permission to publish and discuss your answers on LessWrong.com [8], in order to gauge public and academic awareness and perception of risks from AI, and how effectively those risks are communicated. This is entirely optional: I will respect your decision under any circumstances and will keep your opinion private if you wish. Likewise, instead of, or in addition to, replying to this email, I would be pleased if you addressed the above questions on your homepage, your personal blog, or elsewhere. You have my permission to publish my name and this email, in part or in full.
Full disclosure:
I am not associated with the SIAI or any organisation concerned with artificial intelligence research, nor do I hold a formal academic position. Should you permit me to publish your answers, they will under no circumstances be used to cast you or your interests in a damning light, but will be presented neutrally as the personal opinion of an expert.
References:
[1] "Reducing long-term catastrophic risks from artificial intelligence" http://singinst.org/riskintro/index.html
[2] "AI: A Modern Approach", Chapter 26, section 26.3, (6) "The Success of AI might mean the end of the human race." http://aima.cs.berkeley.edu/
[3] "Singularity Institute for Artificial Intelligence" http://singinst.org/
[4] "Artificial Intelligence as a Positive and Negative Factor in Global Risk." http://yudkowsky.net/singularity/ai-risk
[5] Yudkowsky, Eliezer, "Timeless Decision Theory" http://singinst.org/upload/TDT-v01o.pdf
[6] "Recursive Self-Improvement" http://lesswrong.com/lw/we/recursive_selfimprovement/
[7] "An interview with Eliezer Yudkowsky", parts 1, 2 and 3
[8] "A community blog devoted to refining the art of human rationality." http://lesswrong.com/
[9] http://wiki.lesswrong.com/wiki/Paperclip_maximizer
[10] http://wiki.lesswrong.com/wiki/Intelligence_explosion
Yours sincerely,
NAME
ADDRESS
Revised Version
Dear Professor Minsky,
I am currently trying to learn more about risks from artificial intelligence. Consequently, I am curious about your opinion as a noted author and cognitive scientist in the field of artificial intelligence.
I would like to ask you the following questions:
- What probability do you assign to the possibility of us being wiped out by badly done AI?
- What probability do you assign to the possibility that a human-level AI, or even a sub-human-level AI, could self-modify its way up to massive superhuman intelligence within a matter of hours or days?
- Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous) before attempting to solve artificial general intelligence?
- What is the current level of awareness of possible risks from AI within the artificial intelligence community, relative to the ideal level?
- How do risks from AI compare to other existential risks, e.g. advanced nanotechnology?
Further, I would like to ask your permission to publish and discuss your answers, in order to gauge academic awareness and perception of risks from AI. I would also be pleased if, instead of or in addition to replying to this email, you addressed the above questions on your homepage, your personal blog, or elsewhere.
You have my permission to publish my name and this email, in part or in full.
References:
- Reducing long-term catastrophic risks from artificial intelligence: http://singinst.org/riskintro/index.html
- Artificial Intelligence as a Positive and Negative Factor in Global Risk: http://yudkowsky.net/singularity/ai-risk
- A community blog devoted to refining the art of human rationality: http://lesswrong.com/
Please let me know if you are interested in more material related to my questions.
Yours sincerely,
NAME
ADDRESS
Second Revision
Dear Professor Minsky,
I am currently trying to learn more about possible risks from artificial intelligence. Consequently, I am curious about your opinion as a noted author and cognitive scientist in the field of artificial intelligence.
I would like to ask you the following questions:
- What probability do you assign to the possibility of us being wiped out by badly done AI?
- What probability do you assign to the possibility that a human-level AI, or even a sub-human-level AI, could self-modify its way up to massive superhuman intelligence within a matter of hours or days?
- Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous) before attempting to solve artificial general intelligence?
- What is the current level of awareness of possible risks from AI within the artificial intelligence community, relative to the ideal level?
- How do risks from AI compare to other existential risks, e.g. advanced nanotechnology?
Furthermore, I would like to ask your permission to publish and discuss your answers, in order to gauge academic awareness and perception of risks from AI.
Please let me know if you are interested in third-party material that expands on various aspects of my questions.
Yours sincerely,
NAME
ADDRESS
I suggest you also ask these questions of everybody reading this site. In a separate thread, of course.