I think you are letting your own skepticism leak into your research here. The usual FOOM scenario starts from an AI that is already superhuman (say 2x or 10x human IQ)...
No, I simply haven't been able to read the AI FOOM debate, or any of the documents that discuss FOOM, so far. I drew that inference from my perception of people's opinions about Ben Goertzel's approach of building a toddler AGI to learn more about the nature of intelligence by gathering empirical evidence. As he writes himself:
Look -- what will prevent the first human-level AGIs from self-modifying in a way that will massively increase their intelligence is a very simple thing: they won't be smart enough to do that!
Every actual AGI researcher I know can see that. The only people I know who think that an early-stage, toddler-level AGI has a meaningful chance of somehow self-modifying its way up to massive superhuman intelligence -- are people associated with SIAI.
But I have never heard any remotely convincing arguments in favor of this odd, outlier view of the easiness of hard takeoff!!!
Additionally, I posted the questions here, asking for possible corrections and improvements. I do not deliberately sneak my personal ignorance or interests into arguments or writing made in the name of third-party beliefs. Just see my post 'References & Resources for LessWrong'; I would never attempt to use such a project to propagate my personal opinion.
I want to raise awareness of risks from AI, and of the challenges involved in mitigating those risks, by writing to experts and asking them questions. The email below is a template. Please help me improve it and devise more or better questions.
Dear Mr Minsky,
I am currently trying to learn more about risks from artificial intelligence [1]. In the course of this undertaking I plan to ask various experts and influencers for their opinions. Consequently, I am curious about your opinion as a noted author and cognitive scientist in the field of artificial intelligence. First, however, I want to apologize if I am intruding on your privacy; it is not my intention to offend you or to waste your time. If that is the case, please simply ignore the rest of this email.
One of the leading textbooks in artificial intelligence, 'AI: A Modern Approach' [2], states:
In this regard I would like to draw your attention to the Singularity Institute for Artificial Intelligence (SIAI) [3] and its mission to solve the problem of Friendly AI [4]. One example of the research interests of the SIAI is a reflective decision theory [5] of self-modifying decision systems. The SIAI believes that "it is one of the many fundamental open problems required to build a recursively self-improving [6] Artificial Intelligence with a stable motivational system." [7]
With this in mind, I would like to ask you the following questions:
Further, I would like to ask your permission to publish and discuss your answers on LessWrong.com [8], in order to gauge the public and academic awareness and perception of risks from AI, and how effectively those risks are communicated. This is entirely optional to my curiosity and general interest in your answers; I will respect your decision under any circumstances and keep your opinion private if you wish. Likewise, instead of or in addition to replying to this email, I would be pleased by a treatment of the above questions on your homepage, your personal blog, or elsewhere. You have my permission to publish my name and this email, in part or in full.
Full disclosure:
I am not associated with the SIAI or any organisation concerned with research on artificial intelligence, nor do I have any formal academic affiliation. Should you permit publication, your answers will under no circumstances be used in an attempt to cast you or your interests in a damning light; they will be presented neutrally as the personal opinion of an expert.
References:
[1] "Reducing long-term catastrophic risks from artificial intelligence" http://singinst.org/riskintro/index.html
[2] "AI: A Modern Approach", Chapter 26, section 26.3, (6) "The Success of AI might mean the end of the human race." http://aima.cs.berkeley.edu/
[3] "Singularity Institute for Artificial Intelligence" http://singinst.org/
[4] "Artificial Intelligence as a Positive and Negative Factor in Global Risk." http://yudkowsky.net/singularity/ai-risk
[5] Yudkowsky, Eliezer, "Timeless Decision Theory" http://singinst.org/upload/TDT-v01o.pdf
[6] "Recursive Self-Improvement" http://lesswrong.com/lw/we/recursive_selfimprovement/
[7] "An interview with Eliezer Yudkowsky", parts 1, 2 and 3
[8] "A community blog devoted to refining the art of human rationality." http://lesswrong.com/
[9] "Paperclip maximizer" http://wiki.lesswrong.com/wiki/Paperclip_maximizer
[10] "Intelligence explosion" http://wiki.lesswrong.com/wiki/Intelligence_explosion
Yours sincerely,
NAME
ADDRESS
Revised Version
Dear Professor Minsky,
I am currently trying to learn more about risks from artificial intelligence. Consequently, I am curious about your opinion as a noted author and cognitive scientist in the field of artificial intelligence.
I would like to ask you the following questions:
Further, I would like to ask your permission to publish and discuss your answers, in order to gauge the academic awareness and perception of risks from AI. Instead of, or in addition to, replying to this email, I would also be pleased by a treatment of the above questions on your homepage, your personal blog, or elsewhere.
You have my permission to publish my name and this email, in part or in full.
References:
Please let me know if you are interested in more material related to my questions.
Yours sincerely,
NAME
ADDRESS
Second Revision
Dear Professor Minsky,
I am currently trying to learn more about possible risks from artificial intelligence. Consequently, I am curious about your opinion as a noted author and cognitive scientist in the field of artificial intelligence.
I would like to ask you the following questions:
Furthermore, I would like to ask your permission to publish and discuss your answers, in order to gauge the academic awareness and perception of risks from AI.
Please let me know if you are interested in third-party material that expands on various aspects of my questions.
Yours sincerely,
NAME
ADDRESS