It doesn't know about any threat.
It is a general intelligence that we are considering. It can deduce the threat better than we can.
If you can specify all that, how is it still likely that it comes up, on its own, with the idea that optimization might mean consuming the universe, when all you told it to do was optimize its software running on a certain supercomputer?
Because it is a general intelligence. It is smart. It is not limited to getting its ideas from you; it can come up with its own. And if the AI has been given the task of optimizing its software for performance on a certain computer, then it will do whatever it can to accomplish that. This means harnessing external resources to do research on computation theory.
You implicitly assume that it has something equivalent to fear, that it perceives threats.
No, he doesn't. He assumes only that it is a general intelligence with an objective. Potentially negative consequences are just part of the possible universes that it models, like everything else.
I'm not sure what can be done to make this clear:
SELF IMPROVEMENT IS AN INSTRUMENTAL GOAL THAT IS USEFUL FOR ACHIEVING MOST TERMINAL VALUES.
If I tell a human to optimize, he might muse about turning the planets into computronium; but if I tell an AI to optimize, it doesn't know what that means until I tell it what it means, and even then it won't care, because it isn't equipped with all the evolutionary baggage that humans are equipped with.
You have this approximately backwards. A human knows that if you tell her to create 10 paperclips every day, you don't mean that she should take over the world so she can be sure that nobody will ever interfere with her steady production of paperclips. The AI doesn't know that.
Major update here.
Related to: Should I believe what the SIAI claims?
Reply to: Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)
What I ask for:
I want the SIAI, or someone who is convinced of the Scary Idea1, to state concisely and mathematically (with extensive references if necessary) the decision procedure that led them to make the development of friendly artificial intelligence their top priority. I want them to state the numbers of their subjective probability distributions2 and to exemplify their chain of reasoning: how, by way of sober calculation, they came up with those numbers and not others.
The paper should also account for the following uncertainties:
Further, I would like the paper to include a formal and systematic summary of what the SIAI expects researchers who work on artificial general intelligence to do, and why they should do it. I would like to see a clear logical argument for why people working on artificial general intelligence should listen to what the SIAI has to say.
Examples:
Here are two examples of what I'm looking for:
The first example is Robin Hanson demonstrating his estimation of the simulation argument. The second example is Tyler Cowen and Alex Tabarrok presenting the reasons for their evaluation of the importance of asteroid deflection.
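To make concrete the kind of explicit calculation being requested, here is a toy sketch in the style of the asteroid-deflection cost-benefit example: a chain of conditional probabilities leading to an overall risk number. Every figure below is an invented placeholder for illustration; none is an actual estimate by the SIAI or anyone else.

```python
# Hypothetical sketch of the requested style of reasoning.
# All probabilities are made-up placeholders, not anyone's real estimates.

p_agi_this_century = 0.3       # assumed P(advanced AGI is built this century)
p_unfriendly_given_agi = 0.5   # assumed P(AGI is unfriendly | AGI is built)
p_doom_given_unfriendly = 0.4  # assumed P(human extinction | unfriendly AGI)

# Chaining the conditional probabilities yields the overall estimate.
p_doom = p_agi_this_century * p_unfriendly_given_agi * p_doom_given_unfriendly

print(f"P(extinction via unfriendly AGI) = {p_doom:.2f}")  # prints 0.06
```

The point is not the particular numbers but the transparency: with the variables and formula stated, a reader can substitute her own estimates and reassess the conclusion.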
Reasons:
I'm wary of using inferences derived from reasonable but unproven hypotheses as foundations for further speculative thinking and calls for action. Although the SIAI does a good job of stating reasons to justify its existence and its requests for monetary support, it neither substantiates its initial premises to an extent that would let an outsider draw conclusions about the probability of the associated risks, nor clarifies its position regarding contemporary research in a concise and systematic way. Nevertheless, such estimations are given, for example that there is a high likelihood of humanity's demise if we develop superhuman artificial general intelligence without first defining, mathematically, how to prove the benevolence of such an intelligence. But those estimations are not derived in the open; no decision procedure is provided for arriving at the given numbers. One cannot reassess the estimations without the necessary variables and formulas. This, I believe, is unsatisfactory: it lacks transparency and a foundational, reproducible corroboration of one's first principles. This is not to say that it is wrong to state probability estimates and update them given new evidence, but that, although these ideas can very well serve as an urge to caution, they are not compelling without further substantiation.
1. If anyone actively trying to build advanced AGI succeeds, we're highly likely to cause an involuntary end to the human race.
2. "Stop taking the numbers so damn seriously, and think in terms of subjective probability distributions [...]" — Michael Anissimov (existential.ieet.org mailing list, 2010-07-11)
3. Could being overcautious itself be an existential risk that might significantly outweigh the risk(s) posed by the subject of caution? Suppose that most civilizations err on the side of caution. This might cause them either to evolve much more slowly, so that the chance of a fatal natural disaster occurring before sufficient technology is developed to survive it rises to 100%, or to stop evolving altogether, because they cannot prove that something is 100% safe before trying it and thus never take the necessary steps to become less vulnerable to naturally existing existential risks. Further reading: Why safety is not safe
4. If one pulled a random mind from the space of all possible minds, the odds of it being friendly to humans (as opposed to, e.g., utterly ignoring us, and being willing to repurpose our molecules for its own ends) are very low.
5. Loss or impairment of the ability to make decisions or act independently.
6. The Fermi paradox provides the only data we can analyze that amounts to empirical criticism of concepts like the paperclip maximizer, and of risks from superhuman AIs with non-human values in general, without working directly on AGI to test those hypotheses ourselves. If you accept the premise that life is not unique and special, then even one other technological civilization in the observable universe should be sufficient to leave potentially observable traces of technological tinkering. Given the absence of any signs of intelligence out there, especially of paperclip maximizers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.