P(human-level AI by year X | no wars ∧ no natural disasters ∧ beneficial political and economic development):
10% - 2050
50% - 2150
80% - 2300
My analysis involves units of "fundamental innovation" (FI). A unit of fundamental innovation is a discovery or advance comparable to information theory, Pearlian causality, or VC theory. Using this concept, we can estimate the time until AI by 1) estimating the required number of FI units and 2) estimating the rate at which they arrive. I think FIs arrive at a rate of about one per 25 years, and if 3-7 FIs are required, this produces an estimate of 2050-2150. I also think that after 2150 the rate of FI appearance will slow, to perhaps one per 50 years, so 2300 corresponds to 10 FIs.
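The arithmetic behind these three figures can be reconstructed in a few lines. The sketch below assumes a baseline year of 1975 for counting FIs; that baseline is my inference (it makes all three stated dates mutually consistent), not something stated above, and the function name is my own.

```python
# Toy reconstruction of the "fundamental innovation" (FI) estimate.
# Assumption (inferred, not stated in the text): FI counting starts at a
# baseline year of 1975. FIs arrive one per 25 years until 2150, and one
# per 50 years thereafter, as the paragraph above says.

def arrival_year(fi_units, baseline=1975, slowdown=2150):
    """Year by which `fi_units` fundamental innovations have accumulated."""
    year = baseline
    for _ in range(fi_units):
        year += 25 if year < slowdown else 50
    return year

print(arrival_year(3))   # 2050  (10% scenario)
print(arrival_year(7))   # 2150  (50% scenario)
print(arrival_year(10))  # 2300  (80% scenario)
```

With this baseline the 3-FI, 7-FI, and 10-FI cases land exactly on 2050, 2150, and 2300.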
P(human extinction | badly done AI) = 40%
I don't understand the other question well enough to answer it meaningfully. I think it is highly unlikely that an uFAI will be actively malicious.
P(superhuman intelligence within hours | human-level AI on supercomputer with Internet connection) = 0.01%
P(... within days | ...) = 0.1%
P(... within years | ...) = 3%
I have low estimates for these contingencies because I don't believe the equation capability = intelligence × computing power. Human capability rests on many other components, such as culture, vision, and dexterous hands. I'm also not sure the concept of "human-level intelligence" is well-defined.
How much money does the SIAI currently (this year) require to be instrumental in maximizing your personal long-term goals (e.g. to survive the Singularity by solving friendly AI): less, no more, a little more, much more, or vastly more?
I think the phrasing of the question is odd. I have donated a small amount to SIAI and will probably donate more in the future, especially if they come up with a more concrete action plan. I buy the basic SIAI argument (even if the probability of success is low, there is enough at stake to make the question worthwhile), but more importantly, I think there is a good chance that SIAI will come up with something cool, even if it's not an FAI design. I doubt SIAI could effectively use vastly more money than it currently has.
What existential risk is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?
My personal goals are much more vulnerable to catastrophic risks such as nuclear war or economic collapse. I am perhaps idiosyncratic among LWers in that it is hard for me to worry much more about existential risk than catastrophic risk - that is to say, if N is the population of the world, I am only about 20x more concerned about a risk that might kill N than I am about a risk that might kill N/10.
Can you think of any milestone such that if it were ever reached you would expect human-level machine intelligence to be developed within five years thereafter?
A computer program that is not explicitly designed to play chess defeats a human chess master.
Why should innovation proceed at a constant rate? As far as I can tell, the number of people thinking seriously about difficult technical problems is increasing exponentially. Accordingly, it looks to me like most important theoretical milestones occurred recently in human history, and I would expect them to be more and more tightly packed.
I don't know how fast machine learning / AI research output actually increases, but my first guess would be doubling every 15 years or so, since this seems to be the generic rate at which human output has doubled post-i...
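The doubling-time guess above can be turned into a quick illustration of why exponential growth packs milestones toward the present. The 15-year doubling time is the figure from the paragraph above; the year range, normalization, and the output-proportional milestone model are arbitrary choices of mine.

```python
# If research output doubles every 15 years, recent years dominate the
# cumulative total, so under an output-proportional model of milestones
# most milestones should cluster near the present.

def annual_output(year, base_year=2010, doubling=15.0):
    """Output per year, normalized to 1.0 at base_year."""
    return 2 ** ((year - base_year) / doubling)

# Fraction of all output since 1900 that was produced in the last 15 years:
total = sum(annual_output(y) for y in range(1900, 2011))
recent = sum(annual_output(y) for y in range(1996, 2011))
print(round(recent / total, 2))  # → 0.5: about half of a century's output
                                 # came in the final 15 years
```

Under these assumptions, roughly half of all output since 1900 arrives in the last doubling period, which is the intuition behind expecting milestones to be "more and more tightly packed".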
I am emailing experts in order to raise awareness of risks from AI and to estimate how those risks are perceived in academia.
Below you will find some thoughts on the topic by Jürgen Schmidhuber, a computer scientist and AI researcher who wants to build an optimal scientist and then retire.
The Interview:
Q: What probability do you assign to the possibility of us being wiped out by badly done AI?
Jürgen Schmidhuber: Low for the next few months.
Q: What probability do you assign to the possibility of a human level AI, respectively sub-human level AI, to self-modify its way up to massive superhuman intelligence within a matter of hours or days?
Jürgen Schmidhuber: High for the next few decades, mostly because some of our own work seems to be almost there:
Q: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?
Jürgen Schmidhuber: From a paper of mine:
All attempts at making sure there will be only provably friendly AIs seem doomed. Once somebody posts the recipe for practically feasible self-improving Goedel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. Which values are "good"? The survivors will define this in hindsight, since only survivors promote their values.
Q: What is the current level of awareness of possible risks from AI within the artificial intelligence community, relative to the ideal level?
Jürgen Schmidhuber: Some are interested in this, but most don't think it's relevant right now.
Q: How do risks from AI compare to other existential risks, e.g. advanced nanotechnology?
Jürgen Schmidhuber: I guess AI risks are less predictable.
(In his response to my questions he also added the following.)
Jürgen Schmidhuber: Recursive Self-Improvement: The provably optimal way of doing this was published in 2003. From a recent survey paper:
The fully self-referential Goedel machine [1,2] already is a universal AI that is at least theoretically optimal in a certain sense. It may interact with some initially unknown, partially observable environment to maximize future expected utility or reward by solving arbitrary user-defined computational tasks. Its initial algorithm is not hardwired; it can completely rewrite itself without essential limits apart from the limits of computability, provided a proof searcher embedded within the initial algorithm can first prove that the rewrite is useful, according to the formalized utility function taking into account the limited computational resources. Self-rewrites may modify / improve the proof searcher itself, and can be shown to be globally optimal, relative to Goedel's well-known fundamental restrictions of provability.
To make sure the Goedel machine is at least asymptotically optimal even before the first self-rewrite, we may initialize it by Hutter's non-self-referential but asymptotically fastest algorithm for all well-defined problems HSEARCH [3], which uses a hardwired brute force proof searcher and (justifiably) ignores the costs of proof search.
Assuming discrete input/output domains X/Y, a formal problem specification f : X -> Y (say, a functional description of how integers are decomposed into their prime factors), and a particular x in X (say, an integer to be factorized), HSEARCH orders all proofs of an appropriate axiomatic system by size to find programs q that for all z in X provably compute f(z) within time bound t_q(z). Simultaneously it spends most of its time on executing the q with the best currently proven time bound t_q(x). Remarkably, HSEARCH is as fast as the fastest algorithm that provably computes f(z) for all z in X, save for a constant factor smaller than 1 + epsilon (arbitrary real-valued epsilon > 0) and an f-specific but x-independent additive constant.
Given some problem, the Goedel machine may decide to replace its HSEARCH initialization by a faster method suffering less from large constant overhead, but even if it doesn't, its performance won't be less than asymptotically optimal.
All of this implies that there already exists the blueprint of a Universal AI which will solve almost all problems almost as quickly as if it already knew the best (unknown) algorithm for solving them, because almost all imaginable problems are big enough to make the additive constant negligible. The only motivation for not quitting computer science research right now is that many real-world problems are so small and simple that the ominous constant slowdown (potentially relevant at least before the first Goedel machine self-rewrite) is not negligible. Nevertheless, the ongoing efforts at scaling universal AIs down to the rather few small problems are very much informed by the new millennium's theoretical insights mentioned above, and may soon yield practically feasible yet still general problem solvers for physical systems with highly restricted computational power, say, a few trillion instructions per second, roughly comparable to a human brain power.
[1] J. Schmidhuber. Goedel machines: Fully Self-Referential Optimal Universal Self-Improvers. In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, pp. 119-226, 2006.
[2] J. Schmidhuber. Ultimate cognition à la Goedel. Cognitive Computation, 1(2):177-193, 2009.
[3] M. Hutter. The fastest and shortest algorithm for all well-defined problems. International Journal of Foundations of Computer Science, 13(3):431-443, 2002. (On J. Schmidhuber's SNF grant 20-61847.)
[4] J. Schmidhuber. Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connection Science, 18(2):173-187, 2006.
[5] J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230-247, 2010.
A dozen earlier papers on (not yet theoretically optimal) recursive self-improvement since 1987 are here: http://www.idsia.ch/~juergen/metalearner.html
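HSEARCH itself is wildly impractical to run, but the scheduling idea underneath it, Levin-style universal search that gives a candidate program of description length L a 2^-L share of an exponentially growing time budget, can be sketched in toy form. The code below is my own illustration, not Hutter's or Schmidhuber's; it omits the proof-search component entirely and just shows the budget-sharing loop.

```python
# Toy Levin-style universal search: interleave candidate programs, giving a
# program of description length L a fraction 2**-L of each phase's budget,
# and double the budget until some candidate halts with a verified answer.
# HSEARCH layers proof search on top of this scheduling idea (omitted here).

def levin_search(candidates, verify, max_phase=40):
    """candidates: list of (length_bits, generator_factory). Each factory
    returns a generator that yields None while working and finally yields
    the candidate's answer."""
    gens = [(length, factory()) for length, factory in candidates]
    spent = [0] * len(gens)           # steps already executed per candidate
    results = [None] * len(gens)
    for phase in range(1, max_phase + 1):
        budget = 2 ** phase           # total steps available this phase
        for i, (length, gen) in enumerate(gens):
            allowed = int(budget * 2 ** -length)  # this candidate's share
            while spent[i] < allowed and results[i] is None:
                out = next(gen, None)
                spent[i] += 1
                if out is not None:
                    results[i] = out
            if results[i] is not None and verify(results[i]):
                return results[i]
    return None

# Example: find a nontrivial factor of 91. A "short slow" program
# trial-divides; a "long fast" program knows the answer immediately.
def slow_factor():
    for d in range(2, 92):
        if 91 % d == 0:
            yield d
            return
        yield None

def fast_factor():
    yield 7

answer = levin_search([(2, slow_factor), (10, fast_factor)],
                      verify=lambda d: 91 % d == 0)
print(answer)  # 7
```

The short program wins here because its 2^-2 budget share lets it finish trial division before the long program's 2^-10 share ever gets a step, which mirrors the bias toward short descriptions in universal search.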
Anonymous
At this point I would also like to give a short roundup. Most of the experts I wrote to haven't responded at all so far; a few did respond, but asked me not to publish their answers. Some of them are well-known even outside of their field of expertise, and respected even here on LW.
I will paraphrase some of the responses I got below:
Anonymous expert 01: I think the so-called Singularity is unlikely to come about in the foreseeable future. I already know about the SIAI, and I think that the people involved with it are well-meaning, thoughtful and highly intelligent. But I personally think that they are naïve as far as the nature of human intelligence goes. None of them seems to have a realistic picture of the nature of thinking.
Anonymous expert 02: My opinion is that some people hold much stronger opinions on this issue than justified by our current state of knowledge.
Anonymous expert 03: I believe that the biggest risk from AI is that at some point we will become so dependent on it that we lose our cognitive abilities. Today people are losing their ability to navigate with maps, thanks to GPS. But such a loss will be nothing compared to what we might lose by letting AI solve more important problems for us.
Anonymous expert 04: I think these are nontrivial questions and that risks from AI have to be taken seriously. But I also believe that many people have made scary-sounding but mostly unfounded speculations. In principle an AI could take over the world, but currently AI presents no threat. At some point it will become a more pressing issue; in the meantime, we are much more likely to destroy ourselves by other means.