Regarding D), it depends on why the risks are getting varying amounts of attention. Existential risks mainly get derivative attention as a result of more likely/near-term/electorally-salient/commonsense-morality-salient lesser forms. For instance, engineered diseases get countermeasure research because of the threat of non-extinction-level pathogens causing substantial casualties, not the less likely and more distant scenario of a species-killer. Anti-nuclear measures are driven more by the expected casualties from nuclear war than by the chance of a surprisingly powerful nuclear winter, etc. Climate change prevention is mostly justified in non-existential-risk terms, and benefits from a single clear observable mechanism already in progress that fits many existing schemas for environmentalism and dealing with pollutants.
The beginnings of a similar derivative effort are visible in the emerging "machine ethics" area, which has been energized by the development of Predator drones and the like, although it's noteworthy how little was done on AI risk in the early, heady days of AI, when researchers were relatively confident of near-term success.
Regarding A), I'll have more to say at ano...
I interpret this to be a statement of the type "You should believe SIAI's claims (1) and (2) because we're really smart."
No, it's a statement of the type "You should believe SIAI's claims (1) and (2) because we're really rational." Your mathematician may have been smart but not rational. I remember reading about the phenomenon of smart non-rational people, maybe here: http://www.magazine.utoronto.ca/feature/why-people-are-irrational-kurt-kleiner/
Anyway, your mathematician is a terrible example of an irrational person because he was...
EY argues: "... your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't."
and you respond by saying that there have been people smarter than Eliezer who have suffered rationality fails when working outside their domain? Isn't that kinda the point?
EY wasn't arguing "My IQ is so damn high that I just have to be right. Look at my ability to g...
I agree with the overall point here, but the "argument by authority" section is deeply flawed. In it, intelligence is consistently equated with rationality, and the section's whole point seems to depend on that equation. As demonstrated in works like What Intelligence Tests Miss, G and rationality have markedly different effects. I don't think Eliezer would claim to be smarter than Grothendieck or Gödel or Erdős, but he could claim with some justification to be saner than them.
This post seems misnamed. I thought you were going to discuss "other existential risks" like nuclear war, global pandemic, and environmental collapse, but mostly the discussion was about how to evaluate SIAI claims.
SIAI's narrow focus on things that "look like HAL" neglects the risks of entities that are formed of humans and computers (and other objects) interacting. These entities already exist, they're already beyond human intelligence, and they're already existential risks.
Indeed, Less Wrong and SIAI are two obvious examples of these entities, and it's not at all clear how to steer them to become Friendly. Increasing individual rationality will help, but we also need to do social engineering - checks and balances and incentives (not just financial, but social incentives such as attention and praise) - and groupware research (e.g. karma and moderation systems, expert aggregation).
I think this is an excellent question. I'm hoping it leads to more actual discussion of the possible timeline of GAI.
Here's my answer, important points first, and not quite as briefly as I'd hoped.
1) Even if uFAI isn't the biggest existential risk, the very low investment and interest in it might make it the best marginal value for an investment of time or money. As someone noted, having at least a few people thinking about the risk far in advance seems like a great strategy if the risk is unknown.
2) No one but SIAI is taking donations to mitigate the risk ...
With respect to point (E), in Astronomical Waste Bostrom writes:
a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
From this, if a near-existential disaster could cause a delay of, say, 10,000 years in reaching the stars, then a 10% reduction in the risk of such a disaster is worth the same as a 0.0001% reduction in existential risk.
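A quick back-of-the-envelope check of that conversion, as a minimal sketch using only the figures above (Bostrom's 10-million-year figure taken at face value, everything else the commenter's hypothetical numbers):

```python
# Bostrom's figure (Astronomical Waste): one percentage point of existential-risk
# reduction is worth a delay of over ten million years.
years_per_xrisk_point = 10_000_000  # years of delay per percentage point of x-risk

# The hypothetical above: a near-existential disaster that would delay us by
# 10,000 years, and an intervention that cuts the chance of that disaster by 10%.
delay_years = 10_000
risk_reduction = 0.10

expected_delay_averted = risk_reduction * delay_years              # 1,000 years
equivalent_xrisk_points = expected_delay_averted / years_per_xrisk_point

print(equivalent_xrisk_points)  # 0.0001, i.e. the 0.0001% figure in the comment
```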
It seems extremely unlikely that we'll have Venusian-style runaway global warming anytime in the next few thousand years, assuming no major geoengineering occurs. A major reason that happened on Venus is its lack of plate tectonics. Without that, there are serious limits. Earth could become much more inhospitable to humans, but it would be very difficult to get more than a 20 or 30 degree Fahrenheit increase. So humans would have to live near the poles, but it wouldn't be fatal.
A more serious long-term obstruction to going to the stars is that it isn't completely clear that, after a large-scale societal collapse, we would have the resources necessary to bootstrap back up to even current tech levels. Nick Bostrom has discussed this. Essentially, many of the resources we take for granted as necessary for developing a civilization (oil, coal, certain specific ores) have been consumed by civilization. We're already exhausting the easy-to-reach oil and have exhausted much of the easy-to-reach coal (we just don't notice it as much with coal because there's so much). A collapse back to Bronze Age tech, or even late Roman tech, might not have enough easy energy sources ...
On the issue of AI timelines:
A quantitative analysis of the sort you seek is really not possible for the specifics of future technological development. If we knew exactly what obstacles stood in the way, we'd be all but there. Hence the reliance instead on antipredictions and disjunctions, which leave a lot of uncertainty but can still point strongly in one direction.
My own reasoning behind an "AI in the next few decades" position is that, even if every other approach people have thought of and will think of bogs down, there's always the ability ...
Suppose pro-friendly-AI and anti-uncontrolled-AI advocacy and research are not at this point the most effective mitigation of x-risk. It doesn't follow that nothing at all should be done now.[1]
I would still want something like SIAI funded to some level (just like I would want a few competent people evaluating the utility of planning and preparing for other far-off high-leverage risks/opportunities).
Broadly, the question is: who should be funded, and for how much, to plan/act for our possible far-future benefit. Specifically: holding everything else const...
I definitely think that, alongside the introductory What is the Singularity? and Why work toward the Singularity? pages, SIAI should have a prominent page stating the basic case for donating to SIAI. Why work toward the Singularity? already explains why bringing about a positive Singularity would have a very high humanitarian impact, but it would probably be beneficial to make the additional case that SIAI's research program is likely to increase the probability of that outcome, and that donations at its current funding level have a high marginal expected uti...
Although there are an infinite number of existential risks which might cause human extinction, I still think that an AI with a utility function that conflicts with human existence is the one issue we should spend the most resources to fight. Why? First, an AI would be really useful, so you can be relatively sure that work on it will continue until the job is done. Other disasters like asteroid strikes, nuclear war, and massive pandemics are all possible, but at least there is no large economic and social incentive pushing us closer to them.
Second, we have already...
Maybe I'm alone on this, but just to speak for the silent majority here:
Existential risk isn't that big a deal. The chances for any of the human civilizational failure modes are slim to none. It's really not something we as a society should be spending any time on.
That's not to say SIAI is a poor cause to contribute to. I've talked to some insiders who have assured me that SIAI has serious plans, over the span of decades, to really ramp up our productive capabilities and put them to good use, not wasteful or destructive use. To butter, not guns. To de...
I'm surprised you bring up Mikhail Gromov as a counterexample to Eliezer, considering that Gromov's solution to existential risk, as presented in the quote above, can be paraphrased as: increase education so someone has a good idea on how to fix everything.
(Actual quote: "People must have ideas and they must prepare now. In two generations people must be educated. Teachers must be educated now, and then the teachers will educate a new generation. Then there will be sufficiently many people to face the difficulties. I am sure this will give a result."...
My own impression is that all existential risks are getting very little attention.
This is true, and indeed you refute (D) well with it. Although some particular risks, like Cold War-era massive nuclear conflict (with or without the sexed-up nuclear winter scenarios), global warming, and medicine-resistant pandemics, have received orders of magnitude more serious consideration and media amplification than things like nano and AI risks.
WRT point D, it should be possible to come up with some sort of formula that gives the relative utility according to maxipok of working on various risks. Something that takes into account
These I think are all that are needed when considering donations. When considering time rather than money, you also need to take into account:
But I see no reason for assigning high probability to the notion that a runaway superhuman intelligence will be developed within such a short timescale. In the bloggingheads diavlog Scott Aaronson challenges Eliezer on this point and Eliezer offers some throwaway remarks which I do not find compelling. As far as I know, neither Eliezer nor anybody else at SIAI has provided a detailed explanation for why we should expect runaway superhuman intelligence on such a short timescale.
I think this is a key point. While I think unFriendly AI could be a problem in ...
As I've commented elsewhere, any event which would permanently prevent humans from creating a transhuman paradise is properly conceived of as an existential risk on account of the astronomical waste which would result.
Is there no post somewhere on LW explaining why paradises are bad? A paradise must be all exploitation and no exploration; hence, it is static.
Being highly optimized requires being static.
I'm not sure why I should believe this, given that one of the properties we're presumably optimizing for is 'not being static'.
some LW posters are confident in both (1) and (2), some are confident in neither of (1) and (2) while others are confident in exactly one of (1) and (2)
Logically, this is tautological. I think you're saying that there don't seem to be many who are completely convinced that both (1) and (2) are untrue. I think that's right; both claims are somewhat plausible.
Curious: do people prefer "neither A nor B" or "neither of (A and B)"?
Nitpick: it's not quite tautological, as he asserts that at least one* person exists in each category. It is only a tautology that everyone fits into one of them, not that they're all non-empty.
*or two, depending on your interpretation of 'some'.
I don't think this is a nitpick - I think this explains why the statement is included in the original post in the first place - to point out that there is a wide variety of positions that LW readers hold on these statements.
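To make the distinction concrete, here is a tiny sketch (the readers and their positions are made up purely for illustration): that every reader falls into one of the four categories holds by construction, while the claim that every category is occupied depends on who actually holds which view.

```python
from itertools import product

# The four possible positions: (confident in claim (1)?, confident in claim (2)?).
categories = list(product([True, False], repeat=2))

# Hypothetical readers and their positions -- invented data, purely illustrative.
readers = {"A": (True, True), "B": (False, False), "C": (True, False)}

# Tautological part: every reader falls into exactly one of the four categories.
assert all(position in categories for position in readers.values())

# Non-tautological part: whether each category is occupied depends on the data.
occupied = set(readers.values())
print([c for c in categories if c not in occupied])  # here (False, True) is empty
```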
We clearly don't focus enough on near-term existential risks that we already know about:
Nuclear war
Global warming
Asteroid impact
Supervolcano eruption
Compared to these (which already exist right now and are relatively well-understood), worrying about grey goo and unfriendly AIs does seem a bit beside the point.
[Added 02/24/14: SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]
Related To: Should I believe what the SIAI claims?, Existential Risk and Public Relations
In his recent post titled Should I believe what the SIAI claims? XiXiDu wrote:
XiXiDu's post produced mixed reactions within the LW community. On one hand, some LW members (e.g. orthonormal) felt exasperated with XiXiDu because his post was poorly written, revealed him to be uninformed, and revealed that he has not internalized some of the basic principles of rationality. On the other hand, some LW members (e.g. HughRistik) have long wished that SIAI would attempt to substantiate some of its more controversial claims in detail and were gratified to see somebody call on SIAI to do so. These two categories are not mutually exclusive. I fall into both in some measure. In any case, I give XiXiDu considerable credit for raising such an important topic.
The present post is the first of several posts in which I will detail my thoughts on SIAI's claims.
One difficulty is that there's some ambiguity as to what SIAI's claims are. I encourage SIAI to make a more detailed public statement of their most fundamental claims. According to the SIAI website:
I interpret SIAI's key claims to be as follows:
(1) At the margin, the best way for an organization with SIAI's resources to prevent global existential catastrophe is to promote research on friendly Artificial Intelligence, work against unsafe Artificial Intelligence, and encourage rational thought.
(2) Donating to SIAI is the most cost-effective way for charitable donors to reduce existential risk.
I arrived at the belief that SIAI claims (1) by reading their mission statement and by reading SIAI research fellow Eliezer Yudkowsky's writings, in particular the ones listed under the Less Wrong wiki article titled Shut up and multiply. [Edit (09/09/10): The videos of Eliezer linked in a comment by XiXiDu give some evidence that SIAI claims (2). As Airedale says in her second to last paragraph here, Eliezer and SIAI are not synonymous entities. The question of whether SIAI regards Eliezer as an official representative of SIAI remains open]. I'm quite sure that (1) and (2) are in the rough ballpark of what SIAI claims, but encourage SIAI to publicly confirm or qualify each of (1) and (2) so that we can all have a more clear idea of what SIAI claims.
My impression is that some LW posters are confident in both (1) and (2), some are confident in neither of (1) and (2) while others are confident in exactly one of (1) and (2). For clarity, I think that it's sensible to discuss claims (1) and (2) separately. In the remainder of the present post, I'll discuss claim (1'), namely, claim (1) modulo the part about the importance of encouraging rational thought. I will address SIAI's emphasis on encouraging rational thought in a later post.
As I have stated repeatedly, unsafe AI is not the only existential risk. The Future of Humanity Institute has a page titled Global Catastrophic Risks which has a list of lectures given at a 2008 conference on a variety of potential global catastrophic risks. Note that a number of these global catastrophic risks are unrelated to future technologies. Any argument in favor of claim (1') must consist of a quantitative comparison of the effects of focusing on Artificial Intelligence and the effects of focusing on other existential risks. To my knowledge, SIAI has not provided a detailed quantitative analysis of the expected impact of AI research, a detailed quantitative analysis of working to avert other existential risks, or a comparison of the two. If SIAI has made such a quantitative analysis, I encourage them to make it public. At present, I believe that SIAI has not substantiated claim (1').
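As a sketch of the shape such a comparison might take, one could compare focus areas by expected existential-risk reduction per marginal dollar. Every number below is a placeholder I have invented for illustration; none is an estimate that SIAI or anyone else has defended.

```python
# Placeholder inputs: probability of existential catastrophe from each source, and
# the fractional reduction in that probability bought by $1M at the current margin.
# These numbers are invented for illustration only.
risks = {
    "unsafe AI":           (0.05, 1e-4),
    "engineered pandemic": (0.02, 5e-5),
    "nuclear war":         (0.01, 2e-5),
}

def marginal_value(p_catastrophe, reduction_per_million):
    """Expected reduction in total existential risk (percentage points) per $1M."""
    return p_catastrophe * reduction_per_million * 100

ranked = sorted(risks.items(), key=lambda kv: marginal_value(*kv[1]), reverse=True)
for name, params in ranked:
    print(f"{name}: {marginal_value(*params):.6f} percentage points per $1M")
```

Whether claim (1') holds turns on whether honest estimates of these inputs actually favor AI, which is exactly the analysis I am asking SIAI to publish.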
Remarks on arguments advanced in favor of focusing on AI
(A) Some people claim that there's a high probability that runaway superhuman artificial intelligence will be developed in the near future. For example, Eliezer has said that "it seems pretty obvious to me that some point in the not-too-distant future we're going to build an AI [...] it will be a superintelligence relative to us [...] in one to ten decades and probably on the lower side of that."
I believe that if Eliezer is correct about this assertion, claim (1') is true. But I see no reason for assigning high probability to the notion that a runaway superhuman intelligence will be developed within such a short timescale. In the bloggingheads diavlog Scott Aaronson challenges Eliezer on this point and Eliezer offers some throwaway remarks which I do not find compelling. As far as I know, neither Eliezer nor anybody else at SIAI has provided a detailed explanation for why we should expect runaway superhuman intelligence on such a short timescale. LW poster timtyler pointed me to a webpage where he works out his own estimate of the timescale. I will look at this document eventually, but do not expect to find it compelling, especially in light of Carl Shulman's remarks that the survey used suffers from selection bias. So at present, I do not find (A) a compelling reason to focus on the existential risk of AI.
(B) Some people have remarked that if we develop an FAI, the FAI will greatly reduce all other existential risks which humanity faces. For example, timtyler says
I agree with timtyler that it would be very desirable for us to have an FAI to solve our problems. If all else were equal, this would give special reason to favor focusing on AI over existential risks that are not related to Artificial Intelligence. But this factor by itself is not a compelling reason to focus on Artificial Intelligence. In particular, human-level AI may be so far off in the future that, if we want to survive, we have to address other existential risks right now without the aid of AI.
(C) An inverse of the view mentioned in (B) is the idea that if we're going to survive over the long haul, we must eventually build an FAI, so we might as well focus on FAI, since if we don't get FAI right, we're doomed anyway. This is an aspect of Vladimir_Nesov's position which emerges in the linked threads [1], [2]. I think that there's something to this idea. Of course, research on FAI may come at the opportunity cost of the chance to avert short-term preventable global catastrophic risks. My understanding is that at present Vladimir_Nesov believes that this cost is outweighed by the benefits. By way of contrast, at present I believe that the benefits are outweighed by the cost. See our discussions for details. Vladimir_Nesov's position is sophisticated and I respect it.
(D) Some people have said that existential risk due to advanced technologies is getting disproportionately little attention relative to other existential risks, so that at the margin one should focus on advanced technologies. For example, see Vladimir_Nesov's comment and ciphergoth's comment. I don't find this sort of remark compelling. My own impression is that all existential risks are getting very little attention. I see no reason for thinking that existential risk due to advanced technologies is getting less than its fair share of the attention being directed toward existential risk. As I said in response to ciphergoth:
(E) Some people have remarked that most issues raised as potential existential risks (e.g. nuclear war, resource shortage) seem very unlikely to kill everyone and so are not properly conceived of as existential risks. I don't find these sorts of remarks compelling. As I've commented elsewhere, any event which would permanently prevent humans from creating a transhuman paradise is properly conceived of as an existential risk on account of the astronomical waste which would result.
On argument by authority
When XiXiDu raised his questions, Eliezer initially responded by saying:
I interpret this to be a statement of the type "You should believe SIAI's claims (1) and (2) because we're really smart." There are two problems with such a statement. One is that there's no evidence that intelligence leads to correct views about how to ensure the survival of the human species. Alexander Grothendieck is one of the greatest mathematicians of the 20th century. Fields Medalist Rene Thom wrote:
Fields Medalist David Mumford said
In Mariana Cook's book titled Mathematicians: An Outer View of the Inner World, Fields Medalist and IAS professor Pierre Deligne wrote
(Emphasis my own.)
These comments should suffice to illustrate that Grothendieck's intellectual power was uncanny.
In a very interesting transcript titled Reminiscences of Grothendieck and his school, Grothendieck's former student Luc Illusie says:
I think that it's fair to say that Grothendieck's ideas about how to ensure the survival of the human species were greatly misguided. In the second portion of Allyn Jackson's excellent biography of Grothendieck one finds the passage
Just as Grothendieck's algebro-geometric achievements had no bearing on his ability to conceptualize a good plan to lower existential risk, so too does Eliezer's ability to interpret quantum mechanics have no bearing on his ability to conceptualize a good plan to lower existential risk.
The other problem with Eliezer's appeal to his intellectual prowess is that Eliezer's demonstrated intellectual prowess pales in comparison with that of other people who are interested in existential risk. I wholeheartedly agree with rwallace's comment:
By the time Grothendieck was Eliezer's age he had already established himself as a leading authority in functional analysis and proven his vast generalization of the Riemann-Roch theorem. Eliezer's intellectual achievements are meager by comparison.
A more contemporary example of a powerful intellect interested in existential risk is Fields Medalist and Abel Prize winner Mikhail Gromov. On the GiveWell research blog there's an excerpt from an interview with Gromov which caught my attention:
I've personally studied some of Gromov's work and find it much more impressive than the portions of Eliezer's work which I've studied. I find Gromov's remarks on existential risk more compelling than Eliezer's remarks on existential risk. Neither Gromov nor Eliezer has substantiated his claims, so by default I take Gromov more seriously than Eliezer. But as I said above, this is really beside the point. The point is that there's a history of brilliant people being very mistaken in their views about things outside of their areas of expertise, and that discussion of existential risk should be based on evidence rather than on argument by authority. I agree with a remark which Holden Karnofsky made in response to my GiveWell research mailing list post
I encourage Less Wrong readers who have not done so to carefully compare the marginal impact that one can hope to have on existential risk by focusing on AI with the marginal impact that one can hope to have by focusing on a specific existential risk unrelated to AI. When one does so, one should beware of confirmation bias. If one came to believe that focusing on AI is a good idea without careful consideration of alternatives, one should assume oneself to be irrationally biased in favor of focusing on AI.
Bottom line
There's a huge amount of uncertainty as to which existential risks are most likely to strike and what we can hope to do about them. At present reasonable people can hold various views on which existential risks are worthy of the most attention. I personally think that the best way to face the present situation is to gather more information about all existential risks rather than focusing on one particular existential risk, but I might be totally wrong. Similarly, people who believe that AI deserves top priority might be totally wrong. At present there's not enough information available to determine which existential risks deserve top priority with any degree of confidence.
SIAI can credibly claim (1'), but it cannot credibly claim (1') with confidence. Because non-credible claims about existential risk drive people away from thinking about existential risk, SIAI should take special care to avoid the appearance of undue confidence in claim (1').