[Added 02/24/14: SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]
Related To: Should I believe what the SIAI claims?, Existential Risk and Public Relations
In his recent post titled Should I believe what the SIAI claims? XiXiDu wrote:
I'm already unable to judge what the likelihood of something like the existential risk of exponential evolving superhuman AI is compared to us living in a simulated reality. Even if you tell me, am I to believe the data you base those estimations on?
And this is what I'm having trouble to accept, let alone look through. There seems to be a highly complicated framework of estimations to support and reinforce each other. I'm not sure how you call this in English, but in German I'd call this a castle in the air.
[...]
I can however follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credence. That is, are the conclusions justified? Is the coherent framework build around the SIAI based on firm ground?
[...]
I'm concerned that although consistently so, the LW community is updating on fictional evidence. This post is meant to inquire the basic principles, the foundation of the sound argumentation's and the basic premises that they are based upon.
XiXiDu's post produced mixed reactions within the LW community. On one hand, some LW members (e.g. orthonormal) felt exasperated with XiXiDu because his post was poorly written, revealed him to be uninformed, and revealed that he has not internalized some of the basic principles of rationality. On the other hand, some LW members (e.g. HughRistik) have long wished that SIAI would attempt to substantiate some of its more controversial claims in detail and were gratified to see somebody call on SIAI to do so. These two categories are not mutually exclusive. I fall into both in some measure. In any case, I give XiXiDu considerable credit for raising such an important topic.
The present post is the first of several posts in which I will detail my thoughts on SIAI's claims.
One difficulty is that there's some ambiguity as to what SIAI's claims are. I encourage SIAI to make a more detailed public statement of their most fundamental claims. According to the SIAI website:
In the coming decades, humanity will likely create a powerful artificial intelligence. The Singularity Institute for Artificial Intelligence (SIAI) exists to confront this urgent challenge, both the opportunity and the risk. Our objectives as an organization are:
- To ensure the development of friendly Artificial Intelligence, for the benefit of all mankind;
- To prevent unsafe Artificial Intelligence from causing harm;
- To encourage rational thought about our future as a species.
I interpret SIAI's key claims to be as follows:
(1) At the margin, the best way for an organization with SIAI's resources to prevent global existential catastrophe is to promote research on friendly Artificial Intelligence, work against unsafe Artificial Intelligence, and encourage rational thought.
(2) Donating to SIAI is the most cost-effective way for charitable donors to reduce existential risk.
I arrived at the belief that SIAI makes claim (1) by reading their mission statement and by reading SIAI research fellow Eliezer Yudkowsky's writings, in particular the ones listed under the Less Wrong wiki article titled Shut up and multiply. [Edit (09/09/10): The videos of Eliezer linked in a comment by XiXiDu give some evidence that SIAI makes claim (2). As Airedale says in her second-to-last paragraph here, Eliezer and SIAI are not synonymous entities. The question of whether SIAI regards Eliezer as an official representative of SIAI remains open.] I'm quite sure that (1) and (2) are in the rough ballpark of what SIAI claims, but I encourage SIAI to publicly confirm or qualify each of (1) and (2) so that we can all have a clearer idea of what SIAI claims.
My impression is that some LW posters are confident in both (1) and (2), some are confident in neither, and others are confident in exactly one of the two. For clarity, I think that it's sensible to discuss claims (1) and (2) separately. In the remainder of the present post, I'll discuss claim (1'), namely, claim (1) modulo the part about the importance of encouraging rational thought. I will address SIAI's emphasis on encouraging rational thought in a later post.
As I have stated repeatedly, unsafe AI is not the only existential risk. The Future of Humanity Institute has a page titled Global Catastrophic Risks which lists the lectures given at a 2008 conference on a variety of potential global catastrophic risks. Note that a number of these global catastrophic risks are unrelated to future technologies. Any argument in favor of claim (1') must include a quantitative comparison of the expected effects of focusing on Artificial Intelligence with the expected effects of focusing on other existential risks. To my knowledge, SIAI has not provided a detailed quantitative analysis of the expected impact of AI research, a detailed quantitative analysis of the expected impact of working to avert other existential risks, and a comparison of the two. If SIAI has made such a quantitative analysis, I encourage them to make it public. At present, I believe that SIAI has not substantiated claim (1').
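To make concrete the shape of the comparison I have in mind, here is a rough sketch. This is my own illustrative formulation, not anything SIAI has published; the symbol u and the example interventions are hypothetical placeholders.

```latex
% Illustrative sketch only: u ranges over candidate interventions
% (e.g. FAI research, biosecurity, asteroid detection); none of these
% quantities currently have agreed-upon values.
\[
  \mathrm{CE}(u) \;=\;
  \frac{\Delta P(\text{existential catastrophe averted by } u)}
       {\Delta(\text{dollars directed to } u)}
\]
```

On this framing, claim (1') amounts to the assertion that the ratio is highest for AI-focused work; my complaint is that no public estimate of the numerator exists either for AI work or for the alternatives, so the comparison has not actually been made.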
Remarks on arguments advanced in favor of focusing on AI
(A) Some people claim that there's a high probability that runaway superhuman artificial intelligence will be developed in the near future. For example, Eliezer has said that "it seems pretty obvious to me that some point in the not-too-distant future we're going to build an AI [...] it will be a superintelligence relative to us [...] in one to ten decades and probably on the lower side of that."
I believe that if Eliezer is correct about this assertion, claim (1') is true. But I see no reason to assign high probability to the notion that a runaway superhuman intelligence will be developed within such a short timescale. In the bloggingheads diavlog, Scott Aaronson challenges Eliezer on this point, and Eliezer offers some throwaway remarks which I do not find compelling. As far as I know, neither Eliezer nor anybody else at SIAI has provided a detailed explanation for why we should expect runaway superhuman intelligence on such a short timescale. LW poster timtyler pointed me to a webpage where he works out his own estimate of the timescale. I will look at this document eventually, but do not expect to find it compelling, especially in light of Carl Shulman's remarks about selection bias in the survey used. So at present, I do not find (A) a compelling reason to focus on the existential risk of AI.
(B) Some people have remarked that if we develop an FAI, the FAI will greatly reduce all other existential risks which humanity faces. For example, timtyler says:
I figure a pretty important thing is to get out of the current vulnerable position as soon as possible. To do that, a major thing we will need is intelligent machines - and so we should allocate resources to their development.
I agree with timtyler that it would be very desirable for us to have an FAI to solve our problems. If all else were equal, this would give special reason to favor a focus on AI over existential risks that are not related to Artificial Intelligence. But this factor by itself is not a compelling reason to focus on Artificial Intelligence. In particular, human-level AI may be so far off in the future that if we want to survive, we have to address other existential risks right now without the aid of AI.
(C) An inverse of the view mentioned in (B) is the idea that if we're going to survive over the long haul, we must eventually build an FAI, so we might as well focus on FAI, since if we don't get FAI right, we're doomed anyway. This is an aspect of Vladimir_Nesov's position which emerges in the linked threads [1], [2]. I think that there's something to this idea. Of course, research on FAI may come at the opportunity cost of averting preventable near-term global catastrophic risks. My understanding is that at present Vladimir_Nesov believes that this cost is outweighed by the benefits. By way of contrast, at present I believe that the benefits are outweighed by the cost. See our discussions for details. Vladimir_Nesov's position is sophisticated and I respect it.
(D) Some people have said that existential risk due to advanced technologies is getting disproportionately little attention relative to other existential risks, so that at the margin one should focus on advanced technologies. For example, see Vladimir_Nesov's comment and ciphergoth's comment. I don't find this sort of remark compelling. My own impression is that all existential risks are getting very little attention. I see no reason to think that existential risk due to advanced technologies is getting less than its fair share of the attention being directed toward existential risk. As I said in response to ciphergoth:
Are you sure that the marginal contribution that you can make to the issue which is getting the least attention is the greatest? The issues getting the least attention may be getting little attention precisely because people know that there's nothing that can be done about them.
(E) Some people have remarked that most issues raised as potential existential risks (e.g. nuclear war, resource shortage) seem very unlikely to kill everyone and so are not properly conceived of as existential risks. I don't find these sorts of remarks compelling. As I've commented elsewhere, any event which would permanently prevent humans from creating a transhuman paradise is properly conceived of as an existential risk on account of the astronomical waste which would result.
On argument by authority
When XiXiDu raised his questions, Eliezer initially responded by saying:
If you haven't read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't.
I interpret this to be a statement of the type "You should believe SIAI's claims (1) and (2) because we're really smart." There are two problems with such a statement. One is that there's no evidence that intelligence leads to correct views about how to ensure the survival of the human species. Consider Alexander Grothendieck, one of the greatest mathematicians of the 20th century. Fields Medalist René Thom wrote:
Relations with my colleague Grothendieck were less agreeable for me. His technical superiority was crushing. His seminar attracted the whole of Parisian mathematics, whereas I had nothing new to offer.
Fields Medalist David Mumford said:
[Grothendieck] had more than anybody else I’ve ever met this ability to make an absolutely startling leap into something an order of magnitude more abstract…. He would always look for some way of formulating a problem, stripping apparently everything away from it, so you don’t think anything is left. And yet something is left, and he could find real structure in this seeming vacuum.
In Mariana Cook's book titled Mathematicians: An Outer View of the Inner World, Fields Medalist and IAS professor Pierre Deligne wrote:
When I was in Paris as a student, I would go to Grothendieck's seminar at IHES [...] Grothendieck asked me to write up some of the seminars and gave me his notes. He was extremely generous with his ideas. One could not be lazy or he would reject you. But if you were really interested and doing things he liked, then he helped you a lot. I enjoyed the atmosphere around him very much. He had the main ideas and the aim was to prove theories and understand a sector of mathematics. We did not care much about priority because Grothendieck had the ideas we were working on and priority would have meant nothing.
(Emphasis my own.)
These comments should suffice to illustrate that Grothendieck's intellectual power was uncanny.
In a very interesting transcript titled Reminiscences of Grothendieck and his school, Grothendieck's former student Luc Illusie says:
In 1970 he left the IHES and founded the ecological group Survivre et Vivre. At the Nice congress, he was doing propaganda for it, offering documents taken out of a small cardboard suitcase. He was gradually considering mathematics as not being worth of being studied, in view of the more urgent problems of the survival of the human species.
I think that it's fair to say that Grothendieck's ideas about how to ensure the survival of the human species were greatly misguided. In the second portion of Allyn Jackson's excellent biography of Grothendieck, one finds the following passage:
...despite his strong convictions, Grothendieck was never effective in the real world of politics. “He was always an anarchist at heart,” Cartier observed. “On many issues, my basic positions are not very far from his positions. But he was so naive that it was totally impossible to do anything with him politically.” He was also rather ignorant. Cartier recalled that, after an inconclusive presidential election in France in 1965, the newspapers carried headlines saying that de Gaulle had not been elected. Grothendieck asked if this meant that France would no longer have a president. Cartier had to explain to him what a runoff election is. “Grothendieck was politically illiterate,” Cartier said. But he did want to help people: it was not unusual for Grothendieck to give shelter for a few weeks to homeless people or others in need.
[...]
“Even people who were close to his political views or his social views were antagonized by his behavior.…He behaved like a wild teenager.”
[...]
“He was used to people agreeing with his opinions when he was doing algebraic geometry,” Bumby remarked. “When he switched to politics all the people who would have agreed with him before suddenly disagreed with him.... It was something he wasn’t used to.”
Just as Grothendieck's algebro-geometric achievements had no bearing on his ability to conceptualize a good plan to lower existential risk, so too does Eliezer's ability to interpret quantum mechanics have no bearing on his ability to conceptualize such a plan.
The other problem with Eliezer's appeal to his intellectual prowess is that Eliezer's demonstrated intellectual prowess pales in comparison with that of other people who are interested in existential risk. I wholeheartedly agree with rwallace's comment:
If you want to argue from authority, the result of that isn't just tilted against the SIAI, it's flat out no contest.
By the time Grothendieck was Eliezer's age, he had already established himself as a leading authority in functional analysis and proven his vast generalization of the Riemann-Roch theorem. Eliezer's intellectual achievements are meager by comparison.
A more contemporary example of a powerful intellect interested in existential risk is Fields Medalist and Abel Prize winner Mikhail Gromov. On the GiveWell research blog there's an excerpt from an interview with Gromov which caught my attention:
If you try to look into the future, 50 or 100 years from now...
50 and 100 is very different. We know more or less about the next 50 years. We shall continue in the way we go. But 50 years from now, the Earth will run out of the basic resources and we cannot predict what will happen after that. We will run out of water, air, soil, rare metals, not to mention oil. Everything will essentially come to an end within 50 years. What will happen after that? I am scared. It may be okay if we find solutions but if we don't then everything may come to an end very quickly!
Mathematics may help to solve the problem but if we are not successful, there will not be any mathematics left, I am afraid!
Are you pessimistic?
I don't know. It depends on what we do. If we continue to move blindly into the future, there will be a disaster within 100 years and it will start to be very critical in 50 years already. Well, 50 is just an estimate. It may be 40 or it may be 70 but the problem will definitely come. If we are ready for the problems and manage to solve them, it will be fantastic. I think there is potential to solve them but this potential should be used and this potential is education. It will not be solved by God. People must have ideas and they must prepare now. In two generations people must be educated. Teachers must be educated now, and then the teachers will educate a new generation. Then there will be sufficiently many people to face the difficulties. I am sure this will give a result. If not, it will be a disaster. It is an exponential process. If we run along an exponential process, it will explode. That is a very simple computation. For example, there will be no soil. Soil is being exhausted everywhere in the world. It is not being said often enough. Not to mention water. It is not an insurmountable problem but requires solutions on a scale we have never faced before, both socially and intellectually.
I've personally studied some of Gromov's work and find it much more impressive than the portions of Eliezer's work which I've studied. I find Gromov's remarks on existential risk more compelling than Eliezer's remarks on existential risk. Neither Gromov nor Eliezer has substantiated his claims, so by default I take Gromov more seriously than Eliezer. But as I said above, this is really beside the point. The point is that there's a history of brilliant people being very mistaken in their views about things outside of their areas of expertise, and that discussion of existential risk should be based on evidence rather than on argument by authority. I agree with a remark which Holden Karnofsky made in response to my GiveWell research mailing list post:
I think it's important not to put too much trust in any single person's view based simply on credentials. That includes [...] Mikhail Gromov [...] among others.
I encourage Less Wrong readers who have not done so to carefully compare the marginal impact that one can hope to have on existential risk by focusing on AI with the marginal impact that one can hope to have on existential risk by focusing on a specific existential risk unrelated to AI. When one does so, one should beware of confirmation bias. If one came to believe that focusing on AI is a good idea without careful consideration of the alternatives, one should assume oneself to be irrationally biased in favor of focusing on AI.
Bottom line
There's a huge amount of uncertainty as to which existential risks are most likely to strike and what we can hope to do about them. At present, reasonable people can hold various views on which existential risks are worthy of the most attention. I personally think that the best way to face the present situation is to gather more information about all existential risks rather than focusing on one particular existential risk, but I might be totally wrong. Similarly, people who believe that AI deserves top priority might be totally wrong. At present there's not enough information available to determine which existential risks deserve top priority with any degree of confidence.
SIAI can credibly claim that (1') may well be true, but SIAI cannot credibly assert (1') with confidence. Because claims about existential risk that lack credibility drive people away from thinking about existential risk, SIAI should take special care to avoid the appearance of undue confidence in claim (1').
With respect to point (E), in Astronomical Waste Bostrom writes:
From this, if a near-existential disaster could cause a delay of, say, 10,000 years in reaching the stars, then a 10% reduction in the risk of such a disaster is worth the same as a 0.0001% reduction in existential risk.
Yes, I appreciate that point; my concern is with permanent obstructions to technological development.
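For readers who want to see where Bostrom's quoted figures come from, here is one way to reconstruct the arithmetic. This is my own gloss; the time horizon T is inferred from the figures, not something stated in the quoted sentence.

```latex
% Expected delay averted by a 10% reduction in the risk of a 10,000-year setback:
\[
  0.10 \times 10{,}000\ \text{years} \;=\; 1{,}000\ \text{years}.
\]
% Setting this equal to a 0.0001% (i.e. 10^{-6}) reduction in existential risk
% applied to a future of length T, treating a year of delay as costing roughly
% one year's share of the future's value (my simplifying assumption):
\[
  10^{-6} \times T \;\approx\; 1{,}000\ \text{years}
  \quad\Longrightarrow\quad
  T \;\approx\; 10^{9}\ \text{years}.
\]
```

On this reading, the equivalence implicitly treats the future at stake as on the order of a billion years, which is why a permanent obstruction to technological development matters so much more than even a very long delay.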