Wei_Dai comments on The Importance of Self-Doubt - Less Wrong
A few unrelated points:
For example: optimising one's lifestyle for the efficient acquisition of power to enable the future creation of bulk quantities of paper-clips.
It would be nice if utilitarianism signified that - but in fact it is about the greatest good for the greatest number:
"Utilitarianism (also: utilism) is the idea that the moral worth of an action is determined solely by its utility in providing happiness or pleasure as summed among all sentient beings."
Sorry to take so long to get back to you :)
Obviously humans are extremely ill-suited for being utilitarians (just as humans would be extremely ill-suited for being paperclip maximizers even if they wanted to be.)
When I refer to a "genuinely utilitarian lifestyle" I mean subject to human constraints. There are some people who do this much better than others - for example, Bill Gates and Warren Buffett have done much better than most billionaires.
I think that with a better peer network, Gates and Buffett could have done still better (for example I would have liked to see them take existential risk into serious consideration with their philanthropic efforts).
A key point here is that as I've said elsewhere I don't think that leading a (relatively) utilitarian lifestyle has very much at all to do with personal sacrifice, but rather with realigning one's personal motivational structure in a way that (at least for many people) does not entail a drop in quality of life. If you haven't already done so, see my post on missed opportunities for doing well by doing good.
Thanks for the reference. I edited the end of my posting to clarify what I had in mind.
If that's the kind of criteria you have in mind, why did you say "Eliezer appears to be deviating so sharply from leading a genuinely utilitarian lifestyle"? It seems to me that Eliezer has also done much better than most ... (what's the right reference class here? really smart people who have been raised in a developed country?)
Which isn't to say that he couldn't do better, but your phrasing strikes me as rather unfair...
What I was getting at in my posting is that in exhibiting an unwillingness to seriously consider the possibility that he has vastly overestimated his chances of building a Friendly AI, Eliezer appears to be deviating sharply from leading a utilitarian lifestyle (relative to what one can expect from humans).
I was not trying to make a general statement about Eliezer's attainment of utilitarian goals relative to other humans. I think that there's a huge amount of uncertainty on this point to such an extent that it's meaningless to try to make a precise statement.
The statement that I was driving at is a more narrow one.
I think that it would be better for Eliezer and for the world at large if Eliezer seriously considered the possibility that he's vastly overestimated his chances of building a Friendly AI. I strongly suspect that if he did this, his strategy for reducing existential risk would change for the better. If his current views turn out to be right, he can always return to them later on. I think that the expected benefits of him reevaluating his position far outweigh the expected costs.
We haven't heard Eliezer say how likely he believes it is that he creates a Friendly AI. He has been careful not to discuss that subject. If he thought his chances of success were 0.5% then I would expect him to take exactly the same actions.
(ETA: With the insertion of 'relative' I suspect I would more accurately be considering the position you are presenting.)
Right, so in my present epistemological state I find it extremely unlikely that Eliezer will succeed in building a Friendly AI. I gave an estimate here which proved to be surprisingly controversial.
The main points that inform my thinking here are:
The precedent for people outside of the academic mainstream having mathematical/scientific breakthroughs in recent times is extremely weak. In my own field of pure math I know of only two people without PhDs in math or related fields who have produced something memorable in the last 70 years or so, namely Kurt Heegner and Martin Demaine. And even Heegner and Demaine are (relatively speaking) quite minor figures. It's very common for self-taught amateur mathematicians to greatly underestimate the difficulty of substantive original mathematical research. I find it very likely that the same is true in virtually all scientific fields, and thus I have an extremely skeptical Bayesian prior against any proposition of the type "amateur intellectual X will solve major scientific problem Y."
From having talked with computer scientists and AI researchers, I have a very strong impression that the consensus is that AGI is way out of reach at present. See for example points #1 and #5 of Scott Aaronson's The Singularity is Far.
The fact that Eliezer does not appear to have seriously contemplated or addressed the two points above and their implications diminishes my confidence in his odds of success still further.
Regarding your first point, I'm pretty sure Eliezer does not expect to solve FAI by himself. Part of the reason for creating LW was to train/recruit potential FAI researchers, and there are also plenty of Ph.D. students among SIAI visiting fellows.
Regarding the second point, do you want nobody to start researching FAI until AGI is within reach?
Right, but the historical precedent for an amateur scientist even being at all involved in a substantial scientific breakthrough over the past 50 years is very weak.
Also, my confidence in Eliezer's ability to train/recruit potential FAI researchers has been substantially diminished for the reasons that I give in Existential Risk and Public Relations. I personally would be interested in working with Eliezer if he appeared to me to be well grounded. The impressions that I've gotten from my private correspondence with Eliezer and from his comments have given me a very strong impression that I would find him too difficult to work with to be able to do productive FAI research with him.
No. I think that it would be worthwhile for somebody to do FAI research in line with Vladimir Nesov's remarks here and here.
But I maintain that the probability of success is very small and that the only justification for doing it is the possibility of enormous returns. If people had established an institute for the solution of Fermat's Last Theorem in the 1800's, the chances of anybody there playing a decisive role in the solution of Fermat's Last Theorem would be very small. I view the situation with FAI as analogous.
Hold on - there are two different definitions of the word "amateur" that could apply here, and they lead to very different conclusions. The definition I think of first is that an amateur at something is someone who doesn't get paid for doing it, as opposed to a professional who makes a living at it. By this definition, amateurs rarely achieve anything, and if they do, they usually stop being amateurs. But Eliezer's full-time occupation is writing, thinking, and talking about FAI and related topics, so by this definition, he isn't an amateur (regardless of whether or not you think he's qualified for that occupation).
The other definition of "amateur scientist" would be "someone without a PhD". This definition Eliezer does fit, but by this definition, the amateurs have a pretty solid record. And if you narrow it down to computer software, the amateurs have achieved more than the PhDs have!
I feel like you've taken the connotations of the first definition and unknowingly and wrongly transferred them to the second definition.
Okay, so, I agree with some of what you say above. I think I should have been more precise.
A claim of the type "Eliezer is likely to build a Friendly AI" requires (at least in part) a supporting claim of the type "Eliezer is in group X where people in group X are likely to build a Friendly AI." Even if one finds such a group X, this may not be sufficient because Eliezer may belong to some subgroup of X which is disproportionately unlikely to build a Friendly AI. But one at least has to be able to generate such a group X.
At present I see no group X that qualifies.
Taking X to be "humans in the developed world" doesn't work because the average member of X is extremely unlikely to build a Friendly AI.
Taking X to be "people with PhDs in a field related to artificial intelligence" doesn't work because Eliezer doesn't have a PhD in artificial intelligence.
Taking X to be "programmers" doesn't work because Eliezer is not a programmer.
Taking X to be "people with very high IQ" is a better candidate, but still doesn't yield a very high probability estimate because very high IQ is not very strongly correlated with technological achievement.
Taking X to be "bloggers about rationality" doesn't work because there's very little evidence that being a blogger about rationality is correlated with skills conducive to building a Friendly AI.
Which suitable group X do you think that Eliezer falls into?
What are we supposed to infer from that? That if you add an amateur scientist to a group of PhDs, that would substantially decrease their chance of making a breakthrough?
SIAI held a 3-day decision theory workshop in March that I attended along with Stuart Armstrong and Gary Drescher as outside guests. I feel pretty safe in saying that none of us found Eliezer particularly difficult to work with. I wonder if perhaps you're generalizing from one example here.
Do you also think it would be worthwhile for somebody to try to build an organization to do FAI research? If so, who do you think should be doing that, if not Eliezer and his supporters? Or is your position more like cousin_it's, namely that FAI research should just be done by individuals on their free time for now?
No, certainly not. I just don't see much evidence that Eliezer is presently adding value to Friendly AI research. I think he could be doing more to reduce existential risk if he were operating under different assumptions.
Of course you could be right here, but the situation is symmetric: the same could be the case for you, Stuart Armstrong and Gary Drescher. Keep in mind that there's a strong selection effect here - if you're spending time with Eliezer you're disproportionately likely to be well suited to working with Eliezer, and people who have difficulty working with Eliezer are disproportionately unlikely to be posting on Less Wrong or meeting with Eliezer.
My intuition is that there are a lot of good potential FAI researchers who would not feel comfortable working with Eliezer given his current disposition, but I may be wrong.
Quite possibly it's a good thing for Eliezer and his supporters to be building an organization to do FAI research. On the other hand maybe cousin_it's position is right. I have a fair amount of uncertainty on this point.
The claim that I'm making is quite narrow: that it would be good for the cause of existential risk reduction if Eliezer seriously considered the possibility that he's greatly overestimated his chances of building a Friendly AI.
I'm not saying that it's a bad thing to have an organization like SIAI. I'm not saying that Eliezer doesn't have a valuable role to serve within SIAI. I'm reminded of Robin Hanson's Against Disclaimers though I don't feel comfortable with his condescending tone and am not thinking of you in that light :-).
My reading of this is that before you corresponded privately with Eliezer, you were
And afterward, you became
Is this right? If so, I wonder what he could have said that made you change your mind like that. I guess either he privately came off as much less competent than he appeared in the public writings that drew him to your attention in the first place (which seems rather unlikely), or you took his response as some sort of personal affront and responded irrationally.
So, the situation is somewhat different than the one that you describe. Some points of clarification.
•I first came across Overcoming Bias in 2008. Eliezer was recommended to me by a friend whom I respect a great deal. My reaction to the first postings by Eliezer that I read was strong discomfort with his apparent grandiosity and self-absorption. This discomfort was sufficiently strong for me to lose interest despite my friend's endorsement.
•I started reading Less Wrong in earnest in the beginning of 2010. This made it clear to me that Eliezer has a lot to offer and that it was unfortunate that I had been pushed away by my initial reaction.
•I never assigned a very high probability to Eliezer making a crucial contribution to an FAI research project. My thinking was that the enormous positive outcome associated with success might be sufficiently great to justify the project despite the small probability.
•I didn't get much of a chance to correspond privately with Eliezer at all. He responded to a couple of my messages with one-line dismissive responses and then stopped responding to my subsequent messages. Naturally this lowered the probability that I assigned to being able to collaborate with him. This also lowered my confidence in his ability to attract collaborators in general.
•If Eliezer showed strong ability to attract and work well with collaborators (including elite academics who are working on artificial intelligence research) then I would find it several orders of magnitude more likely that he would make a crucial contribution to an FAI research project. For concreteness I'll throw out the number 10^(-6).
•I feel that the world is very complicated and that randomness plays a very large role. This leads me to assign a very small probability to the proposition that any given individual will play a crucial role in eliminating existential risk.
•I freely acknowledge that I may be influenced by emotional factors. I make an honest effort at being level-headed and sober, but as I mention elsewhere, my experience posting on Less Wrong has been emotionally draining. I find that I become substantially less rational when people assume that my motives are impure (some sort of self-fulfilling prophecy).
You may notice that of my last four posts, the first pair was considerably more impartial than the second pair. (This is reflected in the fact that the first pair was upvoted more than the second pair.) My subjective perception is that I started out thinking quite carefully and became less rational as I read and responded to hostile commentators.
That you have this impression greatly diminishes my confidence in your intuitions on the matter. Are you seriously suggesting that Eliezer has not contemplated AI researchers' opinions about AGI? Or that he hasn't thought about just how much effort should go into a scientific breakthrough?
Someone please throw a few hundred relevant hyperlinks at this person.
I'm not saying that Eliezer has given my two points no consideration. I'm saying that Eliezer has not given my two points sufficient consideration. By all means, send hyperlinks that you find relevant my way - I would be happy to be proven wrong.
I don't think there's any such consensus. Most of those involved know that they don't know with very much confidence. For a range of estimates, see the bottom of:
http://alife.co.uk/essays/how_long_before_superintelligence/
For what it's worth, in saying "way out of reach" I didn't mean "chronologically far away," I meant "far beyond the capacity of all present researchers." I think it's quite possible that AGI is just 50 years away.
I think that in the absence of plausibly relevant and concrete directions for AGI/FAI research, the chance of having any impact on the creation of an FAI through research is diminished by many orders of magnitude.
If there are plausibly relevant and concrete directions for AGI/FAI research then the situation is different, but I haven't heard examples that I find compelling.
"Just 50 years?" Shane Legg's explanation of why his mode is at 2025:
http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/
If 15 years is more accurate - then things are a bit different.
Thanks for pointing this out. I don't have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.
I'd recur to CarlShulman's remark about selection bias here. I look forward to seeing the results of the hypothetical Bostrom survey and the SIAI collection of all public predictions.
I agree. There's still an issue of a lack of concrete directions of research at present but if 15 years is accurate then I agree with Eliezer that we should be in "crunch" mode (amassing resources specifically directed at future FAI research).
Eliezer addresses point 2 in the comments of the article you linked to in point 2. He's also previously answered the questions of whether he believes he personally could solve FAI and how far out it is -- here, for example.
Thanks for the references, both of which I had seen before.
Concerning Eliezer's response to Scott Aaronson: I agree that there's a huge amount of uncertainty about these things and it's possible that AGI will develop unexpectedly, but don't see how this points in the direction of AGI being likely to be developed within decades. It seems like one could have said the same thing that Eliezer is saying in 1950 or even 1800. See Holden's remarks about noncontingency here.
As for A Premature Word on AI, Eliezer seems to be saying that
1. Even though the FAI problem is incredibly difficult, it's still worth working on because the returns attached to success would be enormous.
2. Lots of people who have worked on AGI are mediocre.
3. The field of AI research is not well organized.
Claim (1) might be true. I suspect that both of claims (2) and (3) are true. But by themselves these claims offer essentially no support for the idea that Eliezer is likely to be able to build a Friendly AI.
Edit: Should I turn my three comments starting here into a top level posting? I hesitate to do so in light of how draining I've found the process of making top level postings and especially reading and responding to the ensuing comments, but the topic may be sufficiently important to justify the effort.
Why? What sort of improvement would you expect?
Remember that he is still the one person in the public sphere who takes the problem of Friendly AI (under any name) seriously enough to have devoted his life to it, and who actually has quasi-technical ideas regarding how to achieve it. All this despite the fact that for decades now, in fiction and nonfiction, the human race has been expressing anxiety about the possibility of superhuman AI. Who are his peers, his competitors, his predecessors? If I was writing the history of attempts to think about the problem, Chapter One would be Isaac Asimov with his laws of robotics, Chapter Two would be Eliezer Yudkowsky and the idea of Friendly AI, and everything else would be a footnote.
Three points:
I think that if he had a more accurate estimation of his chances of building a Friendly AI, this would be better for public relations, for the reasons discussed in Existential Risk and Public Relations.
I think that his unreasonably high estimate of his ability to build a Friendly AI has decreased his willingness to engage with the academic mainstream to an unreasonable degree. I think that his ability to do Friendly AI research would be heightened if he were more willing to engage with the academic mainstream: I think he'd be more likely to find collaborators and more likely to learn the relevant material.
I think that a more accurate assessment of the chances of him building a Friendly AI might lead him to focus on inspiring others and on existential risk reduction advocacy (things that he has demonstrated capacity to do very well) rather than Friendly AI research. I suspect that if this happened, it would maximize his chances of reducing global catastrophic risk.
That would absolutely be a waste. If for some reason he was only to engage in advocacy from now on, it should specifically be Friendly AI advocacy. I point again to the huge gaping absence of other people who specialize in this problem and who have worthwhile ideas. The other "existential risks" have their specialized advocates. No-one else remotely comes close to filling that role for the risks associated with superintelligence.
In other words, the important question is not, what are Eliezer's personal chances of success; the important question is, who else is offering competent leadership on this issue? Like wedrifid, I don't even recall hearing a guess from Eliezer about what he thinks the odds of success are. But such guesses are of secondary importance compared to the choice of doing something or doing nothing, in a domain where no-one else is acting. Until other people show up, you have to just go out there and do your best.
I'm pretty sure Eric Drexler went through this already, with nanotechnology. There was a time when Drexler was in a quite unique position, of appreciating the world-shaking significance of molecular machines, having an overall picture of what they imply and how to respond, and possessing a platform (his Foresight Institute) which gave him a little visibility. The situation is very different now. We may still be headed for disaster on that front as well, but at least the ability of society to think about the issues is greatly improved, mostly because broad technical progress in chemistry and nanoscale technology has made it easier for people to see the possibilities and has also clarified what can and can't be done.
As computer science, cognitive science, and neuroscience keep advancing, the same thing will happen in artificial intelligence, and a lot of Eliezer's ideas will seem more natural and constructive than they may now appear. Some of them will be reinvented independently. All of them (that survive) should take on much greater depth and richness (compare the word pictures in Drexler's 1986 book with the calculations in his 1992 book).
Despite all the excesses and distractions, work is being done and foundations for the future are being laid. Also, Eliezer and his colleagues do have many lines into academia, despite the extent to which they exist outside it. So in terms of process, I do consider them to be on track, even if the train shakes violently at times.
Eliezer took exception to my estimate linked in my comment here.
Quite possibly you're right about this.
On this point I agree with SarahC's second comment here.
I would again recur to my point about Eliezer having an accurate view of his abilities and likelihood of success being important for public relations purposes.
Less than 1 in 1 billion! :-) May I ask exactly what the proposition was? At the link you say "probability of ... you succeeding in playing a critical role on the Friendly AI project that you're working on". Now by one reading that probability is 1, since he's already the main researcher at SIAI.
Suppose we analyse your estimate in terms of three factors:
(probability that anyone ever creates Friendly AI) x (conditional probability SIAI contributed) x (conditional probability that Eliezer contributed)
Can you tell us where the bulk of the 10^-9 is located?
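To make the three-factor decomposition concrete, here is a minimal sketch in Python. The factor values below are purely hypothetical and are not anyone's actual estimates; the point is only to show how three modest conditional probabilities multiply out to a joint estimate on the order of 10^-9.

```python
# Hypothetical decomposition of a 1-in-a-billion estimate.
# All three factor values are illustrative placeholders.
p_anyone_builds_fai = 1e-3   # P(anyone ever creates Friendly AI)
p_siai_contributed = 1e-3    # P(SIAI contributed | FAI created)
p_eliezer_contributed = 1e-3 # P(Eliezer contributed | SIAI contributed)

joint = p_anyone_builds_fai * p_siai_contributed * p_eliezer_contributed
print(f"{joint:.0e}")  # prints 1e-09
```

On this toy assignment the bulk of the improbability is spread evenly across the three factors; the disagreement in the thread is over which factor (if any) deserves to be that small.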
And he was right to do so, because that estimate was obviously on the wrong order of magnitude. To make an analogy, if someone says that you weigh 10^5kg, you don't have to reveal your actual weight (or even measure it) to know that 10^5 was wrong.
I agree with
But why is the estimate that I gave obviously on the wrong order of magnitude?
From my point of view, his reaction is an indication that his estimate is obviously on the wrong order of magnitude. But I'm still willing to engage with him and hear what he has to say, whereas he doesn't seem willing to engage with me and hear what I have to say.
What evidence do you have of this? One reason I doubt that it's true is that Eliezer has been relatively good at admitting flaws in his ideas, even when doing so implied that building FAI is harder than he previously thought. I think you could reasonably argue that he's still overconfident about his chances of successfully building FAI, but I don't see how you get "unwillingness to seriously consider the possibility".
Eliezer was not willing to engage with my estimate here. See his response. For the reasons that I point out here, I think that my estimate is well grounded.
Eliezer's apparent lack of willingness to engage with me on this point does not immediately imply that he's unwilling to seriously consider the possibility that I raise. But I do see it as strongly suggestive.
As I said in response to ThomBlake, I would be happy to be pointed to any of Eliezer's writings which support the idea that Eliezer has given serious consideration to the two points that I raised to explain my estimate.
Edit: I'll also add that given the amount of evidence that I see against the proposition that Eliezer will build a Friendly AI, I have difficulty imagining how he could be persisting in holding his beliefs without having failed to give serious consideration to the possibility that he might be totally wrong. It seems very likely to me that if he had explored this line of thought, he would have a very different world view than he does at present.
Have you noticed that many (most?) commenters/voters seem to disagree with your estimate? That's not necessarily strong evidence that your estimate is wrong (in the sense that a Bayesian superintelligence wouldn't assign a probability as low as yours), but it does show that many reasonable and smart people disagree with your estimate even after seriously considering your arguments. To me that implies that Eliezer could disagree with your estimate even after seriously considering your arguments, so I don't think his "persisting in holding his beliefs" offers much evidence for your position that Eliezer exhibited "unwillingness to seriously consider the possibility that he's vastly overestimated his chances of building a Friendly AI".
Yes. Of course, there's a selection effect here - the people on LW are more likely to assign a high probability to the proposition that Eliezer will build a Friendly AI (whether or not there's epistemic reason to do so).
The people outside of LW whom I talk to on a regular basis have an estimate in line with my own. I trust these people's judgment more than I trust LW posters' judgment simply because I have much more information about their positive track records for making accurate real-world judgments than I do for the people on LW.
Yes, so I agree that in your epistemological state you should feel this way. I'm explaining why in my epistemological state I feel the way I do.
In your own epistemological state, you may be justified in thinking that Eliezer and other LWers are wrong about his chances of success, but even granting that, I still don't see why you're so sure that Eliezer has failed to "seriously consider the possibility that he's vastly overestimated his chances of building a Friendly AI". Why couldn't he have, like the other LWers apparently did, considered the possibility and then (erroneously, according to your epistemological state) rejected it?
My experience reading Eliezer's writings is that he's very smart and perceptive. I find it implausible that somebody so smart and perceptive could, if he had engaged in such consideration, miss something for which there is (in my view) so much evidence. So I think that what you suggest could be the case, but I find it quite unlikely.
I'm not sure why this would make sense to the OP; the referenced post talks about situations where you say 'yes' and Life says 'no'. I do not think EY's work has hit any obvious walls of the showstopper kind.