The Importance of Self-Doubt

23 Post author: multifoliaterose 19 August 2010 10:47PM

[Added 02/24/14: After I got feedback on this post, I realized that it carried unnecessary negative connotations (despite conscious effort on my part to avoid them), and if I were to write it again, I would frame things differently. See Reflections on a Personal Public Relations Failure: A Lesson in Communication for more information. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

Follow-up to: Other Existential Risks, Existential Risk and Public Relations

Related to: Tsuyoku Naritai! (I Want To Become Stronger), Affective Death Spirals, The Proper Use of Doubt, Resist the Happy Death Spiral, The Sin of Underconfidence

In Other Existential Risks I began my critical analysis of what I understand to be SIAI's most basic claims. In particular I evaluated part of the claim

(1) At the margin, the best way for an organization with SIAI's resources to prevent global existential catastrophe is to promote research on friendly Artificial Intelligence, work against unsafe Artificial Intelligence, and encourage rational thought.

It's become clear to me that before I evaluate the claim

(2) Donating to SIAI is the most cost-effective way for charitable donors to reduce existential risk.

I should (a) articulate my reasons for believing in the importance of self-doubt and (b) give the SIAI staff an opportunity to respond to the points I raise in the present post as well as in my two earlier posts, Existential Risk and Public Relations and Other Existential Risks.

Yesterday SarahC described to me how she had found Eliezer's post Tsuyoku Naritai! (I Want To Become Stronger) really moving. She explained:

I thought it was good: the notion that you can and must improve yourself, and that you can get farther than you think.

I'm used to the other direction: "humility is the best virtue."

I mean, this is a big fuck-you to the book of Job, and it appeals to me.

I was happy to learn that SarahC had been positively affected by Eliezer's post. Self-actualization is a wonderful thing, and it appears that Eliezer's post has helped her self-actualize. On the other hand, rereading the post prompted me to notice something about it which I find very problematic. The last few paragraphs of the post read:

Take no pride in your confession that you too are biased; do not glory in your self-awareness of your flaws.  This is akin to the principle of not taking pride in confessing your ignorance; for if your ignorance is a source of pride to you, you may become loathe to relinquish your ignorance when evidence comes knocking.  Likewise with our flaws - we should not gloat over how self-aware we are for confessing them; the occasion for rejoicing is when we have a little less to confess.

Otherwise, when the one comes to us with a plan for correcting the bias, we will snarl, "Do you think to set yourself above us?"  We will shake our heads sadly and say, "You must not be very self-aware."

Never confess to me that you are just as flawed as I am unless you can tell me what you plan to do about it.  Afterward you will still have plenty of flaws left, but that's not the point; the important thing is to do better, to keep moving ahead, to take one more step forward.  Tsuyoku naritai!

There's something to what Eliezer is saying here: when people are too strongly committed to the idea that humans are fallible, this can become a self-fulfilling prophecy in which people give up on trying to improve things and, as a consequence, remain fallible when they could have improved. As Eliezer has said in The Sin of Underconfidence, there are social pressures that push against having high levels of confidence even when confidence is epistemically justified:

To place yourself too high - to overreach your proper place - to think too much of yourself - to put yourself forward - to put down your fellows by implicit comparison - and the consequences of humiliation and being cast down, perhaps publicly - are these not loathsome and fearsome things?

To be too modest - seems lighter by comparison; it wouldn't be so humiliating to be called on it publicly, indeed, finding out that you're better than you imagined might come as a warm surprise; and to put yourself down, and others implicitly above, has a positive tinge of niceness about it, it's the sort of thing that Gandalf would do.

I have personal experience with underconfidence. I'm a careful thinker, and when I express a position with confidence my position is typically well considered. For many years I generalized from one example and assumed that when people express positions with confidence, they've thought their positions out as well as I have. Even after being presented with massive evidence that few people think things through as carefully as I do, I persisted in granting the (statistically ill-considered) positions of others far more weight than they deserved, for the very reason that Eliezer describes above. This seriously distorted my epistemology because it led me to systematically give ill-considered positions substantial weight. I feel that I have improved on this point, but even now I notice from time to time that I'm exhibiting irrationally low levels of confidence in my positions.

At the same time, I know that at times I've been overconfident as well. In high school I went through a period when I believed that I was a messianic figure whose existence had been preordained by a watchmaker God who planned for me to save the human race. It's appropriate to say that during this period of time I suffered from extreme delusions of grandeur. I viscerally understand how it's possible to fall into an affective death spiral.

In my view, one of the central challenges of being human is to find an instrumentally rational balance between subjecting oneself to influences that push one toward overconfidence and influences that push one toward underconfidence.

In Tsuyoku Naritai! Eliezer describes how Orthodox Judaism attaches an unhealthy moral significance to humility. Having grown up in a Jewish household, and as a consequence having had a peripheral acquaintance with Orthodox Judaism, I agree with Eliezer's analysis of Orthodox Judaism in this regard. In The Proper Use of Doubt, Eliezer describes how the Jesuits are allegedly told to doubt their doubts about Catholicism. I agree with Eliezer that self-doubt can be misguided and abused.

However, reversed stupidity is not intelligence. The fact that it's possible to ascribe too much moral significance to self-doubt and humility does not mean that one should not attach moral significance to self-doubt and humility. I strongly disagree with Eliezer's prescription: "Take no pride in your confession that you too are biased; do not glory in your self-awareness of your flaws."

The mechanism that determines human action is that we do what makes us feel good (at the margin) and refrain from doing what makes us feel bad (at the margin). This principle applies to all humans, from Gandhi to Hitler. Our ethical challenge is to shape what makes us feel good and what makes us feel bad in a way that incentivizes us to behave in accordance with our values. There are times when it's important to recognize that we're biased and flawed. Under such circumstances, we should feel proud that we recognize that we're biased, and we should glory in our self-awareness of our flaws. If we don't, then we will have no incentive to recognize that we're biased and to be aware of our flaws.

We did not evolve to exhibit admirable and noble behavior. We evolved to exhibit behaviors which have historically been correlated with maximizing our reproductive success. Because our ancestral environment was very much a zero-sum situation, the traits that were historically correlated with maximizing our reproductive success had a lot to do with gaining high status within our communities. As Yvain has said, it appears that a fundamental mechanism of the human brain, one historically correlated with gaining high status, is to make us feel good when we have a high self-image and bad when we have a low self-image.

When we obtain new data, we fit it into a narrative which makes us feel as good about ourselves as possible, a narrative conducive to a high self-image. This mode of cognition can lead to very seriously distorted epistemology. This is what happened to me in high school when I believed that I was a messianic figure sent by a watchmaker God. Because we flatter ourselves by default, it's very important that those of us who aspire to epistemic rationality incorporate a significant element of "I'm the sort of person who engages in self-doubt because it's the right thing to do" into our self-image. If we do this, then when we're presented with evidence which entails a drop in our self-esteem, we don't reject it out of hand or minimize it as we've been evolutionarily conditioned to do, because the wound of properly assimilating the data is counterbalanced by the salve of the feeling "At least I'm a good person, as evidenced by the fact that I engage in self-doubt," and because failing to exhibit self-doubt would itself entail an emotional wound.

This is the only potential immunization against the disease of self-serving narratives which afflicts all utilitarians by virtue of their being human. Until technology allows us to modify ourselves in a radical way, we cannot hope to be rational without attaching moral significance to the practice of engaging in self-doubt. As RationalWiki's page on LessWrong says:

A common way for very smart people to be stupid is to think they can think their way out of being apes with pretensions. However, there is no hack that transcends being human...You are an ape with pretensions. Playing a "let's pretend" game otherwise doesn't mean you win all arguments, or any. Even if it's a very elaborate one, you won't transcend being an ape. Any "rationalism" that doesn't expressly take into account humans being apes with pretensions, isn't.


In Existential Risk and Public Relations I suggested that some of Eliezer's remarks convey the impression that he has an unjustifiably high opinion of himself. In the comments to the post, JRMayne wrote:

I think the statements that indicate that [Eliezer] is the most important person in human history - and that seems to me to be what he's saying - are so seriously mistaken, and made with such a high confidence level, as to massively reduce my estimated likelihood that SIAI is going to be productive at all.

And that's a good thing. Throwing money into a seriously suboptimal project is a bad idea. SIAI may be good at getting out the word of existential risk (and I do think existential risk is serious, under-discussed business), but the indicators are that it's not going to solve it. I won't give to SIAI if Eliezer stops saying these things, because it appears he'll still be thinking those things.

When Eliezer responded to JRMayne's comment, he did not dispute the claim that JRMayne attributed to him. I responded to Eliezer, saying:

If JRMayne has misunderstood you, you can effectively deal with the situation by making a public statement about what you meant to convey.

Note that you have not made a disclaimer which rules out the possibility that you claim that you're the most important person in human history. I encourage you to make such a disclaimer if JRMayne has misunderstood you.

I was disappointed, but not surprised, that Eliezer did not respond. As far as I can tell, Eliezer does have confidence in the idea that he is (at least nearly) the most important person in human history, and his silence only serves to further confirm my earlier impressions. I hope that he subsequently proves me wrong. [Edit: As Airedale points out, Eliezer has in fact exhibited public self-doubt in his abilities in his post The Level Above Mine. I find this reassuring, and it significantly lowers my confidence that Eliezer claims that he's the most important person in human history. But Eliezer still hasn't made a disclaimer on this matter decisively indicating that he does not hold such a view.] The modern world is sufficiently complicated that no human, no matter how talented, can have good reason to believe himself or herself to be the most important person in human history without actually doing something which very visibly and decisively alters the fate of humanity. At present, anybody who holds such a belief is suffering from extreme delusions of grandeur.

There's some sort of serious problem with the present situation. I don't know whether it's a public relations problem or whether Eliezer actually suffers from extreme delusions of grandeur, but something has gone very wrong. The majority of the people I know outside of Less Wrong who have heard of Eliezer and Less Wrong have the impression that he is suffering from extreme delusions of grandeur. To such people, this fact (quite reasonably) calls into question the value of SIAI and Less Wrong. On one hand, SIAI looks like an organization operating under beliefs which Eliezer has constructed to place himself in as favorable a position as possible rather than with a view toward reducing existential risk. On the other hand, Less Wrong looks suspiciously like the cult of Objectivism: a group of smart people obsessed with the writings of a very smart person who is severely deluded, and who describe these writings and the associated ideology as "rational" although they are nothing of the kind.

My own views are somewhat more moderate. I think that the Less Wrong community and Eliezer are considerably more rational than the Objectivist movement and Ayn Rand (respectively). I nevertheless perceive unsettling parallels.


In the comments to Existential Risk and Public Relations, timtyler said

...many people have inflated views of their own importance. Humans are built that way. For one thing, it helps them get hired, if they claim that they can do the job. It is sometimes funny - but surely not a big deal.

I disagree with timtyler. Anything that has even a slight systematic negative impact on existential risk is a big deal.

Some of my most enjoyable childhood experiences involved playing Squaresoft RPGs. Games like Chrono Trigger, Illusion of Gaia, Earthbound, Xenogears, and the Final Fantasy series are all stories about a group of characters who bond and work together to save the world. I found these games very moving and inspiring. They prompted me to fantasize about meeting allies with whom I could bond and work to save the world. I was lucky enough to meet one such person in high school, and we have been friends ever since. When I first encountered Eliezer I found him eerily familiar, as though he were a long-lost brother. This is the same feeling that is present between Siegmund and Sieglinde in Act 1 of Wagner's Die Walküre (modulo erotic connotations). I wish that I could join Eliezer in a group of characters working to save the world, as in a Squaresoft RPG. His writings, such as One Life Against the World and Yehuda Yudkowsky, 1985-2004, reveal him to be a deeply humane and compassionate person.

This is why it's so painful for me to observe that Eliezer appears to be deviating so sharply from leading a genuinely utilitarian lifestyle. I feel a sense of mono no aware, wondering how things could have been under different circumstances.

One of my favorite authors is Kazuo Ishiguro, who writes about the themes of self-deception and people's attempts to contribute to society. In a very good interview Ishiguro said

I think that's partly what interests me in people, that we don't just wish to feed and sleep and reproduce then die like cows or sheep. Even if they're gangsters, they seem to want to tell themselves they're good gangsters and they're loyal gangsters, they've fulfilled their 'gangstership' well. We do seem to have this moral sense, however it's applied, whatever we think. We don't seem satisfied, unless we can tell ourselves by some criteria that we have done it well and we haven't wasted it and we've contributed well. So that is one of the things, I think, that distinguishes human beings, as far as I can see.

But so often I've been tracking that instinct we have and actually looking at how difficult it is to fulfill that agenda, because at the same time as being equipped with this kind of instinct, we're not actually equipped. Most of us are not equipped with any vast insight into the world around us. We have a tendency to go with the herd and not be able to see beyond our little patch, and so it is often our fate that we're at the mercy of larger forces that we can't understand. We just do our little thing and hope it works out. So I think a lot of the themes of obligation and so on come from that. This instinct seems to me a kind of a basic thing that's interesting about human beings. The sad thing is that sometimes human beings think they're like that, and they get self-righteous about it, but often, they're not actually contributing to anything they would approve of anyway.

[...]

There is something poignant in that realization: recognizing that an individual's life is very short, and if you mess it up once, that's probably it. But nevertheless, being able to at least take some comfort from the fact that the next generation will benefit from those mistakes. It's that kind of poignancy, that sort of balance between feeling defeated but nevertheless trying to find reason to feel some kind of qualified optimism. That's always the note I like to end on. There are some ways that, as the writer, I think there is something sadly pathetic but also quite noble about this human capacity to dredge up some hope when really it's all over. I mean, it's amazing how people find courage in the most defeated situations.

Ishiguro's quote describes how people often act in accordance with a sincere desire to contribute and end up doing things that are very different from what they thought they were doing (things which are relatively unproductive or even counterproductive). Like Ishiguro, I find this phenomenon very sad. As Ishiguro hints, this phenomenon can also result in crushing disappointment later in life. I feel a deep spiritual desire to prevent this from happening to Eliezer.

Comments (726)

Comment author: Vladimir_Nesov 20 August 2010 12:05:31PM *  43 points [-]

This post suffers from lumping together orthogonal issues and conclusions from them. Let's consider individually the following claims:

  1. The world is in danger, and the feat of saving the world (if achieved) would be very important, more so than most other things we can currently do.
  2. Creating FAI is possible.
  3. Creating FAI, if possible, will be conducive to saving the world.
  4. If FAI is possible, person X's work contributes to developing FAI.
  5. Person X's work contributes to saving the world.
  6. Most people's work doesn't contribute to saving the world.
  7. Person X's activity is more important than that of most other people.
  8. Person X believes their activity is more important than that of most other people.
  9. Person X suffers from delusions of grandeur.

A priori, from (8) we can conclude (9). But assuming the a priori improbable (7), (8) is a rational thing for X to conclude, and (9) doesn't automatically follow. So, at this level of analysis, in deciding whether X is overconfident, we must necessarily evaluate (7). In most cases, (7) is obviously implausible, but the post itself suggests one pattern for recognizing when it isn't:

The modern world is sufficiently complicated that no human, no matter how talented, can have good reason to believe himself or herself to be the most important person in human history without actually doing something which very visibly and decisively alters the fate of humanity.

Thus, "doing something which very visibly and decisively alters the fate of humanity" is the kind of evidence that allows to conclude (7). But unfortunately there is no royal road to epistemic rationality, we can't require this particular argument that (7) in all cases. Sometimes the argument has an incompatible form.

In our case, the shape of the argument for (7) is as follows. Assuming (2), from (3) and (4) it follows that (5), and from (1), (5) and (6) we conclude (7). Note that the only claim about a person is (4), that their work contributes to the development of FAI. All the other claims are about the world, not about the person.
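To spell out the dependency structure in one line (the numbering follows the list above; this is only a schematic restatement of the inference just described, not an additional claim):

$$
(2)\wedge(3)\wedge(4)\;\Rightarrow\;(5), \qquad (1)\wedge(5)\wedge(6)\;\Rightarrow\;(7).
$$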

Given the structure of this argument for the abhorrent (8), something being wrong with the person can only affect the truth of (4), and not of the other claims. In particular, the person is overconfident if person X's work doesn't in fact contribute to FAI (assuming it's possible to contribute to FAI).

Now, the extent of overconfidence in evaluating (4) is not related to the weight of importance conveyed by the object level conclusions (1), (2) and (3). One can be underconfident about (4) and still (8) will follow. In fact, (8) is rather insensitive to the strength of assertion (4): even if you contribute to FAI a little bit, but the other object level claims hold, your work is still very important.

Finally, my impression is that Eliezer is indeed overconfident about his ability to technically contribute to FAI (4), but not to the extent this post suggests, since as I said the strength of claim (8) has nothing to do with the level of overconfidence in (4), and even a small contribution to FAI is enough to conclude (8) given the other object-level assumptions. Indeed, Eliezer never claims that success is assured:

Success is not assured. I'm not sure what's meant by confessing to being "ambitious". Is it like being "optimistic"?

On the other hand, only a few people are currently in a position to claim (4) to any extent. One needs to (a) understand the problem statement, (b) be talented enough, and (c) take the problem seriously enough to direct serious effort at it.

My ulterior motive in elaborating this argument is to make the situation a little bit clearer to myself, since I claim the same role, just to a smaller extent. (One reason I don't have much confidence is that each time I "level up", most recently this May, I realize how misguided my past efforts were, and how much time and effort it will take to develop the skillset necessary for the next step.) I don't expect to solve the whole problem (and I don't expect Eliezer or Marcello or Wei to solve the whole problem), but I do expect that over the years some measure of progress can be made by my efforts and theirs, and I expect other people will turn up (thanks to Eliezer's work on communicating the problem statement of FAI and the new SIAI's work on spreading the word) whose contributions will be more significant.

Comment author: multifoliaterose 20 August 2010 02:16:12PM 5 points [-]

Your analysis is very careful and I agree with almost everything that you say.

I think that one should be hesitant to claim too much for a single person on account of the issue which Morendil raises - we are all connected. Your ability to work on FAI depends on the farmers who grow your food, the plumbers who ensure that you have access to running water, the teachers who you learned from, the people at Google who make it easier for you to access information, etc.

I believe that you (and others working on the FAI problem) can credibly hold the view that your work has higher expected value to humanity than that of a very large majority (e.g. 99.99%) of the population. Maybe higher.

I don't believe that Eliezer can credibly hold the view that he's the highest expected value human who has ever lived. Note that he has not offered a disclaimer denying the view that JRMayne has attributed to him despite the fact that I have suggested that he do so twice now.

Comment author: Vladimir_Nesov 20 August 2010 09:07:29PM *  7 points [-]

You wrote elsewhere in the thread:

I assign a probability of less than 10^(-9) to [Eliezer] succeeding in playing a critical role on the Friendly AI project that [he's] working on.

Does it mean that we need 10^9 Eliezer-level researchers to make progress? Considering that Eliezer is probably at about a 1 in 10000 level of ability (if we forget about other factors that make research in FAI possible, such as getting into the frame of mind of understanding the problem and taking it seriously), we'd need about 1000 times more human beings than currently exist on the planet to produce a FAI, according to your estimate.
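Spelling the arithmetic out (treating each researcher's chance of playing the critical role as roughly additive, and taking the 1-in-10000 ability figure and a circa-2010 world population of roughly 7 billion as rough assumptions):

$$
\frac{10^{9}\ \text{researchers needed}\times 10^{4}\ \text{people per Eliezer-level researcher}}{7\times 10^{9}\ \text{people alive}}\approx 1.4\times 10^{3}\approx 1000.
$$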

How does this claim coexist with the one you've made in the above comment?

I believe that you (and others working on the FAI problem) can credibly hold the view that your work has higher expected value to humanity than that of a very large majority (e.g. 99.99%) of the population. Maybe higher.

It doesn't compute; there is an apparent inconsistency between these two claims. (I see some ways to mend it by charitable interpretation, but I'd rather you make the intended meaning explicit yourself.)

Comment author: Jonathan_Graehl 20 August 2010 10:16:13PM 2 points [-]

Eliezer is probably at about 1 in 10000 level of ability [of G]

Agreed, and I like to imagine that he reads that and thinks to himself "only 10000? thanks a lot!" :)

In case anyone takes the above too seriously, I consider it splitting hairs to talk about how much beyond 1 in 10000 smart anyone is - eventually, motivation, luck, and aesthetic sense / rationality begin to dominate in determining results IMO.

Comment author: whpearson 20 August 2010 02:20:05PM 4 points [-]

Most people's work doesn't contribute to saving the world.

I'd argue that a lot of people's work does. Everybody who contributes to keeping the technological world running (from farmers to chip designers) enables us to potentially save ourselves from the longer-term non-anthropogenic existential risks.

Comment author: Vladimir_Nesov 20 August 2010 02:32:29PM *  4 points [-]

Obviously, you need to interpret that statement as "Any given person's work doesn't significantly contribute to saving the world". In other words, if we "subtract" that one person, the future (in the aspect of the world not ending) changes insignificantly.

Comment author: whpearson 20 August 2010 02:46:22PM 2 points [-]

Are you also amending 4) to have the significant clause?

Because there are lots of smart people that have worked on AI, whose work I doubt would be significant. And that is the nearest reference class I have for likely significance of people working on FAI.

Comment author: cata 20 August 2010 01:31:49PM *  0 points [-]

Generally speaking, your argument isn't very persuasive unless you believe that the world is doomed without FAI and that direct FAI research is the only significant contribution you can make to saving it. (EDIT: To clarify slightly after your response, I mean to point out that you didn't directly mention these particular assumptions, and that I think many people take issue with them.)

My personal, rather uninformed belief is that FAI would be a source of enormous good, but it's not necessary for humanity to continue to grow and to overcome x-risk (so 3 is weaker); X may be contributing to the development of FAI, but not that much (so 4 is weaker); and other people engaged in productive pursuits are also contributing a non-zero amount to "save the world" (so 6 is weaker.)

As such, I have a hard time concluding that X's activity is anywhere near the "most important" using your reasoning, although it may be quite important.

Comment author: Vladimir_Nesov 20 August 2010 01:36:26PM *  3 points [-]

Generally speaking, your argument isn't very persuasive unless you believe that the world is doomed without FAI and that direct FAI research is the only significant contribution you can make to saving it.

The argument I gave doesn't include justification of things it assumes (that you referred to). It only serves to separate the issues with claims about a person from issues with claims about what's possible in the world. Both kinds of claims (assumptions in the argument I gave) could be argued with, but necessarily separately.

Comment author: JRMayne 20 August 2010 02:19:38PM *  -1 points [-]

Person X's activity is more important than that of most other people.

Person X believes their activity is more important than that of most other people.

Person X suffers from delusions of grandeur.

Person X believes that their activity is more important than all other people, and that no other people can do it.

Person X also believes that only this project is likely to save the world.

Person X also believes that FAI will save the world on all axes, including political and biological.

--JRM

Comment author: Wei_Dai 28 September 2012 07:22:57PM 1 point [-]

On the other hand, only few people are currently in the position to claim (4) to any extent. One needs to (a) understand the problem statement, (b) be talented enough, and (c) take the problem seriously enough to direct serious effort at it.

(4 here being "If FAI is possible, person X's work contributes to developing FAI.") This seems be a weak part of your argument. A successful FAI attempt will obviously have to use lots of philosophical and technical results that were not developed specifically with FAI in mind. Many people may be contributing to FAI, without consciously intending to do so. For example when I first started thinking about anthropic reasoning I was mainly thinking about human minds being copyable in the future and trying to solve philosophical puzzles related to that.

Another possibility is that the most likely routes to FAI go through intelligence enhancement or uploading, so people working in those fields are actually making more contributions to FAI than people like you and Eliezer.

Comment author: Airedale 20 August 2010 12:37:51AM 21 points [-]

give the SIAI staff an opportunity to respond to the points which I raise in the present post as well as my two posts titled Existential Risk and Public Relations and Other Existential Risks.

Indeed, given how busy everyone at SIAI has been with the Summit and the academic workshop following it, it is not surprising that there has not been much response from SIAI. I was only involved as an attendee of the Summit, and even I am only now able to find time to sit down and write something in response. At any rate, as a donor and former visiting fellow, I am only loosely affiliated with SIAI, and my comments here are solely my own, although my thoughts are certainly influenced by observations of the organization and conversation with those at SIAI. I don’t have the time/knowledge to address everything in your posts, but I wanted to say a couple of things.

I don’t disagree with you that SIAI has certain public relations problems. (Frankly, I doubt anyone at SIAI would disagree with that.) There is a lot of attention and discussion at SIAI about how to best spread knowledge about existential risks and to avoid sounding like a fringe/doomsday organization in doing so. It’s true that SIAI does consider the development of a general artificial intelligence to be the most serious existential risk facing humanity. But at least from what I have seen, much of SIAI’s current approach is to seed awareness of various existential risks among audiences that are in a position to effectively join the work in decreasing that risk.

Unfortunately, gaining recognition of existential risk is a hugely difficult task. Recent books from leading intellectuals on these issues (Sir Martin Rees’s Our Final Hour and Judge Richard Posner’s Catastrophe) don’t seem to have had very much apparent impact, and their ability to influence the general public is much greater than SIAI’s. But through the Summit and various publications, awareness does seem to be gradually increasing, including among important academics like David Chalmers.

Finally, I wanted to address one particular public relations problem, or at least, public relations issue, that is evident from your criticism so far – that is, there is an (understandable) perception that many observers have that SIAI and Eliezer are essentially synonymous. In the past, this perception may have been largely accurate. I don’t think that it currently holds true, but it definitely continues to persist in many people’s minds.

Given this perception, your primary focus on Eliezer to the exclusion of the other work that SIAI does is understandable. Nor, of course, could anyone possibly deny that Eliezer is an important part of SIAI, as its founder, board member, and prominent researcher. But there are other SIAI officers, board members, researchers, and volunteers, and there is other work that SIAI is trying to do. The Summit is probably the most notable example of this. SIAI-affiliated people are also working on spreading knowledge of existential risks and the need to face them in academia and more broadly. The evolution of SIAI into an organization not focused solely on EY and his research is still a work in progress, and the rebranding of the organization as such in the minds of the public has not necessarily kept pace with even that gradual progress.

As for EY having delusions of grandeur, I want to address that, although only briefly, because EY is obviously in a much better position to address any of that if he chooses to. My understanding of the video you linked to in your previous post is that EY is commenting on both 1) his ability to work on FAI research and 2) his desire to work on that research. No matter how high EY’s opinion of his ability, and it doubtless is very high, it seems to me that I have seen comments from him recognizing that there are others with equally high (or even higher) ability, e.g., The Level Above Mine. I have no doubt EY would agree that the pool of those with the requisite ability is very limited. But the even greater obstacle to someone carrying on EY’s work is the combination of that rare ability with the also rare desire to do that research and make it one’s life work. And I think that’s why EY answered the way he did. Indeed, the reference to Michael Vassar, it seems to me, primarily makes sense in terms of the desire axis, since Michael Vassar’s expertise is not in developing FAI himself, although he has other great qualities in terms of SIAI’s current mission of spreading existential risk awareness, etc.

Comment author: Morendil 20 August 2010 09:43:21AM *  3 points [-]

I don’t disagree with you that SIAI has certain public relations problems.

Speaking from personal experience, the SIAI's somewhat haphazard response to people answering its outreach calls strikes me as a bigger PR problem than Eliezer's personality. The SIAI strikes me as in general not very good at effective collective action (possibly because that's an area where Eliezer's strengths are, as he admits himself, underdeveloped). One thing I'd suggest to correct that is to massively encourage collaborative posts on LW.

Comment author: Airedale 20 August 2010 03:23:06PM 2 points [-]

Agreed. I think that communication and coordination with many allies and supporters has historically been a weak point for SIAI, due to various reasons including overcommitment of some of those tasked with communications, failure to task anyone with developing or maintaining certain new and ongoing relationships, interpersonal skills being among the less developed skill sets among those at SIAI, and the general growing pains of the organization. My impression is that there has been some improvement in this area recently, but there's still room for a lot more.

More collaborative posts on LW would be great to see. There have also been various discussions about workshops or review procedures for top-level posts that seem to have generated at least some interest. Maybe those discussions should just continue in the open thread or maybe it would be appropriate to have a top-level post where people could be invited to volunteer or could find others interested in collaboration, workshops, or the like.

Comment author: whpearson 20 August 2010 12:03:24AM 17 points [-]

I'd like to vote this up as I agree with lots of the points raised, but I am not comfortable with the personal nature of this article. I'd much rather the bits personal to Eliezer be sent via email.

Probably some strange drama avoidance thing on my part. On the other hand I'm not sure Eliezer would have a problem writing a piece like this about someone else.

I've thought to myself that I read one too many fantasy books as a kid, so the party metaphor hits home.

Comment author: multifoliaterose 20 August 2010 04:46:05AM 8 points [-]

I'd like to vote this up as I agree with lots of the points raised, but I am not comfortable with the personal nature of this article. I'd much rather the bits personal to Eliezer be sent via email.

I was conflicted about posting in the way that I did precisely for the reason that you describe, but after careful consideration decided that the benefits outweighed the costs, in part because Eliezer does not appear to be reading the private messages that I send him.

Comment author: JamesAndrix 21 August 2010 03:15:50AM *  3 points [-]

I would say that, given an audience that is mostly not Eliezer, the best way to send a personal message to Eliezer is to address how the community ought to relate to Eliezer.

Comment author: Aleksei_Riikonen 20 August 2010 01:34:11AM 16 points [-]

Well, in the category of "criticisms of SIAI and/or Eliezer", this text is certainly among the better ones. I could see this included on a "required reading list" of new SIAI employees or something.

But since we're talking about a Very Important Issue, i.e. existential risks, the text might have benefited from some closing warnings, that whatever people's perceptions of SIAI, it's Very Important that they don't neglect being very seriously interested in existential risks because of issues that they might perceive a particular organization working on the topic to have (and that it might also actually have, but that's not my focus in this comment).

I.e. if people think SIAI sucks and shouldn't be supported, they should anyway be very interested in supporting the Future of Humanity Institute at Oxford, for example. Otherwise they're demonstrating very high levels of irrationality, and with regard to SIAI, are probably just looking for plausible-sounding excuses to latch onto for why they shouldn't pitch in.

Not to say that the criticism you presented mightn't be very valid (or not; I'm not really commenting on that here), but it would be very important for people to first take care that they're contributing to the reduction of existential risks in some way, and then consider to what extent exactly a particular organization such as SIAI might be doing a sub-optimal job (since they can choose a more clear-cut case of an excellent organization for their initial contribution, i.e. Bostrom's FHI as mentioned above).

Comment author: grouchymusicologist 20 August 2010 03:02:38AM 15 points [-]

A number of people have mentioned the seemingly-unimpeachable reputation of the Future of Humanity Institute without mentioning that its director, Nick Bostrom, fairly obviously has a high opinion of Eliezer (e.g., he invited him to contribute not one but two chapters to the volume on Global Catastrophic Risks). Heuristically, if I have a high opinion of Bostrom and the FHI project, that raises my opinion of Eliezer and decreases the probability of Eliezer-as-crackpot.

Comment author: JamesAndrix 21 August 2010 05:11:28AM 5 points [-]

Who else is nearly as good or better at Friendly AI development than Eliezer Yudkowsky?

I mean besides me, obviously.

Comment author: jimrandomh 20 August 2010 12:20:14PM *  5 points [-]

Take no pride in your confession that you too are biased; do not glory in your self-awareness of your flaws. This is akin to the principle of not taking pride in confessing your ignorance; for if your ignorance is a source of pride to you, you may become loathe to relinquish your ignorance when evidence comes knocking. Likewise with our flaws - we should not gloat over how self-aware we are for confessing them; the occasion for rejoicing is when we have a little less to confess.

There's something to what Eliezer is saying here: when people are too strongly committed to the idea that humans are fallible this can become a self-fulfilling prophecy where humans give up on trying to improve things and as a consequence remain fallible when they could have improved.

I actually read this as a literal, technical statement about when to let the reward modules of our minds trigger, and not a statement about whether low or high confidence is desirable. Finding a flaw in oneself is only valuable if it's followed by further investigation into details and fixes, and, as a purely practical matter, that investigation is more likely to happen if you feel good about having found a fix, than if you feel good about having found a flaw.

Comment author: Jonathan_Graehl 20 August 2010 01:23:48AM *  5 points [-]

it's very important that those of us who aspire to epistemic rationality incorporate a significant element of "I'm the sort of person who engages in self-doubt because it's the right thing to do" into our self-image

I think most of us do. Your argument for this is compelling. However, I think Eliezer was just claiming that it's possible to overdo it - at least, that's the defensible core of his insight.

I've wondered if I'm obsessed with Eliezer's writings, and whether I esteem him too highly. Answers: no, and no.

Anything that has even a slight systematic negative impact on existential risk is a big deal.

Probably true. But it's sometimes easy to be on the wrong side of an argument over small differences (of course sometimes you can be certain). I guess such "there's no harm" statements (which I've also made) are biased by a desire to be conciliatory. I don't trust people to behave well when they're annoyed at each other, so I sometimes wish they would minimize the stakes.

Eliezer appears to be deviating so sharply from leading a genuinely utilitarian lifestyle

I doubt I know any utilitarians.

Comment author: wedrifid 20 August 2010 12:21:39AM *  5 points [-]

I have no comment to add but I will say that this is well written and researched. It also prompted a degree of self reflection on my part. At least, that's what I told myself and I feel this warm glow inside. ;)

Comment author: Friendly-HI 29 January 2013 12:41:42AM *  4 points [-]

As of yet, Eliezer's importance is just a stochastic variable yet to be realized; for all I know he could be killed in a car accident tomorrow or simply fail at his task of "saving the world" in numerous ways.

Up until now, Vasili Arkhipov, Stanislav Petrov and a few other people whose names I do not know (including our earliest ancestors who managed to avoid being killed during their emigration out of Africa) trump Eliezer by a tiny margin of actually saving humanity - or at least civilization.

All that being said Eliezer is still pretty awesome by my standards. And he writes good fanfiction, too.

Comment author: rwallace 20 August 2010 01:14:30AM 4 points [-]

This post is a pretty accurate description of me a few years ago, when I was a Singularitarian. The largest attraction of the belief system, to me, was that it implied as an AI researcher I was not just a hero, but a superhero, potentially capable of almost single-handedly saving the world. (And yes, I loved those video games too.)

Comment author: cousin_it 20 August 2010 07:50:44AM 2 points [-]

What's your current position?

Comment author: rwallace 20 August 2010 09:04:57AM 0 points [-]

Appealing though the belief is, the Singularity unfortunately isn't real. Nothing is going to come along and solve our problems for us, and AI is not going to be a magical exception to the rule that developing technology is hard.

Comment author: Emile 20 August 2010 10:07:11AM 4 points [-]

Nothing is going to come along and solve our problems for us, and AI is not going to be a magical exception to the rule that developing technology is hard.

Do you think many people here think that "something is going to come along and solve our problem for us", or that "developing AI is easy"?

Comment author: rwallace 20 August 2010 11:07:25AM 2 points [-]

Yes. In particular, the SIAI is explicitly founded on the beliefs that

  1. Superintelligent AI will solve all our problems.

  2. Creating same is (unlike other, much less significant technological developments) so easy that it can be done by a single team within our lifetimes.

Comment author: Aleksei_Riikonen 20 August 2010 11:59:04AM 5 points [-]

The following summary of SIAI's position says otherwise:

http://singinst.org/riskintro/index.html

It seems you're confusing what you personally thought earlier with what SIAI currently thinks.

(Though, technically you're partly right that what SIAI folks thought when said institution was founded is closer to what you say than their current position. But it's not particularly interesting what they thought 10 years ago if they've revised their position to be much better since then.)

Comment author: rwallace 20 August 2010 12:52:48PM 4 points [-]

Ah, thanks for the update; you're right, their claims regarding difficulty and timescale have been toned down quite a bit.

Comment author: Emile 20 August 2010 12:24:24PM *  3 points [-]

Do you think many people here think that "something is going to come along and solve our problem for us", or that "developing AI is easy"?

Yes. In particular, the SIAI is explicitly founded on the beliefs that [...]

That isn't really evidence that people here (currently) believe either of those. You're claiming people here believe things even though they go against some of Eliezer's writing (and I don't remember any cries of "No, Eliezer, you're wrong! Creating AI is easy!", but I might be mistaken), and even though quite a few commenters are telling you nobody here believes that.

Comment author: whpearson 20 August 2010 12:44:27PM *  4 points [-]

It depends what you mean by easy and hard. From previous conversations I expect Mr Wallace is thinking that something easy is doable by a small group over 20-30 years, and something hard is a couple of generations of the whole of civilization's work.

Comment author: katydee 20 August 2010 10:28:13AM 2 points [-]

I'm wondering where people said AI development was going to be easy.

Comment author: wedrifid 20 August 2010 12:11:22PM 2 points [-]

I'm wondering where people said AI development was going to be easy.

Indeed. There was a post "shut up and do the impossible" for a reason!

Comment author: NancyLebovitz 20 August 2010 12:09:44PM 2 points [-]

And I'm wondering where it was said that superintelligent AI will solve all our problems.

Comment author: Wei_Dai 20 August 2010 01:04:16AM *  4 points [-]

A few unrelated points:

  1. I tend to agree with you on the first section, but I think I'm less confident about it than you are. :)
  2. What is a genuinely utilitarian lifestyle? Is there someone you can cite as living such a lifestyle?
  3. I'm not sure what you're talking about in the last sentence. Prevent what from happening to Eliezer? Failing to lose hope when he should? (He wrote a post about that, BTW.)

Comment author: wedrifid 20 August 2010 01:13:55AM 1 point [-]

What is a genuinely utilitarian lifestyle? Is there someone you can cite as living such a lifestyle?

Optimising one's lifestyle for the efficient acquisition of power to enable future creation of bulk quantities of paper-clips. For example.

Comment author: John_Maxwell_IV 19 August 2010 11:50:52PM 10 points [-]

FWIW, as an entrepreneur type I consider one of my top 3 key advantages the fact that I would actually appreciate it greatly if someone explained in detail why I was wasting my time with my current project. Thinking about this motivates me significantly because I haven't met any other entrepreneur types who I'd guess this is also true for.

Comment author: Jordan 20 August 2010 04:47:53AM 5 points [-]

Semi related:

I keep a big list of ideas I'd like to implement. (Start up ideas, video games ideas, math research topics.. the three things that consume me =)

Quite often I'll find out someone is working on one of these ideas, and my immediate reaction is... relief. Relief, because I found out early enough not to waste my time. But, more than that, I look at my list of ideas like an orphanage: I'm always happy when one of them finds a loving parent =p

Out of curiosity, what do you consider your other two key advantages?

Comment author: simplicio 21 August 2010 05:01:00AM *  9 points [-]

The real bone of contention here seems to be the long chain of inference leading from common scientific/philosophical knowledge to the conclusion that uFAI is a serious existential risk. Any particular personal characteristics of EY would seem irrelevant till we have an opinion on that set of claims.

If EY were working on preventing asteroid impacts with earth, and he were the main driving force behind that effort, he could say "I'm trying to save the world" and nobody would look at him askance. That's because asteroid impacts have definitely caused mass extinctions before, so nobody can challenge the very root of his claim.

The FAI problem, on the other hand, is at the top of a large house of inferential cards, so that Eliezer is saving the world GIVEN that W, X, Y and Z are true.

My bottom line: what we should be discussing is simply "Are W, X, Y and Z true?" Once we have a good idea about how strong that house of cards is, it will be obvious whether Eliezer is in a "permissible" epistemic state, or whatever.

Maybe people who know about these questions should consider a series of posts detailing all the separate issues leading to FAI. As far as I can tell from my not-extremely-tech-savvy vantage point, the weakest pillar in that house is the question of whether strong AI is feasible (note I said "feasible," not "possible").

Comment author: Simulation_Brain 23 August 2010 04:55:28AM *  2 points [-]

Upvoted; the issue of FAI itself is more interesting than whether Eliezer is making an ass of himself and thereby hurting the SIAI message (probably a bit; claiming you're smart isn't really smart, but then he's also doing a pretty good job as publicist).

One form of productive self-doubt is to have the LW community critically examine Eliezer's central claims. Two of my attempted simplifications of those claims are posted here and here on related threads.

Those posts don't really address whether strong AI is feasible; I think most AI researchers agree that it will become so, but disagree on the timeline. I believe it's crucial but rarely recognized that the timeline really depends on how many resources are devoted to it. Those appear to be steadily increasing, so it might not be that long.

Comment author: jimrandomh 21 August 2010 04:48:54PM 3 points [-]

My bottom line: what we should be discussing is simply "Are W, X, Y and Z true?" Once we have a good idea about how strong that house of cards is, ...

You shouldn't deny knowledge of how strong claims are, and refer to those claims as "a house of cards" in the same sentence. Those two claims are mutually exclusive, and putting them close together like this set off my propagandometer.

Comment author: luminosity 20 August 2010 02:43:11AM 13 points [-]

I feel that perhaps you haven't considered the best way to maximise your chance of developing Friendly AI if you were Eliezer Yudkowsky; your perspective is very much focussed on how you see it looking in from the outside. Consider for a moment that you are in a situation where you think you can make a huge positive impact upon the world, and have founded an organisation to help you act upon that.

Your first, and biggest, problem is getting paid. You could take time off to work on attaining a fortune through some other means, but this is not a certain bet and will waste years that you could be spending working on the problem instead. Your best bet is to find already wealthy people who can be convinced that you can change the world, that it's for the best, and that they should donate significant sums of money to you, unless you believe this is even less certain than making a fortune yourself. There are already a lot of people in the world with the requisite amount of money to spare. I think seeking donations is the more rational path.

Now consider that you need to persuade people of the importance of your brilliant new idea, which no one has really been considering before, and which to most people isn't at all obvious. Is the better fund-seeking strategy to admit to people that you're uncertain whether you'll accomplish it, and compound that on top of their own doubts? Not really. Confidence is a very strong signal that will help persuade people that you're worth taking seriously. Asking Eliezer to be more publicly doubtful probably puts him in an awkward situation. I'd be very surprised if he doesn't have some doubts, maybe he even agrees with you, but to admit to these doubts would be to lower the confidence of investors in him, which would then lower further the chance of him actually being able to accomplish his goal.

Having confidence in himself is probably also important, incidentally. Talking about doubts would tend to reinforce them, and when you're embarking upon a large and important undertaking, you want to spend as much of your mental effort and time as possible on increasing the chances that you'll bring the project about, rather than dwelling on your doubts and wasting mental energy on motivating yourself to keep working.

So how to mitigate the problem that you might be wrong without running into these problems? Well, he seems to have done fairly well here. The SIAI has now grown beyond just him, giving further perspectives he can draw upon in his work to mitigate any shortcomings in his own analyses. He's laid down a large body of work explaining the mental processes he is basing his approaches on, which should be helpful both in recruitment for SIAI and in letting people point out flaws or weaknesses in the work he is doing. It seems to me that so far he has laid the groundwork out quite well, and now it just remains to see where he and the SIAI go from here. Importantly, the SIAI has grown to the point where even if he is not considering his doubts strongly enough, even if he becomes a kook, there are others there who may be able to do the same work. And if not there, his reasoning has been fairly well laid out, and there is no reason others can't follow their own take on what needs to be done.

That said, as an outsider obviously it's wise to consider the possibility that SIAI will never meet its goals. Luckily, it doesn't have to be an either/or question. Too few people consider existential risk at all, but those of us who do consider it can spread ourselves over the different risks that we see. To the degree which you think Eliezer and the SIAI are on the right track, you can donate a portion of your disposable income to them. To the extent that you think other types of existential risk prevention matter, you can donate a portion of that money to the Future of Humanity Institute, or other relevant existential risk fighting organisation.

Comment author: ata 20 August 2010 06:09:35AM *  14 points [-]

I'm inclined to think that Eliezer's clear confidence in his own very high intelligence and his apparent high estimation of his expected importance (not the dictionary-definition "expected", but rather, measured as an expected quantity the usual way) are not actually unwarranted, and only violate the social taboo against admitting to thinking highly of one's own intelligence and potential impact on the world, but I hope he does take away from this a greater sense of the importance of a "the customer is always right" attitude in managing his image as a public-ish figure. Obviously the customer is not always right, but sometimes you have to act like they are if you want to get/keep them as your customer... justified or not, there seems to be something about this whole endeavour (including but not limited to Eliezer's writings) that makes people think !!!CRAZY!!! and !!!DOOMSDAY CULT!!!, and even if it is really they who are the crazy ones, they are nevertheless the people who populate this crazy world we're trying to fix, and the solution can't always just be "read the sequences until you're rational enough to see why this makes sense".

I realize it's a balance; maybe this tone is good for attracting people who are already rational enough to see why this isn't crazy and why this tone has no bearing on the validity of the underlying arguments, like Eliezer's example of lecturing on rationality in a clown suit. Maybe the people who have a problem with it or are scared off by it are not the sort of people who would be willing or able to help much anyway. Maybe if someone is overly wary of associating with a low-status yet extremely important project, they do not really intuitively grasp its importance or have a strong enough inclination toward real altruism anyway. But reputation will still probably count for a lot toward what SIAI will eventually be able to accomplish. Maybe at the point of hearing and evaluating the arguments, seeming weird or high-self-regard-taboo-violating on the surface level will only screen off people who would not have made important contributions anyway, but it does affect who will get far enough to hear the arguments in the first place. In a world full of physics and math and AI cranks promising imminent world-changing discoveries, reasonably smart people do tend to build up intuitive nonsense-detectors, build up an automatic sense of who's not even worth listening to or engaging with; if we want more IQ 150+ people to get involved in existential risk reduction, then perhaps SIAI needs to make a greater point of seeming non-weird long enough for smart outsiders to switch from "save time by evaluating surface weirdness" mode to "take seriously and evaluate arguments directly" mode.

(Meanwhile, I'm glad Eliezer says "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me", and I hope he takes that seriously. But unfortunately, it seems that any piece of writing with the implication "This project is very important, and this guy happens, through no fault of his own, to be one of very few people in the world working on it" will always be read by some people as "This guy thinks he's one of the most important people in the world". That's probably something that can't be changed without downplaying the importance of the project, and downplaying the importance of FAI probably increases existential risk enough that the PR hit of sounding overly self-important to probable non-contributors may be well worth it in the end.)

Comment author: multifoliaterose 12 December 2010 08:23:25AM 3 points [-]

I'm inclined to think that Eliezer's clear confidence in his own very high intelligence and his apparent high estimation of his expected importance (not the dictionary-definition "expected", but rather, measured as an expected quantity the usual way) are not actually unwarranted, and only violate the social taboo against admitting to thinking highly of one's own intelligence and potential impact on the world

Leaving aside the question of whether such apparently strong estimation is warranted in the case at hand, I would suggest that there's a serious possibility that the social taboo you allude to is adaptive; that having a very high opinion of oneself (even if justified) is (on account of the affect heuristic) conducive to seeing a halo around oneself, developing overconfidence bias, rejecting criticisms prematurely, etc., leading to undesirable epistemological skewing.

Meanwhile, I'm glad Eliezer says "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me", and I hope he takes that seriously.

Same here.

it seems that any piece of writing with the implication "This project is very important, and this guy happens, through no fault of his own, to be one of very few people in the world working on it" will always be read by some people as "This guy thinks he's one of the most important people in the world".

It's easy to blunt this signal.

Suppose that any of:

  1. A billionaire decided to devote most of his or her wealth to funding Friendly AI research.

  2. A dozen brilliant academics became interested in and started doing Friendly AI research.

  3. The probability of Friendly AI research leading to a Friendly AI is sufficiently low that another existential risk reduction effort (e.g. pursuit of stable whole brain emulation) is many orders of magnitude more cost-effective at reducing existential risk than Friendly AI research.

Then Eliezer would not (by most estimations) be the human with the highest expected utilitarian value in the world. If he were to mention such possibilities explicitly, this would greatly mute the undesired connotations.

Comment author: Eliezer_Yudkowsky 12 December 2010 08:48:46AM 5 points [-]

If I thought whole-brain emulation were far more effective I would be pushing whole-brain emulation, FOR THE LOVE OF SQUIRRELS!

Comment author: multifoliaterose 12 December 2010 09:26:23AM *  2 points [-]

Good to hear from you :-)

  1. My understanding is that at present there's a great deal of uncertainty concerning how future advanced technologies are going to develop (I've gotten an impression that e.g. Nick Bostrom and Josh Tenenbaum hold this view). In view of such uncertainty, it's easy to imagine new data emerging over the next decades that makes it clear that pursuit of whole-brain emulation (or some currently unimagined strategy) is a far more effective strategy for existential risk reduction than Friendly AI research.

  2. At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.

  3. Various people have suggested to me that initially pursuing Friendly AI might have higher expected value on the chance that it turns out to be easy. So I could imagine that it's rational for you personally to focus your efforts on Friendly AI research (EDIT: even if I'm correct in my estimation in the above point). My remarks in the grandparent above were not intended as a criticism of your strategy.

  4. I would be interested in hearing more about your own thinking about the relative feasibility of Friendly AI vs. stable whole-brain emulation and current arbitrage opportunities for existential risk reduction, whether on or off the record.

Comment author: ata 12 December 2010 10:45:53AM *  2 points [-]

At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.

That's an interesting claim, and you should post your analysis of it (e.g. the evidence and reasoning that you use to form the estimate that a positive singularity is "substantially more likely" given WBE).

Comment author: Eliezer_Yudkowsky 20 August 2010 07:01:08AM 10 points [-]

there seems to be something about this whole endeavour (including but not limited to Eliezer's writings) that makes people think !!!CRAZY!!! and !!!DOOMSDAY CULT!!!,

Yes, and it's called "pattern completion", the same effect that makes people think "Singularitarians believe that only people who believe in the Singularity will be saved".

Comment author: Emile 20 August 2010 09:59:05AM 2 points [-]

This is discussed in Imaginary Positions.

Comment author: timtyler 20 August 2010 05:09:18PM *  7 points [-]

The outside view of the pitch:

  • DOOM! - and SOON!
  • GIVE US ALL YOUR MONEY;
  • We'll SAVE THE WORLD; you'll LIVE FOREVER in HEAVEN;
  • Do otherwise and YOU and YOUR LOVED ONES will suffer ETERNAL OBLIVION!

Maybe there are some bits missing - but they don't appear to be critical components of the pattern.

Indeed, this time there are some extra features not invented by those who went before - e.g.:

  • We can even send you to HEAVEN if you DIE a sinner - IF you PAY MORE MONEY to our partner organisation.

Comment author: CarlShulman 20 August 2010 05:16:31PM *  9 points [-]

Do otherwise and YOU and YOUR LOVED ONES will suffer ETERNAL OBLIVION.

This one isn't right, and is a big difference between religion and threats like extinction-level asteroids or AI disasters: one can free-ride if that's one's practice in collective action problems.

Also: Rapture of the Nerds, Not

Comment author: timtyler 14 May 2011 01:29:51PM *  -3 points [-]

It's now official!

http://en.wikipedia.org/wiki/Rapture_of_the_Nerds

...now leads to a page that is extremely similar to:

http://en.wikipedia.org/wiki/Technological_singularity

...though - curiously - there are some differences between the two pages (count the words in the first sentence). [update: this difference was apparently due to the page being simultaneously cached and updated.]

Comparisons with The Rapture are insightful, IMHO. I see no good reason to deny them.

It turns out that ETERNAL OBLIVION is too weak. The community now has the doctrine of ETERNAL DAMNATION. For details, see here.

Comment author: ArisKatsaris 15 May 2011 09:04:48AM 0 points [-]

People need to stop being coy. If you know a difference, just spit it out, don't force people to jump through meaningless hoops like "count the words in the first sentence".

Downvoted for wasting people's time with coyness because of a false belief caused by a cache issue.

Comment author: AdeleneDawner 14 May 2011 08:22:46PM *  -2 points [-]

Uh, no it doesn't, and in fact this appears to be an actual lie (EDIT: Nope, cache issue) rather than the RotN page being changed since you checked it.

Comment author: timtyler 14 May 2011 08:33:13PM *  1 point [-]

Before you start flinging accusations around, perhaps check, reconsider - or get a second opinion?

To clarify, for me, http://en.wikipedia.org/wiki/Rapture_of_the_Nerds still gives me:

Technological singularity

From Wikipedia, the free encyclopedia

(Redirected from Rapture of the Nerds)

Comment author: [deleted] 14 May 2011 10:04:26PM 1 point [-]

Since it redirects, the relevant history page is the technological singularity history page. Namely, this one. And there was indeed a recent change to the first sentence. See for example this comparison.

Comment author: cousin_it 20 August 2010 06:45:43PM *  3 points [-]

I don't understand why this was downvoted. It does sound like an accurate representation of the outside view.

Comment author: Unknowns 20 August 2010 07:30:14PM 4 points [-]

It may have been downvoted for the caps.

Comment author: [deleted] 14 May 2011 10:10:03PM 3 points [-]

Given that a certain fraction of comments are foolish, you can expect that an even larger fraction of votes are foolish, because there are fewer controls on votes (e.g. a voter doesn't risk his reputation while a commenter does).

Comment author: rhollerith_dot_com 15 May 2011 02:54:33AM *  2 points [-]

Which is why Slashdot (which was a lot more worthwhile in the past than it is now) introduced voting on how other people vote (which Slashdot called metamoderation). Worked pretty well: the decline of Slashdot was mild and gradual compared to the decline of almost every other social site that ever reached Slashdot's level of quality.

Comment author: Perplexed 20 August 2010 07:12:44PM 3 points [-]

Perhaps downvoted for suggesting that the salvation-for-cash meme is a modern one. I upvoted, though.

Comment author: Vladimir_Nesov 20 August 2010 09:23:43PM *  12 points [-]

This whole "outside view" methodology, where you insist on arguing from ignorance even where you have additional knowledge, is insane (outside of avoiding the specific biases such as planning fallacy induced by making additional detail available to your mind, where you indirectly benefit from basing your decision on ignorance).

In many cases outside view, and in particular reference class tennis, is a form of filtering the evidence, and thus "not technically" lying, a tool of anti-epistemology and dark arts, fit for deceiving yourself and others.

Comment author: Nick_Tarleton 20 August 2010 09:41:21PM 7 points [-]

We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.

Comment author: timtyler 14 May 2011 04:09:50PM *  2 points [-]

We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.

If this particular critique has been made more clearly elsewhere, perhaps let me know, and I will happily link to there in the future.

Update 2011-05-30: There's now this recent article: The “Rapture” and the “Singularity” Have Much in Common - which makes a rather similar point.

Comment author: Strange7 20 August 2010 03:33:40PM 4 points [-]

if we want more IQ 150+ people to get involved in existential risk reduction, then perhaps SIAI needs to make a greater point of seeming non-weird long enough for smart outsiders to switch from "save time by evaluating surface weirdness" mode to "take seriously and evaluate arguments directly" mode.

What about less-smart people? I mean, self-motivated idealistic genius nerds are certainly necessary for the core functions of programming an FAI, but any sufficiently large organization also needs a certain number of people who mostly just file paperwork, follow orders, answer the phone, etc., and things tend to work out more efficiently when those people are primarily motivated by the organization's actual goals rather than its willingness to pay.

Comment author: HughRistik 20 August 2010 07:51:01PM *  1 point [-]

Good point. It's the people in the <130 range that SIAI needs to figure out how to attract. That's where you find people like journalists and politicians.

Comment author: wedrifid 31 August 2010 08:19:37AM 6 points [-]

It's the people in the <130 range that SIAI needs to figure out how to attract. That's where you find people like journalists and politicians.

You also find a lot of journalists and politicians in the 130 to 160 range, but the important thing with those groups is that they optimise their beliefs, and their expressions thereof, for appeal to a < 130 range audience.

Comment author: Jordan 20 August 2010 06:02:24AM *  3 points [-]

Honestly, I don't think Eliezer would look overly eccentric if it weren't for LessWrong/Overcomingbias. Comp sci is notoriously eccentric, AI research possibly more so. The stigma against Eliezer isn't from his ideas, it isn't from his self-confidence; it's from his following.

Kurzweil is a more dulled case: he has good ideas but is clearly sensational; he has a large following, but that following isn't nearly as dedicated as the one to Eliezer (not necessarily to Eliezer himself, but to his writings and the "practicing of rationality"). And the effect? I have a visceral distaste whenever I hear someone from the Kurzweil camp say something pro-singularity. It's very easy for me to imagine that if I didn't already put stock in the notion of a singularity, hearing a Kurzweilian talk would bias me against the idea.

Nonetheless, it may very well be the case that Kurzweil has done a net good to the singularity meme (and perhaps net harm to existential risk), spreading the idea wide and far, even while generating negative responses. Is the case with Eliezer the same? I don't know. My gut says no. Taking existential risk seriously is a much harder meme to catch than believing in a dumbed down version of the singularity.

My intuition is that Eliezer by himself, although abrasive in presentation, isn't turning people off by his self-confidence and grandiosity. On the contrary, I -- and I suspect many -- love to argue with intelligent people with strong beliefs. In this sense, Eliezer's self-assurance is good bait. On the other hand, when someone with inferior debating skills goes around spouting off the message of someone else, that, to me, is purely repulsive: I have no desire to talk with those people. They're the people spouting off Aether nonsense on physics forums. There's no status to be won, even on the slim chance of victory.

Finally, aside from Eliezer as himself and Eliezer through the proxy of others, there's also Eliezer as a figurehead of SIAI. Here things are different as well, and Eliezer is again no longer merely himself. He speaks for an organisation, and, culturally, we expect serious organisations to temper their outlandish claims. Take cancer research: presumably all researchers want to cure cancer. Presumably at least some of them are optimistic and believe we actually will. But we rarely hear this, and we never hear it from organizations.

I think SIAI, and Eliezer in his capacity as a figurehead, probably should temper their claims as well. The idea of existential risks from AI is already pervasive. Hollywood took care of that. What remains is a battle of credibility.

(Unfortunately, I really don't know how to go about tempering claims with the previous claims already on permanent record. But maybe this isn't as important as I think it is.)

Comment author: ata 20 August 2010 06:22:50AM *  2 points [-]

Honestly, I don't think Eliezer would look overly eccentric if it weren't for LessWrong/Overcomingbias. Comp sci is notoriously eccentric, AI research possibly more so. The stigma against Eliezer isn't from his ideas, it isn't from his self-confidence; it's from his following.

Would you include SL4 there too? I think there were discussions there years ago (well before OB, and possibly before Kurzweil's overloaded Singularity meme complex became popular) about the perception of SIAI/Singularitarianism as a cult. (I wasn't around for any such discussions, but I've poked around in the archives from time to time. Here is one example.)

Comment author: JamesAndrix 23 August 2010 05:38:43AM 9 points [-]

How would you address this?

http://scienceblogs.com/pharyngula/2010/08/kurzweil_still_doesnt_understa.php

It seems to me like PZ Myers really doesn't understand information theory. He's attacking Kurzweil and calling him a kook, initially due to a relatively straightforward complexity estimate.

And I'm pretty confident that Myers is wrong on this, unless there is another information-rich source of inheritance besides DNA, which Myers knows about but Kurzweil and I do not.

This looks to me like a popular science blogger doing huge PR damage to everything singularity related, and being wrong about it. Even if he is later convinced of this point.

I don't see how to avoid this short of just holding back all claims which seem exceptional and that some 'reasonable' person might fail to understand and see as a sign of cultishness. If we can't make claims as basic as the design of the brain being in the genome, then we may as well just remain silent.

But then we wouldn't find out if we're wrong, and we're rationalists.

Comment author: WrongBot 23 August 2010 03:15:39PM 9 points [-]

For instance, you can't measure the number of transistors in an Intel CPU and then announce, "A-ha! We now understand what a small amount of information is actually required to create all those operating systems and computer games and Microsoft Word, and it is much, much smaller than everyone is assuming."

This analogy made me cringe. Myers is disagreeing with the claim that human DNA completely encodes the structure and functioning of the human brain: the hardware and software, roughly. Looking at the complexity of the hardware and making claims about the complexity of the software, as he does here, is completely irrelevant to his disagreement. It serves only to obscure the actual point under debate, and demonstrates that he has no idea what he's talking about.

Comment author: Emile 23 August 2010 08:31:54AM *  4 points [-]

It seems to me like PZ Myers really doesn't understand information theory. He's attacking Kurzweil and calling him a kook, initially due to a relatively straightforward complexity estimate.

I see it that way too. The DNA can give us an upper bound on the information needed to create a human brain, but PZ Myers reads that as "Kurzweil is saying we will be able to take a strand of DNA and build a brain from that in the next 10 years!", and then proceeds to attack that straw man.

This, however:

His timeline is absurd. I'm a developmental neuroscientist; I have a very good idea of the immensity of what we don't understand about how the brain works. No one with any knowledge of the field is claiming that we'll understand how the brain works within 10 years. And if we don't understand all but a fraction of the functionality of the brain, that makes reverse engineering extremely difficult.

... I am quite inclined to trust. I would trust it more if it weren't followed by statements about information theory that seem wrong (to me, at least).

Looking at the comments is depressing. I wish there were some "sane" way for the two communities (readers of PZ Myers and "singularitarians") to engage without it degenerating into name-calling.

Brian: "We should unite against our common enemy!"

Others: "The Judean People's Front?"

Brian: "No! The Romans!"

Though there are software solutions for that (takeonit and other stuff that's been discussed here), it wouldn't hurt either if the "leaders" (PZ Myers, Kurzweil, etc.) were a bit more responsible and made a genuine effort to acknowledge the other's points when they are strong, so they could converge or at least agree to disagree on something narrow.

But nooo, it's much more fun to get angry, and it gets you more traffic too!

Comment author: Risto_Saarelma 23 August 2010 11:01:04AM *  7 points [-]

There seems to be a culture clash between computer scientists and biologists on this matter. DNA bit length as a back-of-the-envelope complexity estimate for a heavily compressed AGI source seems obvious to me, and, it seems, to Larry Page. Biologists are quick to jump to the particulars of protein synthesis and ignore the question of extra information, because biologists don't really deal with information-theoretic existence proofs.

It really doesn't help the matter that Kurzweil threw out his estimate when talking about getting at AGI by specifically emulating the human brain, instead of just trying to develop a general human-equivalent AI using code suitable for the computation platform used. This seems to steer most people into thinking that Kurzweil was thinking of using the DNA as literal source code instead of just a complexity yardstick.

Myers seems to have pretty much gone into his creationist-bashing attack mode on this, so I don't have very high hopes for any meaningful dialogue from him.

Comment author: whpearson 23 August 2010 12:24:51PM 3 points [-]

I'm still not sure what people are trying to say with this. Because the Kolmogorov complexity of the human brain given the language of the genetic code and physics is low, therefore X? What is that X, precisely?

Because of Kolmogorov complexity's additive constant, which could be anything from 0 to 3^^^3 or higher, I think it only gives us weak evidence about the amount of code we should expect it to take to code an AI on a computer. It is even weaker evidence for the amount of code it would take to code for it with limited resources. E.g. the laws of physics are simple and little information is taken from the womb, but to create an intelligence from them might require a quantum computer the size of a human head to decompress the compressed code. There might be shortcuts, but they might be of vastly greater complexity.

We tend to ignore additive constants when talking about complexity classes, because human-designed algorithms tend not to have huge additive constants. Although I have come across some in my time, such as this...
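
For reference, the "additive constant" being invoked here is the one from the standard invariance theorem of Kolmogorov complexity (a textbook statement, not something specific to this thread): switching between two universal description languages changes the complexity of every object by at most a constant that depends only on the pair of languages, and nothing in the theorem bounds how large that constant is.

```latex
% Invariance theorem: for any two universal machines U and V there exists
% a constant c_{U,V} such that, for every string x,
\[
  \lvert K_U(x) - K_V(x) \rvert \;\le\; c_{U,V},
\]
% i.e. translating between "description languages" (genome-plus-physics
% vs. computer code) shifts the estimate only by a constant -- but that
% constant may be arbitrarily large.
```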

Comment author: Emile 23 August 2010 03:45:42PM 3 points [-]

We have something like this going on:

discrete DNA code -> lots of messy chemistry and biology -> human intelligence

and we're comparing it to :

discrete computer code -> computer -> human intelligence

Kurzweil is arguing that the size of the DNA code can tell us about the max size of the computer code needed to run an intelligent brain simulation (or a human-level AI), and PZ Myers is basically saying "no, 'cause that chemistry and biology is really really messy".

Now, I agree that the computer code and the DNA code are very very different ("a huge amount of enzymes interacting with each other in 3D real time" isn't the kind of thing you easily simulate on a computer), and the additive constant for converting one into the other is likely to be pretty darn big.

But I also don't see a reason for intelligence to be easier to express with messy biology and chemistry than with computer code. The things about intelligence that are the closest to biology (interfacing with the real world, how one neuron functions) are also the kind of things that we can already do quite well with computer programs.

There are some things that are "natural" to code in Prolog but not in Fortran. So a short program in Prolog might require a long program in Fortran to do the same thing, and for different programs it might be the other way around. I don't see any reason to think that it's easier to encode intelligence in DNA than it is in computer code.

(Now, Kurzweil may be overstating his case when he talks about "compressed" DNA, because to be fair you should compare that to compressed (or compiled) computer code, which translates to much more actual code. I still think the size of the DNA is a very reasonable upper limit, especially when you consider that the DNA was coded by a bloody idiot whose main design pattern is "copy-and-paste", resulting in the bloated code we know.)
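
To make the back-of-the-envelope arithmetic behind this upper-bound argument concrete, here is a minimal sketch; the genome length is a standard rough figure, and the compression factor is an assumption chosen purely for illustration, not Kurzweil's own number:

```python
# Rough information content of the human genome, as used in the
# "DNA as an upper bound on brain-design complexity" argument.
base_pairs = 3.2e9        # approximate length of the human genome
bits_per_base = 2         # four possible bases -> log2(4) = 2 bits each

raw_bits = base_pairs * bits_per_base
raw_mb = raw_bits / 8 / 1e6
print(f"Uncompressed genome: ~{raw_mb:.0f} MB")   # ~800 MB

# Much of the genome is repetitive, so compressed estimates come out far
# smaller; the factor below is an assumption for illustration only.
assumed_compression = 0.05
print(f"Illustrative compressed bound: ~{raw_mb * assumed_compression:.0f} MB")
```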

Comment author: Kingreaper 23 August 2010 03:26:45PM 4 points [-]

And I'm pretty confident that Myers is wrong on this, unless there is another information-rich source of inheritance besides DNA, which Myers knows about but Kurzweil and I do not.

The environment is information-rich, especially the social environment.

Myers makes it quite clear that interactions with the environment are an expected input of information in his understanding.

Do you disagree with information input from the environment?

Comment author: JamesAndrix 23 August 2010 05:10:13PM 4 points [-]

Yes, I disagree.

If he's not talking about some stable information that is present in all environments that yield intelligent humans, then what's important is a kind of information that can be mass-generated at low complexity cost.

Even language exposure is relatively low complexity, and the key parts might be inferable from brain processes. And we already know how to offer a socially rich environment, so I don't think it should add to the complexity costs of this problem.

And I think a reverse engineering of a newborn baby brain would be quite sufficient for Kurzweil's goal.

In short: we know intelligent brains get reliably generated. We know it's very complex. The source of that complexity must be something information rich, stable, and universal. I know of exactly one such source.

Right now I'm reading Myers' argument as "a big part of human heredity is memetic rather than just genetic, and there is complex interplay between genes and memes, so you've got to count the memes as part of the total complexity."

I say that Kurzweil is trying to create something compatible with human memes in the first place, so we can load them the same way we load children (at worst). And even for the classes of memes that do interact tightly with genes (age-appropriate language exposure), their information content is not all that high.

Comment author: knb 26 August 2010 05:17:21AM *  2 points [-]

Myers has always had a tendency to attack other people's arguments like enemy soldiers. A good example is his take on evolutionary psychology, which he hates so much it is actually funny.

And then look at the source: Satoshi Kanazawa, the Fenimore Cooper of Sociobiology, the professional fantasist of Psychology Today. He's like the poster boy for the stupidity and groundlessness of freakishly fact-free evolutionary psychology. Just ignore anything with Kanazawa's name on it.

He also claims to have desecrated a consecrated host (the sacramental wafers Catholics consider to be the body of Jesus). That will show those evil theists how a good, rational person behaves!

Comment author: ciphergoth 23 August 2010 07:16:39AM 2 points [-]

This was cited to me in a blog discussion as "schoolboy biology EY gets wrong" (he said something similar, apparently).

Comment author: Mitchell_Porter 23 August 2010 07:44:47AM 2 points [-]

I'm pretty confident that Myers is wrong on this, unless there is another information-rich source of inheritance besides DNA, which Myers knows about but Kurzweil and I do not.

Myers' thesis is that you are not going to figure out by brute-force physical simulation how the genome gives rise to the organism, knowing just the genomic sequence. On every scale - molecule, cell, tissue, organism - there are very complicated boundary conditions at work. You have to do experimental biology, observe those boundary conditions, and figure out what role they play. I predict he would be a lot more sympathetic if Kurzweil was talking about AIs figuring out the brain by doing experimental biology, rather than just saying genomic sequence + laws of physics will get us there.

Comment author: Perplexed 23 August 2010 04:03:45PM 5 points [-]

Myers' thesis is that you are not going to figure out by brute-force physical simulation how the genome gives rise to the organism, knowing just the genomic sequence.

And he is quite possibly correct. However, that has nothing at all to do with what Kurzweil said.

I predict he would be a lot more sympathetic if Kurzweil was talking about AIs figuring out the brain by doing experimental biology, rather than just saying genomic sequence + laws of physics will get us there.

I predict he would be more sympathetic if he just made the effort to figure out what Kurzweil said. But, of course, we all know there is no chance of that, so "conjecture" might be a better word than "predict".

Comment author: Mitchell_Porter 24 August 2010 11:18:29AM 2 points [-]

Myers doesn't have an argument against Kurzweil's estimate of the brain's complexity. But his skepticism about Kurzweil's timescale can be expressed in terms of the difficulty of searching large spaces. Let's say it does take a million lines of code to simulate the brain. Where is the argument that we can produce the right million lines of code within twenty years? The space of million-line programs is very large.
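
To put a rough number on "very large", here is a crude count; the per-line branching factor is an arbitrary assumption, used only to show the order of magnitude:

```python
# Crude size of the search space of million-line programs, assuming
# (arbitrarily) about 1000 plausible choices per line.
import math

lines = 10**6
choices_per_line = 1000
exponent = lines * math.log10(choices_per_line)       # exponent of 10
print(f"Search space ~ 10^{exponent:,.0f} programs")  # ~10^3,000,000
```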

Comment author: SilasBarta 23 August 2010 03:41:47PM 3 points [-]

I agree, but at the same time, I wish biologists would learn more information theory, since their focus should be identifying the information flows going on, as this is what will lead us to a comprehensible model of human development and functionality.

(I freely admit I don't have years in the trenches, so this may be a naive view, but if my experience with any other scientific turf war is any guide, this is important advice.)

Comment author: [deleted] 20 August 2010 08:57:17PM 15 points [-]

I don't think there's any point doing armchair diagnoses and accusing people of delusions of grandeur. I wouldn't go so far as to claim that Eliezer needs more self-doubt, in a psychological sense. That's an awfully personal statement to make publicly. It's not self-confidence I'm worried about, it's insularity.

Here's the thing. The whole SIAI project is not publicly affiliated with (as far as I've heard) other, more mainstream institutions with relevant expertise. Universities, government agencies, corporations. We don't have guest posts from Dr. X or Think Tank Fellow Y. The ideas related to friendly AI and existential risk have not been shopped to academia or evaluated by scientists in the usual way. So they're not being tested stringently enough.

It's speculative. It feels fuzzy to me -- I'm not an expert in AI, but I have some education in math, and things feel fuzzy around here.

If you want to claim you're working on a project that may save the world, fine. But there's got to be more to show for it, sooner or later, than speculative essays. At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat! Look at climate science. For all the foibles and challenges of the climate change movement, those people actually gather data, create prediction models, predict the results of mitigating policies -- it works more or less like science.

If I'm completely off base here and SIAI is going to get to the science soon, I apologize, and I'll shut up about this for a while.

But look. All this advice about the "sin of underconfidence" is all very well (and actually I've taken it to heart somewhat.) But if you're going to go test your abilities, then test them. Against skeptics. Against people who'll look at you like you're a rotten fish if you don't have a graduate degree. Get something about FAI peer-reviewed or published by a reputable press. Show us something.

Sorry to be so blunt. It's just that I want this to be something. And I have my doubts because there doesn't seem to be enough in this floating world in the way of unmistakable, concrete achievement.

Comment author: steven0461 20 August 2010 09:48:22PM *  13 points [-]

The whole SIAI project is not publicly affiliated with (as far as I've heard) other, more mainstream institutions with relevant expertise. Universities, government agencies, corporations. We don't have guest posts from Dr. X or Think Tank Fellow Y.

According to the about page, LW is brought to you by the Future of Humanity Institute at Oxford University. Does this count? Many Dr. Xes have spoken at the Singularity Summits.

At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat!

It's not clear how one would use past data to give evidence for or against a UFAI threat in any straightforward way. There are various kinds of indirect evidence that could be presented, and SIAI has indeed been trying more in the last year or two to publish articles and give conference talks presenting such evidence.

Points that SIAI would do better if it had better PR, had more transparency, published more in the scientific literature, etc., are all well-taken, but these things use limited resources, which to me makes it sound strange to use them as arguments to direct funding elsewhere.

Comment author: [deleted] 20 August 2010 10:06:58PM 5 points [-]

My post was by way of explaining why some people (including myself) doubt the claims of SIAI. People doubt claims when, compared to other claims, they're not justified as rigorously, or haven't met certain public standards. Why do I agree with the main post that Eliezer isn't justified in his opinion of his own importance (and SIAI's importance)? Because there isn't (yet) a lot beyond speculation here.

I understand about limited resources. If I were trying to run a foundation like SIAI, I might do exactly what it's doing, at first, and then try to get the academic credentials. But as an outside person, trying to determine: is this worth my time? Is this worth further study? Is this a field I could work in? Is this worth my giving away part of my (currently puny) income in donations? I'm likely to hold off until I see something stronger.

And I'm likely to be turned off by statements with a tone that assumes anyone sufficiently rational should already be on board. Well, no! It's not an obvious, open-and-shut deal.

What if there were an organization comprised of idealistic, speculative types, who, unknowingly, got themselves to believe something completely false based on sketchy philosophical arguments? They might look a lot like SIAI. Could an outside observer distinguish fruitful non-mainstream speculation from pointless non-mainstream speculation?

Comment author: Morendil 20 August 2010 09:10:06PM 5 points [-]

We don't have guest posts from Dr. X or Think Tank Fellow Y.

Possibly because this blog is Less Wrong, positioned as "a community blog devoted to refining the art of human rationality", and not as the SIAI blog, or an existential risk blog, or an FAI blog.

Comment author: multifoliaterose 21 August 2010 04:59:32AM 4 points [-]

I don't think there's any point doing armchair diagnoses and accusing people of delusions of grandeur.

I respectfully disagree with this statement, at least as an absolute. I believe that:

(A) In situations in which people are making significant life choices based on person X's claims and person X exhibits behavior which is highly correlated with delusions of grandeur, it's appropriate to raise the possibility that person X's claims arise from delusions of grandeur and ask that person X publicly address this possibility.

(B) When one raises the possibility that somebody is suffering from delusions of grandeur, this should be done in as polite and nonconfrontational a way as possible, given the nature of the topic.

I believe that if more people adopted these practices, it would raise the sanity waterline.

I believe that the situation with respect to Eliezer and portions of the LW community is as in (A) and that I made a good faith effort at (B).

Comment author: WrongBot 20 August 2010 09:13:15PM 7 points [-]

Here's the thing. The whole SIAI project is not publicly affiliated with (as far as I've heard) other, more mainstream institutions with relevant expertise.

LessWrong is itself a joint project of the SIAI and the Future of Humanity Institute at Oxford. Researchers at the SIAI have published these academic papers. The Singularity Summit's website includes a lengthy list of partners, including Google and Scientific American.

The SIAI and Eliezer may not have done the best possible job of engaging with the academic mainstream, but they haven't done a terrible one either, and accusations that they aren't trying are, so far as I am able to determine, factually inaccurate.

Comment author: Perplexed 21 August 2010 05:30:53PM *  6 points [-]

Researchers at the SIAI have published these academic papers.

But those don't really qualify as "published academic papers" in the sense that those terms are usually understood in academia. They are instead "research reports" or "technical reports".

The one additional hoop that these high-quality articles should pass through before they earn the status of true academic publications is to actually be published - i.e. accepted by a reputable (paper or online) journal. This hoop exists for a variety of reasons, including the claim that the research has been subjected to at least a modicum of unbiased review, a locus for post-publication critique (at least a journal letters-to-editor column), and a promise of stable curatorship. Plus inclusion in citation indexes and the like.

Perhaps the FHI should sponsor a journal, to serve as a venue and repository for research articles like these.

Comment author: [deleted] 20 August 2010 09:25:43PM 4 points [-]

Okay, I take that back. I did know about the connection between SIAI and FHI and Oxford.

What are these academic papers published in? A lot of them don't provide that information; one is in Global Catastrophic Risks.

At any rate, I exaggerated in saying there isn't any engagement with the academic mainstream. But it looks like it's not very much. And I recall a post of Eliezer's that said, roughly, "It's not that academia has rejected my ideas, it's that I haven't done the work of trying to get academia's attention." Well, why not?

Comment author: WrongBot 20 August 2010 09:53:51PM 4 points [-]

And I recall a post of Eliezer's that said, roughly, "It's not that academia has rejected my ideas, it's that I haven't done the work of trying to get academia's attention." Well, why not?

Limited time and more important objectives, I would assume. Most academic work is not substantially better than trial-and-error in terms of usefulness and accuracy; it gets by on volume. Volume is a detriment in Friendliness research, because errors can have large detrimental effects relative to the size of the error. (Like the accidental creation of a paperclipper.)

Comment author: Eliezer_Yudkowsky 20 August 2010 09:39:34PM 0 points [-]

If you want it done, feel free to do it yourself. :)

Comment author: wedrifid 21 August 2010 10:16:43PM 2 points [-]

I agree with your conclusion but not this part:

If you want to claim you're working on a project that may save the world, fine. But there's got to be more to show for it, sooner or later, than speculative essays. At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat! Look at climate science. For all the foibles and challenges of the climate change movement, those people actually gather data, create prediction models, predict the results of mitigating policies -- it works more or less like science.

I categorically do not want statistical studies of the type you mention done. I do want solid academic research done, but not experiments. Some statistics on, for example, human predictions vs. actual time till successful completion on tasks of various difficulties would be useful. But these do not appear to be the type of studies you are asking for, nor do they target the most significant parts of the conclusion.

You are not entitled to that particular proof.

EDIT: The 'entitlement' link was broken.

Comment author: timtyler 21 August 2010 06:55:20AM *  2 points [-]

We don't have guest posts from Dr. X or Think Tank Fellow Y.

There's these fellows:

Some of them have contributed here:

Comment author: Perplexed 21 August 2010 05:29:59AM 1 point [-]

I only wish it were possible to upvote this comment more than once.

Comment author: prase 20 August 2010 10:55:08AM *  7 points [-]

An interesting post, well written, upvoted. The mere existence of such posts here constitutes proof that LW is still far from Objectivism, not only because Eliezer is way more rational (and compassionate) than Ayn Rand, but mainly because the other people here are aware of the dangers of cultism.

However, I am not sure whether the right way to prevent cultish behaviour (whether the risk is real or not) is to issue warnings like this to the leader (or any sort of warning, perhaps). The dangers of cultism emerge from simply having a leader; whatever the level of personal rationality, being the single extraordinarily revered person in any group for any length of time probably harms one's judgement, and the overall atmosphere of reverence is unhealthy for the group. More generally, the problem doesn't necessarily depend on the existence of a leader: if a group is too devoted to some single idea, it faces lots of dangers, the gravest of which is perhaps separation from reality. Especially if the idea lives in an environment where relevant information is not abundant.

Therefore, I would prefer to see the community concentrate on a broader class of topics, and to continue in the tradition of disseminating rationality started on OB. Mitigating existential risk is a serious business indeed, and it has to be discussed appropriately, but we shouldn't lose perspective and become too fanatical about the issue. There were many statements written on LW in recent months and years, many of them not by EY, declaring an absolute preference for existential risk mitigation above everything else; those statements I find unsettling.

Final nitpick: Gandhi is misspelled in the OP.

Comment author: ciphergoth 20 August 2010 11:23:24AM 6 points [-]

There were many statements written on LW in recent months and years, many of them not by EY, declaring an absolute preference for existential risk mitigation above everything else; those statements I find unsettling.

The case for devoting all of your altruistic efforts to a single maximally efficient cause seems strong to me, as does the case that existential risk mitigation is that maximally efficient cause. I take it you're familiar with that case (though see eg "Astronomical Waste" if not) so I won't set it all out again here. If you think I'm mistaken, actual counter-arguments would be more useful than emotional reactions.

Comment author: prase 20 August 2010 11:55:52AM *  3 points [-]

I don't object to devoting (almost) all efforts to a single cause generally. I do, however, object to such devotion in case of FAI and the Singularity.

If a person devotes all his efforts to a single cause, his subjective feeling of the cause's importance will probably increase, and most such people will subsequently overestimate how important the cause is. This danger can be faced by carefully comparing the results of one's deeds with the results of other people's efforts, using a set of selected objective criteria, or by measuring them against some scale ideally fixed at the beginning, to protect oneself from moving the goalposts.

The problem is, if the cause is put so far in the future and based so much on speculation, there is no fixed point to look at when countering one's own biases, and the risk of a gross overestimation of one's agenda becomes huge. So the reason why I dislike the mentioned suggestions (and I am speaking, for example, about the idea that it is a strict moral duty for everybody who can to support FAI research as much as they can, which was implicitly present at least in the discussions about the forbidden topic) is not that I reject single-cause devotion in principle (although I like to be wary of it in most situations), but that I assign too low a probability to the correctness of the underlying ideas. The whole business is based on predictions several tens or possibly hundreds of years into the future, which is historically a very unsuccessful discipline. And I can't help but include it in that reference class.

Simultaneously, I don't accept the argument from a huge utility difference between possible outcomes, which is supposed to justify one's involvement even if the probability of success (or even the probability that the effort makes sense) is extremely low. Pascal-wageresque reasoning is unreliable, even if formalised, because it needs careful and precise estimation of probabilities close to 1 or 0, which humans are provably bad at.
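
To illustrate how sensitive such a calculation is to the probability estimate, here is a minimal sketch with made-up numbers; the payoff and the probabilities are arbitrary placeholders, not anyone's actual estimates:

```python
# Expected-value arithmetic for a Pascal-like wager, with invented numbers.
# The point is only that the conclusion swings by orders of magnitude with
# small changes in a probability nobody can estimate precisely.
utility_at_stake = 1e15   # hypothetical "astronomical" payoff (arbitrary units)

for p in (1e-6, 1e-9, 1e-12, 1e-15):
    expected_value = p * utility_at_stake
    print(f"p = {p:.0e}  ->  expected value = {expected_value:,.0f}")
```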

Comment author: Wei_Dai 20 August 2010 12:15:16PM 5 points [-]

Pascal-wageresque reasoning is unreliable, even if formalised, because it needs careful and precise estimation of probabilities close to 1 or 0, which humans are provably bad at.

Assuming you're right, why doesn't rejection of Pascal-like wagers also require careful and precise estimation of probabilities close to 1 or 0?

Comment author: prase 20 August 2010 12:21:01PM 2 points [-]

I use a heuristic which tells me to ignore Pascal-like wagers and to do whatever I would have done if I hadn't learned about the wager (to a first approximation). I don't behave like a utilitarian in this case, so I don't need to estimate the probabilities and utilities. (I think if I did, my decision would be fairly random, since the utilities and probabilities involved would almost certainly be determined mostly by the anchoring effect.)

Comment author: Perplexed 20 August 2010 03:22:31PM *  6 points [-]

I use a heuristic which tells me to ignore Pascal-like wagers

I am not sure exactly what using this heuristic entails. I certainly understand the motivation behind the heuristic:

  • when you multiply an astronomical utility (disutility) by a minuscule probability, you may get an ordinary-sized utility (disutility), apparently suitable for comparison with other ordinary-sized utilities. Don't trust the results of this calculation! You have almost certainly made an error in estimating the probability, or the utility, or both.

But how do you turn that (quite rational IMO) lack of trust into an action principle? I can imagine 4 possible precepts:

  • Don't buy lottery tickets
  • Don't buy insurance
  • Don't sell insurance
  • Don't sell back lottery tickets you already own.

Is it rationally consistent to follow all 4 precepts, or is there an inconsistency?
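
One way to probe the consistency question is to compare the expected monetary value of the two purchases; the figures below are invented round numbers, used only for illustration:

```python
# Expected monetary value of buying a lottery ticket vs. buying insurance,
# with invented round numbers. Both purchases have negative expected monetary
# value, so a pure EV-of-money rule treats them symmetrically; the usual case
# for insurance (and against lotteries) rests on diminishing marginal utility
# of money, not on EV alone.
ticket_price, jackpot, p_win = 1.0, 10_000_000.0, 1e-8
lottery_ev = p_win * jackpot - ticket_price       # ~ -0.90

premium, loss, p_loss = 500.0, 200_000.0, 1e-3
insurance_ev = p_loss * loss - premium            # ~ -300.00

print(f"Lottery ticket EV: {lottery_ev:+.2f}")
print(f"Insurance EV:      {insurance_ev:+.2f}")
```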

Comment author: timtyler 20 August 2010 11:46:07PM 4 points [-]

Another red flag is when someone else helpfully does the calculation for you - and then expects you to update on the results. Looking at the long history of Pascal-like wagers, that is pretty likely to be an attempt at manipulation.

Comment author: timtyler 21 August 2010 06:52:10PM 2 points [-]

"I believe SIAI’s probability of success is lower than what we can reasonably conceptualize; this does not rule it out as a good investment (since the hoped-for benefit is so large), but neither does the math support it as an investment (donating simply because the hoped-for benefit multiplied by the smallest conceivable probability is large would, in my view, be a form of falling prey to “Pascal’s Mugging”."

Comment author: ciphergoth 20 August 2010 12:29:39PM 2 points [-]

Which of the axioms of the Von Neumann–Morgenstern utility theorem do you reject?
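
For readers who want the axioms in front of them, here is a standard paraphrase; preferences are over lotteries L, M, N:

```latex
% The four von Neumann--Morgenstern axioms (standard textbook paraphrase).
\begin{enumerate}
  \item Completeness: $L \succeq M$ or $M \succeq L$.
  \item Transitivity: if $L \succeq M$ and $M \succeq N$, then $L \succeq N$.
  \item Continuity: if $L \succeq M \succeq N$, there exists $p \in [0,1]$
        such that $pL + (1-p)N \sim M$.
  \item Independence: if $L \succeq M$, then
        $pL + (1-p)N \succeq pM + (1-p)N$ for all $p \in (0,1]$ and all $N$.
\end{enumerate}
```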

Comment author: Wei_Dai 20 August 2010 12:44:46PM *  3 points [-]

I think the theorem implicitly assumes logical omniscience, and using heuristics instead of doing explicit expected utility calculations should make sense in at least some types of situations for us. The question is whether it makes sense in this one.

I think this is actually an interesting question. Is there an argument showing that we can do better than prase's heuristic of rejecting all Pascal-like wagers, given human limitations?

Comment author: CarlShulman 20 August 2010 12:37:34PM 3 points [-]

Therefore, I would prefer to see the community concentrate on a broader class of topics, and to continue in the tradition of disseminating rationality started on OB.

The best way to advance this goal is probably to write an interesting top-level post.

Comment author: prase 20 August 2010 12:50:47PM 4 points [-]

I agree. However, not everybody is able to.

Comment author: Eliezer_Yudkowsky 20 August 2010 03:46:39AM 20 points [-]

Unknown reminds me that Multifoliaterose said this:

The modern world is sufficiently complicated so that no human no matter how talented can have good reason to believe himself or herself to be the most important person in human history without actually doing something which very visibly and decisively alters the fate of humanity. At present, anybody who holds such a belief is suffering from extreme delusions of grandeur.

This makes explicit something I thought I was going to have to tease out of multi, so my response would roughly go as follows:

  • If no one can occupy this epistemic state, that implies something about the state of the world - i.e., that it should not lead people into this sort of epistemic state.
  • Therefore you are deducing information about the state of the world by arguing about which sorts of thoughts remind you of your youthful delusions of messianity.
  • Reversed stupidity is not intelligence. In general, if you want to know something about how to develop Friendly AI, you have to reason about Friendly AI, rather than reasoning about something else.
  • Which is why I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me. In other words, I am reluctant to argue on this level not just for the obvious political reasons (it's a sure loss once the argument starts), but because you're trying to extract information about the real world from a class of arguments that can't possibly yield information about the real world.
  • That said, as far as I can tell, the world currently occupies a ridiculous state of practically nobody working on problems like "develop a reflective decision theory that lets you talk about self-modification". I agree that this is ridiculous, but seriously, blame the world, not me. Multi's principle would be reasonable only if the world occupied a much higher level of competence than it in fact does, a point which you can further appreciate by, e.g., reading the QM sequence, or counting cryonics signups, showing massive failure on simpler issues.
  • That reflective decision theory actually is key to Friendly AI is something I can only get information about by thinking about Friendly AI. If I try to get information about it any other way, I'm producing noise in my brain.
  • We can directly apply multi's stated principle to conclude that reflective decision theory cannot be known to be critical to Friendly AI. We were mistaken to start working on it; if no one else is working on it, it must not be knowably critical; because if it were knowably critical, we would occupy a forbidden epistemic state.
  • Therefore we have derived knowledge about which problems are critical in Friendly AI by arguing about personal psychology.
  • This constitutes a reductio of the original principle. QEA. (As was to be argued.)
Comment author: Jonathan_Graehl 20 August 2010 04:18:46AM *  4 points [-]

Upvoted for being clever.

You've (probably) refuted the original statement as an absolute.

You're deciding not to engage the issue of hubris directly.

Does the following paraphrase your position:

  1. Here's what I (and also part of SIAI) intend to work on

  2. I think it's very important (and you should think so for reasons outlined in my writings)

  3. If you agree with me, you should support us

? If so, I think it's fine for you to not say the obvious (that you're being quite ambitious, and that success is not assured). It seems like some people are really dying to hear you say the obvious.

Comment author: Eliezer_Yudkowsky 20 August 2010 05:03:09AM 9 points [-]

Success is not assured. I'm not sure what's meant by confessing to being "ambitious". Is it like being "optimistic"? I suppose there are people who can say "I'm being optimistic" without being aware that they are instantiating Moore's Paradox but I am not one of them.

I also disclaim that I do not believe myself to be the protagonist, because the world is not a story, and does not have a plot.

Comment author: Perplexed 20 August 2010 05:14:49AM 1 point [-]

I hope that the double negative in the last sentence was an error.

I introduced the term "protagonist", because at that point we were discussing a hypothetical person who was being judged regarding his belief in a set of three propositions. Everyone recognized, of course, who that hypothetical person represented, but the actual person had not yet stipulated his belief in that set of propositions.

Comment author: wedrifid 20 August 2010 10:57:37AM 2 points [-]

I hope that the double negative in the last sentence was an error.

Interesting. I don't claim great grammatical expertise, but my reading finds the sentence in question reasonable. Am I correct in inferring that you do not believe Eliezer's usage of "I also disclaim" to mean "I include the following disclaimer: " is valid?

Regarding 'protagonist' there is some context for the kind of point Eliezer likes to make about protagonist/story thinking in his Harry Potter fanfic. I don't believe he has expressed the concept coherently as a post yet. (I don't see where you introduced the 'protagonist' word so don't know whether Eliezer read you right. I'm just throwing some background in.)

Comment author: Perplexed 20 August 2010 07:01:47PM 3 points [-]

Regarding "disclaim".

I read "disclaim" as a synonym for "deny". I didn't even consider your interpretation, but upon consideration, I think I prefer it.

My mistake (again!). :(

Comment author: wedrifid 20 August 2010 10:08:09AM *  14 points [-]

Upvoted for being clever.

That's interesting. I downvoted it for being clever. It was a convoluted elaboration of a trivial technicality that only applies if you make the most convenient (for Eliezer) interpretation of multi's words. This kind of response may win someone a debating contest in high school but it certainly isn't what I would expect from someone well versed in the rationalism sequences, much less their author.

I don't pay all that much attention to what multi says (no offence intended to multi) but I pay close attention to what Eliezer does. I am overwhelmingly convinced of Eliezer's cleverness and brilliance as a rationalism theorist. Everything else, well, that's a lot more blurry.

Comment author: Furcas 20 August 2010 10:31:53AM *  2 points [-]

I don't think Eliezer was trying to be clever. He replied to the only real justification multi offered for why we should believe that Eliezer is suffering from delusions of grandeur. What else is he supposed to do?

Comment author: wedrifid 20 August 2010 12:00:48PM 5 points [-]

I got your reply and respect your position. I don't want to engage too much here since it would overlap with discussion surrounding Eliezer's initial reply and potentially be quite frustrating.

What I would like to see is multifoliaterose giving a considered response to the "If not, why not?" question in that link. That would give Eliezer the chance to respond to the meat of the topic at hand. Eliezer has been given a rare opportunity. He can always write posts about himself, giving justifications for whatever degree of personal awesomeness he claims. That's nothing new. But in this situation it wouldn't be perceived as Eliezer grabbing the megaphone for his own self-gratification. He is responding to a challenge, answering a request.

Why would you waste the chance to, say, explain the difference between "SIAI" and "Eliezer Yudkowsky"? Or at least give some treatment of p(someone other than Eliezer Yudkowsky is doing the most to save the world). Better yet, take that chance to emphasise the difference between p(FAI is the most important priority for humanity) and p(Eliezer is the most important human in the world).

Comment author: Unknowns 20 August 2010 09:27:13AM *  1 point [-]

Even if almost everything you say here is right, it wouldn't mean that there is a high probability that if you are killed in a car accident tomorrow, no one else will think about these things (reflective decision theory and so on) in the future, even people who know nothing about you personally. As Carl Shulman points out, if it is necessary to think about these things it is likely that people will, when it becomes more urgent. So it still wouldn't mean that you are the most important person in human history.

Comment author: multifoliaterose 20 August 2010 06:39:52PM 0 points [-]

I agree with khafra. Your response to my post is distortionary. The statement which you quote was a statement about the reference class of people who believe themselves to be the most important person in the world. The statement which you quote was not a statement about FAI.

Any adequate response to the statement which you quote requires that you engage with the last point that khafra made:

Whether this likelihood ratio is large enough to overcome the evidence on AI-related existential risk and the paucity of serious effort dedicated to combating it is an open question.

You have not satisfactorily addressed this matter.

Comment author: Furcas 21 August 2010 03:36:59PM *  4 points [-]

It looks to me like Eliezer gave your post the most generous interpretation possible, i.e. that it actually contained an argument attempting to show that he's deluding himself, rather than just defining a reference class and pointing out that Eliezer fits into it. Since you've now clarified that your post did nothing more than that, there's not much left to do except suggest you read all of Eliezer's posts tagged 'FAI', and this.

Comment author: cousin_it 20 August 2010 01:15:11AM *  10 points [-]

I upvoted this, but I'm torn about it.

In your recent posts you've been slowly, carefully, thoroughly deconstructing one person. Part of me wants to break into applause at the techniques used, and learn from them, because in my whole life of manipulation I've never mounted an attack of such scale. (The paragraph saying "something has gone very wrong" was absolutely epic, to the point of evoking musical cues somewhere at the edge of my hearing. Just like the "greatly misguided" bit in your previous post. Bravo!) But another part of me feels horror and disgust because after traumatic events in my own life I'd resolved to never do this thing again.

It comes down to this: I enjoy LW for now. If Eliezer insists on creating a sealed reality around himself, what's that to me? You don't have to slay every dragon you see. Saving one person from megalomania (real or imagined) is way less important than your own research. Imagine the worst possible world: Eliezer turns into a kook. What would that change, in the grand scheme of things or in your personal life? Are there not enough kooks in AI already?

And lastly, a note about saving people. I think many of us here have had the unpleasant experience (to put it mildly) of trying to save someone from suicide. Looking back at such episodes in my own life, I'm sure that everyone involved would've been better off if I'd just hit "ignore" at the first sign of trouble. Cut and run: in serious cases it always comes to that, no exceptions. People are very stubborn, both consciously and subconsciously - they stay on their track. They will waste their life (or spend it wisely, it's a matter of perspective), but if you join the tug-of-war, you'll waste a big chunk of yours as well.

How's that for other-optimizing?

Comment author: katydee 20 August 2010 10:05:22AM *  16 points [-]

I saved someone from suicide once. While the experience was certainly quite unpleasant at the time, if I had hit "ignore," as you suggest, she would have died. I don't think that I would be better off today if I had let her die, to say nothing of her. The fact that saving people is hard doesn't mean that you shouldn't do it!

Comment author: wedrifid 20 August 2010 01:45:48AM *  12 points [-]

It comes down to this: I enjoy LW for now. If Eliezer insists on creating a sealed reality around himself, what's that to me? You don't have to slay every dragon you see. Saving one person from megalomania (real or imagined) is way less important than your own research. Imagine the worst possible world: Eliezer turns into a kook. What would that change, in the grand scheme of things or in your personal life?

The very fate of the universe, potentially. Purely hypothetically and for the sake of the discussion:

  • If Eliezer did have the potential to provide a strong positive influence on grand-scale future outcomes but was crippled by the still-hypothetical lack of self-doubt, then that is a loss of real value.
  • A bad 'Frodo' can be worse than no Frodo at all. If we were to give the ring to a Frodo who thought he could take on Nazgul in hand-to-hand combat, then we would lose the ring and so lose the chance to give said ring to someone who could pull it off. Multi (and those for whom he asks such questions) have limited resources (and attention), so it may be worth deliberate investigation of potential recipients of trust.
  • Worse yet than a counterproductive Frodo would be a Frodo whose arrogance pisses off Aragorn, Gandalf, Legolas, Gimli, Merry, Pippin and even Sam so much that they get disgusted with the whole 'save the world' thing and go hang out in the forest flirting with Elven maidens. Further cause to investigate just whose bid for notoriety and influence you wish to support.

I cannot emphasise enough that this is only a reply to the literal question cousin_it asked and no endorsement or denial of any of the above claims as they relate to persons real or imagined. For example, it may have been good if Frodo was arrogant enough to piss off Aragorn. He may have cracked it, taken the ring from Frodo and given it to Arwen. Arwen was crazy enough to give up the immortality she already had and so would be as good a candidate as any for being able to ditch a ring, without being completely useless for basically all purposes.

Comment author: Eliezer_Yudkowsky 20 August 2010 03:24:38AM 26 points [-]

Er... I can't help but notice a certain humor in the idea that it's terrible if I'm self-deluded about my own importance because that means I might destroy the world.

Comment author: wedrifid 20 August 2010 09:35:14AM 5 points [-]

Yes, there is a certain humor. But I hope you did read the dot points and followed the reasoning. Among other things, it suggests a potential benefit of criticism such as multi's, aside from the hypothetical benefit of discrediting you had it been the case that you were not, in fact, competent.

Comment author: John_Baez 20 August 2010 11:02:50AM 5 points [-]

It's some sort of mutant version of "just because you're paranoid doesn't mean they're not out to get you".

Comment author: Perplexed 20 August 2010 03:14:41AM *  8 points [-]

What would that change, in the grand scheme of things or in your personal life?

The very fate of the universe, potentially.

I suppose I could draw from that the inference that you have a rather inflated notion of the importance of what multi is doing here, ... but, in the immortal words of Richard Milhous Nixon, "That would be wrong."

More seriously, I think everyone here realizes that EY has some rough edges, as well as some intellectual strengths. For his own self-improvement, he ought to be working on those rough edges. I suspect he is. However, in the meantime, it would be best if his responsibilities were in areas where his strengths are exploited and his rough edges don't really matter. So, just what are his current responsibilities?

  1. Convincing people that UFAI constitutes a serious existential risk while not giving the whole field of futurism and existential risk reduction a bad rep.

  2. Setting direction for and managing FAI and UFAI-avoidance research at SIAI.

  3. Conducting FAI and UFAI-avoidance research.

  4. Reviewing and doing conceptual QC on the research work product.

To be honest, I don't see EY's "rough edges" as producing any problems at all with his performance on tasks #3 and #4. Only SIAI insiders know whether there has been a problem on task #2. Based on multi's arguments, I suspect he may not be doing so well on #1. So, to me, the indicated response ought to be one of the following:

A. Hire someone articulate (and if possible, even charismatic) to take over task #1 and make whatever minor adjustments are needed regarding task #2.

B. Do nothing. There is no problem!

C. Get some academic papers published so that FAI/anti-UFAI research becomes interesting to the same funding sources that currently support CS, AI, and decision theory research. Then reconstitute SIAI as just one additional research institution which is fighting for that research funding.

I would be interested in what EY thinks of these three possibilities. Perhaps for different reasons, I suspect, so would multi.

[Edited to correct my hallucination of confusing multifoliaterose with wedrifid. As a result of this edit, various comments below may seem confused. Sorry about that, but I judge that making this comment clear is the higher priority.]

Comment author: dclayh 20 August 2010 03:19:19AM 3 points [-]

Veering wildly off-topic:

Arwen was crazy enough to give up the immortality she already had

Come on now. Humans are immortal in Tolkien, they just sit in a different waiting room. (And technically can't come back until the End of Days™, but who cares about that.)

Comment author: cousin_it 20 August 2010 07:45:23AM *  1 point [-]

What Eliezer said. I was arguing from the assumption that he is wrong about FAI and stuff. If he's right about the object level, then he's not deluded in considering himself important.

Comment author: Vladimir_Nesov 20 August 2010 02:03:39PM *  3 points [-]

I was arguing from the assumption that he is wrong about FAI and stuff. If he's right about the object level, then he's not deluded in considering himself important.

But if he is wrong about FAI and stuff, then he is still deluded; not specifically about considering himself important (that implication is correct), but about FAI and stuff.

Comment author: wedrifid 20 August 2010 09:32:22AM *  2 points [-]

If he's right about the object level, then he's not deluded in considering himself important.

Which, of course, would still leave the second two dot points as answers to your question.

Comment author: Vladimir_Nesov 20 August 2010 01:42:06PM 3 points [-]

The previous post was fine, but this one is sloppy, and I don't think it's some kind of Machiavellian plot.

Comment author: xamdam 20 August 2010 01:44:32AM 2 points [-]

But another part of me feels horror and disgust because after traumatic events in my own life I'd resolved to never do this thing again.

Because you were on the giving or on the receiving end of it?

What would that change, in the grand scheme of things or in your personal life? Are there not enough kooks in AI already?

Agreed; personally I de-converted myself from Orthodox Judaism, but I still find it crazy when people write big scholarly books debunking the Bible; it's just a useless waste of energy (part of it is academic incentives).

They will waste their life (or spend it wisely, it's a matter of perspective), but if you join the tug-of-war, you'll waste a big chunk of yours as well.

I haven't been involved in these situations, but taking a cue from drug addicts (who incidentally have a high suicide rate), most of them do not recover, though maybe 10% do. So most of the time you'll find frustration, but one time in ten you'd save a life; I am not sure that's worthless.

Comment author: Wei_Dai 20 August 2010 07:52:23AM 6 points [-]

I find it ironic that multifoliaterose said

I personally think that the best way to face the present situation is to gather more information about all existential risks rather than focusing on one particular existential risk

and then the next post, instead of delineating what he found out about other existential risks (or perhaps how we should go about doing that), is about how to save Eliezer.

Comment author: Morendil 20 August 2010 09:11:16AM 2 points [-]

The mechanism that determines human action is that we do what makes us feel good (at the margin) and refrain from doing what makes us feel bad (at the margin).

"The" mechanism? Citation needed.

a fundamental mechanism of the human brain which was historically correlated with gaining high status is to make us feel good when we have high self-image and feel bad when we have low self-image.

Better, but still unsupported and unclear. What was correlated with what?

Comment author: Eliezer_Yudkowsky 20 August 2010 12:46:34AM 9 points [-]

It seems like an implication of your post that no one is ever allowed to believe they're saving the world. Do you agree that this is an implication? If not, why not?

Comment author: JRMayne 20 August 2010 03:12:00AM 10 points [-]

Not speaking for multi, but, in any x-risk item (blowing up asteroids, stabilizing nuclear powers, global warming, catastrophic viral outbreak, climate change of whatever sort, FAI, whatever) for those working on the problem, there are degrees of realism:

"I am working on a project that may have massive effect on future society. While the chance that I specifically am a key person on the project are remote, given the fine minds at (Google/CDC/CIA/whatever), I still might be, and that's worth doing." - Probably sane, even if misguided.

"I am working on a project that may have massive effect on future society. I am the greatest mind in the field. Still, many other smart people are involved. The specific risk I am worried about may or not occur, but efforts to prevent its occurrence are valuable. There is some real possibility that I will the critical person on the project." - Possibly sane, even if misguided.

"I am working on a project that will save a near-infinite number of universes. In all likelihood, only I can achieve it. All of the people - even people perceived as having better credentials, intelligence, and ability - cannot do what I am doing. All critics of me are either ignorant, stupid, or irrational. If I die, the chance of multiverse collapse is radically increased; no one can do what I do. I don't care if other people view this as crazy, because they're crazy if they don't believe me." - Clinical diagnosis.

You're doing direct, substantial harm to your cause, because you and your views appear irrational. Those who hear about SIAI as the lead dog in this effort who are smart, have money, and are connected, will mostly conclude that this effort must not be worth anything.

I believe you had some language for Roko on the wisdom of damaging the cause in order to show off how smart you are.

I'm a little uncomfortable with the heat of my comment here, but you have not read my other efforts the way I intended them (others appeared to understand). I am hopeful this is clear - and let me once again clarify that I had these views before multi's post. Before. Don't blame him again; blame me.

I'd like existential risk generally to be better received. In my opinion - and I may be wrong - you're actively hurting the cause.

--JRM

Comment author: [deleted] 20 August 2010 04:33:53PM 10 points [-]

I don't think Eliezer believes he's irreplaceable, exactly. He thinks, or I think he thinks, that any sufficiently intelligent AI which has not been built to the standard of Friendliness (as he defines it) is an existential risk. And the only practical means for preventing the development of UnFriendly AI is to develop superintelligent FAI first. The team to develop FAI needn't be SIAI, and Eliezer wouldn't necessarily be the most important contributor to the project, and SIAI might not ultimately be equal to the task. But if he's right about the risk and the solution, and his untimely demise were to doom the world, it would be because no-one else tried to do this, not because he was the only one who could.

Not that this rules out your interpretation. I'm sure he has a high opinion of his abilities as well. Any accusation of hubris should probably mention that he once told Aubrey de Grey "I bet I can solve ALL of Earth's emergency problems before you cure aging."

Comment author: JamesAndrix 21 August 2010 04:57:00AM *  2 points [-]

There may be multiple different projects, each necessary to save the world, and each having a key person who knows more about the project, and/or is more driven and/or is more capable than anyone else. Each such person has weirdly high expected utility, and could accurately make a statement like EY's and still not be the person with the highest expected utility. Their actual expected utility would depend on the complexity of the project and the surrounding community, and how much the success of the project alters the value of human survival.

This is similar to the idea that responsibility is not a division of 100%.

http://www.ranprieur.com/essays/mathres.html
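
A quick toy calculation, with made-up numbers, shows how several people can each carry a very large counterfactual expected utility at the same time; the figures below (success probabilities of 0.9 with a key person and 0.1 without) are purely illustrative assumptions, not anything claimed in the thread:

    # Three projects, each with one key person; the good outcome (normalized to
    # a value of 1.0) requires every project to succeed. All numbers here are
    # hypothetical, chosen only to illustrate the point.
    P_WITH_KEY = 0.9      # assumed success probability of a project with its key person
    P_WITHOUT_KEY = 0.1   # assumed success probability without them

    def counterfactual_value(n_projects: int) -> float:
        """One key person's counterfactual contribution: P(good outcome with them)
        minus P(good outcome without them), holding the other projects fixed."""
        p_with = P_WITH_KEY ** n_projects
        p_without = P_WITHOUT_KEY * P_WITH_KEY ** (n_projects - 1)
        return p_with - p_without

    n = 3
    each = counterfactual_value(n)
    print(each)        # ~0.65 per key person
    print(n * each)    # ~1.94: the individual "responsibilities" sum to well over 100%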

Comment author: Jonathan_Graehl 20 August 2010 04:27:31AM 2 points [-]

What you say sounds reasonable, but I feel it's unwise for me to worry about such things. If I were to sound such a vague alarm, I wouldn't expect anyone to listen to me unless I'd made significant contributions in the field myself (I haven't).

Comment author: Unknowns 20 August 2010 03:18:34AM 4 points [-]

Multifoliaterose said this:

The modern world is sufficiently complicated so that no human no matter how talented can have good reason to believe himself or herself to be the most important person in human history without actually doing something which very visibly and decisively alters the fate of humanity. At present, anybody who holds such a belief is suffering from extreme delusions of grandeur.

Note that there are qualifications on this. If you're standing by the button that ends the world, and refuse to press it when urged, or you prevent others from pressing it (e.g. Stanislav Petrov), then you may reasonably believe that you're saving the world. But no, you may not reasonably believe that you are saving the world based on long chains of reasoning grounded in your intuition rather than in anything as certain as mathematics and logic, especially decades in advance of anything happening.

Comment author: Eliezer_Yudkowsky 20 August 2010 03:20:55AM 2 points [-]

It seems like an implication of this and other assumptions made by multi, and apparently shared by you, is that no one can believe themselves to be critical to a Friendly AI project that has a significant chance of success. Do you agree that this is an implication? If not, why not?

Comment author: Unknowns 20 August 2010 03:41:39AM 5 points [-]

No, I don't agree this is an implication. I would say that no one can reasonably believe all of the following at the same time with a high degree of confidence:

1) I am critical to this Friendly AI project that has a significant chance of success.
2) There is no significant chance of Friendly AI without this project.
3) Without Friendly AI, the world is doomed.

But then, as you know, I don't consider it reasonable to put a high degree of confidence in number 3. Nor do many other intelligent people (such as Robin Hanson). So it isn't surprising that I would consider it unreasonable to be sure of all three of them.

I also agree with Tetronian's points.

Comment author: Eliezer_Yudkowsky 20 August 2010 03:57:56AM 4 points [-]

I would say that no one can reasonably believe all of the following at the same time with a high degree of confidence: 1) I am critical to this Friendly AI project that has a significant chance of success. 2) There is no significant chance of Friendly AI without this project. 3) Without Friendly AI, the world is doomed.

I see. So it's not that any one of these statements is a forbidden premise, but that their combination leads to a forbidden conclusion. Would you agree with the previous sentence?

BTW, nobody please vote down the parent below -2, that will make it invisible. Also it doesn't particularly deserve downvoting IMO.

Comment author: Perplexed 20 August 2010 04:16:27AM 5 points [-]

I would suggest that, in order for this set of beliefs to become (psychiatrically?) forbidden, we need to add a fourth item. 4) Dozens of other smart people agree with me on #3.

If someone believes that very, very few people yet recognize the importance of FAI, then the conjunction of beliefs #1 thru #3 might be reasonable. But after #4 becomes true (and known to our protagonist), then continuing to hold #1 and #2 may be indicative of a problem.

Comment author: Perplexed 20 August 2010 04:29:21AM 2 points [-]

With the hint from EY on another branch, I see a problem in my argument. Our protagonist might circumvent my straitjacket by also believing 5) The key to FAI is TDT, but I have been so far unsuccessful in getting many of those dozens of smart people to listen to me on that subject.

I now withdraw from this conversation with my tail between my legs.

Comment author: katydee 20 August 2010 04:32:30AM *  1 point [-]

All this talk of "our protagonist," as well as the weird references to SquareSoft games, is very off-putting for me.

Comment author: Eliezer_Yudkowsky 20 August 2010 05:01:07AM 5 points [-]

Dozens isn't sufficient. I asked Marcello if he'd run into anyone who seemed to have more raw intellectual horsepower than me, and he said that John Conway gave him that impression. So there are smarter people than me upon the Earth, which doesn't surprise me at all, but it might take a wider net than "dozens of other smart people" before someone comes in with more brilliance and a better starting math education and renders me obsolete.

Comment author: [deleted] 20 August 2010 05:27:01AM 9 points [-]

Comment author: Spurlock 20 August 2010 05:26:47PM 8 points [-]

Simply out of curiosity:

Plenty of criticism (some of it reasonable) has been lobbed at IQ tests and at things like the SAT. Is there a method known to you (or anyone reading) that actually measures "raw intellectual horsepower" in a reliable and accurate way? Aside from asking Marcello.

Comment author: thomblake 20 August 2010 06:44:08PM 10 points [-]

Aside from asking Marcello.

I was beginning to wonder if he's available for consultation.

Comment author: rabidchicken 21 August 2010 05:02:22PM *  6 points [-]

Read the source code, and then visualize a few levels from Crysis or Metro 2033 in your head. While you render it, count the average frames per second. Alternatively, see how quickly you can find the prime factors of every integer from 1 to 1000.

Which is to say... Humans in general have extremely limited intellectual power. Instead of calculating things efficiently, we work by using various tricks with caches and memory to find answers. Therefore, almost all tasks are more dependent on practice and interest than they are on intelligence. So, rather than testing the statement "Eliezer is smart", it has more bearing on this debate to confirm "Eliezer has spent a large amount of time optimizing his cache for tasks relating to rationality, evolution, and artificial intelligence". Intelligence is overrated.

Comment author: XiXiDu 20 August 2010 10:29:58AM *  3 points [-]

Sheer curiosity, but have you or anyone ever contacted John Conway about the topic of u/FAI and asked him what he thinks about the topic, the risks associated with it, and maybe the SIAI itself?

Comment author: Unknowns 20 August 2010 09:13:42AM 2 points [-]

I wouldn't put it in terms of forbidden premises or forbidden conclusions.

But if each of these statements has a 90% chance of being true, and if they are assumed to be independent (which admittedly won't be exactly true), then the probability that all three are true would be only about 70%, which is not an extremely high degree of confidence; more like saying, "This is my opinion but I could easily be wrong."

Personally I don't think 1) or 3), taken in a strict way, could reasonably be said to have more than a 20% chance of being true. I do think a probability of 90% is a fairly reasonable assignment for 2), because most people are not going to bother about Friendliness. Accounting for the fact that these are not totally independent, I don't consider a probability assignment of more than 5% for the conjunction to be reasonable. However, since there are other points of view, I could accept that someone might assign the conjunction a 70% chance in accordance with the previous paragraph, without being crazy. But if you assign a probability much more than that I would have to withdraw this.
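
The arithmetic behind these figures, treating the three statements as independent (which, as noted, is only an approximation):

    # 90% each: the conjunction comes to roughly 70%.
    print(0.9 * 0.9 * 0.9)        # 0.729

    # Unknowns' own assignments: at most ~20% each for 1) and 3), ~90% for 2).
    print(0.2 * 0.9 * 0.2)        # 0.036; positive dependence between the statements
                                  # would push this up somewhat, hence the stated
                                  # ceiling of roughly 5% for the conjunction.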

If the statements are weakened as Carl Shulman suggests, then even the conjunction could reasonably be given a much higher probability.

Also, as long as it is admitted that the probability is not high, you could still say that the possibility needs to be taken seriously because you are talking about the possible (if yet improbable) destruction of the world.

Comment author: Eliezer_Yudkowsky 20 August 2010 06:21:21PM 17 points [-]

I certainly do not assign a probability as high as 70% to the conjunction of all three of those statements.

And in case it wasn't clear, the problem I was trying to point out was simply with having forbidden conclusions - not forbidden by observation per se, but forbidden by forbidden psychology - and using that to make deductions about empirical premises that ought simply to be evaluated by themselves.

I s'pose I might be crazy, but you all are putting your craziness right up front. You can't extract milk from a stone!

Comment author: Unknowns 20 August 2010 06:29:01PM 2 points [-]

That's good to know. I hope multifoliaterose reads this comment, as he seemed to think that you would assign a very high probability to the conjunction (and it's true that you've sometimes given that impression by your way of talking).

Also, I didn't think he was necessarily setting up forbidden conclusions, since he did add some qualifications allowing that in some circumstances it could be justified to hold such opinions.

Comment author: PaulAlmond 28 August 2010 09:55:00PM *  3 points [-]

Just curious (and not being 100% serious here): Would you have any concerns about the following argument (and I am not saying I accept it)?

  1. Assume that famous people will get recreated as AIs in simulations a lot in the future. School projects, entertainment, historical research, interactive museum exhibits, idols to be worshipped by cults built up around them, etc.
  2. If you save the world, you will be about the most famous person ever in the future.
  3. Therefore there will be a lot of Eliezer Yudkowsky AIs created in the future.
  4. Therefore the chances of anyone who thinks he is Eliezer Yudkowsky actually being the original, 21st-century one are very small.
  5. Therefore you are almost certainly an AI, and none of the rest of us are here - except maybe as stage props with varying degrees of cognition (and you probably never even heard of me before, so someone like me would probably not get represented in any detail in an Eliezer Yudkowsky simulation). That would mean that I am not even conscious and am just some simple subroutine. Actually, now I have raised the issue to be scary, it looks a lot more alarming for me than it does for you as I may have just argued myself out of existence...

Comment author: wedrifid 29 August 2010 02:45:07AM 2 points [-]

Actually, now I have raised the issue to be scary, it looks a lot more alarming for me than it does for you as I may have just argued myself out of existence...

That doesn't seem scary to me at all. I still know that there is at least one of me that I can consider 'real'. I will continue to act as if I am one of the instances that I consider me/important. I've lost no existence whatsoever.

Comment author: multifoliaterose 20 August 2010 06:53:48PM *  -2 points [-]

To be quite clear about which of Unknowns' points I object to, my main objection is to the point:

I am critical to this Friendly AI project that has a significant chance of success

where 'I' is replaced by "Eliezer." I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you're working on. (Maybe even much less than that - I would have to spend some time calibrating my estimate to make a judgment on precisely how low a probability I assign to the proposition.)

My impression is that you've greatly underestimated the difficulty of building a Friendly AI.

Comment author: Eliezer_Yudkowsky 20 August 2010 07:00:52PM 15 points [-]

I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you're working on.

I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.

My impression is that you've greatly underestimated the difficulty of building a Friendly AI.

Out of weary curiosity, what is it that you think you know about Friendly AI that I don't?

And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?

Comment author: ata 20 August 2010 07:11:45PM 13 points [-]

I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.

On the other hand, assuming he knows what it means to assign something a 10^-9 probability, it sounds like he's offering you a bet at 1000000000:1 odds in your favour. It's a good deal, you should take it.

Comment author: Unknowns 20 August 2010 07:08:40PM 13 points [-]

I agree it's kind of ironic that multi has such an overconfident probability assignment right after criticizing you for being overconfident. I was quite disappointed with his response here.

Comment author: multifoliaterose 20 August 2010 07:09:42PM 0 points [-]

I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.

I don't understand this remark.

What probability do you assign to your succeeding in playing a critical role on the Friendly AI project that you're working on? I can engage with a specific number. I don't know if your objection is that my estimate is off by a single order of magnitude or by many orders of magnitude.

Out of weary curiosity, what is it that you think you know about Friendly AI that I don't?

I should clarify that my comment applies equally to AGI.

I think that I know the scientific community better than you, and have confidence that if creating an AGI was as easy as you seem to think it is (how easy I don't know because you didn't give a number) then there would be people in the scientific community who would be working on AGI.

And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?

Yes, this possibility has certainly occurred to me. I just don't know what your different non-crazy beliefs might be.

Why do you think that AGI research is so uncommon within academia if it's so easy to create an AGI?

Comment author: [deleted] 20 August 2010 10:42:06PM *  2 points [-]

Why are people boggling at the 1-in-a-billion figure? You think it's not plausible that there are three independent 1-in-a-thousand events that would have to go right for EY to "play a critical role in Friendly AI success"? Not plausible that there are 9 1-in-10 events that would have to go right? Don't I keep hearing "shut up and multiply" around here?

Edit: Explain to me what's going on. I say that it seems to me that events A, B are likely to occur with probability P(A), P(B). You are allowed to object that I must have made a mistake, because P(A) times P(B) seems too small to you? (That is leaving aside the idea that 10-to-the-minus-nine counts as one of these too-small-to-be-believed numbers, which is seriously making me physiologically angry, ha-ha.)
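
The multiplication being appealed to here is straightforward (the particular decompositions into three or nine events are the commenter's hypotheticals, not established facts):

    print((1e-3) ** 3)   # three independent 1-in-a-thousand events -> 1e-09
    print((0.1) ** 9)    # nine independent 1-in-10 events          -> 1e-09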

Comment author: steven0461 20 August 2010 10:51:06PM *  10 points [-]

The 1-in-a-billion follows not from it being plausible that there are three such events, but from it being virtually certain. Models without such events will end up dominating the final probability. I can easily imagine that if I magically happened upon a very reliable understanding of some factors relevant to future FAI development, the 1 in a billion figure would be the right thing to believe. But I can easily imagine it going the other way, and absent such understanding, I have to use estimates much less extreme than that.
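
A sketch of steven0461's point, with invented weights: the overall estimate is a mixture over models, and any non-trivial weight on a model without the extreme multiplicative structure keeps the result far above 1 in a billion:

    # Hypothetical model weights and within-model probabilities, for illustration only.
    models = [
        (0.10, 1e-9),   # 10% credence in a model where the 1e-9 decomposition holds
        (0.90, 1e-3),   # 90% credence in a model with a far less extreme estimate
    ]
    mixture = sum(weight * prob for weight, prob in models)
    print(mixture)       # ~9e-4: nowhere near 1e-9

    # Even 99% credence in the extreme model leaves the mixture around 1e-5.
    print(0.99 * 1e-9 + 0.01 * 1e-3)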

Comment author: CarlShulman 20 August 2010 04:21:41AM *  2 points [-]

1) can be finessed easily on its own with the idea that since we're talking about existential risk even quite small probabilities are significant.

3) could be finessed by using a very broad definition of "Friendly AI" that amounted to "taking some safety measures in AI development and deployment."

But if one uses the same senses in 2), then one gets the claim that most of the probability of non-disastrous AI development is concentrated in one's specific project, which is a different claim than "project X has a better expected value, given what I know now about capacities and motivations, than any of the alternatives (including future ones which will likely become more common as a result of AI advance and meme-spreading independent of me) individually, but less than all of them collectively."

Comment author: WrongBot 20 August 2010 04:29:45AM 5 points [-]

Who else is seriously working on FAI right now? If other FAI projects begin, then obviously updating will be called for. But until such time, the claim that "there is no significant chance of Friendly AI without this project" is quite reasonable, especially if one considers the development of uFAI to be a potential time limit.

Comment author: CarlShulman 20 August 2010 04:45:23AM *  5 points [-]

"there is no significant chance of Friendly AI without this project" Has to mean over time to make sense.

People who will be running DARPA, or Google Research, or some hedge fund's AI research group in the future (and who will know about the potential risks or be able to easily learn if they find themselves making big progress) will get the chance to take safety measures. We have substantial uncertainty about how extensive those safety measures would need to be to work, how difficult they would be to create, and the relevant timelines.

Think about resource depletion or climate change: even if the issues are neglected today relative to an ideal level, as a problem becomes more imminent, with more powerful tools and information to deal with it, you can expect to see new mitigation efforts spring up (including efforts by existing organizations such as governments and corporations).

However, acting early can sometimes have benefits that outweigh the lack of info and resources available further in the future. For example, geoengineering technology can provide insurance against very surprisingly rapid global warming, and cheap plans that pay off big in the event of surprisingly easy AI design may likewise have high expected value. Or, if AI timescales are long, there may be slowly compounding investments, like lines of research or building background knowledge in elites, which benefit from time to grow. And to the extent these things are at least somewhat promising, there is substantial value of information to be had by investigating now (similar to increasing study of the climate to avoid nasty surprises).

Comment author: DanielVarga 20 August 2010 05:38:30AM 2 points [-]

Everyone is allowed to believe they're saving the world. The issue is two other things, both quite obvious. First, we do not say it out loud if we don't want to appear kooky. Second, if someone really believes that he is literally saving the world, then he can be sure that he has a minor personality disorder [1], regardless of whether he will eventually save the world or not. Most great scientists are eccentric, so this is not a big deal, if you manage to incorporate it into your probability estimates while doing your job. I mean, this bias obviously affects your validity estimate for each and every argument you hear against hard AI takeoff. (I don't think your debaters so far did a good job bringing up such counterarguments, but that's beside the point.)

[1] by the way, in this case (in your case) grandiosity is the correct term, not delusions of grandeur.

Comment author: ciphergoth 20 August 2010 05:58:03AM 6 points [-]

if someone really believes that he is literally saving the world, then he can be sure that he has a minor personality disorder, regardless of whether he will eventually save the world or not.

Stanislav Petrov had this disorder? In thinking he was making the world a safer place, Gorbachev had this disorder? It seems a stretch to me to diagnose a personality disorder based on an accurate view of the world.

Comment author: DanielVarga 20 August 2010 06:55:02AM *  3 points [-]

Gorbachev was leading an actual superpower, so his case is not very relevant in a psychological analysis of grandiosity. At the time of the famous incident, Petrov was too busy to think about his status as a world-savior. And it is not very relevant here what he believed after saving the world.

It seems a stretch to me to diagnose a personality disorder based on an accurate view of the world.

I didn't mean to talk about an accurate view of the world. I meant to talk about a disputed belief about a future outcome. I am not interested in the few minutes during which Petrov may have had the accurate view that he was currently saving the world.

Comment author: Eliezer_Yudkowsky 20 August 2010 06:58:42AM 8 points [-]

Second, if someone really believes that he is literally saving the world, then he can be sure that he has a minor personality disorder [1], regardless of whether he will eventually save the world or not.

So you'd prohibit someone of accurate belief? I generally regard that as a reductio.

Comment author: Tyrrell_McAllister 20 August 2010 07:32:31PM *  3 points [-]

So you'd prohibit someone of accurate belief? I generally regard that as a reductio.

If a billion people buy into a 1-in-a-billion raffle, each believing that he or she will win, then every one of them has a "prohibited" belief, even though that belief is accurate in one case.
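
Put in numbers (using the stated one-billion figure): the belief rule "I will win" is vindicated for exactly one holder, so its accuracy across everyone who follows it is still one in a billion:

    n_holders = 10 ** 9
    n_correct = 1                    # exactly one winner
    print(n_correct / n_holders)     # 1e-09: the rule is "prohibited" on average,
                                     # even though one instance of it turns out true.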

Comment author: simplicio 20 August 2010 01:18:53AM 3 points [-]

In high school I went through a period when I believed that I was a messianic figure whose existence had been preordained by a watchmaker God who planned for me to save the human race. It's appropriate to say that during this period of time I suffered from extreme delusions of grandeur. I viscerally understand how it's possible to fall into an affective death spiral.

Not that the two are exclusive, but this sounds an awful lot like a manic episode. I assume you gave that due consideration?

Comment author: Eneasz 25 August 2010 06:06:10PM 2 points [-]

As far as I can tell, Eliezer does have confidence in the idea that he is (at least nearly) the most important person in human history. Eliezer's silence only serves to further confirm my earlier impressions

I suppose you also believe that Obama must prove he's not a muslim? And must do so again every time someone asserts that he is?

Let me say that Eliezer may have already done more to save the world than most people in history. This is going on the assumption that FAI is a serious existential risk. Even if he is doing it wrong and his work will never directly contribute to FAI in any way, his efforts at popularizing the existence of this threat have vastly increased the pool of people who know of it and want to help in some way.

His skill at explanation and inspiration have brought more attention to this issue than any other single person I know of. The fact that he also has the intellect to work directly on the problem is simply an added bonus. And I strongly doubt that it's driven away anyone who would have otherwise helped.

You said you had delusions of messianic grandeur in high school, but you're better now. But then you post an exceptionally well done personal take-down of someone who YOU believe is too self-confident and who (more importantly) has convinced others that his confidence is justified. I think your delusions of messiah-hood are still present, perhaps unacknowledged, and you are suffering from envy of someone you view as "a more successful messiah".

Comment author: multifoliaterose 25 August 2010 09:37:16PM *  3 points [-]

I suppose you also believe that Obama must prove he's not a muslim? And must do so again every time someone asserts that he is?

I don't see the situation that you cite as comparable. Obama has stated that he's a Christian, and this seriously calls into question the idea that he's a Muslim.

Has Eliezer ever said something which calls my interpretation of the situation into question? If so I'll gladly link a reference to it in my top level post.

(As an aside, I agree with Colin Powell that whether or not Obama is a Muslim has no bearing on whether he's fit to be president.)

Let me say that Eliezer may have already done more to save the world than most people in history. This is going on the assumption that FAI is a serious existential risk. Even if he is doing it wrong and his work will never directly contribute to FAI in any way, his efforts at popularizing the existence of this threat have vastly increased the pool of people who know of it and want to help in some way.

His skill at explanation and inspiration have brought more attention to this issue than any other single person I know of. The fact that he also has the intellect to work directly on the problem is simply an added bonus. And I strongly doubt that it's driven away anyone who would have otherwise helped.

I definitely agree that some of what Eliezer has done has reduced existential risk. As I've said elsewhere, I'm grateful to Eliezer for inspiring me personally to think more about existential risk.

However, as I've said, in my present epistemological state I believe that he's also had (needless) negative effects on existential risk on account of making strong claims with insufficient evidence. See especially my responses to komponisto's comment. I may be wrong about this.

In any case, I would again emphasize that my most recent posts should not be interpreted as personal attacks on Eliezer. I'm happy to support Eliezer to the extent that he does things that I understand to lower existential risk.

You said you had delusions of messianic grandeur in high school, but you're better now. But then you post an exceptionally well done personal take-down of someone who YOU believe is too self-confident and who (more importantly) has convinced others that his confidence is justified. I think your delusions of messiah-hood are still present, perhaps unacknowledged, and you are suffering from envy of someone you view as "a more successful messiah".

My conscious motivation in making my most recent string of posts is given in my Transparency and Accountability posting. I have no conscious awareness of having a motivation of the type that you describe.

Of course, I may be deluded about this (just as all humans may be deluded about possessing any given belief). In line with my top level posting, I'm interested in seriously considering the possibility that my unconscious motivations are working against my conscious goals.

However, I see your own impression as very poor evidence that I may be deluded on this particular point in light of your expressed preference for donating to Eliezer and SIAI even if doing so is not socially optimal:

And my priests are Eliezer Yudkowsky and the SIAI fellows. I don't believe they leech off of me; I feel they earn every bit of respect and funding they get. But that's beside the point. The point is that even if the funds I gave were spent sub-optimally, I would STILL give them this money, simply because I want other people to see that MY priests are better taken care of than THEIR priests.

I don't judge you for having this motivation (we're all only human). But the fact that you seem interested in promoting Eliezer and SIAI independently of whether doing so benefits broader society has led me to greatly discount your claims and suggestions which relate to Eliezer and SIAI.

Comment author: Eneasz 26 August 2010 12:08:58AM *  2 points [-]

(As an aside, I agree with Colin Powell that whether or not Obama is a Muslim has no bearing on whether he's fit to be president.)

Does whether Eliezer is over-confident or not have any bearing on whether he's fit to work on FAI?

I believe that he's also had (needless) negative effects on existential risk on account of making strong claims with insufficient evidence. See especially my responses to komponisto's comment. I may be wrong about this.

From the comment:

My claim is that on average Eliezer's outlandish claims repel people from thinking about existential risk.

The claim is not credible. I've seen a few examples given, but with no way to determine if the people "repelled" would have ever been open to mitigating existential risk in the first place. I suspect anyone who actually cares about existential risk wouldn't dismiss an idea out of hand because a well-known person working to reduce risk thinks his work is very valuable. It is unlikely to be their true rejection.

In any case, I would again emphasize that my most recent posts should not be interpreted as personal attacks on Eliezer.

The latest post made this clear, and cheers for that. But the previous ones are written as attacks on Eliezer. It's hard to see a diatribe against someone, describing them as a cult leader who's increasing existential risk and would do best to shut up, and not interpret it as a personal attack.

But the fact that you seem interested in promoting Eliezer and SIAI independently of whether doing so benefits broader society has led me to greatly discount your claims and suggestions which relate to Eliezer and SIAI.

Fair enough, can't blame you for that. I'm happy with my enthusiasm.

Comment author: multifoliaterose 26 August 2010 12:42:02AM 2 points [-]

Does whether Eliezer is over-confident or not have any bearing on whether he's fit to work on FAI?

Oh, I don't think so, see my response to Eliezer here.

The claim is not credible. I've seen a few examples given, but with no way to determine if the people "repelled" would have ever been open to mitigating existential risk in the first place. I suspect anyone who actually cares about existential risk wouldn't dismiss an idea out of hand because a well-known person working to reduce risk thinks his work is very valuable. It is unlikely to be their true rejection.

Yes, so here it seems like there's enough ambiguity as to how the publicly available data is properly interpreted so that we may have a legitimate difference of opinion on account of having had different experiences. As Scott Aaronson mentioned in the blogging heads conversation, humans have their information stored in a form (largely subconscious) such that it's not readily exchanged.

All I would add to what I've said is that if you haven't already done so, see the responses to michaelkeenan's comment here (in particular those by myself, bentarm and wedrifid).

If you remain unconvinced, we can agree to disagree without hard feelings :-)