Perplexed comments on The Importance of Self-Doubt - Less Wrong

23 Post author: multifoliaterose 19 August 2010 10:47PM




Comment author: wedrifid 20 August 2010 01:45:48AM *  12 points [-]

It comes down to this: I enjoy LW for now. If Eliezer insists on creating a sealed reality around himself, what's that to me? You don't have to slay every dragon you see. Saving one person from megalomania (real or imagined) is way less important than your own research. Imagine the worst possible world: Eliezer turns into a kook. What would that change, in the grand scheme of things or in your personal life?

The very fate of the universe, potentially. Purely hypothetically and for the sake of the discussion:

  • If Eliezer did have the potential to provide a strong positive influence on grand-scale future outcomes but was crippled by the (still hypothetical) lack of self-doubt, then that is a loss of real value.
  • A bad 'Frodo' can be worse than no Frodo at all. If we were to give the ring to a Frodo who thought he could take on Nazgul in hand-to-hand combat, then we would lose the ring and so lose the chance to give said ring to someone who could pull it off. Multi (and those for whom he asks such questions) have limited resources (and attention), so it may be worth deliberately investigating potential recipients of trust.
  • Worse yet than a counterproductive Frodo would be a Frodo whose arrogance pisses off Aragorn, Gandalf, Legolas, Gimli, Merry, Pippin and even Sam so much that they get disgusted with the whole 'save the world' thing and go hang out in the forest flirting with Elven maidens. Further cause to investigate just whose bid for notoriety and influence you wish to support.

I cannot emphasise enough that this is only a reply to the literal question cousin_it asked, and neither an endorsement nor a denial of any of the above claims as they relate to persons real or imagined. For example, it may have been good if Frodo was arrogant enough to piss off Aragorn. He may have cracked it, taken the ring from Frodo and given it to Arwen. Arwen was crazy enough to give up the immortality she already had, and so would be as good a candidate as any for being able to ditch a ring, without being completely useless for basically all purposes.

Comment author: Perplexed 20 August 2010 03:14:41AM *  8 points [-]

What would that change, in the grand scheme of things or in your personal life?

The very fate of the universe, potentially.

I suppose I could draw from that the inference that you have a rather inflated notion of the importance of what multi is doing here, ... but, in the immortal words of Richard Milhous Nixon, "That would be wrong."

More seriously, I think everyone here realizes that EY has some rough edges, as well as some intellectual strengths. For his own self-improvement, he ought to be working on those rough edges. I suspect he is. However, in the meantime, it would be best if his responsibilities were in areas where his strengths are exploited and his rough edges don't really matter. So, just what are his current responsibilities?

  1. Convincing people that UFAI constitutes a serious existential risk while not giving the whole field of futurism and existential risk reduction a bad rep.

  2. Setting direction for and managing FAI and UFAI-avoidance research at SIAI.

  3. Conducting FAI and UFAI-avoidance research.

  4. Reviewing and doing conceptual QC on the research work product.

To be honest, I don't see EY's "rough edges" as producing any problems at all with his performance on tasks #3 and #4. Only SIAI insiders know whether there has been a problem on task #2. Based on multi's arguments, I suspect he may not be doing so well on #1. So, to me, the indicated response ought to be one of the following:

A. Hire someone articulate (and if possible, even charismatic) to take over task #1 and make whatever minor adjustments are needed regarding task #2.

B. Do nothing. There is no problem!

C. Get some academic papers published so that FAI/anti-UFAI research becomes interesting to the same funding sources that currently support CS, AI, and decision theory research. Then reconstitute SIAI as just one additional research institution which is fighting for that research funding.

I would be interested in what EY thinks of these three possibilities. Perhaps for different reasons, I suspect, so would multi.

[Edited to correct my hallucination of confusing multifoliaterose with wedrifid. As a result of this edit, various comments below may seem confused. Sorry about that, but I judge that making this comment clear is the higher priority.]

Comment author: wedrifid 20 August 2010 09:42:41AM *  1 point [-]

Edit again: not wedrifid

Was the first (unedited) 'you' intended? If so I'll note that I was merely answering a question within a counterfactual framework suggested by the context. I haven't even evaluated what potential importance multi's post may have - but the prior probability I have for 'a given post on LW mattering significantly' is not particularly high.

I like your general analysis by the way and am always interested to know what the SIAI guys are doing along the lines of either your 1,2,3 or your A, B, C. I would seriously like to see C happen. Being able and willing to make that sort of move would be a huge step forward (and something that makes any hints of 'arrogance' seem trivial.)

Comment author: Unknowns 20 August 2010 09:58:44AM 0 points [-]

I think that originally Perplexed didn't look at your comment carefully and thought that multi had written it.

Comment author: Perplexed 20 August 2010 03:36:26PM 1 point [-]

Close. Actually, I had looked at the first part of the comment and then written my response under the delusion that wedrifid had been the OP.

I am now going to edit my comment to cleanly replace the mistaken "you" with "multi".

Comment author: wedrifid 20 August 2010 11:23:39AM *  1 point [-]

I think you are right. I'm just playing the disclaimer game. Since this is a political thread there is always the risk of being condemned for supporting various positions. In this case I gave a literal answer to a rhetorical question directed at multi. Following purely social reasoning that would mean that I:

  • Am challenging cousin_it
  • Condemning Eliezer
  • Agreeing with anything and everything said by multi and probably also with everything said by anyone else who agrees with multi.
  • Almost certainly saying something about the credibility of uFAI risks.
  • In some way think any of this is particularly important to the universe outside the time/abstract-space bubble that is LessWrong this week.

Of course that comment actually lent credence to Eliezer (hence the humor) and was rather orthogonal to multi's position with respect to arrogance.

It's not that I mind too much sticking my neck out risking a social thrashing here or there. It's just that I have sufficient capability for sticking my neck out for things that I actually do mean and for some reason prefer any potential criticism to be correctly targeted. It says something about many nerds that they value being comprehended more highly than approval.

Comment author: Strange7 20 December 2010 12:12:33PM 0 points [-]

Approval based on incomprehension is fragile and unsatisfying.