Eliezer_Yudkowsky comments on A personal history of involvement with effective altruism - Less Wrong

Post author: JonahSinick 11 June 2013 04:49AM




Comment author: lukeprog 25 June 2013 11:02:18PM, 8 points

> I'm not truly impressed with GiveWell's general optimization since they never made a good case that malaria was connected to astronomical benefits or, indeed, seem to have realized that such a case is necessary for effective altruism.

Well, but I'm not sure MIRI can be said to have "made a good case" that its own work is well-connected to astronomical benefits, either. Presumably the argument for that looks something like the FAI Research as Effective Altruism argument, but that argument hasn't been made in much detail, with the key assumptions clearly identified and argued for with clarity and solid evidential backing. E.g.:

  • I'm not aware of a thorough, empirical (written) investigation of whether elites will handle AI just fine.
  • Beckstead's 2013 thesis is the first document I'm aware of that clearly lays out all the assumptions baked into the argument for the overwhelming importance of the far future.
  • My 2013 post When Will AI Be Created? is (I think) the best available piece for capturing the enormous difficulties of predicting AI — with reference to lots of relevant empirical data — while also (barely) making the case for assigning a good chunk of one's probability mass to getting AI this century. But it's still pretty inadequate, and the part making the case for the plausibility of AI this century could be substantially improved if more time were invested. (Compare to Bostrom 1998, which I find inadequate. I also think it will now look naively timelines-optimistic to most observers.)

Moreover, it's not that GiveWell (well, Holden) hasn't "realized" that recommended altruistic interventions (e.g. bednets) need to be connected via argument to astronomical benefits. Rather, Holden has been aware of astronomical waste arguments for a long time, and has reasons for rejecting them. He also discussed astronomical waste arguments many times with Beckstead while Beckstead was writing his dissertation. Unfortunately, Holden has struggled to clearly express his reasons for rejecting astronomical waste arguments. He tried to explain his reasons to me in person once, but I couldn't make sense of what he was saying. He also tried to explain his point in the last three paragraphs of this comment, but I, at least, still don't understand quite what he's saying. Explaining is hard.

Also, Holden has spent a lot of time working up to an explanation of why he (currently) thinks that (1) "generic good work" (which may indirectly produce astronomical benefits via ripple effects) has higher expected value than (2) narrow interventions aimed more directly at astronomical benefit. His two latest posts in this thread are Flow-through effects and Possible global catastrophic risks, and he has promised that "a future post will discuss how I think about the overall contribution of economic/technological development to our odds of having a very bright, as opposed to very problematic, future."

And all this during the early years in which GiveWell mostly hasn't been investigating trickier issues like how different interventions connect to potential astronomical benefits, because GiveWell (wisely, I think) decided to start under the streetlight.

Comment author: Eliezer_Yudkowsky 26 June 2013 07:29:42PM, 3 points

> Well, but I'm not sure MIRI can be said to have "made a good case" that its own work is well-connected to astronomical benefits, either.

False modesty. The 'good case' already made for FAI being (optimally) related to astronomical benefits and the 'good case' already made for malaria reduction being (optimally) related to astronomical benefits are not of the same order of magnitude of already-madeness.

Comment author: lukeprog 26 June 2013 07:45:35PM, 2 points

I'm not sure "false modesty" applies, at least given my views about the degree to which the FAI case has been made.

For my own idea of "good case made," anyway, I'd say the "malaria nets near-optimally connected to astronomical benefits" case is close to 0% of the way to "good case made," and the "FAI research near-optimally connected to astronomical benefits" case is more like 10% of the way to "good case made."

Comment author: JonahSinick 26 June 2013 08:29:16PM, 0 points

I don't think that MIRI has made a case for the particular FAI research that it's doing having non-negligible relevance to AI safety. See my "Chinese Economy" comments here.

Comment author: Eliezer_Yudkowsky 27 June 2013 05:11:07AM, 3 points

Ah, I'd heard a rumor you'd updated away from that; I guess that was mistaken. I've replied to that comment.

Comment author: JonahSinick 27 June 2013 07:00:19AM, 0 points

Thanks.