lukeprog comments on Lone Genius Bias and Returns on Additional Researchers - Less Wrong

24 Post author: ChrisHallquist 01 November 2013 12:38AM


Comments (63)


Comment author: ChrisHallquist 01 November 2013 03:32:09AM 3 points

I really like both of your comments in this thread, Luke.

Also note that MIRI has in fact spent most of its history on strategic research and movement-building, and now that those things are also being done pretty well by FHI, CEA, and CFAR, it makes sense for MIRI to do (what we think is) the most useful object-level thing (FAI research), especially since we have a comparative advantage there (Eliezer).

I'm glad you mentioned this. I should clarify that most of my uncertainty about continuing to donate to MIRI in the future is uncertainty about donating to MIRI vs. one of these other organizations. To the extent that it's really important to have people at Google, the DoD, etc. be safety-conscious, I think it's possible movement building might offer better returns than technical research right now... but I'm not sure about that, and I do think the technical research is valuable.

Comment author: lukeprog 01 November 2013 04:12:27AM 9 points

Right; I think it's hard to tell whether donations do more good at MIRI, FHI, CEA, or CFAR — but if someone is giving to AMF then I assume they must care only about beings who happen to be living today (a Person-Affecting View), or else they have a very different model of the world than I do, one where the value of the far future is somehow not determined by the intelligence explosion.

Edit: To clarify, this isn't an exhaustive list. E.g. I think GiveWell's work is also exciting, though less in need of smaller donors right now because of Good Ventures.

Comment author: EHeller 02 November 2013 06:47:11PM 6 points

There is also the possibility that they believe that MIRI/FHI/CEA/CFAR will have no impact on the intelligence explosion or the far future.

Comment author: endoself 02 November 2013 08:25:58PM 4 points

He's talking specifically about people donating to AMF. There are more things people can do than donate to AMF and donate to one of MIRI, FHI, CEA, and CFAR.

Comment author: lukeprog 03 November 2013 02:37:50AM 1 point

Correct.

Comment author: private_messaging 29 December 2013 10:51:48AM 0 points

Or simply because the quality of research is positively correlated with the ability to secure funding, and thus research that would not be done without your donations generally has the lowest expected value of all research. In the case of malaria, we need quantity; in the case of AI research, we need quality.

Comment author: somervta 01 November 2013 08:13:56AM 3 points

I'm curious as to why you include CEA - my impression was that GWWC and 80k both focus on charities like AMF anyway. Is that wrong, or does CEA do more than its component organizations?

Comment author: lukeprog 01 November 2013 08:26:43AM 5 points

Perhaps because GWWC's founder Toby Ord is part of FHI, and because CEA now shares offices with FHI, CEA is finding / producing new far-future-focused EAs at a faster clip than, say, GiveWell (as far as I can tell).

Comment author: ciphergoth 01 November 2013 08:08:34AM 3 points

I'm currently donating to FHI for the UK tax advantages, so that's good to hear.

Comment author: dankane 02 November 2013 06:07:26PM 4 points

They could also reasonably believe that marginal donations to the organizations listed would not reliably influence an intelligence explosion in a way that would have significant positive impact on the value of the far future. They might also believe that AMF donations would have a greater impact on potential intelligence explosions (for example, because an intelligence explosion is so far into the future that the best way to help is to ensure human prosperity up to the point where GAI research actually becomes useful).

Comment author: Eliezer_Yudkowsky 02 November 2013 08:40:27PM 14 points

They might also believe that AMF donations would have a greater impact on potential intelligence explosions

It is neither probable nor plausible that AMF, a credible maximum of short-term reliable known impact on lives saved valuing all current human lives equally, should happen to also possess a maximum of expected impact on future intelligence explosions. It is as likely as that donating to your local kitten shelter should be the maximum of immediate lives saved. This kind of miraculous excuse just doesn't happen in real life.

Comment author: dankane 03 November 2013 07:10:40PM 1 point

OK, granted. Even a belief that AMF is better at affecting intelligence explosions is unlikely to justify the claim that it is the best, and thus doesn't justify the behavior described.

Comment author: joaolkf 27 December 2013 04:23:03PM 0 points

Amazing how even after reading all of Eliezer's posts (many more than once), I can still get surprise, insight, and irony at a rate sufficient to produce laughter for 1+ minute.

Comment author: timtyler 03 November 2013 10:48:50PM 0 points

Bill Gates presents his rationale for attacking malaria and polio here.

I can't make much sense of it personally - but at least he isn't working on stopping global warming.