lukeprog comments on Lone Genius Bias and Returns on Additional Researchers - Less Wrong
Right; I think it's hard to tell whether donations do more good at MIRI, FHI, CEA, or CFAR — but if someone is giving to AMF then I assume they must care only about beings who happen to be living today (a Person-Affecting View), or else they have a very different model of the world than I do, one where the value of the far future is somehow not determined by the intelligence explosion.
Edit: To clarify, this isn't an exhaustive list. E.g. I think GiveWell's work is also exciting, though less in need of smaller donors right now because of Good Ventures.
There is also the possibility that they believe MIRI/FHI/CEA/CFAR will have no impact on the intelligence explosion or the far future.
He's talking specifically about people donating to AMF. There are more options than donating to AMF or donating to one of MIRI, FHI, CEA, and CFAR.
Correct.
Or simply because the quality of research is positively correlated with the ability to secure funding, so the research that would not be done without your donations generally has the lowest expected value of all research. In the case of malaria, we need quantity; in the case of AI research, we need quality.
I'm curious why you include CEA: my impression was that GWWC and 80k both focus on charities like AMF anyway. Is that wrong, or does CEA do more than its component organizations?
Perhaps because GWWC's founder Toby Ord is part of FHI, and because CEA now shares offices with FHI, CEA is finding and producing new far-future-focused EAs at a faster clip than, say, GiveWell (as far as I can tell).
I'm currently donating to FHI for the UK tax advantages, so that's good to hear.
They could also reasonably believe that marginal donations to the organizations listed would not reliably influence an intelligence explosion in a way that has a significant positive impact on the value of the far future. They might also believe that AMF donations would have a greater impact on potential intelligence explosions (for example, because an intelligence explosion is so far in the future that the best way to help is to ensure human prosperity up to the point where GAI research actually becomes useful).
It is neither probable nor plausible that AMF, a credible maximum of short-term, reliable, known impact on lives saved (valuing all current human lives equally), should happen to also possess a maximum of expected impact on future intelligence explosions. That is about as likely as donating to your local kitten shelter turning out to maximize immediate lives saved. This kind of miraculously convenient excuse just doesn't turn out to be true in real life.
OK, granted. But even a belief that AMF is better at affecting intelligence explosions would be unlikely to justify the claim that it is the best option, and thus would not justify the behavior described.
Amazing how, even after reading all of Eliezer's posts (many more than once), I can still get surprise, insight, and irony at a rate sufficient to produce laughter for over a minute.
Bill Gates presents his rationale for attacking malaria and polio here.
I can't make much sense of it personally, but at least he isn't working on stopping global warming.