Will_Newsome comments on Efficient philanthropy: local vs. global approaches - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (13)
This is a little bit tricky in the case of Friendly AI, because Friendly AI is like the ultimate researcher of existential risks and potential countermeasures. But basically, there are currently three major options for folks worried about x-risk who want to help out with donations, at least as I see it. The first option is to donate to SIAI, perhaps earmarking it for Friendliness research. This option is for those who are familiar with all the arguments and either don't think it's Pascalian or don't mind if it is. The second option is to donate to FHI. They're actively researching possible existential risks, they're at Oxford, they're high status, you can explain their purpose to your friends and family, and they've proven they're pretty good at doing interesting research and publicizing it. Bostrom is effin' prolific. The third option is to save your cash and wait a while: a better option might come up, you might get important info, et cetera. All three options seem reasonable to me. A fourth option might be to invest the money in your ability to donate wisely in the future; take a university course on probabilistic modeling or something.