I was slightly disappointed by his answer - surely there can only be one optimal charity to give to?
It seems that argument applies primarily to well-defined goals. Do you necessarily have to view the SI and FHI as two charities? The SI is currently pursuing a wide range of sub-goals, e.g. rationality camps. I perceive the FHI to be mainly about researching existential risks in general. Clearly you should do your own research, decide which x-risk is the most urgent, and then support its mitigation. Yet you should also reassess your decision from time to time. And here I think it might be justified to contribute part of your money to the FHI. By doing so you can externalize the review of existential risks: you concentrate most of your effort on the risk that the FHI deems most urgent until it revises its opinion.
In other words, view the SI and FHI as one charity with different departments and your ability to contribute separately as a way to weight different sub-goals aimed at the same overall big problem, saving humanity.
Four!
Underground Q&A session with Nick Bostrom (http://www.nickbostrom.com) on existential risks and artificial intelligence with the Oxford Transhumanists (recorded 10 October 2011).
http://www.youtube.com/watch?v=KQeijCRJSog