Many people have an incorrect view of the Future of Humanity Institute's funding situation, so this is a brief note to correct that; think of it as a spiritual successor to this post. As John Maxwell puts it, FHI is "one of the three organizations co-sponsoring LW [and] a group within the University of Oxford's philosophy department that tackles important, large-scale problems for humanity like how to go about reducing existential risk." (If you're not familiar with our work, this article is a nice, readable introduction, and our director, Nick Bostrom, wrote Superintelligence.) Though we are a research institute in an ancient and venerable institution, this does not guarantee funding or long-term stability.
$30 donated. This may become a quasi-regular monthly donation.
Thanks for letting us know. I wanted to donate to x-risk reduction, but I didn't really want to give to MIRI (even though I like their goals and the people) because I worry that MIRI's approach is too narrow. FHI's broader approach, I feel, is more appropriate given our current ignorance about the vast variety of possible existential threats.
Yes, thank you!