Many people have an incorrect view of the Future of Humanity Institute's funding situation, so this is a brief note to correct that; think of it as a spiritual successor to this post. As John Maxwell puts it, FHI is "one of the three organizations co-sponsoring LW [and] a group within the University of Oxford's philosophy department that tackles important, large-scale problems for humanity like how to go about reducing existential risk." (If you're not familiar with our work, this article is a nice, readable introduction, and our director, Nick Bostrom, wrote Superintelligence.) Though we are a research institute in an ancient and venerable institution, this does not guarantee funding or long-term stability.

Academic research is generally funded through grants, but because the FHI is researching important but unusual problems, and because this research is multi-disciplinary, we've found it difficult to attract funding from the usual grant bodies. This has meant that we’ve had to prioritise a certain number of projects that are not perfect for existential risk reduction, but that allow us to attract funding from interested institutions.

With more assets, we could both liberate our long-term researchers to do more "pure Xrisk" research, and hire or commission new experts when needed to look into particular issues (such as synthetic biology, the future of politics, and the likelihood of recovery after a civilization collapse).

We are not in any immediate funding crunch, nor are we arguing that the FHI would be a better donation target than MIRI, CSER, or the FLI. But any donations would be both gratefully received and put to effective use. If you'd like to, you can donate to FHI here. Thank you!
25 comments

In case this helps and isn't obvious to everyone, I'll briefly mention that I'm the Executive Director of MIRI and I agree with what Daniel wrote above.

Also, the linked Ross Andersen piece on FHI is really good and people should read it.

$30 donated. It may become quasi-regular, monthly.

Thanks for letting us know. I wanted to donate to x-risk, but I didn't really want to give to MIRI (even though I like their goals and the people) because I worry that MIRI's approach is too narrow. FHI's broader approach, I feel, is more appropriate given our current ignorance about the vast possible varieties of existential threats.

Yes, thank you!

A heuristic I've previously seen thrown around about whether to donate to MIRI or FHI is to fund whichever one has more room for more funding, or whichever is experiencing more of a funding crunch at a given time. As Less Wrong is a hub for an unusually large number of donors to each of these organizations, it might be nice if there were a (semi-)annual discussion of these matters with representatives from the various organizations. How feasible would this be?

This is worth thinking about in the future, thanks. I think right now, it's good to take advantage of MIRI's matched giving opportunities when they arise, and I'd expect either organization to announce if they were under a particular crunch or aiming to hit a particular target.

.impact is a volunteer task force of effective altruists who take on projects not linked to any one organization. .impact deals in particular with implementing open-source software resources that are useful to effective altruists. Well, that's what it's trying to specialize in; the decentralized coordination of remote volunteers is very difficult.

Anyway, on the effective altruism forum, I was involved in a discussion about building an interactive visual map that tracks the status of projects and funding for effective altruist organizations. Anybody trying to reduce existential risk would fall under effective altruism, so ostensibly they'd be included on such a map too. This would solve most of the problem I posed above.

I'll update Less Wrong in the future if I get wind of any progress on such a project. Anyone: send me a private message if you want more information.

I agree that this would be a good idea, and agree with the points below. Some discussion of this took place in this thread last Christmas: http://lesswrong.com/r/discussion/lw/je9/donating_to_miri_vs_fhi_vs_cea_vs_cfar/

On that thread I provided information about FHI's room for more funding (accurate as of start of 2014) plus the rationale for FHI's other, less Xrisk/Future of Humanity-specific projects (externally funded). I'd be happy to do the same at the end of this year, but instead representing CSER's financial situation and room for more funding.

I have a suspicion that one of the factors holding back donations from big names (think Peter Thiel level) is the absence of visibility. Partly it isn't as "cool" as the Bill and Melinda Gates Foundation (that is, there isn't already an existing public opinion that issues such as x-risk are charity-worthy, as opposed to something like, say, donating for underprivileged children to take part in some sporting event), and partly it isn't as "visible" (to continue with the donation-to-children example, a lot of publicity can be obtained by putting up photos of apparently malnourished children sitting together in a line, full of smiles for the camera).

The distinction I have made between the two is artificial, but I thought it was the best way to illustrate that the disadvantages suffered by FHI, MIRI, and that cluster of institutes occur on two different levels.

However, the second point about visibility is genuinely a little concerning. MIRI has been criticized for not doing much except publishing papers. That doesn't look good, and it is hard for a layman to feel that giving away a portion of his salary just to see a new set of math formulas (looking much like the formulas he saw last month) is a good use of his money, especially if he doesn't see it directly helping anyone out.

I understand that, given the nature of the research being undertaken, this may be all we can hope for, but if there is a better way for MIRI to signal its accountability, then I think it should be done. Pronto.

Also, could someone who is so inclined take the math/code being produced and dumb it down enough that an average LW-er such as yours truly could make more sense of it?

MIRI has been criticized for not doing much except publishing papers.

Really? Before this, MIRI was constantly criticized for not publishing any papers.

I see.

I take it that this is a damned-if-you-do, damned-if-you-don't kind of situation.

I'm not able to find the source right now (the one that criticized MIRI on said grounds), but I'm pretty certain it wasn't a very authentic/respectable source to begin with. As far as I can recall, it was Stephen Bond, the same guy who wrote the article on "the cult of Bayes' theorem"; there was a link to his page from Yudkowsky's Wikipedia page, but it is not there anymore.

I simply brought up this example to show how easy it is to tarnish an image, something I'm sure you're well aware of. Nonetheless, my point still stands. IMAGE MATTERS.

It doesn't make a difference that the good (and ingenious) folk at MIRI are doing some of the most important work there is, work that may at any given moment solve a large number of headaches for the human race. There are others out there making that same claim. And because some of those others are politicians wearing fancy suits, people will listen to them. (Don't even get me started on the saints and priests who successfully manage to make decent, hard-working folk part with large portions of their lifetime's savings, but those cases are a little beyond the scope of this particular argument.)

A real estate agent can point to a rising skyscraper as evidence of money being put to good use. A NASA-type organisation (slightly tongue in cheek, just indicating a cluster) can point to a satellite orbiting Mars. A biotech company may one day point to a fully lab-grown human with perfect glowing skin. A nanotech company can one day point to the world's smallest robot "doing the robot".

The above examples have two things in common: first, they are visible in the most literal sense of the word; second, (I believe) most people have a ready intuition for why achieving any of them would require a large amount of cash/funding.

Software is harder to impress people with. Even harder if the software is genuinely complicated. To make matters worse, the media has flooded the imagination of newspaper readers all over the world with rags-to-riches stories of entrepreneurs who made it big after being content with mere ramen profitability for long years.

And yet institutions that are ostensibly purely academic and research-oriented also require funding. And I don't disagree. I've read HPMoR, and I've read portions of the LW site as well. I know that this is likely for real, and that the proponents of research into these areas have built up more than enough credibility.

Unfortunately, I'm in the minority. And as of now, I'm a far cry from being financially sound. If MIRI/FHI have to accelerate their research and need funding for it, then it is not a bad idea to make their progress seem more tangible, even if they can't deliver every single detail every single time.

One possible major downside of this approach of course is that it might eat into valuable time which could otherwise be spent making the real progress that these institutions were created for in the first place.

I have a suspicion that one of the factors holding back donations from big names (think Peter Thiel level) is the absence of visibility.

I don't think you can call Nick Bostrom not visible. He made Foreign Policy's Top 100 Global Thinkers list. He also wrote the book last year.

MIRI has been criticized for not doing much except publishing papers.

By whom? By the traditional metric of published papers, MIRI is an exceptionally unproductive research organization: only a few low-impact peer-reviewed papers, mostly in the last few years, despite a decade of funding. It's probably fair to say that donations to the old SIAI were more likely to go toward blog posts and fanfic than toward research papers.

Yeah, I was actually trying to say that they need to do other stuff too, not cut down on publishing papers.

You might wanna weigh in on this: http://lesswrong.com/lw/l13/the_future_of_humanity_institute_could_make_use/be4o

Before dismissing blog posts, keep in mind that the Sequences were blog posts. And they are probably much more useful and important than all but the best academic papers. If current donations happened to lead to blog posts of that caliber, the donations would be money well spent.

Before dismissing blog posts, keep in mind that the Sequences were blog posts. And they are probably much more useful and important than all but the best academic papers.

How are we measuring useful or important? The Sequences are entertaining, but it's not clear to me they do much to actually help with the core goals of MIRI (besides the goal of entertaining people enough to fund MIRI, perhaps).

The advantage of a high-impact academic paper is that it shapes the culture of academic research. A good idea in a well-received research paper will almost instantly lead to lots of other researchers working on the same problems. A great idea in a well-received research paper can get an entire sub-field working on the same problem.

The Sequences are more advertisements than formalized research. It's papers like the one on Löb's obstacle that get researchers interested in working on these problems.

The Sequences are more advertisements than formalized research. It's papers like the one on Löb's obstacle that get researchers interested in working on these problems.

I think that's up for debate.

And the Sequences aren't "just advertisements".

I don't know any LW-ers in person, but I'm sure that at least some people have benefited from reading the Sequences.

Can't really speak on behalf of researchers, but their motivations could be almost anything, from simply finding the work interesting, to altruistic reasons, to financial incentives.

I don't know any LW-ers in person, but I'm sure that at least some people have benefited from reading the Sequences.

You miss my meaning. The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regard to that goal, the Sequences are advertising.

With regard to their core goal, the Sequences matter if (1) they lead to people donating to MIRI, or (2) they lead to people working on friendly AI.

I view point 1 as advertising, and I think research papers are obviously better than the Sequences for point 2.


The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regard to that goal, the Sequences are advertising.

Kinda... more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses but who never finds out about FAI research at all.

A big part of the purpose of the Sequences is to kill likely mistakes and missteps from smart people trying to think about AI. 'Friendly AI' is a sufficiently difficult problem that it may be more urgent to raise the sanity waterline, filter for technical and philosophical insight, and amplify that insight (e.g., through CFAR), than to merely inform academia that AI is risky. Given people's tendencies to leap on the first solution that pops into their head, indulge in anthropomorphism and optimism, and become inoculated to arguments that don't fully persuade them on the first go, there's a case to be made for improving people's epistemic rationality, and honing the MIRI arguments more carefully, before diving into outreach.


MIRI has been criticized for not doing much except publishing papers.

By whom? I mean, what should MIRI do other than publishing research papers?

Of course, if I did get such a version of the code, I might end up tinkering with it and inadvertently create the paperclip maximiser.

Though if I ended up creating Quirinus Quirrell, I'm not sure whether that would be a good thing or not.

PS: this was meant as a joke.

What a coincidence - I could make use of the Future of Humanity Institute's money, too.

By donating it to the top altruistic cause, I assume ;-)

It will be the job of my new Institute for Verifiably Estimating, Guessing, and Extrapolating the Most Important Thing Ever (Subject to Availability of a Nutritious Diet, with Wholesome Ingredients in Culturally and Historically Expedient Servings) to figure out what that is.

ETA: of course, we shall be affiliated with the University of Woolloomooloo.