Many people have an incorrect view of the Future of Humanity Institute's funding situation, so this is a brief note to correct that; think of it as a spiritual successor to this post. As John Maxwell puts it, FHI is "one of the three organizations co-sponsoring LW [and] a group within the University of Oxford's philosophy department that tackles important, large-scale problems for humanity like how to go about reducing existential risk." (If you're not familiar with our work, this article is a nice, readable introduction, and our director, Nick Bostrom, wrote Superintelligence.) Though we are a research institute in an ancient and venerable institution, this does not guarantee funding or long-term stability.
I see.
I take it that this is a damned-if-you-do, damned-if-you-don't kind of situation.
I'm not able to find the source right now (the one that criticized MIRI on those grounds), but I'm pretty certain it wasn't a very authentic or respectable source to begin with. As far as I can recall, it was Stephen Bond, the same guy who wrote the article on "the cult of Bayes' theorem"; there used to be a link to his page from Yudkowsky's Wikipedia page, but it is not there anymore.
I simply brought up this example to show how easy it is to tarnish an image, something I'm sure you're well aware of. Nonetheless, my point still stands. IMAGE MATTERS.
It doesn't make a difference that the good (and ingenious) folk at MIRI are doing some of the most important work there is, work that may at any moment solve a large number of headaches for the human race. There are others out there making that same claim. And because some of those others are politicians wearing fancy suits, people will listen to them. (Don't even get me started on the saints and priests who successfully manage to make decent, hard-working folk part with large portions of their life savings, but those cases are a bit beyond the scope of this particular argument.)
A real estate agent can point to a rising skyscraper as evidence of money being put to good use. A NASA-type organisation (slightly tongue in cheek; I'm just indicating a cluster) can point to a satellite orbiting Mars. A biotech company may one day point to a fully lab-grown human with perfect glowing skin. A nanotech company may one day point to the world's smallest robot that can "do the robot".
The above examples have two things in common. First, they are visible in the most literal sense of the word. Second, (I believe) most people have a ready intuition for how achieving any of them would require a large amount of cash/funding.
Software is harder to impress people with, and harder still if the software is genuinely complicated. To make matters worse, the media has flooded the imagination of newspaper readers all over the world with rags-to-riches stories of entrepreneurs who made it big and were content with being merely ramen-profitable for long years.
And yet institutions that are ostensibly purely academic and research-oriented also require funding. I don't disagree. I've read HPMoR and portions of the LW site as well. I know that this is likely for real, and that the proponents of research into these areas have built up more than enough credibility.
Unfortunately, I'm in the minority, and as of now I'm a far cry from being financially sound. If MIRI/FHI have to accelerate their research and need funding for it, then it is not a bad idea to make their progress seem more tangible, even if they can't deliver every single detail every single time.
One possible major downside of this approach, of course, is that it might eat into valuable time that could otherwise be spent making the real progress these institutions were created for in the first place.