steven0461 comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong

23 points. Post author: jimrandomh, 09 June 2011 03:59AM


Comment author: steven0461 09 June 2011 08:03:09PM 4 points

I don't know who counts as the last several hires, but while I'm sure everyone at FHI does fine work, only Bostrom and Sandberg seem to be doing research related to AI risks. Also Hanson, I suppose, to the extent that he counts as working at FHI. I don't dispute that some marginal funds would in expectation go to research on these topics, but surely it would be a lot less than half.

Comment author: NickBostrom 26 June 2011 12:50:34AM 14 points

Much of the dispersion is caused by the lack of unrestricted funds (and lack of future funding guarantees). Since we don't have enough funding from private philanthropists, we have to chase academic funding pots, and that then forces us to do some work that is less relevant to the important problems we would rather be working on. It would be unfortunate if potential private funders then looked at the fact that we've done some less-relevant work as a reason not to give.

Comment author: steven0461 26 June 2011 01:55:56AM 6 points

Thank you for weighing in! Your point sounds valid. Taking it into account: for marginal dollars donated to FHI without explicit earmarking, what is your estimate of the fraction that would end up producing a dollar's worth of research into topics that someone with roughly SIAI-typical estimates for the future would see as highly relevant?

Comment author: NickBostrom 26 June 2011 07:31:27PM 7 points

A high fraction. "A dollar's worth of research" is not a well-defined quantity - that is, the worth of the research produced by a dollar varies a lot depending on whom the dollar is given to. I like to think FHI is good at converting dollars into research. The kind of research I'd prefer to do with unrestricted funds at the moment probably coincides pretty well with what a person with SIAI-typical estimates would prefer, though what can be researched also depends on the capabilities and interests of the research staff one can recruit. (There are various tradeoffs here - e.g., hire a weaker researcher who has a long record of working in this area, or take a chance on a slightly stronger researcher and risk that she will do irrelevant work? Headhunt somebody who is already actively contributing to the area, or attempt to involve a new mind who would otherwise not have contributed? Etc.)

There are also indirect effects, which might lead to the fraction being larger than one - for example, if discussions, conferences, and various kinds of influence encourage external researchers to enter the field. FHI does some of that, as does the SIAI.

Comment author: steven0461 27 June 2011 01:40:17AM 3 points

Thanks. When I said "a dollar's worth of research", I had in mind the estimate Carl mentioned of $200k per 2-year postdoc. I guess that doesn't affect the fraction question.

Comment author: CarlShulman 09 June 2011 08:58:09PM 5 points

The details depend on how you count the methodology/general existential-risks work, e.g. the "Probing the Improbable" paper by Ord, Sandberg, and Hillerbrand. Also note that many of Bostrom's and Sandberg's publications, including the catastrophic risks book, and events like the Winter Intelligence Conference benefit from help by other FHI staff. Still, some hires have definitely done essentially no existential-risk-relevant work. My guess is something like one Sandberg- or Ord-equivalent per 2-3 hires (with differential attrition leading to accumulation of the good).

Also, given earmarked funding they can create positions specifically for machine intelligence issues, the results of which are easier to track (the output of that person).

Comment author: steven0461 09 June 2011 10:57:33PM 1 point

given earmarked funding they can create positions specifically for machine intelligence issues

But presumably that would only be a consideration if FHI received very large amounts of such earmarked funding?

Comment author: CarlShulman 09 June 2011 11:02:10PM 5 points

About $200k for one postdoc. One could save up for that with a donor-advised fund, alone or with others, or use something like kickstarter.com.