XiXiDu comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong

Post author: jimrandomh 09 June 2011 03:59AM




Comment author: XiXiDu 09 June 2011 11:57:21AM 2 points

...if there were 100 good papers about it in the right journals;

Just one paper (AI safety or FAI design)...I will be very impressed. I will donate a minimum of $10 ($20 for a technical paper on FAI design) per peer-reviewed research paper per journal to the SIAI.

I doubt I'll have to donate even once within the next 50 years. But I would be happy to be proven wrong.

Comment author: CarlShulman 09 June 2011 05:07:13PM * 5 points

There are some of those in the works, but note that the Future of Humanity Institute converts funds into research papers on these topics as well (Nick Bostrom is working on an academic book now which pretty comprehensively summarizes the work of folk around SIAI).

FHI accepts donations, and estimates a cost of about $200k (USD, although currency swings may have changed this number) per 2 year postdoc, including travel, share of overhead and administrative costs, conferences, journal fees, etc. As part of Oxford, they have comparative advantage in hiring academics and lending prestige to the work. You can look at their research record on their website and assess things that way.

Comment author: steven0461 09 June 2011 06:00:00PM * 6 points

note that the Future of Humanity Institute converts funds into research papers on these topics as well

Converts funds, or converts marginal funds?

I've been meaning to start the SIAI vs FHI conversation here in its own thread for some time, if people don't think it falls afoul of Common Interest of Many Causes.

Comment author: CarlShulman 09 June 2011 07:19:38PM 5 points

Marginal funds. FHI is funding-limited in its number of positions. The marginal hires do not average Bostrom-level productivity (it's hard to get academics to pursue a research agenda other than the one they were already working on), but you can look at the last several hires and average across them.

Comment author: steven0461 09 June 2011 08:03:09PM 4 points

I don't know who counts as the last several hires, but while I'm sure everyone at FHI does fine work, only Bostrom and Sandberg seem to be doing research related to AI risks. Also Hanson, I suppose, to the extent that he counts as working at FHI. I don't dispute that some marginal funds would on expectation go to research on these topics, but surely it would be a lot less than half.

Comment author: NickBostrom 26 June 2011 12:50:34AM 14 points

Much of the dispersion is caused by the lack of unrestricted funds (and lack of future funding guarantees). Since we don't have enough funding from private philanthropists, we have to chase academic funding pots, and that then forces us to do some work that is less relevant to the important problems we would rather be working on. It would be unfortunate if potential private funders then looked at the fact that we've done some less-relevant work as a reason not to give.

Comment author: steven0461 26 June 2011 01:55:56AM 6 points

Thank you for weighing in! Your point sounds valid. After taking it into account, if you considered marginal dollars donated to FHI without explicit earmarking, what is your estimate for the fraction of such dollars that end up causing a dollar's worth of research into topics that would be seen as highly relevant by someone with roughly SIAI-typical estimates for the future?

Comment author: NickBostrom 26 June 2011 07:31:27PM 7 points

A high fraction. "A dollar's worth of research" is not a well-defined quantity - that is, the worth of the research produced by a dollar varies a lot depending on whom the dollar is given to. I like to think FHI is good at converting dollars into research. The kind of research I'd prefer to do with unrestricted funds at the moment probably coincides pretty well with what a person with SIAI-typical estimates would prefer, though what can be researched also depends on the capabilities and interests of the research staff one can recruit. (There are various tradeoffs here - e.g. hiring a weaker researcher with a long record of working in this area, or taking a chance on a slightly stronger researcher who may do irrelevant work? headhunting somebody who is already actively contributing to the area, or attempting to involve a new mind who would otherwise not have contributed? etc.)

There are also indirect effects, which might lead to the fraction being larger than one - for example, if discussions, conferences, and various kinds of influence encourage external researchers to enter the field. FHI does some of that, as does the SIAI.

Comment author: steven0461 27 June 2011 01:40:17AM * 3 points

Thanks. When I said "a dollar's worth of research", I had in mind the estimate Carl mentioned of $200k per 2-year postdoc. I guess that doesn't affect the fraction question.

Comment author: CarlShulman 09 June 2011 08:58:09PM * 5 points

The details depend on how you count the methodology/general existential risks stuff, e.g. the "probing the improbable" paper by Ord, Sandberg, and Hillerbrand. Also note that many of Bostrom's and Sandberg's publications, including the catastrophic risks book, and events like the Winter Intelligence Conference benefit from help by other FHI staff. Still, some hires have definitely done essentially no existential risk-relevant work. My guess is something like 1 Sandberg or Ord equivalent per 2-3 hires (with differential attrition leading to accumulation of the good).

Also, given earmarked funding they can create positions specifically for machine intelligence issues, the results of which are easier to track (the output of that person).

Comment author: steven0461 09 June 2011 10:57:33PM * 1 point

given earmarked funding they can create positions specifically for machine intelligence issues

But presumably that would only be a consideration if FHI received very large amounts of such earmarked funding?

Comment author: CarlShulman 09 June 2011 11:02:10PM * 5 points

$200k USD for one postdoc. One could save up for that with a donor-advised fund, alone or with others, or use something like kickstarter.com.

Comment author: jimrandomh 09 June 2011 12:19:58PM 5 points

Just one paper (AI safety or FAI design)...I will be very impressed. ... I doubt I'll have to donate even once within the next 50 years. But I would be happy to be proven wrong.

Comments like this are evidence that a focus on getting papers into journals is important, relative to the amount of effort currently going into it.

Comment author: steven0461 09 June 2011 05:42:17PM 2 points

And every time someone doesn't make a comment like this, it's evidence that such a focus is unimportant, so what makes you think it comes out one way rather than the other on net?

Comment author: handoflixue 09 June 2011 07:13:58PM 3 points

LessWrong seems significantly more likely than normal to produce vocal dissent ("I wouldn't find this useful") rather than silence. That said, LessWrong is probably also not the majority of AI researchers, who are the actual target audience, so using ourselves as a "test market" is probably flawed on a few levels...

Comment author: timtyler 09 June 2011 05:54:44PM * 0 points

Just one paper (AI safety or FAI design)...I will be very impressed.

Does this one count?

It has had some peer review, and should be in the AGI-11 Conference Proceedings.