Vladimir_Nesov comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM




Comment author: orthonormal 15 August 2010 03:21:51PM 13 points [-]

whpearson mentioned this already, but if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute.

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction, and indeed they're focusing on persuading more people of this particular claim. As you say, by focusing on something specific, radical and absurd, they run more of a risk of being dismissed entirely than does FHI, but their strategy is still correct given the premise.

Comment author: Vladimir_Nesov 15 August 2010 03:40:32PM 3 points [-]

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction

But who does the evaluation? It seems better to let specialists think about whether a given cause is important, and they need funding just to get that effort running. This argues for ensuring minimum funding of organizations that research important uncertainties, even those your intuitive judgment says will probably lead nowhere. Just as most people shouldn't themselves research FAI, and should instead fund its research, most people shouldn't themselves research the feasibility of FAI research, and should instead fund research into that feasibility.

Comment author: orthonormal 15 August 2010 04:01:03PM 1 point [-]

I think you claim too much. If I decided I couldn't follow the relevant arguments, and wanted to trust a group to research the important uncertainties of existential risk, I'd trust FHI. (They could always decide to fund or partner with SIAI themselves if its optimality became clear.)

Comment author: whpearson 15 August 2010 08:32:50PM 5 points [-]

My only worry about funding FHI exclusively is that they are primarily philosophical and academic. I'd worry that the default thing they would do with more money is produce more philosophical papers, rather than, say, doing or funding biological research or programming, if that were what was needed.

But as the incentive structures for x-risk reduction organisations go, those of an academic philosophy department aren't too bad at this stage.

Comment author: Vladimir_Nesov 15 August 2010 04:04:47PM *  3 points [-]

This seems to work as an argument that ensuring minimal funding has much greater marginal worth: it gets at least someone researching the uncertainties professionally (improving on what people off the street can estimate in their spare time), before we ask about the value of researching the things those uncertainties are about. (Of course, it's generally a bad idea for the evaluation to be done by the same organization that directly benefits from a given answer, so FHI might work in this case.)

Comment author: timtyler 15 August 2010 04:10:12PM *  0 points [-]