orthonormal comments on Robin Hanson's Cryonics Hour - Less Wrong

26 Post author: orthonormal 29 March 2013 05:20PM



Comment author: orthonormal 29 March 2013 06:56:27PM 11 points [-]

We also talked about the relative likelihood of burning the cosmic commons, what would be required for a stable singleton in the future, mangled worlds and the Born probabilities, cryonics trusts and other incentives for revival, and some particulars of his projections about an em-driven world; but the topic that I'm most reconsidering afterward is the best approach to working on existential risk.

Essentially, Robin made the case that it's much more marginally useful now to work on analyzing the potentially long tail of x-risks than to focus on one very salient scenario—kind of like the way Bruce Schneier talks about better security via building resilient structures rather than concentrating on foiling specific "Hollywood" terror scenarios.

Comment author: John_Maxwell_IV 30 March 2013 05:49:02AM *  10 points [-]

Robin made the case that it's much more marginally useful now to work on analyzing the potentially long tail of x-risks than to focus on one very salient scenario

Seems worth its own post from him or you, IMO.

Comment author: Will_Newsome 30 March 2013 04:52:06AM *  5 points [-]

(Kneejerk response: If only we could engineer some kind of intelligence that could analyze the potentially long tail of x-risk, or could prudentially decide how to make trade offs between that and other ways of reducing x-risk, or could prudentially reconsider all the considerations that went into focusing on x-risk in the first place instead of some other focus of moral significance, or...)

Comment author: orthonormal 30 March 2013 03:48:57PM 11 points [-]

Yes, one of the nice features of FAI is that success there helps immensely with all other x-risks. However, it's an open question whether creating FAI is possible before other x-risks become critical.

That is, the kneejerk response has the same template as saying, "if only we could engineer cold fusion, our other energy worries would be moot, so clearly we should devote most of the energy budget to cold fusion research". Some such arguments carry through on expected utility, while others don't; so I actually need to sit down and do my best reckoning.

Comment author: [deleted] 29 March 2013 07:53:30PM *  3 points [-]

Am I right in thinking this is the answer given by Bostrom, Baum, and others? I.e. something like "Research a broad range of x-risks and their inter-relationships rather than focusing on one (or engaging in policy advocacy)."

That viewpoint seems very different from MIRI's. I guess in practice there's less of a gap - Bostrom is writing an AI book, and LW and MIRI people are interested in other x-risks. Nevertheless, that's a fundamental difference between MIRI and FHI or CSER.

Edit: Also, thank you for sharing - that sounds fascinating. In particular, I've never come across "mangled worlds"; how interesting.