Am I right in thinking this is the answer given by Bostrom, Baum, and others? I.e. something like "research a broad range of x-risks and their inter-relationships rather than focusing on one (or engaging in policy advocacy)"?
That viewpoint seems very different from MIRI's. I guess in practice there's less of a gap: Bostrom's writing an AI book, and LW and MIRI people are interested in other x-risks. Nevertheless, that's a fundamental difference between MIRI and FHI or CSER.
Edit: Also, thank you for sharing; that sounds fascinating. In particular, I'd never come across "mangled worlds" before; how interesting.
I'm writing to recommend something awesome to anyone who's recently signed up for cryonics (and to the future self of anyone who's about to do so). Robin Hanson has a longstanding offer that anyone who's newly signed up for cryonics can have an hour's discussion with him on any topic, and I took him up on that last week.
I expected to have a fascinating and wide-ranging discussion on various facets of futurism. My expectations were exceeded. Even if you've been reading Overcoming Bias for a long time, talking with Robin is an order of magnitude more stimulating/persuasive/informative than reading OB or even watching him debate someone else, and I'm now reconsidering my thinking on a number of topics as a result.
So if you've recently signed up, email Robin; and if you're intending to sign up, let this be one more incentive to quit procrastinating!
Relevant links:
The LessWrong Wiki article on cryonics is a good place to start if you have a bunch of questions about the topic.
If you want to argue about whether signing up for cryonics is a good idea, two good and relatively recent threads on that subject are under the posts "A survey of anti-cryonics writing" and "More Cryonics Probability Estimates".
And if you are cryocrastinating (you've decided that you should sign up for cryonics, but you haven't yet), here's an LW thread about taking the first step.