Less Wrong is a community blog devoted to refining the art of human rationality.

New x-risk organization at Cambridge University

Post author: lukeprog 24 April 2012 05:50PM

CSER (the Centre for the Study of Existential Risk) at Cambridge University joins the others.

Good people involved so far, but the expected output depends hugely on who they pick to run the thing.

Comments (13)

Comment author: JoshuaZ 27 April 2012 02:00:06AM 10 points

I'm a little worried to see that Nick Bostrom is involved in this group. Bostrom is very smart, and he's clearly one of the people thinking the most about existential risk, but there's a real danger that having the same few people involved in every existential risk organization will lead to problems like anchoring and availability bias.

If one takes the Fermi paradox seriously and takes a not too strong Copernican principle, one concludes that other species would be likely to come up with the notion of the Fermi paradox and that this didn't help them at all. That suggests that if there's any Great Filter in our future, it may be extremely non-obvious, to the point where species that encounter it generally do so without warning even when they are looking for existential risk issues (and at a meta level, they might even have been aware of this very problem!). If that's the case, it is all the more important that we have creative people thinking about existential risk independently of each other if we are to have any hope of seeing such a threat before it arrives.

Comment author: DanArmak 01 May 2012 10:44:12PM 4 points

If one takes the Fermi paradox seriously and takes a not too strong Copernican principle, one concludes that other species would be likely to come up with the notion of the Fermi paradox and that this didn't help them at all.

How is this different from reasoning more generally? I.e., "one concludes that we will come up with generally the same ideas as other species, and since we infer this didn't help most or all of the other species, nothing we do is likely to help us either." Or in simpler words: we infer the Great Filter is really Great.

Different potentially spacefaring, expansionist lifeforms, arising from completely different evolutionary histories, will have an awful lot of differences on average. Those of them who use observation and rational deduction (a subset) will observe the Fermi paradox and predict a Great Filter just as we do, and on natural-selection grounds at least some would try to avoid it; yet we see none around who have succeeded. That's my reading of your argument.

But if we allow that they use observation and rational deduction to plan actions - that they are intelligent in a way comparable to ours - then it is also likely they are similar to us in other consequences of such intelligence. Should we conclude that no product of a generalized capacity for intelligence is likely to save us from the Great Filter, and that we should instead try to use uniquely human advantages less likely to evolve twice, such as our social and political behaviors?

Comment author: JoshuaZ 01 May 2012 11:03:49PM 1 point

I'm not sure how to respond. Your comment is potentially the most enlightening and disturbing thing I've seen on LW for a while.

Comment author: Oscar_Cunningham 28 April 2012 09:22:52PM 2 points

It's not clear to me why the Fermi paradox should be evidence of an unexpected Great Filter, as opposed to one that's just hard. Can you explain?

Comment author: JoshuaZ 29 April 2012 12:25:42AM 1 point

One of the easiest ways for a filter to be hard is for it to be unexpected. But yes, by itself the Fermi paradox is evidence of a hard filter rather than specifically an unexpected one.