CSER at Cambridge University joins the others.
Good people involved so far, but the expected output depends hugely on who they pick to run the thing.
I'm a little worried to see that Nick Bostrom is involved in this group. Bostrom is very smart, and he is clearly one of the people thinking the most about existential risk, but there's a real danger that having the same few people involved in every existential-risk organization will lead to problems like anchoring and availability bias.

If one takes the Fermi paradox seriously and applies a not-too-strong Copernican principle, one concludes that other species would likely have come up with the notion of the Fermi paradox themselves, and that this didn't help them at all. That suggests that if there is any Great Filter in our future, it may be so non-obvious that species which encounter it generally do so without warning, even when they are actively looking for existential risks (and, at a meta level, even when they are aware of this very problem!). If that's the case, it is all the more important to have creative people thinking about existential risk independently of each other if we are to have any hope of seeing such a threat before it arrives.
If one takes the Fermi paradox seriously and applies a not-too-strong Copernican principle, one concludes that other species would likely have come up with the notion of the Fermi paradox themselves, and that this didn't help them at all.
How is this different from reasoning more generally? I.e., "one concludes that we will come up with generally the same ideas as other species, and since we infer those ideas didn't help most or all of the other species, nothing we do is likely to help us either." Or in simpler words: we infer that the Great Filter is really Great.
Different potentially spacefaring, expansionist lifeforms, arising from completely different evolutionary histories, will on average differ in an awful lot of ways. Those of them that use observation and rational deduction (a subset) will observe the Fermi paradox and predict a Great Filter just as we do, and on natural-selection grounds at least some would try to avoid it; yet we see none around who have succeeded. That's my reading of your argument.
But if we allow that they use observation and rational deduction to plan actions, i.e. that they are intelligent in a way comparable to ours, then it is also likely that they resemble us in other consequences of such intelligence. Should we conclude that no product of a generalized capacity for intelligence is likely to save us from the Great Filter, and that we should instead try to use uniquely human advantages less likely to evolve twice, e.g. our social-political behaviors?
Right. On a charitable interpretation, that makes the comment merely superfluous. Otherwise it still implies either that he deems the rest of the current staff to be unlike Bostrom, or that most academics are unlike him and care only about status while getting nothing useful done. Especially since he could instead have said that he is happy to see people involved in the project who are likely to do useful work.
By "like Bostrom" I mean: consistently outputs work useful for making decisions affecting global risk.
Most plausible hires are, indeed, not like Bostrom in this respect. My statement does not imply, however, that most academics "care only for status." I said only that their output could be predicted by a simple model of an amoeba seeking status and funding. (One of the major results of the heuristics-and-biases tradition, and also of neuroeconomics, is that we are not Homo economicus, and thus we cannot cleanly infer desires from behavior or "output.")