
Estimating the probability of human extinction

5 philosophytorres 17 February 2016 04:19PM

I'm looking for feedback on the following idea. The article from which it's been excerpted can be found here: http://ieet.org/index.php/IEET/more/torres20120213

"But not only has the number of scenarios increased in the past 71 years, many riskologists believe that the probability of a global disaster has also significantly risen. Whereas the likelihood of annihilation for most of our species’ history was extremely low, Nick Bostrom argues that “setting this probability lower than 25% [this century] would be misguided, and the best estimate may be considerably higher.” Similarly, Sir Martin Rees claims that a civilization-destroying event before the year 02100 is as likely as getting a “heads” after flipping a coin. These are only two opinions, of course, but to paraphrase the Russell-Einstein Manifesto, my experience confirms that those who know the most tend to be the most gloomy.

"I [would] argue that Rees’ figure is plausible. To adapt a maxim from the philosopher David Hume, wise people always proportion their fears to the best available evidence, and when one honestly examines this evidence, one finds that there really is good reason for being alarmed. But I also offer a novel — to my knowledge — argument for why we may be systematically underestimating the overall likelihood of doom. In sum, just as a dog can’t possibly comprehend any of the natural and anthropogenic risks mentioned above, so too could there be risks that forever lie beyond our epistemic reach. All biological brains have intrinsic limitations that constrain the library of concepts to which one has access. And without concepts, one can’t mentally represent the external world. It follows that we could be “cognitively closed” to a potentially vast number of cosmic risks that threaten us with total annihilation. This being said, one might argue that such risks, if they exist at all, must be highly improbable, since Earth-originating life has existed for some 3.5 billion years without an existential catastrophe having happened. But this line of reasoning is deeply flawed: it fails to take into account that the only worlds in which observers like us could find ourselves are ones in which such a catastrophe has never occurred. It follows that a record of past survival on our planetary spaceship provides no useful information about the probability of certain existential disasters happening in the future. The facts of cognitive closure plus the observation selection effect suggest that our probability conjectures of total annihilation may be systematically underestimated, perhaps by a lot."

 

Thoughts?

[LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI

19 Sarokrae 30 August 2014 02:04PM

http://www.theguardian.com/technology/2014/aug/30/saviours-universe-four-unlikely-men-save-world

The article is titled "The scientific A-Team saving the world from killer viruses, rogue AI and the paperclip apocalypse", and features interviews with Martin Rees, Huw Price, Jaan Tallinn and Partha Dasgupta. The author takes a rather positive tone about CSER and MIRI's endeavours, and mentions x-risks other than AI (bioengineered pandemic, global warming with human interference, distributed manufacturing).

I find it interesting that the inferential distance for the layman to the concept of paperclipping AI is much reduced by talking about paperclipping America, rather than the entire universe, though the author admits to still struggling with the concept. Unusually for a journalist who starts off unfamiliar with these concepts, he writes in a tone that suggests he takes the ideas seriously, without the sort of "this is very far-fetched and thus I will not lower myself to seriously considering it" countersignalling usually seen in x-risk coverage. There is currently the usual degree of incredulity in the comments section, though.

For those unfamiliar with The Guardian, it is a British left-leaning newspaper with a heavy focus on social justice and left-wing political issues. 

Update on establishment of Cambridge’s Centre for Study of Existential Risk

40 Sean_o_h 12 August 2013 04:11PM
Cambridge’s high-profile launch of the Centre for Study of Existential Risk last November received a lot of attention on LessWrong, and a number of people have been enquiring as to what’s happened since. This post is meant to give a brief explanation and update on what’s been going on.

Motivated by a common concern over human activity-related risks to humanity, Lord Martin Rees, Professor Huw Price, and Jaan Tallinn founded the Centre for Study of Existential Risk last year. However, this announcement was made before a physical research centre had been established or long-term funding secured. The last 9 months have been focused on turning an important idea into a reality.

Following the announcement in November, Professor Price contacted us at the Future of Humanity Institute regarding the possibility of collaboration on joint academic funding opportunities; the aim being both to raise the funds for CSER’s research programmes and to support joint work by the FHI and CSER’s researchers on anthropogenic existential risk. We submitted our first grant application in January to the European Research Council – an ambitious project to create “A New Science of Existential Risk” that, if successful, would provide enough funding for CSER’s first research programme, a sizeable effort that will run for five years.
We’ve been successful in the first and second rounds, and we will hear a final round decision at the end of the year. It was also an opportunity for us to get some additional leading academics onto the project – Sir Partha Dasgupta, Professor of Economics at Cambridge and an expert in social choice theory, sustainability and intergenerational ethics, is a co-PI (along with Huw Price, Martin Rees and Nick Bostrom). In addition, a number of prominent academics concerned about technology-related risk – including Stephen Hawking, David Spiegelhalter, George Church and David Chalmers – have joined our advisory board.

The FHI regards establishment of CSER as of the highest priority for a number of reasons including:

1) The value of the research the Centre will engage in
2) The reputational boost to the field of Existential Risk gained by the establishment of a high-profile research centre in Cambridge
3) The impact on policy and public perception that academic heavy-hitters like Rees and Price can have

Therefore we’ve been working with CSER behind the scenes over the last 9 months. Progress has been a little slow until now – Huw, Martin and Jaan are fully committed to this project, but due to their other responsibilities aren’t in a position to work full-time on it yet. 

However, we’re now in a position to make CSER’s establishment official. Cambridge’s new Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) will host CSER and provide logistical support. I’ll be acting manager of CSER’s activities over the coming 6-12 months, under the guidance of Huw, Martin and Jaan. A generous seed funding donation from Jaan Tallinn is funding CSER’s establishment and these activities – which will include a lecture series, workshops, public outreach, and staff time on grant-writing and fundraising. It’ll also provide a buyout of a fraction of my time from FHI (providing funds for us to hire part-time staff to offload some of the FHI workload and help with some of the CSER work).

At the moment and over the next couple of months we’re going to be focused on identifying and working on additional academic funding opportunities for additional programmes, as well as chasing some promising leads in industry, private and philanthropic funding. I’ll also be aiming to keep CSER’s public profile active. There will be newsletters every three months (sign up here), the website’s going to be fleshed out to contain more detail about our planned research and existing literature, and we’ll be arranging regular high-quality media engagement. While we’re unlikely to have time to answer every general query that comes in (though we’ll try whenever possible: email: admin@cser.org), we’ll aim to keep the existential risk community informed through the newsletters and posts such as these.

We’ve been lucky to get a lot of support from the academic and existential risk community for CSER. In addition to CRASSH, Cambridge’s Centre for Science and Policy will provide support in making policy-relevant links, and may co-host and co-publicise events. Luke Muehlhauser, MIRI’s Executive Director, has been very supportive, providing valuable advice and generously offering to direct some of MIRI’s volunteer support towards CSER tasks. We also expect to get valuable support from the growing community around FHI.

From where I’m sitting, CSER’s successful launch is looking very promising. The timeline on our research programmes, however, is still a little more uncertain. If we’re successful with the European Research Council, we can expect to be hiring a full research team next spring. If not, it may take a little longer, but we’re exploring a number of different opportunities in parallel and are feeling confident. The support of the existential risk community continues to be invaluable.

Thanks,

Seán Ó hÉigeartaigh
Academic Manager, Future of Humanity Institute 
Acting Academic Manager, Cambridge Centre for Study of Existential Risk.