
Update on establishment of Cambridge’s Centre for Study of Existential Risk

40 Sean_o_h 12 August 2013 04:11PM
Cambridge’s high-profile launch of the Centre for Study of Existential Risk last November received a lot of attention on LessWrong, and a number of people have been enquiring as to what’s happened since. This post gives a brief explanation and update of what’s been going on.

Motivated by a shared concern over risks to humanity arising from human activity, Lord Martin Rees, Professor Huw Price, and Jaan Tallinn founded the Centre for Study of Existential Risk last year. However, the announcement was made before a physical research centre had been established or long-term funding secured. The last nine months have been focused on turning an important idea into a reality.

Following the announcement in November, Professor Price contacted us at the Future of Humanity Institute regarding the possibility of collaborating on joint academic funding opportunities; the aim being both to raise the funds for CSER’s research programmes and to support joint work by FHI and CSER researchers on anthropogenic existential risk. We submitted our first grant application in January to the European Research Council – an ambitious project to create “A New Science of Existential Risk” that, if successful, would provide enough funding for CSER’s first research programme – a sizeable programme that will run for five years.
We’ve been successful in the first and second rounds, and we will hear the final round decision at the end of the year. The application was also an opportunity to bring additional leading academics onto the project – Sir Partha Dasgupta, Professor of Economics at Cambridge and an expert in social choice theory, sustainability and intergenerational ethics, is a co-PI (along with Huw Price, Martin Rees and Nick Bostrom). In addition, a number of prominent academics concerned about technology-related risk – including Stephen Hawking, David Spiegelhalter, George Church and David Chalmers – have joined our advisory board.

The FHI regards the establishment of CSER as a top priority for a number of reasons, including:

1) The value of the research the Centre will engage in
2) The reputational boost to the field of existential risk from the establishment of a high-profile research centre at Cambridge
3) The impact on policy and public perception that academic heavy-hitters like Rees and Price can have

Therefore we’ve been working with CSER behind the scenes over the last nine months. Progress has been a little slow until now – Huw, Martin and Jaan are fully committed to the project, but their other responsibilities mean they aren’t yet in a position to work on it full-time.

However, we’re now in a position to make CSER’s establishment official. Cambridge’s new Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) will host CSER and provide logistical support. I’ll be acting manager of CSER’s activities over the coming 6-12 months, under the guidance of Huw, Martin and Jaan. A generous seed funding donation from Jaan Tallinn is funding CSER’s establishment and these activities – which will include a lecture series, workshops, public outreach, and staff time on grant-writing and fundraising. It’ll also provide a buyout of a fraction of my time from FHI (providing funds for us to hire part-time staff to offload some of the FHI workload and help with some of the CSER work).

Over the next couple of months we’ll be focused on identifying and pursuing additional academic funding opportunities for further programmes, as well as chasing some promising leads in industry, private and philanthropic funding. I’ll also be aiming to keep CSER’s public profile active. There will be newsletters every three months (sign up here), the website will be fleshed out to contain more detail about our planned research and the existing literature, and we’ll be arranging regular high-quality media engagement. While we’re unlikely to have time to answer every general query that comes in (though we’ll try whenever possible: email: admin@cser.org), we’ll aim to keep the existential risk community informed through the newsletters and posts such as this one.

We’ve been lucky to get a lot of support for CSER from the academic and existential risk communities. In addition to CRASSH, Cambridge’s Centre for Science and Policy will provide support in making policy-relevant links, and may co-host and co-publicise events. Luke Muehlhauser, MIRI’s Executive Director, has been very supportive, has provided valuable advice, and has generously offered to direct some of MIRI’s volunteer support towards CSER tasks. We also expect to get valuable support from the growing community around FHI.

From where I’m sitting, CSER’s successful launch is looking very promising. The timeline on our research programmes, however, is still a little more uncertain. If we’re successful with the European Research Council, we can expect to be hiring a full research team next spring. If not, it may take a little longer, but we’re exploring a number of different opportunities in parallel and are feeling confident. The support of the existential risk community continues to be invaluable.

Thanks,

Seán Ó hÉigeartaigh
Academic Manager, Future of Humanity Institute 
Acting Academic Manager, Cambridge Centre for Study of Existential Risk.


Centre for the Study of Existential Risk (CSER) at Cambridge makes headlines.

19 betterthanwell 26 November 2012 08:56PM

As of an hour ago, I had not yet heard of the Centre for the Study of Existential Risk.

Luke announced it to Less Wrong, as The University of Cambridge announced it to the world, back in April:

CSER at Cambridge University joins the others.

Good people involved so far, but the expected output depends hugely on who they pick to run the thing.

CSER is scheduled to launch next year.

Here is a small selection of CSER press coverage from the last two days:

http://www.bbc.co.uk/news/technology-20501091

http://www.guardian.co.uk/education/shortcuts/2012/nov/26/cambridge-university-terminator-studies

http://www.dailymail.co.uk/news/article-2238152/Cambridge-University-open-Terminator-centre-study-threat-humans-artificial-intelligence.html

http://www.theregister.co.uk/2012/11/26/new_centre_human_extinction_risks/

http://www.slashgear.com/new-ai-think-tank-hopes-to-get-real-on-existential-risk-26258246/

http://www.techradar.com/news/world-of-tech/super-brains-to-guard-against-robot-apocalypse-1115293

http://www.hindustantimes.com/world-news/Europe/Cambridge-to-study-risks-from-robots-at-Terminator-Centre/Article1-964746.aspx

http://economictimes.indiatimes.com/news/news-by-industry/et-cetera/cambridge-to-study-risks-from-robots-at-terminator-centre/articleshow/17372042.cms

http://www.extremetech.com/extreme/141372-judgment-day-update-disneys-grenade-catching-robot-and-the-burger-flipping-robot-that-could-replace-2-million-us-workers

http://slashdot.org/topic/bi/cambridge-university-vs-skynet/

http://www.businessinsider.com/researchers-robots-risk-human-civilization-2012-11

http://www.newscientist.com/article/dn22534-megarisks-that-could-drive-us-to-extinction.html

http://news.cnet.com/8301-11386_3-57553993-76/killer-robots-cambridge-brains-to-assess-ai-risk/

http://www.globalpost.com/dispatches/globalpost-blogs/weird-wide-web/cambridge-university-opens-so-called-termintor-centre-stu

http://www.washingtonpost.com/world/europe/cambridge-university-to-open-center-studying-the-risks-of-technology-to-humans/2012/11/25/e551f4d0-3733-11e2-9258-ac7c78d5c680_story.html

http://www.foxnews.com/tech/2012/11/26/terminator-center-to-open-at-cambridge-university/

Google News: All 119 news sources...


Here's an excerpt from one quite typical story appearing in tech-tabloid theregister.co.uk today:

Cambridge boffins fear 'Pandora's Unboxing' and RISE of the MACHINES

Boffins at Cambridge University want to set up a new centre to determine what humankind will do when ultra-intelligent machines like the Terminator or HAL pose "extinction-level" risks to our species.

A philosopher, a scientist and a software engineer are proposing the creation of a Centre for the Study of Existential Risk (CSER) to analyse the ultimate risks to the future of mankind - including bio- and nanotech, extreme climate change, nuclear war and artificial intelligence.

Apart from the frequent portrayal of evil - or just misguidedly deadly - AI in science fiction, actual real scientists have also theorised that super-intelligent machines could be a danger to the human race.

Jaan Tallinn, the former software engineer who was one of the founders of Skype, has campaigned for serious discussion of the ethical and safety aspects of artificial general intelligence (AGI).

Tallinn has said that he sometimes feels he is more likely to die from an AI accident than from cancer or heart disease, CSER co-founder and philosopher Huw Price said.
[...]




The source for these stories appears to be a press release from the University of Cambridge:

Humanity’s last invention and our uncertain future

In 1965, Irving John ‘Jack’ Good sat down and wrote a paper for New Scientist called Speculations concerning the first ultra-intelligent machine. Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built. [...] 



Four quick observations:

1: That's a lot of Terminator II photos.
2: FHI at Oxford and the Singularity Institute do not often get this kind of attention.
3: CSER doesn't appear to have published anything yet.
4: The number of people who have heard the term "existential risk" must have doubled a few times today.

Werewolf, Cambridge UK Less Wrong Meetup April 1st 2012

4 Clarity1992 02 April 2012 11:46AM

There is already a post related to this meetup, but it concerns a discussion which took place after I had left, so I will write about the games of Werewolf. Please post your thoughts too and correct any inaccuracies.

Thoughts:

  • Most people said that this was very good fun, and I suspect those that didn't say so still really enjoyed it.

  • Each game lasted about 20 minutes.

  • I was late and observed the first game. I remember Ai was given a werewolf card but didn't realise it, so the game was played with her as a villager.

  • When Douglas suggested people give reasons for lynching Thomas one that stood out was "he talks too much". This seems to go with Douglas' later observation that the game is all about information, whether that is obtained by careful choice of sheriff/lynching to maximise what is learned next round or by picking up on what people have said, how they have said it, and how much they have said. Personally I played it very much on instinct and watching for tells, letting others do the logical reasoning (!).

  • Jon left after game one. There was some discussion about whether he was coming back. "His body language seemed dismissive like 'nah, I'm not into this'", "Really? I didn't get that impression!", "I disagree with your analysis. Past evidence of Jon leaving suggests he will return", "I think he would have said goodbye if he wasn't coming back. Since he didn't I assume he is returning". I found it interesting how we applied rationality principles to this.

  • Generally the sheriff/lynching discussions would begin with sincere considerations of outcome trees then as soon as anyone said "but that's what you'd say if you were a werewolf!" or "she seemed a little quick to agree with that!" or "he's swallowing a lot while talking!" it switched to accusations and double bluffs.

  • There were quite a few pieces of reasoning relating to proximity to people. e.g. "I'm sure I heard movement next to me 'last night'". My immediate instinct was that this is outside of the rules and unsporting, but obviously that isn't the case with this game!

  • Something I found especially inspired was Alexey (as a werewolf) in game two claiming to be the seer after Thomas (the actual seer) had already told everyone that he himself was. Alexey argued that he had withheld the information to see who would try to pretend to be the seer and then he would know who one of the werewolves was. Most people weren't convinced but it was very entertaining.

  • We decided, on Alexey's suggestion, that a coin toss is acceptable to decide a tied vote. Jonathan remarked that British coins land on heads 53 times out of 100. Does anyone have a link for that?

  • Douglas did a great job giving the game some life with the storytelling style of delivery. I don't know what the proper term for this is, or whether you're traditionally supposed to play werewolves that way (I suspect you are), but it was cool. As was Thomas' replication of it when he was GM.

  • Ramana spent the most time dead and made the point that it's very different watching from the outside compared to playing. He said you can perceive much better what people are trying to do and who is gullible.

  • Douglas explained that for the villagers it is always best to lynch someone, because otherwise the next day you'll be in exactly the same position with one fewer villager's vote against the same number of werewolves' votes. This seems definitely true, but oddly counter-intuitive given that the more villagers there are, the more likely you are to lynch a villager by mistake.

  • Between games three and four there was a false start because someone had forgotten they had a werewolf card and then suddenly and noisily realised they were supposed to have their eyes open. Oops!

  • I hadn't played before but was familiar with the concept and had been meaning to try it with friends for a long time. If you're in a similar position, then bump it up your priority list. It's awesome!
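Douglas's point about always lynching can be checked with a quick simulation. The sketch below is purely illustrative (the function, parameters and strategy are my own, not anything from the meetup): it pits a village that never lynches against one that lynches a uniformly random living player each day, with the werewolves eating one villager per night.

```python
import random

def play(n_villagers, n_wolves, lynch, rng):
    """One game with a naive village: wolves eat a villager each night;
    by day the village either lynches a uniformly random living player
    or refuses to lynch at all."""
    v, w = n_villagers, n_wolves
    while True:
        v -= 1                              # night: the wolves eat a villager
        if w >= v:
            return "wolves"                 # wolves reach voting parity and win
        if lynch:
            if rng.random() < w / (v + w):  # random target happens to be a wolf
                w -= 1
            else:
                v -= 1
        if w == 0:
            return "village"
        if w >= v:
            return "wolves"

rng = random.Random(0)
for lynch in (False, True):
    games = [play(8, 2, lynch, rng) for _ in range(10_000)]
    rate = games.count("village") / len(games)
    print(f"lynching={lynch}: village wins {rate:.0%} of games")
```

With no lynching the werewolves can never die, so the village can never win; even blind random lynching gives the village a real chance, which is Douglas's argument in miniature.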

--------------------------------------------------------------------------------------------------------------------------------------

 

continue reading »