As of an hour ago, I had not yet heard of the Centre for the Study of Existential Risk.
Luke announced it to Less Wrong when the University of Cambridge announced it to the world, back in April:
CSER at Cambridge University joins the others.
Good people involved so far, but the expected output depends hugely on who they pick to run the thing.
CSER is scheduled to launch next year.
Here is a small selection of CSER press coverage from the last two days:
http://www.bbc.co.uk/news/technology-20501091
http://www.guardian.co.uk/education/shortcuts/2012/nov/26/cambridge-university-terminator-studies
http://www.theregister.co.uk/2012/11/26/new_centre_human_extinction_risks/
http://www.slashgear.com/new-ai-think-tank-hopes-to-get-real-on-existential-risk-26258246/
http://www.techradar.com/news/world-of-tech/super-brains-to-guard-against-robot-apocalypse-1115293
http://slashdot.org/topic/bi/cambridge-university-vs-skynet/
http://www.businessinsider.com/researchers-robots-risk-human-civilization-2012-11
http://www.newscientist.com/article/dn22534-megarisks-that-could-drive-us-to-extinction.html
http://news.cnet.com/8301-11386_3-57553993-76/killer-robots-cambridge-brains-to-assess-ai-risk/
http://www.foxnews.com/tech/2012/11/26/terminator-center-to-open-at-cambridge-university/
Google News: All 119 news sources...
Here's an excerpt from one quite typical story appearing in tech-tabloid theregister.co.uk today:
Cambridge boffins fear 'Pandora's Unboxing' and RISE of the MACHINES
Boffins at Cambridge University want to set up a new centre to determine what humankind will do when ultra-intelligent machines like the Terminator or HAL pose "extinction-level" risks to our species.
A philosopher, a scientist and a software engineer are proposing the creation of a Centre for the Study of Existential Risk (CSER) to analyse the ultimate risks to the future of mankind - including bio- and nanotech, extreme climate change, nuclear war and artificial intelligence.
Apart from the frequent portrayal of evil - or just misguidedly deadly - AI in science fiction, actual real scientists have also theorised that super-intelligent machines could be a danger to the human race.
Jaan Tallinn, the former software engineer who was one of the founders of Skype, has campaigned for serious discussion of the ethical and safety aspects of artificial general intelligence (AGI).
Tallinn has said that he sometimes feels he is more likely to die from an AI accident than from cancer or heart disease, CSER co-founder and philosopher Huw Price said.
[...]
Humanity’s last invention and our uncertain future
In 1965, Irving John ‘Jack’ Good sat down and wrote a paper for New Scientist called Speculations concerning the first ultra-intelligent machine. Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built. [...]
Interesting; there is now a member of a national legislature who is publicly concerned about existential risk. I wonder if he's planning to use his political power to reduce x-risk. My guess: probably not. He appears to be considerably more interested in science than in politics, and I'm not sure to what extent the average member of the House of Lords even has political power.
By the way, he wrote an excellent book on x-risks.
http://books.google.ru/books/about/The_End_of_the_World.html?id=CLvuO9_lDmwC&redir_esc=y
download: http://www.avturchin.narod.ru/Rees.doc