Hi,
Let me introduce myself: I'm Sean and I work as a project manager at FHI (finally got around to registering!). In posts here I won't be speaking on behalf of FHI unless I explicitly say so (although, like Stuart, I imagine I often will be). I'm not officially involved with CSER, but I'm in communication with them and hope to keep up to date with them over the coming months.
A few comments on your observations:
2) CSER have done a deliberate and well-orchestrated "media splash" campaign over the last week, but I believe they're finished with this now. They've got some big names involved and a good support structure in place in Cambridge, which helps.
3) My understanding is that CSER hasn't published anything yet because they don't exist yet in a practical sense - they've been founded but nobody's employed, and they're still gathering seed funding.
4) The Sunday Times article is a bit unfortunate, and the general feeling at FHI is that we're not too impressed by the journalist's work. Please note, though, that the more "controversial" statements are the journalist's own thoughts (this isn't clear everywhere if you skim the article, as I did at first). CSER has some good people behind it, and at the time of writing FHI plans to support it and collaborate with it where possible - we think it's a very positive development in the field of Xrisk. Even the term getting out there is a positive!
Welcome, and thanks for the comments.
Even the term getting out there is a positive!
Agreed.
If journalism demands that you stick to Hollywood references when communicating a concept, it wouldn't be so bad if journalists managed to understand and convey the distinction between:
I think it works as a hierarchy of increasingly complex models. Readers will stop at whichever rung they are comfortable with depending on their curiosity and background.
My real life conversations on X-risk tend to go: Terminator → Drones → Skynet → Specialized AI → General AI → Friendly AI.
News stories in post: 16
Number with a picture from the movie series Terminator: 8 / 16
Number referencing Terminator in text (some with text had no picture, and vice versa): 11 / 16
Popular but not as popular: HAL references.
News stories with no Terminator picture and no textual references to HAL or Arnold Schwarzenegger: 1 / 16, the New Scientist.
To be fair, the Guardian story only references Terminator in the header. The body text is written by Lord Martin Rees and is a short but clear description of X-risk without any sci-fi references. It also focuses more on other X-risks; perhaps a difference of opinion amongst the founders?
("Lord Martin Rees is a British cosmologist and astrophysicist. He has been Astronomer Royal since 1995 and Master of Trinity College, Cambridge since 2004. He was President of the Royal Society between 2005 and 2010". For anyone like me who didn't know.)
Interesting; there is now a member of a national legislature who is publicly concerned about existential risk. I wonder if he's planning to try to use his political power to reduce x-risk. My guess: probably not. He appears to be rather a lot more interested in science than in politics, and I'm not sure to what extent the average member of the House of Lords even has political power.
By the way, he wrote an excellent book on x-risks.
http://books.google.ru/books/about/The_End_of_the_World.html?id=CLvuO9_lDmwC&redir_esc=y
download: http://www.avturchin.narod.ru/Rees.doc
Tallinn and Price are very concerned with AI-related Xrisk. Martin Rees currently considers biological risks his No. 1 concern (which is not to say he's unconcerned by AI); he's famously offered bets on a major (~1 million deaths) bio-related catastrophe occurring in the coming years. http://online.wsj.com/article/SB124121965740478983.html
NPR's Morning Edition had about 30s on this topic today. They also included a voice clip from the Terminator: "Hasta la vista, baby."
I remember a post by Hanson (can't seem to find the exact URL at the moment) where he said that academic big names are "risk averse," but if a long-shot topic becomes hot/fashionable, the big names simply move in on the innovators' turf and take over the topic.
Yudkowsky seemed to me simplistic in his understanding of moral norms. “You would not kill a baby,” he said to me, implying that was one norm that could easily be programmed into a machine.
“Some people do,” I pointed out, but he didn’t see the full significance. SS officers killed babies routinely because of an adjustment in the society from which they sprang in the form of Nazism. Machines would be much more radically adjusted away from human social norms, however we programmed them.
Wow. This particular mistake seems unlikely and even difficult to make in good faith, as opposed to through outright dishonesty.
Update: I told Appleyard of his mistake, and he simply denied that his article had made a mistake on this matter.
Never mind, it seems they don't even try to be honest.
An article at CAM, the Cambridge alumni magazine. (H/T my wife, who gets it in hardcopy).
Nothing too new, but it is good to see the basic AI x-risk concepts laid out with a minimum of snarkiness in a publication aimed at a closed, elite audience. I think that more reasonable ideas about AI x-risk are gaining social status.
As of an hour ago, I had not yet heard of the Centre for the Study of Existential Risk.
Luke announced it to Less Wrong, as the University of Cambridge announced it to the world, back in April:
CSER is scheduled to launch next year.
Here is a small selection of CSER press coverage from the last two days:
http://www.bbc.co.uk/news/technology-20501091
http://www.guardian.co.uk/education/shortcuts/2012/nov/26/cambridge-university-terminator-studies
http://www.dailymail.co.uk/news/article-2238152/Cambridge-University-open-Terminator-centre-study-threat-humans-artificial-intelligence.html
http://www.theregister.co.uk/2012/11/26/new_centre_human_extinction_risks/
http://www.slashgear.com/new-ai-think-tank-hopes-to-get-real-on-existential-risk-26258246/
http://www.techradar.com/news/world-of-tech/super-brains-to-guard-against-robot-apocalypse-1115293
http://www.hindustantimes.com/world-news/Europe/Cambridge-to-study-risks-from-robots-at-Terminator-Centre/Article1-964746.aspx
http://economictimes.indiatimes.com/news/news-by-industry/et-cetera/cambridge-to-study-risks-from-robots-at-terminator-centre/articleshow/17372042.cms
http://www.extremetech.com/extreme/141372-judgment-day-update-disneys-grenade-catching-robot-and-the-burger-flipping-robot-that-could-replace-2-million-us-workers
http://slashdot.org/topic/bi/cambridge-university-vs-skynet/
http://www.businessinsider.com/researchers-robots-risk-human-civilization-2012-11
http://www.newscientist.com/article/dn22534-megarisks-that-could-drive-us-to-extinction.html
http://news.cnet.com/8301-11386_3-57553993-76/killer-robots-cambridge-brains-to-assess-ai-risk/
http://www.globalpost.com/dispatches/globalpost-blogs/weird-wide-web/cambridge-university-opens-so-called-termintor-centre-stu
http://www.washingtonpost.com/world/europe/cambridge-university-to-open-center-studying-the-risks-of-technology-to-humans/2012/11/25/e551f4d0-3733-11e2-9258-ac7c78d5c680_story.html
http://www.foxnews.com/tech/2012/11/26/terminator-center-to-open-at-cambridge-university/
Google News: All 119 news sources...
Here's an excerpt from one quite typical story appearing in tech-tabloid theregister.co.uk today:
Humanity’s last invention and our uncertain future
http://www.cam.ac.uk/research/news/humanitys-last-invention-and-our-uncertain-future/
Four quick observations: