The Future of Life Institute has published its document "Research Priorities for Robust and Beneficial Artificial Intelligence" and written an open letter for people to sign indicating their support.

Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls. This document gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.


A number of prestigious people from AI and other fields have signed the open letter, including Stuart Russell, Peter Norvig, Eric Horvitz, Ilya Sutskever, several DeepMind folks, Murray Shanahan, Erik Brynjolfsson, Margaret Boden, Martin Rees, Nick Bostrom, Elon Musk, Stephen Hawking, and others. I'd edit that into the OP.

Also, it's worth noting that the research priorities document cites a number of MIRI's papers.

There is also a discussion of this on Hacker News here.


Is the idea to get as many people as possible to sign this? Or do we want to avoid the image of a giant LW puppy jumping up and down while barking loudly, when the matter finally starts getting attention from serious people?

After the first few pages of signatories, I recognize very few of the names. My guess is that LW signers will just get drowned in the much larger population of people who support the basic content of the research priorities document, so there's not much downside to lots of LWers signing the open letter.

I'm impressed they managed to get the Big Three of the Deep Learning movement (Geoffrey Hinton, Yann LeCun, and Yoshua Bengio). I remember that at the 27th Canadian Conference on Artificial Intelligence in 2014, I asked Professor Bengio what he thought of the ethics of machine learning, and he asked if I was a reporter. XD

This has appeared on the popular science page "I Fucking Love Science", which is followed by almost 20 million people on Facebook. I think this is extremely good news. Despite the picture and its caption, the article seems to take the matter seriously.

So if an AI were created that had consciousness and sentience, like in the new Chappie movie, would they advocate killing it?

If the AI were superintelligent and would otherwise kill everyone else on Earth, yes; otherwise, no. The hard question is what to do when the uncertainties are high and difficult to quantify.

I didn't see Ray Kurzweil's name on there. I guess he wants AI ASAP and figures it's worth the risk.