I think the excerpt you give is pretty misleading: it gave me a very different understanding of the article (one I had trouble believing, given my previous knowledge of Tom and Eric) than I got when I actually read it. In particular, your quote ends mid-paragraph. The actual paragraph is:
However, we still have a great deal of work to do to address the concerns and risks afoot with our growing reliance on AI systems. Each of the three important risks outlined above (programming errors, cyberattacks, “Sorcerer’s Apprentice”) is being addressed by current research, but greater efforts are needed.
The next paragraph is:
We urge our colleagues in industry and academia to join us in identifying and studying these risks and in finding solutions to addressing them, and we call on government funding agencies and philanthropic initiatives to support this research. We urge the technology industry to devote even more attention to software quality and cybersecurity as we increasingly rely on AI in safety-critical functions. And we must not put AI algorithms in control of potentially-dangerous systems until we can provide a high degree of assurance that they will behave safely and properly.
Can you please fix this ASAP? (And also change your title to actually be an accurate synopsis of the article as well?) Otherwise you're just adding to the noise.
I disagree that it is as inaccurate as you claim. Specifically, they did actually say that "AI doomsday scenarios belong more in the realm of science fiction". I don't think it's inaccurate to quote what someone actually said.
When they talk about "having more work to do", etc., they seem to be emphasizing the risks of sub-human intelligence and de-emphasizing the risks of superintelligence.
Of course, LW being LW, I know that balance and fairness are valued very highly, so would you kindly suggest what you think the title should be, and I will change it.
I will also add in the paragraphs you suggest.
Yeah, echoing jsteinhardt, I think you misread the letter, and science journalists in general are not to be trusted when it comes to reporting on AI or AI dangers. Dietterich is the second listed signatory of the FLI open letter and Horvitz the third, and this letter seems to me to be saying "hey general public, don't freak out about the Terminator, the AI research field has this under control--we recognize that safety is super important and are working hard on it (and you should fund more of it)."
AI research field has this under control--we recognize that safety is super important and are working hard on it
Great, except:
a) they don't have it under control, and
b) no one in mainstream AI academia is working on the control problem for superintelligence.
So, can you find the phrase in the letter that corresponds to the MIRI open problem that Nate Soares presented on at the AAAI workshop on AI ethics, which Dietterich attended a few days later?
If not, maybe you should reduce your confidence in your interpretation. My suspicion is that MIRI is rapidly becoming mainstream, and that the FLI grant is attracting even more attention. Perhaps more importantly, I think we're in a position where it's more effective to treat AI safety issues as mainstream than as fringe.
I also think that we're interpreting "under control" differently. I'm not claiming the problem is solved, just that it's being worked on (in the way that academia works on these problems), and that getting Congress or the media and so on involved in a way not mediated by experts is likely to do more harm than good.
One question that keeps kicking around in my mind: if someone's true but unstated objection to the problem of AI risk is that superintelligence will never happen, how do you change their mind?
Note that superintelligence doesn't by itself pose much of a risk. The risk comes from extreme superintelligence, together with variants of the orthogonality thesis and an intelligence that is able to achieve its superintelligence rapidly. The first two of these seem much easier to convince people of than the third, which shouldn't be that surprising, because the third is really the most questionable. (At the same time, there seems to be a hard core of people who absolutely won't budge on orthogonality. I disagree with such people on intuitions and other issues so fundamental that I'm not sure I can model well what they are thinking.)
The orthogonality thesis, in the form "you can't get an ought from an is", is widely accepted in public discourse, or at least widely regarded as a popular position.
It is true that slow superintelligence is less risky, but that argument isn't explicitly made in this letter.
An article by AAAI president Tom Dietterich and Director of Microsoft Research Eric Horvitz, downplaying AI existential risks, has recently received some media attention (BBC, etc.). You can go read it yourself, but the key paragraph is this: