An article by AAAI president Tom Dietterich and Director of Microsoft Research Eric Horvitz, downplaying AI existential risks, has recently received some media attention (BBC, etc.). You can go read it yourself, but the key paragraph is this:
A third set of risks echo the tale of the Sorcerer’s Apprentice. Suppose we tell a self-driving car to “get us to the airport as quickly as possible!” Would the autonomous driving system put the pedal to the metal and drive at 300 mph while running over pedestrians? Troubling scenarios of this form have appeared recently in the press. Other fears center on the prospect of out-of-control superintelligences that threaten the survival of humanity. All of these examples refer to cases where humans have failed to correctly instruct the AI algorithm in how it should behave.

This is not a new problem. An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands in a literal manner. An AI system should not only act on a set of rules that it is instructed to obey — it must also analyze and understand whether the behavior that a human is requesting is likely to be judged as “normal” or “reasonable” by most people. It should also be continuously monitoring itself to detect abnormal internal behaviors, which might signal bugs, cyberattacks, or failures in its understanding of its actions. In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability — and responsibility — of working with people to obtain feedback and guidance. They must know when to stop and “ask for directions” — and always be open for feedback.

Some of the most exciting opportunities ahead for AI bring together the complementary talents of people and computing systems. AI-enabled devices are ... (examples follow) ...

In reality, creating real-time control systems where control needs to shift rapidly and fluidly between people and AI algorithms is difficult. Some airline accidents occurred when pilots took over from the autopilots. The problem is that unless the human operator has been paying very close attention, he or she will lack a detailed understanding of the current situation.

AI doomsday scenarios belong more in the realm of science fiction than science fact.
However, we still have a great deal of work to do to address the concerns and risks afoot with our growing reliance on AI systems. Each of the three important risks outlined above (programming errors, cyberattacks, “Sorcerer’s Apprentice”) is being addressed by current research, but greater efforts are needed.
We urge our colleagues in industry and academia to join us in identifying and studying these risks and in finding solutions to addressing them, and we call on government funding agencies and philanthropic initiatives to support this research. We urge the technology industry to devote even more attention to software quality and cybersecurity as we increasingly rely on AI in safety-critical functions. And we must not put AI algorithms in control of potentially-dangerous systems until we can provide a high degree of assurance that they will behave safely and properly.
I think the excerpt you give is pretty misleading, and gave me a much different understanding of the article (which I had trouble believing based on my previous knowledge of Tom and Eric) than I got when I actually read it. In particular, your quote ends mid-paragraph. The actual paragraph is:

AI doomsday scenarios belong more in the realm of science fiction than science fact. However, we still have a great deal of work to do to address the concerns and risks afoot with our growing reliance on AI systems. Each of the three important risks outlined above (programming errors, cyberattacks, “Sorcerer’s Apprentice”) is being addressed by current research, but greater efforts are needed.
The next paragraph is:

We urge our colleagues in industry and academia to join us in identifying and studying these risks and in finding solutions to addressing them, and we call on government funding agencies and philanthropic initiatives to support this research. We urge the technology industry to devote even more attention to software quality and cybersecurity as we increasingly rely on AI in safety-critical functions. And we must not put AI algorithms in control of potentially-dangerous systems until we can provide a high degree of assurance that they will behave safely and properly.
Can you please fix this ASAP? (And also change your title to actually be an accurate synopsis of the article as well?) Otherwise you're just adding to the noise.
I disagree that it is as inaccurate as you claim. Specifically, they did actually say that "AI doomsday scenarios belong more in the realm of science fiction". I don't think it's inaccurate to quote what someone actually said.
When they talk about “having more work to do”, etc., it seems that they are emphasizing the risks of sub-human intelligence and de-emphasizing the risks of superintelligence.
Of course, LW being LW, I know that balance and fairness are valued very highly, so would you kindly suggest what you think the title should be and I will change it.
I will also add in the paragraphs you suggest.