UC Berkeley launches Center for Human-Compatible Artificial Intelligence
Source article: http://news.berkeley.edu/2016/08/29/center-for-human-compatible-artificial-intelligence/
UC Berkeley artificial intelligence (AI) expert Stuart Russell will lead a new Center for Human-Compatible Artificial Intelligence, launched this week.
Russell, a UC Berkeley professor of electrical engineering and computer sciences and the Smith-Zadeh Professor in Engineering, is co-author of Artificial Intelligence: A Modern Approach, which is considered the standard text in the field of artificial intelligence, and has been an advocate for incorporating human values into the design of AI.
The primary focus of the new center is to ensure that AI systems are beneficial to humans, he said.
The co-principal investigators for the new center include computer scientists Pieter Abbeel and Anca Dragan and cognitive scientist Tom Griffiths, all from UC Berkeley; computer scientists Bart Selman and Joseph Halpern, from Cornell University; and AI experts Michael Wellman and Satinder Singh Baveja, from the University of Michigan. Russell said the center expects to add collaborators with related expertise in economics, philosophy and other social sciences.
The center is being launched with a grant of $5.5 million from the Open Philanthropy Project, with additional grants for the center’s research from the Leverhulme Trust and the Future of Life Institute.
Russell is quick to dismiss the imaginary threat from the sentient, evil robots of science fiction. The issue, he said, is that machines as we currently design them in fields like AI, robotics, control theory and operations research take the objectives that we humans give them very literally. Told to clean the bath, a domestic robot might, like the Cat in the Hat, use mother’s white dress, not understanding that the value of a clean dress is greater than the value of a clean bath.
The center will work on ways to guarantee that the most sophisticated AI systems of the future, which may be entrusted with control of critical infrastructure and may provide essential services to billions of people, will act in a manner that is aligned with human values.
“AI systems must remain under human control, with suitable constraints on behavior, despite capabilities that may eventually exceed our own,” Russell said. “This means we need cast-iron formal proofs, not just good intentions.”
One approach Russell and others are exploring is called inverse reinforcement learning, through which a robot can learn about human values by observing human behavior. By watching people dragging themselves out of bed in the morning and going through the grinding, hissing and steaming motions of making a caffè latte, for example, the robot learns something about the value of coffee to humans at that time of day.
“Rather than have robot designers specify the values, which would probably be a disaster,” said Russell, “instead the robots will observe and learn from people. Not just by watching, but also by reading. Almost everything ever written down is about people doing things, and other people having opinions about it. All of that is useful evidence.”
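The idea behind inverse reinforcement learning can be illustrated with a toy sketch (a hypothetical illustration, not code from the center): given observed morning choices between coffee and tea, and assuming the person chooses approximately rationally (a softmax/Boltzmann choice model), we infer which candidate reward value for coffee best explains the behavior.

```python
import math

# Toy inverse reinforcement learning sketch (hypothetical example):
# infer how much a person values coffee by watching their morning
# choices, assuming Boltzmann-rational (softmax) behavior.

def choice_prob(reward_coffee, reward_tea):
    """Softmax probability of choosing coffee given candidate rewards."""
    e_c = math.exp(reward_coffee)
    e_t = math.exp(reward_tea)
    return e_c / (e_c + e_t)

def infer_coffee_value(observations, candidates, reward_tea=0.0):
    """Return the candidate coffee reward that maximizes the likelihood
    of the observed choices (1 = chose coffee, 0 = chose tea)."""
    def log_likelihood(r):
        p = choice_prob(r, reward_tea)
        return sum(math.log(p) if obs else math.log(1 - p)
                   for obs in observations)
    return max(candidates, key=log_likelihood)

# Observed: the person chose coffee on 9 mornings out of 10.
observations = [1] * 9 + [0]
candidates = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
best = infer_coffee_value(observations, candidates)
print(best)  # prints 2.0, the candidate whose implied p(coffee) is closest to 0.9
```

Real inverse reinforcement learning operates over full state-action trajectories rather than binary choices, but the inference pattern is the same: treat observed behavior as evidence about an unobserved reward function.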
Russell and his colleagues don’t expect this to be an easy task.
“People are highly varied in their values and far from perfect in putting them into practice,” he acknowledged. “These aspects cause problems for a robot trying to learn what it is that we want and to navigate the often conflicting desires of different individuals.”
Russell, who recently wrote an optimistic article titled “Will They Make Us Better People?,” summed it up this way: “In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think.”
[Link] NYU conference: Ethics of Artificial Intelligence (October 14-15)
FYI: https://wp.nyu.edu/consciousness/ethics-of-artificial-intelligence/
This conference will explore questions about the ethics of artificial intelligence, including:
What ethical principles should AI researchers follow?
Are there restrictions on the ethical use of AI?
What is the best way to design morally beneficial AI?
Is it possible or desirable to build moral principles into AI systems?
When AI systems cause benefits or harm, who is morally responsible?
Are AI systems themselves potential objects of moral concern?
What moral framework is best used to assess questions about the ethics of AI?
Speakers and panelists will include:
Nick Bostrom (Future of Humanity Institute), Meia Chita-Tegmark (Future of Life Institute), Mara Garza (UC Riverside, Philosophy), Sam Harris (Project Reason), Demis Hassabis (DeepMind/Google), Yann LeCun (Facebook, NYU Data Science), Peter Railton (University of Michigan, Philosophy), Francesca Rossi (University of Padova, Computer Science), Stuart Russell (UC Berkeley, Computer Science), Susan Schneider (University of Connecticut, Philosophy), Eric Schwitzgebel (UC Riverside, Philosophy), Max Tegmark (Future of Life Institute), Wendell Wallach (Yale, Bioethics), Eliezer Yudkowsky (Machine Intelligence Research Institute), and others.
Organizers: Ned Block (NYU, Philosophy), David Chalmers (NYU, Philosophy), S. Matthew Liao (NYU, Bioethics)
A full schedule will be circulated closer to the conference date.
Registration is free but required. Please note that admission is limited and first-come, first-served; it is not guaranteed by registration.