Here is a simple moral rule that should make an AI much less likely to harm the interests of humanity:
Never take any action that would reduce the number of bits required to describe the universe by more than X.
where X is some number smaller than the number of bits needed to describe an infant human's brain. For information reductions smaller than X, the AI should get some disutility, but other considerations could override. This 'information-based morality' assigns moral weight to anything that makes the universe a more information-filled or complex place, and it does so without any need to program complex human morality into the thing. It is just information theory, which is pretty fundamental. Actions should be evaluated by how they alter the expected net present value of the information in the universe, not just by their immediate consequences.
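The rule above amounts to a side constraint on the AI's utility function: a hard veto on large information losses, plus a smaller overridable disutility for losses below X. A minimal sketch of that structure, assuming a hypothetical `description_length` estimate is available (true Kolmogorov complexity is uncomputable, so any real estimator would be an approximation), might look like:

```python
# Illustrative sketch only. The function, its parameters, and the
# numbers below are hypothetical; they are not from any real AI system.

def information_penalty(bits_before, bits_after, x_threshold, weight=1.0):
    """Return (allowed, disutility) for an action's expected effect
    on the description length of the universe, in bits."""
    loss = bits_before - bits_after  # positive = information destroyed
    if loss > x_threshold:
        # Hard veto: never take an action that destroys more than X bits.
        return False, float("inf")
    # Smaller losses carry some disutility that other goals can outweigh.
    return True, weight * max(loss, 0.0)

# An action destroying 1 bit (below X) is permitted but carries a cost.
allowed, cost = information_penalty(1000.0, 999.0, x_threshold=100.0)

# An action destroying 200 bits (above X) is vetoed outright.
vetoed, _ = information_penalty(1000.0, 800.0, x_threshold=100.0)
```

The asymmetry the post later notes falls out naturally here: losses are penalized, but gains earn no reward, so the constraint cannot drive the AI to maximize information for its own sake.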
This rule, by itself, prevents the AI from doing many of the things we fear. It will not kill people; a human's brain is the most complex known structure in the universe and killing a person reduces it to a pile of fat and protein. It will not hook people up to experience machines; doing so would dramatically reduce the uniqueness of each individual and make the universe a much simpler place.
Human society is extraordinarily complex. The information needed to describe a collection of interacting humans is much greater than the information needed to describe isolated humans. Breaking up a society of humans destroys information, just like breaking up a human brain into individual neurons. Thus an AI guided by this rule would not do anything to threaten human civilization.
This rule also prevents the AI from making species extinct or destroying ecosystems and other complex natural systems. It ensures that the future will continue to be inhabited by a society of unique humans interacting in a system where nature has been somewhat preserved. As a first approximation, that is all we really care about.
Clearly this rule is not complete, nor is it symmetric. The AI should not be solely devoted to increasing information. If I break a window in your house, it takes more information to describe your house. More seriously, a human body infected with diseases and parasites requires more information to describe than a healthy body. The AI should not prevent humans from reducing the information content of the universe if we choose to do so, and it should assign some weight to human happiness.
The worst-case scenario is that this rule generates an AI that is an extreme pacifist and conservationist, one that refuses to end disease or alter the natural world to fit our needs. I can live with that. I'd rather have to deal with my own illnesses than be turned into paperclips.
One final note: I generally agree with Robin Hanson that rule-following is more important than values. If we program an AI with an absolute respect for property rights, such that it refuses to use or alter anything that it has not been given ownership of, we should be safe no matter what its values or desires are. But I'd like information-based morality in there as well.
Ward Farnsworth (2007) The Legal Analyst
My main goal in teaching my introductory Economics class is to give students a good set of mental tools for understanding the world. This semester, I had a student who already had a surprisingly good understanding of game theory and questions of knowledge and proof. As we talked after class, he mentioned that he had learned these things from a book assigned for an introductory law class. After I asked about the book, he lent it to me.
From the minute I started reading 'The Legal Analyst', I saw that it was consistently excellent. About two-thirds of it was a readable, intuitive, high-quality summary of things I already knew, and the other third was new information that I am very glad to have. After finishing the book, my professional opinion is that it is extraordinarily good. Anyone who studies it will be a much better thinker and citizen.
'The Legal Analyst' is not just a law textbook. The subtitle is 'A Toolkit for Thinking About the Law'. These should be reversed. The title of the book should be 'A Toolkit for Thinking' and the subtitle should be 'Using Examples from the Legal System'. The book is an excellent overview of a lot of very important things, such as incentives, thinking at the margin, game theory, the social value of rules and standards, heuristics and biases in human thinking, and the tools of rational thinking. It has the best intuitive explanation of Bayes' Theorem I have ever seen, making this incredibly important mental tool available for everyone's use.
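For readers who have not met it, the theorem itself fits in a few lines. The classic medical-test setup below uses illustrative numbers of my own choosing, not an example from the book:

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E).
# Setup (illustrative numbers): a disease affects 1% of people; a test
# detects 90% of cases but also flags 5% of healthy people.

def posterior(prior, likelihood, false_positive_rate):
    """P(hypothesis | positive evidence) via Bayes' Theorem."""
    # Total probability of a positive result, from both true and false positives.
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

p = posterior(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
# Despite the positive test, the chance of disease is only about 15%,
# because false positives from the large healthy population dominate.
```

This is exactly the kind of counterintuitive result the book trains readers to see at a glance.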
I am very glad that law students are reading 'The Legal Analyst'. They will be much better thinkers as a result. The existence of this book makes me more optimistic about the future of our government and legal system. If the principles outlined here become widely understood, the world will be a better place. This book should be required reading in any course that can get away with assigning it. Anyone who is responsible for writing any kind of regulation or policy, or does economic analysis, needs the information in this book.
'The Legal Analyst' is a very easy book to read, making it even better from a cost-benefit analysis standpoint. I read it a few chapters at a time, in my spare time, without any mental effort required. A great deal of high-quality research has been carefully and expertly summarized in clear, vibrant language.
Anyone who has an interest in understanding how the world works, or becoming a more rational thinker, should read 'The Legal Analyst'.