This post lists texts that I consider (potentially) useful training for making progress on Friendly AI, decision theory, and metaethics.
Rationality and Friendly AI
Eliezer Yudkowsky's sequences and this blog provide a solid introduction to the problem statement of Friendly AI: they give concepts useful for understanding the motivation for the problem, and disarm the endless failure modes that people fall into when first trying to consider it.
For a shorter introduction, see
- E. S. Yudkowsky (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk". In Global Catastrophic Risks. Oxford University Press.
Decision theory
The following book introduces an approach to decision theory that seems to be closer to what's needed for FAI than the traditional treatments in philosophy or game theory:
- G. L. Drescher (2006). Good and Real: Demystifying Paradoxes from Physics to Ethics (Bradford Books). The MIT Press, first edn.
Another (more technical) treatment of decision theory from the same cluster of ideas:
- E. Yudkowsky. Timeless Decision Theory (draft, Sep 2010)
The following posts on Less Wrong present ideas relevant to this development of decision theory (a toy illustration follows the list):
- A Priori
- Newcomb's Problem and Regret of Rationality
- The True Prisoner's Dilemma
- Counterfactual Mugging
- Timeless Decision Theory: Problems I Can't Solve
- Towards a New Decision Theory
- Ingredients of Timeless Decision Theory
- Decision theory: Why Pearl helps reduce "could" and "would", but still leaves us with at least three alternatives
- The Absent-Minded Driver
- AI cooperation in practice
- What a reduction of "could" could look like
- Controlling Constant Programs
- Notion of Preference in Ambient Control
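Several of the later posts on this list (Controlling Constant Programs, Notion of Preference in Ambient Control) turn on one idea: a deterministic agent "controls" every instance of its own program, including the copy a predictor runs. Here is a minimal toy sketch of that idea applied to Newcomb's problem; the code is purely illustrative and is not taken from any of the posts:

```python
# Toy Newcomb's problem in the spirit of "controlling constant programs":
# the agent is a deterministic program, and the predictor's forecast is the
# output of that same program, so choosing an output settles both.

def payoff(action: str, prediction: str) -> int:
    """Standard payouts: the opaque box holds $1,000,000 iff the predictor
    foresaw one-boxing; the transparent box always holds $1,000."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000 if action == "two-box" else 0
    return opaque + transparent

def agent() -> str:
    # Evaluate each output this program could have.  Since the predictor
    # simulates this very function, prediction == action on every branch
    # the agent can actually bring about.
    return max(("one-box", "two-box"),
               key=lambda a: payoff(action=a, prediction=a))

# For any *fixed* prediction p, two-boxing dominates:
#     payoff("two-box", p) == payoff("one-box", p) + 1_000
# which is the causal argument for two-boxing.  But the prediction is not
# fixed independently of the choice, and the agent above walks away rich:
print(agent())                    # -> one-box
print(payoff(agent(), agent()))   # -> 1000000
```

The hard technical work, which the posts above begin, is making "prediction equals action" precise when the agent reasons about itself with logic rather than by fiat.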
Mathematics
The most relevant tool for thinking about FAI seems to be mathematics, which teaches one to work with precise ideas (mathematical logic in particular). For someone starting from a rusty technical background, the following reading list is one way to begin:
[Edit Nov 2011: I no longer endorse scope/emphasis, gaps between entries, and some specific entries on this list.]
- F. W. Lawvere & S. H. Schanuel (1991). Conceptual Mathematics: A First Introduction to Categories. Buffalo Workshop Press, Buffalo, NY, USA.
- B. Mendelson (1962). Introduction to Topology. College Mathematics. Allyn & Bacon Inc., Boston.
- P. R. Halmos (1960). Naive Set Theory. Springer, first edn.
- H. B. Enderton (2001). A Mathematical Introduction to Logic. Academic Press, second edn.
- S. Mac Lane & G. Birkhoff (1999). Algebra. American Mathematical Society, third edn.
- F. W. Lawvere & R. Rosebrugh (2003). Sets for Mathematics. Cambridge University Press.
- J. R. Munkres (2000). Topology. Prentice Hall, second edn.
- S. Awodey (2006). Category Theory. Oxford Logic Guides. Oxford University Press, USA.
- K. Kunen (1999). Set Theory: An Introduction To Independence Proofs, vol. 102 of Studies in Logic and the Foundations of Mathematics. Elsevier Science, Amsterdam.
- P. G. Hinman (2005). Fundamentals of Mathematical Logic. A K Peters Ltd.
OK, I certainly agree that defining the goal is important, though I think there is a definite need for balance between investigating the problem and attempting its solution, since each feeds into the other, much as academia currently functions. For example, any AI will need a model of human and social behaviour in order to make predictions. Solving how an AI might learn this would be a huge step toward solving FAI, and a huge step in understanding the problem of being friendly. That is, whatever the solution is, it will involve some configuration of society that maintains and maximises some set of measurable properties.
If the system can predict how a person will feel in a given state, it can solve for which utopia we will be most enthusiastic about. Eliezer's posts seem to explore this problem manually, without really taking a stab at a solution or proposing a route to one. That can be very entertaining, but I'm not sure it's progress.
Unfortunately, if you think about it, "predicting how a person feels" isn't actually helpful here, and doesn't contribute to the project of FAI at all (see Are wireheads happy? and The Hidden Complexity of Wishes, for example).
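To make that concrete, here is a toy sketch (the world-states and scores are invented for illustration): an optimizer that ranks world-states by any predicted-feelings score selects whatever state games the score, not a utopia.

```python
# Made-up numbers: a toy "solve for the best-feeling utopia" optimizer.
# predicted_happiness stands in for any learned model of how a person
# will report feeling in a given world-state.

predicted_happiness = {
    "flourishing open society":          0.92,
    "quiet contemplative life":          0.88,
    "electrodes in the pleasure center": 1.00,  # the proxy is easiest to max here
}

best = max(predicted_happiness, key=predicted_happiness.get)
print(best)  # -> electrodes in the pleasure center
```

The proxy stops tracking what we value precisely when something powerful optimizes it, which is the point of the linked posts.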
The same happens with the other obvious ideas you think up in the first five minutes of considering the problem, the ones that appear to show that "research into the nuts and bolts of AGI" is relevant for FAI. On further reflection, it always turns out that these arguments don't hold water.