I dislike this. Here is why:
There is rarely a stable equilibrium in evolutionary games. When we look at the actual history of evolution, it is one of arms races -- every time a new form of signaling is invented, another organism figures out how to fake it. Any Parfitian filter can be passed by an organism that merely fools Omega. And such an organism will do better than one that actually pays Omega.
…are limited to using a decision theory that survived past social/biological Parfitian filters.
What really frustrates me about your article is that you never specify a decision theory, list of decision theories, or category of decision theories that would be likely to survive Parfitian filters.
I agree with User:Perplexed that one obvious candidate for such a decision theory is the one we seem to actually have: a decision theory that incorporates values like honor, reciprocity, and filial care into its basic utility function. Yet you repeatedly insist that this is not what is actually happening...why? I do not understand.
Parenthood doesn't look like a Parfait's Hitchhiker* to me - are you mentioning it for some other reason?
* Err, Parfit's Hitchhiker. Thanks, Alicorn!
This post has its flaws, as has been pointed out, but to add the required nuance would make a book (or at least a LW sequence) out of what is currently a good and provocative post.
The nuances I think are vital:
Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.
Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.
Hmm, so basically evolution started out making creatures with a simple selfish utility function and a straightforward causal decision theory. One improvement would have been to make the decision theory "better" (more like Timeless Decision Theory...
Something just occurred to me - the conclusions you reach in this post and in the version of the post on your blog seem to contradict each other. If moral intuitions really are "the set of intuitions that were selected for because they saw optimality in the absence of a causal link", and if, as you claim on your blog, Parfit's Hitchhiker is a useful model for intellectual property, then why is it that an entire generation of kids... TWO generations really, have grown up now with nearly unanimous moral intuitions telling them there's nothing wrong with "stealing" IP?
it means it is far too strict to require that our decisions all cause a future benefit; we need to count acausal “consequences” (SAMELs) on par with causal ones (CaMELs)
OK, so this may be a completely stupid question, as I'm a total newbie to decision theoryish issues... but couldn't you work non-zero weighting of SAMELs into a decision theory, without abandoning consequentialism, by reformulating "causality" in an MWIish, anthropic kind of way in which you say that an action is causally linked to a consequence if it increases the number of worlds...
Sure. Morals = the part of our utility function that benefits our genes more than us. But is this telling us anything we didn't know since reading The Selfish Gene? Or any problems with standard decision theory? There's no need to invoke Omega, or a new decision theory. Instead of recognizing that you can use standard decision theory, but measure utility as gene copies rather than as a human carrier's qualia, you seem to be trying to find a decision theory for the human that will implement the gene's utility function.
Thinking of it as being limited to using a specific decision theory is incorrect. Instead, it should simply be seen as using a specific decision theory, or one of many. It's not like evolution and such are here right now, guiding your actions. Evolution acts through our genes, which program us to do a specific thing.
Why do the richest people on Earth spend so much time and money helping out the poorest? Is that what a rational agent with a Parfit-winning decision theory would do?
Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.
I don't think any decision theory that has been proposed by anyone so far has this property. You might want to either change the example, or explain what you're talking about here...
Strongly downvoted for using Omega to rationalize your pre-existing political positions.
You use the methods of rationality to fail at the substance of rationality.
Non-political follow-up to: Ungrateful Hitchhikers (offsite)
Related to: Prices or Bindings?, The True Prisoner's Dilemma
Summary: Situations like the Parfit's Hitchhiker problem select for a certain kind of mind: specifically, one that recognizes that an action can be optimal, in a self-interested sense, even if it can no longer cause any future benefit. A mind that can identify such actions might put them in a different category which enables it to perform them, in defiance of the (futureward) consequentialist concerns that normally need to motivate it. Our evolutionary history has put us through such "Parfitian filters", and the corresponding actions, viewed from the inside, feel like "something we should do", even if we don’t do it, and even if we recognize the lack of a future benefit. Therein lies the origin of our moral intuitions, as well as the basis for creating the category "morality" in the first place.
Introduction: What kind of mind survives Parfit's Dilemma?
Parfit's Dilemma – my version – goes like this: You are lost in the desert and near death. A superbeing known as Omega finds you and considers whether to take you back to civilization and stabilize you. Omega is a perfect predictor of what you will do, and it only plans to rescue you if it predicts that you will, upon recovering, give it $0.01 from your bank account. If it doesn't predict you'll pay, you're left in the desert to die. [1]
So what kind of mind wakes up from this? One that would give Omega the money. Most importantly, the mind is not convinced to withhold payment on the basis that the benefit was received only in the past. Even if it recognizes that no future benefit will result from this decision -- and only future costs will result -- it decides to make the payment anyway.
If a mind is likely to encounter such dilemmas, it would be an advantage to have a decision theory capable of making this kind of "un-consequentialist" decision. And if a decision theory passes through time by being lossily stored by a self-replicating gene (and some decompressing apparatus), then only those genes that come to encode this kind of mentality will be capable of propagating themselves through Parfit's Hitchhiker-like scenarios (call these scenarios "Parfitian filters").
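As a toy illustration (a minimal sketch, not anything from Parfit), here is what the filter does in code: a perfect predictor rescues only agents whose decision procedure would pay afterward, so only the "pays anyway" disposition makes it out of the desert. The agent names and the `would_pay_after_rescue` field are assumptions invented for the example.

```python
# Minimal sketch of a Parfitian filter: Omega, a perfect predictor, rescues
# only the agents whose decision procedure would pay after the fact.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    # The agent's whole decision theory, reduced to one disposition:
    # would it hand over $0.01 *after* it has already been rescued?
    would_pay_after_rescue: bool

def omega_rescues(agent: Agent) -> bool:
    # A perfect predictor just inspects the disposition the agent will
    # later act on, and rescues only if that disposition is "pay".
    return agent.would_pay_after_rescue

population = [
    Agent("causal-only reasoner", would_pay_after_rescue=False),  # "paying causes no future benefit"
    Agent("pays-anyway reasoner", would_pay_after_rescue=True),   # treats the acausal link as decision-relevant
]

survivors = [a for a in population if omega_rescues(a)]
print([a.name for a in survivors])  # only the payer makes it out of the desert
```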
Sustainable self-replication as a Parfitian filter
Though evolutionary psychology has its share of pitfalls, one question should have an uncontroversial solution: "Why do parents care for their children, usually at great cost to themselves?" The answer is that their desires are largely set by evolutionary processes, in which a “blueprint” is slightly modified over time, and the more effective self-replicating blueprint-pieces dominate the construction of living things. Parents that did not have sufficient "built-in desire" to care for their children would be weeded out; what's left is (genes that construct) minds that do have such a desire.
This process can be viewed as a Parfitian filter: regardless of how much parents might favor their own survival and satisfaction, they could not get to that point unless they were "attached" to a decision theory that outputs actions sufficiently more favorable toward one's children than oneself. Addendum (per pjeby's comment): The parallel to Parfit's Hitchhiker is this: Natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the "decide to pay"/"decide to care for children" stage if it had the right decision theory before the "rescue"/"copy to next generation".
Explanatory value of utility functions
Let us turn back to Parfit’s Dilemma, an idealized example of a Parfitian filter, and consider the task of explaining why someone decided to pay Omega. For simplicity, we’ll limit ourselves to two theories:
Theory 1a: The survivor’s utility function places positive weight on benefits both to the survivor and to Omega; in this case, the utility of “Omega receiving the $0.01” (as viewed by the survivor’s function) exceeds the utility of keeping it.
Theory 1b: The survivor’s utility function only places weight on benefits to him/herself; however, the survivor is limited to using decision theories capable of surviving this Parfitian filter.
The theories are observationally equivalent, but 1a is worse because it makes strictly more assumptions: in particular, the questionable one that the survivor somehow values Omega in some terminal, rather than instrumental sense. [2] The same analysis can be carried over to the earlier question about natural selection, albeit disturbingly. Consider these two analogous theories attempting to explain the behavior of parents:
Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.
Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.
The point here is not to promote some cynical, insulting view of parents; rather, I will show how closely this "acausal self-interest" aligns with the behavior we laud as moral.
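A minimal sketch of the observational-equivalence point, with made-up numbers: a 1a-style agent (which terminally weights Omega's gain) and a 1b-style agent (purely selfish, but running a filter-surviving decision procedure) both output "pay", so the payment alone cannot tell the theories apart. The weights and dollar figures below are illustrative assumptions, not anything from the post.

```python
# Both theories predict the same observable action: paying Omega.

def agent_1a_pays(payment=0.01, weight_on_omega=2.0):
    # Theory 1a: the survivor's utility places terminal weight on Omega's gain.
    utility_if_pay = -payment + weight_on_omega * payment
    utility_if_keep = 0.0
    return utility_if_pay > utility_if_keep

def agent_1b_pays(payment=0.01, value_of_rescue=1_000_000.0):
    # Theory 1b: utility is purely selfish, but the decision procedure chooses
    # between being the kind of agent that pays (and so gets rescued) and the
    # kind that refuses (and so is left in the desert).
    utility_if_pay_type = value_of_rescue - payment
    utility_if_refuse_type = 0.0
    return utility_if_pay_type > utility_if_refuse_type

print(agent_1a_pays(), agent_1b_pays())  # True True: identical observable behavior
```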
SAMELs vs. CaMELs, Morality vs. Selfishness
So what makes an issue belong in the “morality” category in the first place? For example, the decision of which ice cream flavor to choose is not regarded as a moral dilemma. (Call this Dilemma A.) How do you turn it into a moral dilemma? One way is to make the decision have implications for the well-being of others: "Should you eat your favorite ice cream flavor, instead of your next-favorite, if doing so shortens the life of another person?" (Call this Dilemma B.)
Decision-theoretically, what is the difference between A and B? Following Gary Drescher's treatment in Chapter 7 of Good and Real, I see another salient difference: You can reach the optimal decision in A by looking only at causal means-end links (CaMELs), while Dilemma B requires that you consider the subjunctive acausal means-end links (SAMELs). Less jargonishly, in Dilemma B, an ideal agent will recognize that their decision to pick their favorite ice cream at the expense of another person suggests that others in the same position will do (and have done) likewise, for the same reason. In contrast, an agent in Dilemma A (as stated) will do no worse as a result of ignoring all such entailments.
More formally, a SAMEL is a relationship between your choice and the satisfaction of a goal, in which your choice does not (futurewardly) cause the goal’s achievement or failure, while in a CaMEL, it does. Drescher argues that actions that implicitly recognize SAMELs tend to be called “ethical”, while those that only recognize CaMELs tend to be called “selfish”. I will show how these distinctions (between causal and acausal, ethical and unethical) shed light on moral dilemmas, and on how we respond to them, by looking at some familiar arguments.
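To make the distinction concrete, here is a rough sketch of Dilemma B with made-up payoff numbers: an agent that counts only CaMELs takes the favorite flavor, because the harm lands causally on someone else; an agent that also counts the SAMEL (the same reasoning runs in everyone occupying a symmetric position, and some of those choices land their harm on it) chooses otherwise. The `n_symmetric_others` parameter and all numeric values are assumptions for illustration only.

```python
# CaMEL-only vs SAMEL-aware reasoning on Dilemma B (illustrative numbers).

FAVORITE_BONUS = 1.0      # extra enjoyment of favorite over next-favorite flavor
HARM_TO_A_PERSON = 50.0   # cost of shortening someone's life, in the same units

def camel_only_choice():
    # Count only futureward causal consequences to me: the harm lands on
    # someone else, so the causal ledger says "take the favorite".
    u_favorite = FAVORITE_BONUS
    u_next_favorite = 0.0
    return "favorite" if u_favorite > u_next_favorite else "next-favorite"

def samel_aware_choice(n_symmetric_others=10):
    # Also count the subjunctive acausal link: whatever this reasoning outputs,
    # the same reasoning in the other symmetric positions outputs too, and some
    # of those choices land their harm on me.
    p_harm_lands_on_me = 1.0 / (n_symmetric_others + 1)
    u_favorite = FAVORITE_BONUS - n_symmetric_others * p_harm_lands_on_me * HARM_TO_A_PERSON
    u_next_favorite = 0.0
    return "favorite" if u_favorite > u_next_favorite else "next-favorite"

print(camel_only_choice())   # favorite
print(samel_aware_choice())  # next-favorite
```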
Joshua Greene, Revisited: When rationalizing wins
A while back, LW readers discussed Greene’s dissertation on morality. In it, he reviews experiments in which people are given moral dilemmas and asked to justify their position. The twist: normally people justify their position by reference to some consequence, but that consequence is carefully removed from being a possibility in the dilemma’s set-up. The result? The subjects continued to argue for their position, invoking such stopsigns as, “I don’t know, I can’t explain it, [sic] I just know it’s wrong” (p. 151, citing Haidt).
Greene regards this as misguided reasoning, and interprets it to mean that people are irrationally making choices, excessively relying on poor intuitions. He infers that we need to fundamentally change how we think and talk about moral issues so as to eliminate these questionable barriers in our reasoning.
In light of Parfitian filters and SAMELs, I think a different inference is available to us. First, recall that there are cases where the best choices don’t cause a future benefit. In those cases, an agent will not be able to logically point to such a benefit as justification, even despite the choice’s optimality. Furthermore, if an agent’s decision theory was formed through evolution, their propensity to act on SAMELs (selected for due to its optimality) arose long before they were capable of careful self-reflective analysis of their choices. This, too, can account for why most people a) opt for something that doesn’t cause a future benefit, b) stick to that choice with or without such a benefit, and c) place it in a special category (“morality”) when justifying their action.
This does not mean we should give up on rationally grounding our decision theory, “because rationalizers win too!” Nor does it mean that everyone who retreats to a “moral principles” defense is really acting optimally. Rather, it means it is far too strict to require that our decisions all cause a future benefit; we need to count acausal “consequences” (SAMELs) on par with causal ones (CaMELs) – and moral intuitions are a mechanism that can make us do this.
As Drescher notes, the optimality of such acausal benefits can be felt, intuitively, when making a decision, even if they are insufficient to override other desires, and even if we don’t recognize it in those exact terms (pp. 318-9):
To this we can add the Parfit’s Hitchhiker problem: how do you feel, internally, about not paying Omega? One could just as easily criticize your desire to pay Omega as “rationalization”, as you cannot identify a future benefit caused by your action. But the problem, if any, lies in failing to recognize acausal benefits, not in your desire to pay.
The Prisoner’s Dilemma, Revisited: Self-sacrificial caring is (sometimes) self-optimizing
In this light, consider the Prisoner's Dilemma. Basically, you and your partner-in-crime are deciding whether to rat each other out; the sum of the benefit to you both is highest if you both stay silent, but either of you can do better at the other's cost by confessing. (Label this standard teaching scenario the "Literal Prisoner's Dilemma Situation", or LPDS.)
Eliezer Yudkowsky previously claimed in The True Prisoner's Dilemma that mentioning the LPDS introduces a major confusion (and I agreed): real people in that situation do not, intuitively, see the payoff matrix as it's presented. To most of us, our satisfaction with the outcome is not solely a function of how much jail time we avoid: we also care about the other person, and don't want to be a backstabber. So, the argument goes, we need a really contrived situation to get a payoff matrix like that.
I suggest an alternate interpretation of this disconnect: the payoff matrix is correct, but the humans facing the dilemma have been Parfitian-filtered to the point where their decision theory contains dispositions that assist them in winning on these problems, even given that payoff matrix. To see why, consider another set of theories to choose from, like the two above:
Theory 3a: Humans in a literal Prisoner’s Dilemma (LPDS) have a positive weight in their utility function both for themselves, and their accomplices, and so would be hurt to see the other one suffer jail time.
Theory 3b: Humans in a literal Prisoner’s Dilemma (LPDS) have a positive weight in their utility function only for themselves, but are limited to using a decision theory that survived past social/biological Parfitian filters.
As with the point about parents, the lesson is not that you don't care about your friends; rather, it's that your actions based on caring are the same as those of a self-interested being with a good decision theory. What you recognize as "just wrong" could be the feeling of a different "reasoning module" acting.
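A rough sketch of what Theory 3b claims, using an illustrative payoff matrix (not one taken from Eliezer's post): both players' payoffs are purely selfish, but a decision procedure that treats its accomplice's choice as correlated with its own (having come through the same filters) cooperates, while move-by-move causal reasoning defects. The matrix values and the perfect-mirroring assumption are simplifications for the example.

```python
# Theory 3b on the literal Prisoner's Dilemma: selfish payoffs, but a
# filter-surviving decision theory counts the correlation with its accomplice.

# PAYOFF[(my_move, their_move)] = (my_payoff, their_payoff), higher is better
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def causal_only_move(their_assumed_move="C"):
    # Holding the accomplice's move fixed, defection pays more either way.
    return max(["C", "D"], key=lambda m: PAYOFF[(m, their_assumed_move)][0])

def filtered_selfish_move():
    # A decision theory shaped by Parfitian filters treats its own output as
    # matched by its accomplice's output, and compares (C, C) against (D, D).
    return max(["C", "D"], key=lambda m: PAYOFF[(m, m)][0])

print(causal_only_move())       # D: defection dominates against any fixed move
print(filtered_selfish_move())  # C: cooperation wins once the correlation is counted
```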
Conclusion
By viewing moral intuitions as a mechanism that allows propagation through Parfitian filters, we can better understand:
1) what moral intuitions are (the set of intuitions that were selected for because they saw optimality in the absence of a causal link);
2) why they arose (because agents with them pass through the Parfitian filters that weed out others, evolution being one of them); and
3) why we view this as a relevant category boundary in the first place (because they are all similar in that they elevate the perceived benefit of an action that lacks a self-serving, causal benefit).
Footnotes:
[1] My variant differs in that there is no communication between you and Omega other than knowledge of your conditional behaviors, and the price is absurdly low to make sure the relevant intuitions in your mind are firing.
[2] Note that 1b’s assumption of constraints on the agent’s decision theory does not penalize it, as this must be assumed in both cases, and additional implications of existing assumptions do not count as additional assumptions for purposes of gauging probabilities.