Non-political follow-up to: Ungrateful Hitchhikers (offsite)

 

Related to: Prices or Bindings?, The True Prisoner's Dilemma

 

Summary: Situations like the Parfit's Hitchhiker problem select for a certain kind of mind: specifically, one that recognizes that an action can be optimal, in a self-interested sense, even when it can no longer cause any future benefit.  A mind that can identify such actions might place them in a separate category that enables it to perform them, in defiance of the (futureward) consequentialist concerns that would normally have to motivate it.  Our evolutionary history has put us through such "Parfitian filters", and the corresponding actions, viewed from the inside, feel like "something we should do", even when we don't actually do them, and even when we recognize that no future benefit will result.  Therein lies the origin of our moral intuitions, as well as the basis for creating the category "morality" in the first place.

 

Introduction: What kind of mind survives Parfit's Dilemma?

 

Parfit's Dilemma – my version – goes like this: You are lost in the desert and near death.  A superbeing known as Omega finds you and considers whether to take you back to civilization and stabilize you.  It is a perfect predictor of what you will do, and only plans to rescue you if it predicts that you will, upon recovering, give it $0.01 from your bank account.  If it doesn’t predict you’ll pay, you’re left in the desert to die. [1]

 

So what kind of mind wakes up from this?  One that would give Omega the money.  Most importantly, the mind is not convinced to withhold payment on the basis that the benefit was received only in the past.  Even if it recognizes that no future benefit will result from this decision -- and only future costs will result -- it decides to make the payment anyway.

 

If a mind is likely to encounter such dilemmas, it is an advantage to have a decision theory capable of making this kind of "un-consequentialist" decision.  And if a decision theory propagates through time by being lossily stored in a self-replicating gene (plus some decompressing apparatus), then only the genes that come to encode this kind of mentality will be able to propagate themselves through Parfit's-Hitchhiker-like scenarios (call these scenarios "Parfitian filters").
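To make the selection pressure concrete, here is a minimal Python sketch of a Parfitian filter; the policy names and the code itself are my own illustrative assumptions, not anything specified in the problem beyond what is described above.

```python
# Minimal sketch of a "Parfitian filter": Omega perfectly predicts each
# agent's policy and rescues only those predicted to pay afterward.
# The policies and names are illustrative assumptions, not part of the post.

def parfitian_filter(agents):
    """Return only the agents that make it out of Parfit's Dilemma."""
    survivors = []
    for agent in agents:
        will_pay = agent["policy"]()      # Omega's perfect prediction
        if will_pay:
            survivors.append(agent)       # rescued; later pays the $0.01
        # otherwise: left in the desert, never heard from again
    return survivors

def causal_only_policy():
    # "Paying now causes no future benefit, so don't pay."
    return False

def filter_surviving_policy():
    # "Only minds that pay ever wake up from this, so pay."
    return True

agents = [
    {"name": "causal-only hitchhiker", "policy": causal_only_policy},
    {"name": "filter-surviving hitchhiker", "policy": filter_surviving_policy},
]

print([a["name"] for a in parfitian_filter(agents)])
# -> ['filter-surviving hitchhiker']
```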

 

Sustainable self-replication as a Parfitian filter

 

Though evolutionary psychology has its share of pitfalls, one question should have an uncontroversial solution: "Why do parents care for their children, usually at great cost to themselves?"  The answer is that their desires are largely set by evolutionary processes, in which a “blueprint” is slightly modified over time, and the more effective self-replicating blueprint-pieces dominate the construction of living things.  Parents that did not have sufficient "built-in desire" to care for their children would be weeded out; what's left is (genes that construct) minds that do have such a desire.

 

This process can be viewed as a Parfitian filter: regardless of how much parents might favor their own survival and satisfaction, they could not get to that point unless they were "attached" to a decision theory that outputs actions sufficiently more favorable toward their children than toward themselves.  Addendum (per pjeby's comment): The parallel to Parfit's Hitchhiker is this: natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker.  The mind only gets to the "decide to pay"/"decide to care for children" stage if it had the right decision theory before the "rescue"/"copy to the next generation" step.

 

Explanatory value of utility functions

 

Let us turn back to Parfit’s Dilemma, an idealized example of a Parfitian filter, and consider the task of explaining why someone decided to pay Omega.  For simplicity, we’ll limit ourselves to two theories:

 

Theory 1a: The survivor’s utility function places positive weight on benefits both to the survivor and to Omega; in this case, the utility of “Omega receiving the $0.01” (as viewed by the survivor’s function) exceeds the utility of keeping it.

Theory 1b: The survivor’s utility function only places weight on benefits to him/herself; however, the survivor is limited to using decision theories capable of surviving this Parfitian filter.

 

The theories are observationally equivalent, but 1a is worse because it makes strictly more assumptions: in particular, the questionable one that the survivor somehow values Omega in some terminal, rather than instrumental sense. [2] The same analysis can be carried over to the earlier question about natural selection, albeit disturbingly.  Consider these two analogous theories attempting to explain the behavior of parents:

 

Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.

Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.

 

The point here is not to promote some cynical, insulting view of parents; rather, it is to show how closely this "acausal self-interest" aligns with the behavior we laud as moral.
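A toy illustration of the observational-equivalence point (the utility weights below are invented solely for the example; nothing here is meant as a real model of a hitchhiker or a parent):

```python
# Two hypothetical models of the same rescued hitchhiker; the weights are
# made up purely to illustrate that the models are observationally equivalent.

def choice_theory_1a():
    # Theory 1a: utility places positive (terminal) weight on Omega's benefit.
    self_weight, omega_weight = 1.0, 2.0
    u_pay = self_weight * (-0.01) + omega_weight * (+0.01)
    u_keep = 0.0
    return "pay" if u_pay > u_keep else "keep"

def choice_theory_1b():
    # Theory 1b: purely self-interested utility, but the decision procedure
    # is constrained to be one that survives the Parfitian filter -- i.e.,
    # it honors the fact that only payers get rescued at all.
    return "pay"

print(choice_theory_1a(), choice_theory_1b())   # -> pay pay
```

Both models output the same observable action, which is why choosing between them has to rest on parsimony rather than on behavior.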

 

SAMELs vs. CaMELs, Morality vs. Selfishness

 

So what makes an issue belong in the “morality” category in the first place?  For example, the decision of which ice cream flavor to choose is not regarded as a moral dilemma.  (Call this Dilemma A.)  How do you turn it into a moral dilemma?  One way is to make the decision have implications for the well-being of others: "Should you eat your favorite ice cream flavor, instead of your next-favorite, if doing so shortens the life of another person?"  (Call this Dilemma B.)

 

Decision-theoretically, what is the difference between A and B?  Following Gary Drescher's treatment in Chapter 7 of Good and Real, I see another salient difference: You can reach the optimal decision in A by looking only at causal means-end links (CaMELs), while Dilemma B requires that you consider the subjunctive acausal means-end links (SAMELs).  Less jargonishly, in Dilemma B, an ideal agent will recognize that their decision to pick their favorite ice cream at the expense of another person suggests that others in the same position will do (and have done) likewise, for the same reason.  In contrast, an agent in Dilemma A (as stated) will do no worse as a result of ignoring all such entailments.

 

More formally, a SAMEL is a relationship between your choice and the satisfaction of a goal, in which your choice does not (futurewardly) cause the goal’s achievement or failure, while in a CaMEL, it does.  Drescher argues that actions that implicitly recognize SAMELs tend to be called “ethical”, while those that only recognize CaMELs tend to be called “selfish”.  I will show how these distinctions (between causal and acausal, ethical and unethical) shed light on moral dilemmas, and on how we respond to them, by looking at some familiar arguments.
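A rough numerical sketch of the CaMEL/SAMEL distinction, under assumptions invented for illustration (a single other person whose choice perfectly mirrors yours, and made-up payoff sizes):

```python
# Dilemma B, stripped to numbers: taking your favorite flavor gains you a
# little but imposes a large cost on a correlated other person. Payoffs and
# the perfect-correlation assumption are invented for illustration.

GAIN_FROM_FAVORITE = 1     # your extra enjoyment from the favorite flavor
HARM_TO_OTHER = 10         # the cost your choice imposes on the other person

def camel_only_value(choice):
    # Causal means-end links only: your choice doesn't (futurewardly) cause
    # anyone else's choice, so only your own gain gets counted.
    return GAIN_FROM_FAVORITE if choice == "favorite" else 0

def samel_aware_value(choice):
    # Subjunctive acausal link: others in your position decide likewise, for
    # the same reason, so their (mirrored) choice is counted as well.
    own_gain = GAIN_FROM_FAVORITE if choice == "favorite" else 0
    harm_suffered = HARM_TO_OTHER if choice == "favorite" else 0
    return own_gain - harm_suffered

for choice in ("favorite", "second-favorite"):
    print(choice, camel_only_value(choice), samel_aware_value(choice))
# CaMEL-only reasoning prefers "favorite"; SAMEL-aware reasoning prefers
# "second-favorite", matching the intuition that Dilemma B is a moral one.
```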

 

Joshua Greene, Revisited: When rationalizing wins

 

A while back, LW readers discussed Greene’s dissertation on morality.  In it, he reviews experiments in which people are given moral dilemmas and asked to justify their position.  The twist: normally people justify their position by reference to some consequence, but that consequence is carefully removed from being a possibility in the dilemma’s set-up.  The result?  The subjects continued to argue for their position, invoking such stopsigns as, “I don’t know, I can’t explain it, [sic] I just know it’s wrong” (p. 151, citing Haidt).

 

Greene regards this as misguided reasoning, and interprets it to mean that people are irrationally making choices, excessively relying on poor intuitions.  He infers that we need to fundamentally change how we think and talk about moral issues so as to eliminate these questionable barriers in our reasoning.

 

In light of Parfitian filters and SAMELs, I think a different inference is available to us.  First, recall that there are cases where the best choices don't cause a future benefit.  In those cases, an agent will not be able to logically point to such a benefit as justification, despite the choice's optimality.  Furthermore, if an agent's decision theory was formed through evolution, their propensity to act on SAMELs (selected for due to its optimality) arose long before they were capable of careful self-reflective analysis of their choices.  This, too, can account for why most people a) opt for something that doesn't cause a future benefit, b) stick to that choice with or without such a benefit, and c) place it in a special category ("morality") when justifying their action.

 

This does not mean we should give up on rationally grounding our decision theory, “because rationalizers win too!”  Nor does it mean that everyone who retreats to a “moral principles” defense is really acting optimally.  Rather, it means it is far too strict to require that our decisions all cause a future benefit; we need to count acausal “consequences” (SAMELs) on par with causal ones (CaMELs) – and moral intuitions are a mechanism that can make us do this.

 

As Drescher notes, the optimality of such acausal benefits can be felt, intuitively, when making a decision, even if they are insufficient to override other desires, and even if we don’t recognize it in those exact terms (pp. 318-9):

 

Both the one-box intuition in Newcomb’s Problem (an intuition you can feel … even if you ultimately decide to take both boxes), and inclinations toward altruistic … behavior (inclinations you likewise can feel even if you end up behaving otherwise), involve what I have argued are acausal means-end relations.  Although we do not … explicitly regard the links as means-end relations, as a practical matter we do tend to treat them exactly as only means-end relations should be treated: our recognition of the relation between the action and the goal influences us to take the action (even if contrary influences sometimes prevail).

 

I speculate that it is not coincidental that in practice, we treat these means-end relations as what they really are.  Rather, I suspect that the practical recognition of means-end relations is fundamental to our cognitive machinery: it treats means-end relations (causal and acausal) as such because doing so is correct – that is, because natural selection favored machinery that correctly recognizes and acts on means-end relations without insisting that they be causal….

 

If we do not explicitly construe those moral intuitions as recognitions of subjunctive means-end links, we tend instead to perceive the intuitions as recognitions of some otherwise-ungrounded inherent deservedness by others of being treated well (or, in the case of retribution, of being treated badly).

 

To this we can add the Parfit’s Hitchhiker problem: how do you feel, internally, about not paying Omega?  One could just as easily criticize your desire to pay Omega as “rationalization”, as you cannot identify a future benefit caused by your action.  But the problem, if any, lies in failing to recognize acausal benefits, not in your desire to pay.

 

The Prisoner’s Dilemma, Revisited: Self-sacrificial caring is (sometimes) self-optimizing

 

In this light, consider the Prisoner's Dilemma.  Basically, you and your partner-in-crime are deciding whether to rat each other out; the sum of the benefits to you both is highest if you both stay silent, but either of you can do better, at the other's expense, by confessing.  (Call this canonical teaching scenario the "Literal Prisoner's Dilemma Situation", or LPDS.)
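For concreteness, here is the standard payoff structure (the specific jail terms are the usual textbook numbers, not anything taken from the posts being discussed):

```python
# The Literal Prisoner's Dilemma as years in jail for (you, partner), indexed
# by (your_move, partner_move). These are the usual textbook numbers, shown
# only to make the payoff structure explicit.

YEARS = {
    ("silent",  "silent"):  (1, 1),
    ("silent",  "confess"): (10, 0),
    ("confess", "silent"):  (0, 10),
    ("confess", "confess"): (5, 5),
}

# Total jail time is lowest when both stay silent...
print(min(YEARS, key=lambda moves: sum(YEARS[moves])))   # ('silent', 'silent')

# ...yet whatever the partner does, confessing gives *you* less jail time,
# which is what makes confessing the dominant causal strategy.
for partner_move in ("silent", "confess"):
    print(partner_move,
          YEARS[("confess", partner_move)][0] < YEARS[("silent", partner_move)][0])
```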

 

Eliezer Yudkowsky previously claimed in The True Prisoner's Dilemma that mentioning the LPDS introduces a major confusion (and I agreed): real people in that situation do not, intuitively, see the payoff matrix as it's presented.  To most of us, our satisfaction with the outcome is not solely a function of how much jail time we avoid: we also care about the other person, and don't want to be a backstabber.  So, the argument goes, we need a really contrived situation to get a payoff matrix like that.

 

I suggest an alternate interpretation of this disconnect: the payoff matrix is correct, but the humans facing the dilemma have been Parfitian-filtered to the point where their decision theory contains dispositions that assist them in winning on these problems, even given that payoff matrix.  To see why, consider another set of theories to choose from, like the two above:

 

Theory 3a: Humans in a literal Prisoner's Dilemma (LPDS) have a positive weight in their utility function both for themselves and for their accomplices, and so would be hurt to see the other one suffer jail time.

Theory 3b: Humans in a literal Prisoner’s Dilemma (LPDS) have a positive weight in their utility function only for themselves, but are limited to using a decision theory that survived past social/biological Parfitian filters.

 

As with the point about parents, the lesson is not that you don't care about your friends; rather, it's that your actions based on caring are the same as those of a self-interested being with a good decision theory.  What you recognize as "just wrong" could be the feeling of a different "reasoning module" at work.

 

Conclusion

 

By viewing moral intuitions as a mechanism that allows propagation through Parfitian filters, we can better understand:

 

1) what moral intuitions are (the set of intuitions that were selected for because they saw optimality in the absence of a causal link);

2) why they arose (because agents with them pass through the Parfitian filters that weed out others, evolution being one of them); and

3) why we view this as a relevant category boundary in the first place (because they are all similar in that they elevate the perceived benefit of an action that lacks a self-serving, causal benefit).

 

Footnotes:

 

[1] My variant differs in that there is no communication between you and Omega other than knowledge of your conditional behaviors, and the price is absurdly low to make sure the relevant intuitions in your mind are firing.

 

[2] Note that 1b’s assumption of constraints on the agent’s decision theory does not penalize it, as this must be assumed in both cases, and additional implications of existing assumptions do not count as additional assumptions for purposes of gauging probabilities.

Morality as Parfitian-filtered Decision Theory? (273 comments)

I dislike this. Here is why:

  • I dislike all examples involving omniscient beings.
  • I dislike the suggestion that natural selection finetuned (or filtered) our decision theory to the optimal degree of irrationality which was needed to do well in lost-in-desert situations involving omniscient beings.
  • I would prefer to assume that natural selection endowed us with a rational or near-rational decision theory and then invested its fine tuning into adjusting our utility functions.
  • I would also prefer to assume that natural selection endowed us with sub-conscious
...
4pjeby
Humans don't operate by maximizing utility, for any definition of "utility" that isn't hideously tortured. Mostly, we simply act in ways that keep the expected value of relevant perceptual variables (such as our own feelings) within our personally-defined tolerances. (Corollary: creating autonomous systems that are utility-maximizing is a Really Bad Idea, as they will fail in ways that humans wouldn't intuitively expect. A superhuman FAI might be capable of constructing a friendly maximizer, but a human would be an idiot to try.)
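A minimal sketch of the kind of tolerance-keeping behavior described above (a toy rendering, not Perceptual Control Theory proper or any model pjeby has endorsed; the variable names and bands are invented):

```python
# A toy "keep perceptual variables within tolerance" agent: it acts only when
# a perceived variable drifts outside its personal tolerance band, rather than
# maximizing any score. Variable names and bands are invented.

TOLERANCES = {
    "hunger":     (0.0, 0.6),    # acceptable range for each perception
    "loneliness": (0.0, 0.4),
}

def act(perceptions):
    """Return corrective actions for any perception outside its tolerance."""
    actions = []
    for name, value in perceptions.items():
        low, high = TOLERANCES[name]
        if value < low or value > high:
            actions.append(f"reduce {name}")   # act to restore tolerance
        # within tolerance: no action taken, nothing is being "maximized"
    return actions

print(act({"hunger": 0.8, "loneliness": 0.2}))   # -> ['reduce hunger']
print(act({"hunger": 0.3, "loneliness": 0.1}))   # -> []
```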
2SilasBarta
I appreciate that you're criticizing the ad-hoc assumptions needed to salvage the utility function model in certain contexts, as one of my points was that several utility functions can equally well explain the same actions. Still, could you please limit your comments about Perceptual Control Theory to points directly relevant to the issues I raised? Just link one of your previous expositions of PCT rather than use this discussion as a platform to argue for it anew.
2Perplexed
Actually, the definition of "utility" is pretty simple. It is simply "that thing that gets maximized in any particular person's decision making". Perhaps you think that humans do not maximize utility because you have a preferred definition of utility that is different from this one. Ok, that is a plausible sounding alternative to the idea of maximizing something. But the maximizing theory has been under scrutiny for 150 years, and under strong scrutiny for the past 50. It only seems fair to give your idea some scrutiny too. Two questions jump out at me:
  • What decision is made when multiple choices all leave the variables within tolerance?
  • What decision is made when none of the available choices leave the variables within tolerance?
Looking forward to hearing your answer on these points. If we can turn your idea into a consistent and plausible theory of human decision making, I'm sure we can publish it.
8Richard_Kennaway
Ah, "the advantage of theft over honest toil". Writing down a definite noun phrase does not guarantee the existence of a thing in reality that it names. Some specific references would help in discerning what, specifically, you are alluding to here. You say in another comment in this thread: but you have not done this at all, merely made vague allusions to "the last 150 years" and "standard economic game theory". Well, you can't get much more standard than Von Neumann and Morgenstern's "Theory of Games and Economic Behaviour". This book does not attempt to justify the hypothesis that we maximise something when we make decisions. That is an assumption that they adopt as part of the customary background for the questions they want to address. Historically, the assumption goes back to the questions about gambling that got probability theory started, in which there is a definite thing -- money -- that people can reasonably be regarded as maximising. Splitting utility from money eliminates complications due to diminishing marginal utility of money. The Utility Theorem does not prove, or attempt to prove, that we are maximisers. It is a not very deep mathematical theorem demonstrating that certain axioms on a set imply that it is isomorphic to an interval of the real line. The hypothesis that human preferences are accurately modelled as a function from choices to a set satisfying those axioms is nowhere addressed in the text. I shall name this the Utility Hypothesis. What evidence are you depending on for asserting it?
3wedrifid
That isn't a particularly good example. There are advantages to theft over honest toil. It is just considered inappropriate to acknowledge them. I have a whole stash of audio books that I purchased with the fruit of 'honest toil'. I can no longer use them because they are crippled with DRM. I may be able to sift around and find the password somewhere but to be honest I suspect it would be far easier to go and 'steal' a copy. Oh, then there's the bit where you can get a whole lot of money and stuff for free. That's an advantage!
1Richard_Kennaway
It's a metaphor.
1wedrifid
My point being that it is a bad metaphor.
0Perplexed
I liked the metaphor. Russell was a smart man. But so was von Neumann, and Aumann and Myerson must have gotten their Nobel prizes for doing something useful. Axiomatic "theft" has its place along side empirical "toil"
4wedrifid
So, am I to understand that you like people with Nobel prizes? If I start writing the names of impressive people can I claim some of their status for myself too? How many times will I be able to do it before the claims start to wear thin?
8Morendil
Before I broke down and hit the Kibitz button I had a strong hunch that Clippy had written the above. Interesting. ;)
1Perplexed
Only if you are endorsing their ideas in the face of an opposition which cannot cite such names. ;) Sorry if it is wearing thin, but I am also tired of being attacked as if the ideas I am promoting mark me as some kind of crank.
6wedrifid
I haven't observed other people referencing those same names both before and after your appearance having all that much impact on you. Nor have I taken seriously your attempts to present a battle between "Perplexed and all Nobel prize winners" vs "others". I'd be very surprised if the guys behind the names really had your back in these fights, even if you are convinced you are fighting in their honour.
0Perplexed
Sure. Happy to help. I too sometimes have days when I can't remember how to work that "Google" thing. You mention Von Neumann and Morgenstern's "Theory of Games and Economic Behaviour" yourself - as you can see, I have added an Amazon link. The relevant chapter is #3. Improvements to this version have been made by Savage and by Anscombe and Aumann. You can get a useful survey of the field from wikipedia. Wikipedia is an amazing resource, by the way. I strongly recommend it. Two texts from my own bookshelf that contain expositions of this material are Chapter 1 of Myerson and Chapter 2 of Luce and Raiffa. I would recommend the Myerson. Luce and Raiffa is cheaper, but it is somewhat dated and doesn't provide much coverage at all of the more advanced topics such as correlated equilibria and the revelation principle. It does have some good material on Nash's program though. And finally, for a bit of fun in the spirit of Project Steve, I offer this online bibliography of some of the ways this body of theory has been applied in one particular field. Did I assert it? Where? I apologize profusely if I did anything more than to suggest that it provides a useful model for the more important and carefully considered economic decisions. I explicitly state here that the justification of the theory is not empirical. The theory is about rational decision making, not human decision making.
4Richard_Kennaway
It is not. As I said, the authors do not attempt to justify the Utility Hypothesis, they assume it. Chapter 2 (not 3), page 8: "This problem [of what to assume about individuals in economic theory] has been stated traditionally by assuming that the consumer desires to obtain a maximum of utility or satisfaction and the entrepreneur a maximum of profits." The entire book is about the implications of that assumption, not its justification, of which it says nothing. Neither do these authors attempt to justify the Utility Hypothesis; they too assume it. I can find Luce and Raiffa in my library and Myerson through inter-library loan, but as none of the first three works you've cited provide evidence for the claim that people have utility functions, rather than postulating it as an axiom, I doubt that these would either. But now you deny having asserted any such thing: Here you claim that people have utility functions: And also here: Here you assume that people must be talking about utility functions: Referring to the message from which the last three quotes are taken, you say and yet here you expand the phrase "prefer to assume" as : These are weasel words to let you talk about utility functions while denying you think there are any such things. How would you set about finding a model that is closer to reality, rather than one which merely makes better predictions?
1Perplexed
I would undertake an arduous self-education in neuroscience. Thankfully, I have no interest in cognitive models which are close to reality but make bad predictions. I'm no longer as good at learning whole new fields as I was when I was younger, so I would find neuroscience a tough slog.
5Vladimir_Nesov
It's a losing battle to describe humans as utility maximizers. Utility, as applied to people, is more useful in the normative sense, as a way to formulate one's wishes, allowing one to infer the way one should act in order to follow them.
0Perplexed
Nevertheless, standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers. And the reason it does so is the belief that on the really important decisions, people work extra hard to be rational. For this reason, on the really important decisions, utility maximization probably is not too far wrong as a descriptive theory.
7wedrifid
The reason it does so is because it is convenient. I don't entirely agree with pjeby. Being unable to adequately approximate human preferences to a single utility function is not something that is a property of the 'real world'. It is something that is a property of our rather significant limitations when it comes to making such evaluations. Nevertheless, having a textbook prescribe official status to certain mechanisms for deriving a utility function does not make that process at all reliable.
-1Perplexed
I'll be sure to remember that line, for when the people promoting other models of rationality start citing textbooks too. Well, no, I probably won't, since I doubt I will live long enough to see that. ;) But, if I recall correctly, I have mostly cited the standard textbook thought-experiments when responding to claims that utility maximization is conceptually incoherent - so absurd that no one in their right mind would propose it.
5wedrifid
I see that you are trying to be snide, but it took a while to figure out why you would believe this to be incisive. I had to reconstruct a model of what you think other people here believe from your previous rants. Yes. That would be a crazy thing to believe. (Mind you, I don't think pjeby believes crazy things - he just isn't listening closely enough to what you are saying to notice anything other than a nail upon which to use one of his favourite hammers.)
4pjeby
It seems to me that what has actually been shown is that when people think abstractly (i.e. "far") about these kinds of decisions, they attempt to calculate some sort of (local and extremely context-dependent) maximum utility. However, when people actually act (using "near" thinking), they tend to do so based on the kind of perceptual filtering discussed in this thread. What's more, even their "far" calculations tend to be biased and filtered by the same sort of perceptual filtering processes, even when they are (theoretically) calculating "utility" according to a contextually-chosen definition of utility. (What a person decides to weigh into a calculation of "best car" is going to vary from one day to the next, based on priming and other factors.) In the very best case scenario for utility maximization, we aren't even all that motivated to go out and maximize utility: it's still more like playing, "pick the best perceived-available option", which is really not the same thing as operating to maximize utility (e.g. the number of paperclips in the world). Even the most paperclip-obsessed human being wouldn't be able to do a good job of intuiting the likely behavior of a true paperclip-maximizing agent -- even if said agent were of only-human intelligence. For me, I'm not sure that "rational" and "utility maximizer" belong in the same sentence. ;-) In simplified economic games (think: spherical cows on a frictionless plane), you can perhaps get away with such silliness, but instrumental rationality and fungible utility don't mix under real world conditions. You can't measure a human's perception of "utility" on just a single axis!
4Perplexed
You have successfully communicated your scorn. You were much less successful at convincing anyone of your understanding of the facts. And you can't (consistently) make a decision without comparing the alternatives along a single axis. And there are dozens of textbooks with a chapter explaining in detail exactly how you go about doing it.
1pjeby
And what makes you think humans are any good at making consistent decisions? The experimental evidence clearly says we're not: frame a problem in two different ways, you get two different answers. Give us larger dishes of food, and we eat more of it, even if we don't like the taste! Prime us with a number, and it changes what we'll say we're willing to pay for something utterly unrelated to the number. Human beings are inconsistent by default. Of course. But that's not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more "rationally" you weigh a decision, the less likely you are to be happy with the results. (Which is probably a factor in why smarter, more "rational" people are often less happy than their less-rational counterparts.) In addition, other experiments show that people who make choices in "maximizer" style (people who are unwilling to choose until they are convinced they have the best choice) are consistently less satisfied than people who are satisficers for the same decision context.
8wedrifid
It seems there is some criterion by which you are evaluating various strategies for making decisions. Assuming you are not merely trying to enforce your deontological whims upon your fellow humans, I can infer that there is some kind of rough utility function by which you are giving your advice and advocating decision making mechanisms. While it is certainly not what we would find in Perplexed's text books, it is this function which can be appropriately described as a 'rational utility function'. I am glad that you included the scare quotes around 'rationally'. It is 'rational' to do what is going to get the best results. It is important to realise the difference between 'sucking at making linearized spock-like decisions' and good decisions being in principle uncomputable in a linearized manner. If you can say that one decision sucks more than another one then you have criteria by which to sort them in a linearized manner.
2pjeby
Not at all. Even in pure computational systems, being able to compare two things is not the same as having a total ordering. For example, in predicate dispatching, priority is based on logical implication relationships between conditions, but an arbitrary set of applicable conditions isn't guaranteed to have a total (i.e. linear) ordering. What I'm saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable "utility" function necessarily loses information from that preference ordering. That's why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It's insane.
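A small example of the information loss being described, using subset inclusion as the partial order (the bundles and the particular linearization are invented for illustration):

```python
# Preferences as a partial order (here: bundles ordered by set inclusion).
# {"apple"} and {"orange"} are incomparable, but any real-valued "utility"
# must either rank one above the other or tie them, erasing the distinction
# between "incomparable" and "equally preferred". Bundles are invented.

from itertools import combinations

bundles = [frozenset(), frozenset({"apple"}), frozenset({"orange"}),
           frozenset({"apple", "orange"})]

def weakly_prefers(a, b):
    """a is weakly preferred to b iff b is a subset of a."""
    return b <= a

incomparable = [(set(a), set(b)) for a, b in combinations(bundles, 2)
                if not weakly_prefers(a, b) and not weakly_prefers(b, a)]
print(incomparable)       # one incomparable pair: {'apple'} vs {'orange'}

# One arbitrary linearization: utility = bundle size. It forces a verdict
# (here, a tie) where the partial order had none.
utility = {b: len(b) for b in bundles}
print(utility[frozenset({"apple"})] == utility[frozenset({"orange"})])   # True
```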
4xamdam
Am I correct in thinking that you welcome money pumps?
7pjeby
A partial order isn't the same thing as a cyclical ordering, and the existence of a money pump would certainly tend to disambiguate a human's preferences in its vicinity, thereby creating a total ordering within that local part of the preference graph. ;-)
0saturn
Hypothetically, would it cause a problem if a human somehow disambiguated her entire preference graph?
0pjeby
If conscious processing is required to do that, you probably don't want to disambiguate all possible tortures where you're not really sure which one is worse, exactly. (I mean, unless the choice is actually going to come up, is there really a reason to know for sure which kind of pliers you'd prefer to have your fingernails ripped out with?) Now, if you limit that preference graph to pleasant experiences, that would at least be an improvement. But even then, you still get the subjective experience of a lifetime of doing nothing but making difficult decisions! These problems go away if you leave the preference graph ambiguous (wherever it's currently ambiguous), because then you can definitely avoid simulating conscious experiences. (Note that this also isn't a problem if all you want to do is get a rough idea of what positive and/or negative reactions someone will initially have to a given world state, which is not the same as computing their totally ordered preference over some set of possible world states.)
2Perplexed
True enough. But the information loss is "just in time" - it doesn't take place until actually making a decision. The information about utilities that is "stored" is a mapping from states-of-the-world to ordinal utilities of each "result". That is, in effect, a partial order of result utilities. Result A is better than result B in some states of the world, but the preference is reversed in other states. You don't convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment - the moment when you have to make the decision.
0pjeby
Go implement yourself a predicate dispatch system (not even an AI, just a simple rules system), and then come back and tell me how you will linearize a preference order between non-mutually exclusive, overlapping conditions. If you can do it in a non-arbitrary (i.e. noise-injecting) way, there's probably a computer science doctorate in it for you, if not a math Nobel. If you can do that, I'll happily admit being wrong, and steal your algorithm for my predicate dispatch implementation. (Note: predicate dispatch is like a super-baby-toy version of what an actual AI would need to be able to do, and something that human brains can do in hardware -- i.e., we automatically apply the most-specific matching rules for a given situation, and kick ambiguities and conflicts up to a higher-level for disambiguation and post-processing. Linearization, however, is not the same thing as disambiguation; it's just injecting noise into the selection process.)
2Perplexed
I am impressed with your expertise. I just built a simple natural deduction theorem prover for my project in AI class. Used Lisp. Python didn't even exist back then. Nor Scheme. Prolog was just beginning to generate some interest. Way back in the dark ages. But this is relevant ... how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?
0pjeby
Predicate dispatch is a good analog of an aspect of human (and animal) intelligence: applying learned rules in context. More specifically, applying the most specific matching rules, where specificity follows logical implication... which happens to be partially-ordered. Or, to put it another way, humans have no problems recognizing exceptional conditions as having precedence over general conditions. And, this is a factor in our preferences as well, which are applied according to matching conditions. The specific analogy here with predicate dispatch, is that if two conditions are applicable at the same time, but neither logically implies the other, then the precedence of rules is ambiguous. In a human being, ambiguous rules get "kicked upstairs" for conscious disambiguation, and in the case of preference rules, are usually resolved by trying to get both preferences met, or at least to perform some kind of bartering tradeoff. However, if you applied a linearization instead of keeping the partial ordering, then you would wrongly conclude that you know which choice is "better" (to a human) and see no need for disambiguation in cases that were actually ambiguous. (Even humans' second-stage disambiguation doesn't natively run as a linearization: barter trades need not be equivalent to cash ones.) Anyway, the specific analogy with predicate dispatch, is that you really can't reduce applicability or precedence of conditions to a single number, and this problem is isomorphic to humans' native preference system. Neither at stage 1 (collecting the most-specific applicable rules) nor stage 2 (making trade-offs) are humans using values that can be generally linearized in a single dimension without either losing information or injecting noise, even if it looks like some particular decision situation can be reduced to such. Theorem provers are sometimes used in predicate dispatch implementations, and mine can be considered an extremely degenerate case of one; one need only
0timtyler
It doesn't literally lose information - since the information inputs are sensory, and they can be archived as well as ever. The short answer is that human cognition is a mess. We don't want to reproduce all the screw-ups in an intelligent machine - and what you are talking about looks like one of the mistakes.
3pjeby
It loses information about human values, replacing them with noise in regions where a human would need to "think things over" to know what they think... unless, as I said earlier, you simply build the entire human metacognitive architecture into your utility function, at which point you have reduced nothing, solved nothing, accomplished nothing, except to multiply the number of entities in your theory.
-7timtyler
-3wedrifid
Perplexed answered this question well.
1Perplexed
Nothing makes me think that. I don't even care. That is the business of people like Tversky and Kahneman. They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.
-2pjeby
What you seem to have not noticed is that one key reason human preferences can be inconsistent is because they are represented in a more expressive formal system than a single utility value. Or that conversely, the very fact that utility functions are linearizable means that they are inherently less expressive. Now, I'm not saying "more expressiveness is always better", because, being human, I have the ability to value things non-fungibly. ;-) However, in any context where we wish to be able to mathematically represent human preferences -- and where lives are on the line by doing so -- we would be throwing away important, valuable information by pretending we can map a partial ordering to a total ordering. That's why I consider the "economic games assumption" to be a spherical cow assumption. It works nicely enough for toy problems, but not for real-world ones. Heck, I'll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, that one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)
1Perplexed
Programming and math are definitely the fields where most of my experience with partial orders comes from. Particularly domain theory and denotational semantics. Complete partial orders and all that. But the concepts also show up in economics textbooks. The whole concept of Pareto optimality is based on partial orders. As is demand theory in micro-economics. Indifference curves. Theorists are not as ignorant or mathematically naive as you seem to imagine.
-3timtyler
You are talking about the independence axiom...? You can just drop that, you know: "Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom."
-2pjeby
As far as I can tell from the discussion you linked, those axioms are based on an assumption that value is fungible. (In other words, they're begging the question, relative to this discussion.)
0timtyler
The basis of using utilities is that you can consider an agent's possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that - provided that your decision process is computable in the first place. That's a pretty general framework - about the only assumption that can be argued with is its quantising of spacetime. The von Neumann-Morgenstern axioms layer on top of that basic idea. The independence axiom is the one about combining utilities by adding them up. I would say it is the one most closely associated with fungibility.
2pjeby
And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead). Humans don't compute utility, then make a decision. Heck, we don't even "make decisions" unless there's some kind of ambiguity, at which point we do the rough equivalent of making up a new utility function, specifically to resolve the conflict that forced us to pay conscious attention in the first place! This is a major (if not the major) "impedance mismatch" between linear "rationality" and actual human values. Our own thought processes are so thoroughly and utterly steeped in context-dependence that it's really hard to see just how alien the behavior of an intelligence based on a consistent, context-independent utility would be.
0timtyler
There's nothing serial about utility maximisation! ...and it really doesn't matter how the human works inside. That type of general framework can model the behaviour of any computable agent.
3pjeby
I didn't say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren't set up to do it in parallel. Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)
0timtyler
I think this indicates something about where the problem lies. You are apparently imagining an agent consciously calculating utilities. That idea has nothing to do with the idea that utility framework proponents are talking about.
2pjeby
No, I said that's what a human would have to do in order to actually calculate utilities, since we don't have utility-calculating hardware.
0timtyler
Ah - OK, then.
0wnoise
When humans don't consciously calculate, the actions they take are much harder to fit into a utility-maximizing framework, what with inconsistencies cropping up everywhere.
0timtyler
It depends on the utility-maximizing framework you are talking about - some are more general than others - and some are really very general.
0FAWS
Negative term for having made what later turns out to have been a wrong decision, perhaps proportional to the importance of the decision, and choices otherwise close to each other in expected utility, but with a large potential difference in actually realized utility.
-5timtyler
4pjeby
Whatever occurs to us first. ;-) We waffle, or try to avoid making the decision in the first place. ;-) (See, e.g., typical people's reactions to "trolley problems", or other no-win scenarios.) What I'm saying is that the above construction leads to error if you assume that "utility" is a function of the state of the world outside the human, rather than a function of the difference between the human's perceptions of the outside world, and the human's internal reference values or tolerance ranges for those perceptions. Maximizing a utility function over the state of the external world inherently tends to create results that would be considered undesirable by most humans. (See, for example, the various tortured insanities that come about when you try to maximize such a conception of "utility" over a population of humans.) It's important to understand that the representation you use to compute something is not value-neutral. Roman numerals, for example, make division much more complicated than Arabic ones. So, I'm not saying that you can't create some sort of "utility" function to represent human values. We have no reason to assume that human values aren't Turing-computable, and if they're Turing-computable, we should be able to use whatever stupidly complex representation we want to compute them. However, to use world-state-utility as your basis for computation is just plain silly, like using Roman numerals for long division. Your own intuition will make it harder for you to see the Friendliness-failures that are sitting right under your nose, because utility maximization is utterly foreign to normal human cognitive processes. (Externality-maximizing processes in human behavior are generally the result of pathology, rather than normal brain function.) Eliezer hasn't been alive that long, has he? ;-) Seriously, though, external-utility-maximizing thinking is the very essence of Unfriendly AI, and the history of discussions of world-state-based utility is that m
2timtyler
It looks as though it can be rearranged into a utility-maximization representation pretty easily. Set utility equal to minus the extent to which the "personally-defined tolerances" are exceeded. Presto!
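Taking that rearrangement literally, a minimal sketch (tolerance bands invented for illustration; this shows only that the construction can be written down, not that it settles the fungibility point raised in the reply below):

```python
# timtyler's rearrangement taken literally: utility = minus the total amount
# by which the personally-defined tolerances are exceeded. Tolerance bands
# and values are invented; integers are used so the arithmetic is exact.

TOLERANCES = {"hunger": (0, 5), "noise": (0, 3)}

def exceedance(name, value):
    low, high = TOLERANCES[name]
    return max(low - value, 0) + max(value - high, 0)

def utility(perceptions):
    return -sum(exceedance(name, v) for name, v in perceptions.items())

print(utility({"hunger": 7, "noise": 1}))   # -> -2 (hunger exceeds its band by 2)
print(utility({"hunger": 2, "noise": 6}))   # -> -3 (noise exceeds its band by 3)
print(utility({"hunger": 2, "noise": 1}))   # -> 0  (everything within tolerance)
```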
2pjeby
Not quite - this would imply that tolerance-difference is fungible, and it's not. We can make trade-offs in our decision-making, but that requires conscious effort and it's a process more akin to barter than to money-trading.
0timtyler
Diamonds are not fungible - and yet they have prices. Same difference here, I figure.
3pjeby
What's the price of one red paperclip? Is it the same price as a house?
0timtyler
That seems to be of questionable relevance - since utilities in decision theory are all inside a single agent. Different agents having different values is not an issue in such contexts.
1pjeby
That's a big part of the problem right there: humans aren't "single agents" in this sense.
0timtyler
Humans are single agents in a number of senses - and are individual enough for the idea of revealed preference to be useful.
1pjeby
From the page you linked (emphasis added): However, even if you ignore that, WARP is trivially proven false by actual human behavior: people demonstrably do sometimes choose differently based on context. That's what makes ordinal utilities a "spherical cow" abstraction. (WARP's inapplicability when applied to real (non-spherical) humans, in one sentence: "I feel like having an apple today, instead of an orange." QED: humans are not "economic agents" under WARP, since they don't consistently choose A over B in environments where both A and B are available.)
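For reference, a compact statement of WARP in choice-function form (standard textbook formulation, paraphrased):

```latex
% Weak Axiom of Revealed Preference (choice-function form).
% If A is chosen from some menu S that also contains B, then there is no
% menu S' containing both from which B is chosen while A is rejected:
\[
  \big(A \in C(S) \ \wedge\ B \in S\big)
  \implies
  \neg\,\exists S' \ \big(A, B \in S' \ \wedge\ B \in C(S') \ \wedge\ A \notin C(S')\big).
\]
```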
0timtyler
The first sentence is true - but the second sentence doesn't follow from it logically - or in any other way I can see. It is true that there are some problems modelling humans as von Neumann–Morgenstern agents - but that's no reason to throw out the concept of utility. Utility is a much more fundamental and useful concept.
2pjeby
WARP can't be used to predict a human's behavior in even the most trivial real situations. That makes it a "spherical cow" because it's a simplifying assumption adopted to make the math easier, at the cost of predictive accuracy. That sounds to me uncannily similar to, "it is true that there are some problems modeling celestial movement using crystal spheres -- but that's no reason to throw out the concept of celestial bodies moving in perfect circles."
0timtyler
There is an obvious surface similarity - but so what? You constructed the sentence that way deliberately. You would need to make an analogy for arguing like that to have any force - and the required analogy looks like a bad one to me.
2pjeby
How so? I'm pointing out that the only actual intelligent agents we know of don't actually work like economic agents on the inside. That seems like a very strong analogy to Newtonian gravity vs. "crystal spheres". Economic agency/utility models may have the Platonic purity of crystal spheres, but:
1. We know for a fact they're not what actually happens in reality, and
2. They have to be tortured considerably to make them "predict" what happens in reality.
-3timtyler
It seems to me like arguing that we can't build a good computer model of a bridge - because inside the model is all bits, while inside the actual bridge is all spinning atoms. Computers can model anything. That is because they are universal. It doesn't matter that computers work differently inside from the thing they are modelling. Just the same applies to partially-recursive utility functions - they are a universal modelling tool - and can model any computable agent.
3pjeby
Not at all. I'm saying that just as it takes more bits to describe a system of crystal spheres to predict planetary motion than it does to make the same predictions with a Newtonian solar system model, so too does it take more bits to predict a human's behavior with a utility function, than it does to describe a human with interests and tolerances. Indeed, your argument seems to be along the lines that since everything is made of atoms, we should model bridges using them. What were your words? Oh yes: Right. That very universality is exactly what makes them a poor model of human intelligence: they don't concentrate probability space in the same way, and therefore don't compress well.
0timtyler
Sure - but what you claimed was a "spherical cow" was "ordinal utilities", which is a totally different concept.
0pjeby
It was you who brought the revealed preferences into it, in order to claim that humans were close enough to spherical cows. I merely pointed out that revealed preferences in even their weakest form are just another spherical cow, and thus don't constitute evidence for the usefulness of ordinal utility.
0timtyler
That's treating the "Weak Axiom of Revealed Preference" as the "weakest form" of revealed preference. However, that is not something that I consider to be correct. The idea I introduced revealed preference to support was that humans act like a single agent in at least one important sense - namely that they have a single brain and a single body.
1pjeby
Single brain and body doesn't mean anything when that brain is riddled with sometimes-conflicting goals... which is precisely what refutes WARP. (See also Ainslie's notion of "picoeconomics", i.e. modeling individual humans as a collection of competing agents -- which is closely related to the tolerance model I've been giving examples of in this thread.)
1Perplexed
That sounds interesting. Is there anything serious about it available online? Every paper I could find was behind a paywall.
2arundelo
  • Ainslie's précis of his book Breakdown of Will
  • Yvain's Less Wrong post "Applied Picoeconomics"
1Perplexed
Muchas gracias.
0timtyler
Competing sub-goals are fine. Deep Blue wanted to promote its pawn as well as protect its king - and those aims conflict. Such conflicts don't stop utilities being assigned and moves from being made. You only have one body - and it is going to do something.
0pjeby
Then why did you even bring this up in the first place?
-3SilasBarta
Probably for the same reason you threadjacked to talk about PCT ;-)
0[anonymous]
This definition sounds dangerously vacuous to me. Of course, you can always give some consistent parametrization of (agent,choice,situation) triplets so that choice C made by agent A in situation S is always maximal among all available choices. If you call this function "utility", then it is mathematically trivial that "Agents always maximize utility." However, the usefulness of this approach is very low without additional constraints on the utility function. I'd be really curious to see some pointers to the "maximizing theory" you think survived 50 years of "strong scrutiny".
0Mass_Driver
The obvious way to combine the two systems -- tolerance and utility -- is to say that stimuli that exceed our tolerances prompt us to ask questions about how to solve a problem, and utility calculations answer those questions. This is not an original idea on my part, but I do not remember where I read about it. The one that appears to maximize utility after a brief period of analysis. For example, I want ice cream; my ice cream satisfaction index is well below tolerance. Fortunately, I am in an ice cream parlor, which carries several flavors. I will briefly reflect on which variety maximizes my utility, which in this case is mostly defined by price, taste, and nutrition, and then pick a flavor that returns a high (not necessarily optimal) value for that utility. A lack of acceptable alternatives leads to stress, which (a) broadens the range of acceptable outcomes, and (b) motivates future analysis about how to avoid similar situations in the future. For example, I want ice cream; my ice cream satisfaction index is well below tolerance; unfortunately, I am in the desert. I find this situation unpleasant, and eventually reconcile myself to the fact that my ice cream satisfaction level will remain below what was previously thought of as 'minimum' tolerance for some time, however, upon returning to civilization, I will have a lower tolerance for 'desert-related excursions' and may attempt to avoid further trips through the desert. Note that 'minimum' tolerance refers to the minimum level that will lead to casual selection of an acceptable alternative, rather than the minimum level that allows my decision system to continue functioning.
4pjeby
Actually, I'd tend to say that you are not so much maximizing the utility of your ice cream choice, as you are ensuring that your expected satisfaction with your choice is within tolerance. To put it another way, it's unlikely that you'll actually weigh price, cost, and taste, in some sort of unified scoring system. Instead, what will happen is that you'll consider options that aren't already ruled out by cached memories (e.g. you hate that flavor), and then predict whether that choice will throw any other variables out of tolerance. i.e., "this one costs too much... those nuts will give me indigestion... that's way too big for my appetite... this one would taste good, but it just doesn't seem like what I really want..." Yes, some people do search for the "best" choice in certain circumstances, and would need to exhaustively consider the options in those cases. But this is not a matter of maximizing some world-state-utility, it is simply that each choice is also being checked against a, "can I be certain I've made the best choice yet?" perception. Even when we heavily engage our logical minds in search of "optimum" solutions, this cognition is still primarily guided by these kinds of asynchronous perceptual checks, just ones like, "Is this formula really as elegant as I want it to be?" instead.
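A sketch of that kind of selection pass (a toy rendering of the description above, not anyone's actual model; the flavors, prices, and checks are all made up):

```python
# "Pick the first option that doesn't throw any monitored variable out of
# tolerance" -- a satisficing pass, as opposed to scoring every option on
# one axis and taking the argmax. All flavor attributes are invented.

FLAVORS = [
    {"name": "rum raisin", "price": 6.0, "hated": True,  "too_big": False},
    {"name": "pistachio",  "price": 9.0, "hated": False, "too_big": False},
    {"name": "vanilla",    "price": 4.0, "hated": False, "too_big": False},
]

CHECKS = [
    ("costs too much",           lambda f: f["price"] > 5.0),
    ("I hate that flavor",       lambda f: f["hated"]),
    ("too big for my appetite",  lambda f: f["too_big"]),
]

def pick(options):
    for option in options:
        violations = [msg for msg, is_bad in CHECKS if is_bad(option)]
        if not violations:
            return option["name"]     # good enough: stop looking
    return None                       # nothing within tolerance -> stress

print(pick(FLAVORS))   # -> 'vanilla'
```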
1Mass_Driver
Very interesting. There's a lot of truth in what you say. If anyone reading this can link to experiments or even experimental designs that try to figure out when people typically rely on tolerances vs. utilities, I'd greatly appreciate it. Y'know, most people probably don't, and at times I certainly do take actions based entirely on nested tolerance-satisfaction. When I'm consciously aware that I'm making a decision, though, I tend to weigh the utilities, even for a minor choice like ice cream flavor. This may be part of why I felt estranged enough from modern society in the first place to want to participate in a blog like Less Wrong. OK, so you've hit on the behavioral mechanism that helps me decide how much time I want to spend on a decision...90 seconds or so is usually the upper bound on how much time I will comfortably and casually spend on selecting an ice cream flavor. If I take too much time to decide, then my "overthinking" tolerance is exceeded and alarm bells go off; if I feel too uncertain about my decision, then my "uncertainty" tolerance is exceeded and alarm bells go off; if neither continuing to think about ice cream nor ending my thoughts about ice cream will silence both alarm bells, then I feel stress and broaden my tolerance and try to avoid the situation in the future, probably by hiring a really good psychotherapist. But that's just the criteria for how long to think...not for what to think about. While I'm thinking about ice cream, I really am trying to maximize my ice-cream-related world-state-utility. I suspect that other people, for somewhat more important decisions, e.g., what car shall I buy, behave the same way -- it seems a bit cynical to me to say that people make the decision to buy a car because they've concluded that their car-buying analysis is sufficiently elegant; they probably buy the car or walk out of the dealership when they've concluded that the action will very probably significantly improve their car-related world-stat
2pjeby
And how often, while doing this, do you invent new ice cream options in an effort to increase the utility beyond that offered by the available choices? How many new ice cream flavors have you invented, or decided to ask for mixed together? So now you say, "Ah, but it would take too long to do those things." And I say, "Yep, there goes another asynchronous prediction of an exceeded perceptual tolerance." "Okay," you say, "so, I'm a bounded utility calculator." "Really? Okay, what scoring system do you use to arrive at a combined rating on all these criteria that you're using? Do you even know what criteria you're using?" Is this utility fungible? I mean, would you eat garlic ice cream if it were free? Would you eat it if they paid you? How much would they need to pay you? The experimental data says that when it comes to making these estimates, your brain is massively subject to priming and anchoring effects -- so your "utility" being some kind of rational calculation is probably illusory to start with. I was referring to the perceptions involved in a task like computer programming, not car-buying. Part of the point is that every task has its own set of regulating perceptions. They do it when they find a car that leads to an "acceptable" satisfaction level. Part of my point about things like time, elegance, "best"-ness, etc. though, is that they ALL factor into what "acceptable" means. "Satisfaction", in other words, is a semi-prioritized measurement against tolerances on ALL car-buying-related perceptual predictions that get loaded into a person's "working memory" during the process.
2simplicio
Aside: I have partaken of the garlic ice-cream, and lo, it is good.
2wedrifid
Are you joking? I'm curious!
2simplicio
I'm not joking, either about its existence or its gustatory virtues. I'm trying to remember where the devil I had it; ah yes, these fine folks served it at Taste of Edmonton (a sort of outdoor food-fair with samples from local restaurants).
7kodos96
Theory: you don't actually enjoy garlic ice cream. You just pretend to in order to send an expensive signal that you are not a vampire.
2wedrifid
If I ever encounter it I shall be sure to have a taste!
2Mass_Driver
I'm not going to respond point for point, because my interest in whether we make decisions based on tolerances or utilities is waning; I believe that the distinction is largely one of semantics. You might possibly convince me that more than semantics are at stake, but so far your arguments have been of the wrong kind to do so. Obviously we aren't rational utility-maximizers in any straightforward early-20th-century sense; there is a large literature on heuristics and biases, and I don't dispute its validity. Still, there's no reason that I can see why it must be the case that we exclusively weigh options in terms of tolerances and feedback rather than a (flawed) approach to maximizing utility. Either procedure can be reframed, without loss, in terms of the other, or at least so it seems to me. Your fluid and persuasive and persistent rephrasing of utility in terms of tolerance does not really change my opinion here. As for ice cream flavors, I find that the ingenuity of chefs in manufacturing new ice cream flavors generally keeps pace with my ability to conceive of new flavors; I have not had to invent recipes for lychee sorbet or honey-mustard ice cream because there are already people out there trying to sell them to me. I often mix multiple flavors, syrups, and toppings. I would be glad to taste garlic ice cream if it were free, but expect that it would be unpleasant enough that I would have to be paid roughly $5 an ounce to eat it, mainly because I am counting calories and would have to cut out other foods that I enjoy more to make room for the garlic. As I've already admitted, though, I am probably not a typical example. The fact that my estimate of $5/oz is almost certainly biased, and is made with so little confidence that a better estimate of what you would have to pay me to eat it might be negative $0.50/oz to positive $30/oz, does not in any way convince me that my attempt to consult my own utility is "illusory."
1pjeby
It does not seem so to me, unless you recapitulate/encapsulate the tolerance framework into the utility function, at which point the notion of a utility function has become superfluous. The point here isn't that humans can't do utility-maximization, it's merely that we don't, unless we have made it one of our perceptual-tolerance goals. So, in weighing the two models, we have one that humans can in principle follow (but mostly don't), and one that models what we mostly do -- and that can also model the flawed way in which we actually do the other. Seems like a slam dunk to me, at least if you're looking to understand or model humans' actual preferences with the simplest possible model. The only thing I'm saying is illusory is the idea that utility is context-independent, and totally ordered without reflection. (One bit of non-"semantic" relevance here is that we don't know whether it's even possible for a superintelligence to compute your "utility" for something without actually running a calculation that amounts to simulating your consciousness! There are vast spaces in all our "utility functions" which are indeterminate until we actually do the computations to disambiguate them.)
0Vladimir_Nesov
You confuse descriptive with normative.
3Perplexed
Actually, in fairness to pjeby, I did a pretty good job of confusing them in my comment. If you look again, you will see that I was saying that standard utility maximization does a pretty good job on both the descriptive and the normative tasks. And, of course, as the whole structure of LW teaches us, utility maximization is only an approximation to the correct descriptive theory. I would claim that it is a good approximation - an approximation which keeps getting better as more and more cognitive resources are invested in any particular decision by the decision maker. But an approximation nonetheless. So, what I am saying is that pjeby criticized me on descriptive grounds because that is where it seemed I had pitched my camp.
0Vladimir_Nesov
He made a "corollary" about the normative sense of utility maximization, right after an argument about its descriptive sense. Hence, confusion.
2pjeby
The choice of how you represent a computation is not value-neutral, even if all you care about is the computation speed. The notion of a single utility function is computationally much better suited to machines than humans -- but that's because it's a much poorer representation of human values! Conversely, single utility functions are much more poorly suited to processing on humans' cognitive architecture, because our brains don't really work that way. Ergo, if you want to think about how humans will behave and what they will prefer, you are doing it suboptimally by using utility functions. You will have to think much harder to get worse answers than you would by thinking in terms of satisficing perceptual differences. (IOW, the descriptive and normative aspects are pretty thoroughly intertwined, because the thing being described is also the thing that needs to be used to do the computation!)
2SilasBarta
Also, I should clarify another point: My point was that I previously agreed with EY that the payoff matrix doesn't accurately represent how people would perceive the situation if they were in a LPDS, but that I now think that people's reaction to it could just as well be explained by assuming that they accept the canonical payoff matrix as accurate, but pursue those utilities under a constrained decision theory. And also, that their intuitions are due to that decision theory, not necessarily from valuing the outcomes differently.
1Perplexed
Ok, I think I see the distinction. I recognize that it is tempting to postulate a two-part decision theory because it seems that we have two different kinds of considerations to deal with. It seems we just can't compare ethical motivations like loyalty with selfish motivations like getting a light sentence. "It is like comparing apples and oranges!", screams our intuition. However, my intuition has a piece screaming even louder, "It is one decision, you idiot! Of course you have to bring all of the various kinds of considerations together to make the decision. Shut up and calculate - then decide."
1Oscar_Cunningham
The only thing Omega uses its omniscience for is to detect whether you're lying, so if humans are bad at lying convincingly you don't need omniscience. Also, "prefer to assume" indicates extreme irrationality: you can't be rational if you are choosing what to believe based on anything other than the evidence; see Robin Hanson's post You Are Never Entitled to Your Opinion. Of course, you probably didn't mean that; you probably just meant "say what you mean", otherwise you end up with Belief in Belief.
2Perplexed
As I have answered repeatedly on this thread, when I said "prefer to assume", I actually meant "prefer to assume". If you are interpreting that as "prefer to believe" you are not reading carefully enough. One makes (sometimes fictional) assumptions when constructing a model. One is only irrational when one imagines that a model represents reality. If it makes you happy, insert a link to some profundity by Eliezer about maps and territories at this point in my reply.
1Oscar_Cunningham
Heh, serve me right for not paying attention.
1Perplexed
If I understand the OP correctly, it is important to him that this example not include any chit-chat between the hitchhiker and Omega. So what Omega actually detects is propensity to pay, not lying. Minor point.
0SilasBarta
In the ideal situation, it's important that there be no direct communication. A realistic situation can match this ideal one if you remove the constraint of "no chit-chat" but add the difficulty of lying. Yes, this allows you (in the realistic scenario) to use an "honor hack" to make up for deficiencies in your decision theory (or utility function), but my point is that you can avoid this complication by simply having a decision theory that gives weight to SAMELs.
2Perplexed
Gives how much weight to SAMELs? Do we need to know our evolutionary (selective) history in order to perform the calculation? My off-the-cuff objections to "constraints" were expressed on another branch of this discussion. It is pretty clear that you and I have different "aesthetics" as to what counts as a "complication".
0SilasBarta
The answers determine whether you're trying to make your own decision theory reflectively consistent, or looking at someone else's. But either way, finding the exact relative weight and exact relevance of the evolutionary history is beyond the scope of the article; what's important is that SAMELs' explanatory power be used at all. Like I said in my first reply to you, the revealed preferences don't uniquely determine a utility function: if someone pays Omega in PH, then you can explain that either with a utility function that values just the survivor, or one that values the survivor and Omega. You have to look at desiderata other than UF consistency with revealed preferences. Well, you're entitled to your own aesthetics, but not your own complexity. (Okay, you are, but I wanted it to sound catchy.) As I said in footnote 2, trying to account for someone's actions by positing more terminal values (i.e. positive terms in the utility function) requires you to make strictly more assumptions than when you assume fewer, but then draw on the implications of assumptions you'd have to make anyway.
1Pavitra
When you say you "prefer to assume", do you mean:
1. you want to believe?
2. your prior generally favors such? What evidence would persuade you to change your mind?
3. you arrived at this belief through evidence? What evidence persuaded you?
4. none of the above? Please elaborate.
5. not even 4 is right -- my question is wrong? Please elaborate.
2Perplexed
4. I mean that making assumptions as I suggest leads to a much more satisfactory model of the issues being discussed here. I don't claim my viewpoint is closer to reality (though the lack of an omniscient Omega certainly ought to give me a few points for style in that contest!). I claim that my viewpoint leads to a more useful model - it makes better predictions, is more computationally tractable, is more suggestive of ways to improve human institutions, etc. All of the things you want a model to do for you.
0Pavitra
But how did you come to locate this particular model in hypothesis-space? Surely some combination of 2 and 3?
2Perplexed
I read it in a book. It is quite standard. And I'm pretty sure that the people who first invented it were driven by modeling motivations, rather than experiment. Mathematical techniques already exist to solve maximization problems. The first field which really looked at the issues in a systematic way was microeconomics - and this kind of model is the kind of thing that would occur to an economist. It all fits together into a pretty picture; most of the unrealistic aspects don't matter all that much in practice; bottom line is that it is the kind of model that gets you tenure if you are an Anglo-American econ professor. Really and truly, the motivation was almost certainly not "Is this the way it really works?". Rather it was, "What is a simple picture that captures the main features of the truth, where "main" means the aspects that I can, in principle, quantify?"
1SilasBarta
Thanks for the reasoned reply. I guess I wasn't clear, because I actually agree with a lot of what you just said! To reply to your points as best I can: Natural selection filtered us for at least one omniscience/desert situation: the decision to care for offspring (in one particular domain of attraction). Like Omega, it prevents us (though with only near-perfect rather than perfect probability) from being around in the n-th generation if we don't care about the (n+1)th generation. Also, why do you say that giving weight to SAMELs doesn't count as rational? Difficulty of lying actually counts as another example of Parfitian filtering: from the present perspective, you would prefer to be able to lie (as you would prefer having slightly more money). However, by having previously sabotaged your ability to lie, people now treat you better. "Regarding it as suboptimal to lie" is one form this "sabotage" takes, and it is part of the reason you received previous benefits. Ditto for keeping promises. But I didn't make it that easy for you -- in my version of PH, there is no direct communication; Omega only goes by your conditional behavior. If you find this unrealistic, again, it's no different than what natural selection is capable of. But my point was that the revealed preference does not reveal a unique utility function. If someone pays Omega, you can say this reveals that they like Omega, or that they don't like Omega, but view paying it as a way to benefit themselves. But at the point where you start positing that each happens-to-win decision is made in order to satisfy yet-another terminal value, your description of the situation becomes increasingly ad hoc, to the point where you have to claim that someone terminally values "keeping a promise that was never received".
0Perplexed
I find it totally unrealistic. And therefore I will totally ignore it. The only realistic scenario, and the one that natural selection tries out enough times so that it matters, is the one with an explicit spoken promise. That is how the non-omniscient driver gets the information he needs in order to make his rational decision. Sure it does ... As long as you know whether or not an explicit promise was made to pay the driver, you can easily distinguish how much the driver gets because of the promise from how much the driver gets because you like him.
0timtyler
The issues there, briefly. We want a decision theory that:
* is smart;
* we actually know how to implement efficiently with limited resources;
* allows for the possibility that its mind is physical - and that extracting the gold atoms from its own mind is bad;
* allows us to tell it what to do - as opposed to using carrot and stick;
* isn't prone to the wirehead problem;
* allows for an off switch - and other safety features.
0torekp
Or for that matter, the (globally) optimal degree of anything. For all we know, much of human morality may be an evolutionary spandrel. Perhaps, like the technological marvel of condoms, parts of morality are fitness-reducing byproducts of generally fitness-enhancing characteristics. What I do like about the post is its suggestion that paying Omega for the ride is not simply utility-maximizing behavior, but acceptance of a constraint (filter). Robert Nozick used the term "side constraint". That seems descriptively accurate for typical refusals to break promises - more so than anything that can be stated non-tortuously in goal-seeking terms. Now as a normative thesis, on the other hand, utility-maximization ... also isn't convincing. YMMV.
1Perplexed
I dislike complicating the theory by using two kinds of entities (utilities and constraints). That strikes me as going one entity "beyond necessity". Furthermore, how do we find out what the constraints are? We have "revealed preference" theory for the utilities. Do you think you can construct a "revealed constraint" algorithm? My opinion is exactly the opposite. I have rarely encountered a person who had made a promise which wouldn't be broken if the stakes were high enough. It is not a constraint. It is a (finite) disutility.
0Cyan
I recommend reading the off-site lead-in post Ungrateful Hitchhikers to see why the above points don't address some of the implications of the argument Silas is making.
4Perplexed
I've now read it. I'll set aside the fact that he is attempting to model owners of intellectual property as omniscient. I guess he is trying to slip in that old "But what if everybody did that?" argument. See, Omega-IP-owner knows that if you are an IP pirate, so is everyone else, so he won't even generate IP. So everyone dies in the desert. Well, I tend to think that Joseph Heller in "Catch-22" had the best answer to the "What if everyone did it?" gambit: "Well if everyone else did it, then I would be a damn fool to do any differently, wouldn't I?" The right parable for the argument SilasBarta is trying to make comes from biology - from gene-clone selection theory (roughly Dawkins's Selfish Gene). Suppose you are a red flower in a field of red flowers. Along comes a bee, hoping to pick up a little nectar. But what you really want is the pollen the bee carries, or maybe you want the bee to pick up your pollen. The question is whether you should actually provide nectar to the bee. She has already done what you wanted her to do. Giving her some nectar doesn't cost you very much, but it does cost something. So why pay the bee her nectar? The answer is that you should give the bee the nectar because all the other flowers in the field are your siblings - if your genes tell you to stiff the bee, then their genes tell them the same. So the bee stops at just a few red flowers, comes up dry each time, and decides to try the white flowers in the next field. Jackpot! The bee returns to the hive, and soon there are hundreds of bees busily pollinating the white flowers. And next year, no more red flowers. There, the parable works and we didn't even have to assume that the bee is omniscient. Incidentally, if we now go back and look at my analysis of the Hitchhiker you will notice that my solution works because the driver expects almost every person he encounters to have an "honor module". He doesn't know for sure that the hitchhiker's honor is still intact, but it seems like a good bet.
4SilasBarta
Okay, I think I see the source of the disconnect: Though my examples involve an omniscient being, that's not actually necessary for the points to hold. It's just looking at an extreme end. It would remain optimal to pay even if Omega were only 90% accurate, or 60%, etc. As for the decision-theoretics of "what if everyone did it?" type reasoning, there's a lot more to consider than what you've given. (A few relevant articles.) Most importantly, by making a choice, you're setting the logical output of all sufficiently similar processes, not just your own. In a world of identical beings, they would all "wake up" from any Prisoner's Dilemma situation finding that they had both defected, or both cooperated. Viewed in this light, it makes sense to cooperate, since it will mean waking up in the pure-cooperation world, even though your decision to cooperate did not literally cause the other parties to cooperate (and even though you perceive it this way). Making the situation more realistic does not change this conclusion either. Imagine you are positively, but not perfectly, correlated with the other beings; and that you go through thousands of PDs at once with different partners. In that case, you can defect, and wake up having found partners that cooperated. Maybe there are many such partners. However, from the fact that you regard it as optimal to always defect, it follows that you will wake up in a world with more defecting partners than if you had regarded it as optimal in such situations to cooperate. As before, your decision does not cause others to cooperate, but it does influence what world you wake up in. (Edit: And likewise, for the case of IP, if you defect, you will (arguably) find that you wake up in a world where you get lots of great music for free ... but a fundamentally different world, that's maybe not as pleasant as it could be...)

----------------------------------------

The bee situation you described is very similar to the parent-child problem.
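(A toy illustration of the "setting the logical output of all sufficiently similar processes" point, under the simplifying assumption that the agents are exact copies; the payoffs are just the standard PD ordering:)

```python
# Toy illustration: in a world of agents running the *same* decision procedure,
# choosing that procedure's output chooses everyone's output at once.

def make_world(decision_procedure, n_agents=2):
    """Every agent runs an identical copy of decision_procedure."""
    return [decision_procedure() for _ in range(n_agents)]

def payoff(my_move, their_move):
    # Standard PD ordering: DC > CC > DD > CD
    table = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    return table[(my_move, their_move)]

for procedure in (lambda: "C", lambda: "D"):
    moves = make_world(procedure)
    print(moves, "-> my payoff:", payoff(moves[0], moves[1]))
# ['C', 'C'] -> my payoff: 3
# ['D', 'D'] -> my payoff: 1
# You never get to "wake up" in the defect-against-cooperator world, because
# the other copy's move is not independent of yours.
```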
1Cyan
Since I read your IP example a while ago, this seemed obvious to me, but I guess it should be emphasized in the text more strongly than it currently is.
1Perplexed
But making Omega less accurate doesn't alleviate the bizarreness of Omega. The incredible thing isn't that Omega is accurate. It is that his "predictions" are influenced (acausally?) by future events. Decreasing the accuracy of the predictions just makes it harder to do the experiments that show conclusively that Omega is doing something supernatural. It doesn't make what he does any less supernatural.
2SilasBarta
Actually, Omega's prediction and your action are both the result of a common cause (at least under a model of the situation that meets the given problem constraints -- see EY's justification in the case of Newcomb's problem [1].) This doesn't require backwards-flowing causality. See also Anna Salamon's article about the multiple Newcomb's problem causal models. [1] This article. The paragraph beginning with the words "From this, I would argue, TDT follows." goes over the constraints that lead EY to posit the causal model I just gave.
-3Perplexed
With all due respect, I have to disagree. My decision, made now, is modeled to change the output of an algorithm which, in reality, spit out its result some time ago.
Universe: Make a decision.
Me: What are my choices?
Universe: You don't have any choices. Your response was determined long ago.
Me: Uh, so how am I supposed to decide now?
Universe: Just tell me which result you would prefer.
Me: The one that gives me the most utility.
Universe: Poof. Congratulations, you have made the best decision. Thank you for choosing to use TDT, the decision theory which makes use of the secret power of the quantum to make you rich.
Yeah, I'm being a bit unfair. But, as applied to human decision making, it still looks to me as though there is causation (i.e. information) running back in time from my "free will" decision today to some "critical nexus" in the past.
4SilasBarta
Are you up-to-date on the free will sequence? Now would be a good time, as it sorts out the concepts of free will, determinism, and choice. Because I never send someone off to read something as my response without summarizing what I expect them to learn: You are still making a choice, even if you are in a deterministic world. A computer program applied to Parfit's Hitchhiker makes a choice in basically the same sense that you make a choice when you're in it. With that in mind, you can actually experiment with what it's like to be Omega. Assume that you are given the source code of a program applicable to Parfit's Hitchhiker. You're allowed to review it, and you decide whether to choose "rescue" based on whether you expect that the program will output "pay" after waking up, and then it runs. In that case, the program is making a choice. You're making a perfect prediction [1] of its choice. But where's the reverse causation? [1] except to the extent the program uses randomness, in which case you figure out the probability of being paid, and whether that justifies a rescue.
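(A minimal sketch of that exercise, with illustrative names; the "prediction" is nothing more than running the given code on the hypothetical in which the rescue has already happened:)

```python
# Minimal sketch of "being Omega" over a program's source: rescue iff the
# program, run on the assumption that it has already been rescued, outputs "pay".

def paying_hitchhiker(rescued: bool) -> str:
    return "pay" if rescued else "nothing"

def cdt_hitchhiker(rescued: bool) -> str:
    # "Paying now causes no future benefit" -- so it never pays.
    return "nothing"

def omega(program) -> str:
    prediction = program(rescued=True)   # the "perfect prediction"
    return "rescue" if prediction == "pay" else "leave in desert"

for program in (paying_hitchhiker, cdt_hitchhiker):
    print(program.__name__, "->", omega(program))
# paying_hitchhiker -> rescue
# cdt_hitchhiker -> leave in desert
# Omega's decision is caused by the program's (already fixed) source code,
# not by any backwards-in-time influence from the later payment.
```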
2Perplexed
I'm pretty sure I have read all of the free will sequence. I am a compatibilist, and have been since before EY was born. I am quite happy with analyses that have something assumed free at one level (of reduction) and determined at another level. I still get a very bad feeling about Omega scenarios. My intuition tells me that there is some kind of mind projection fallacy being committed. But I can't put my finger on exactly where it is. I appreciate that the key question in any form of decision theory is how you handle the counterfactual "surgery". I like Pearl's rules for counterfactual surgery: If you are going to assume that some node is free, and to be modeled as controlled by someone's "free decision" rather than by its ordinary causal links, then the thing to do is to surgically sever the causal links as close to the decision node as possible. This modeling policy strikes me as simply common sense. My gut tells me that something is being done wrong when the surgery is pushed back "causally upstream" - to a point in time before the modeled "free decision". I understand that if we are talking about the published "decision making" source code of a robot, then the true "free decision" is actually made back there upstream in the past. And that if Omega reads the code, then he can make pretty good predictions. What I don't understand is why the problem is not expressed this way from the beginning. "A robot in the desert needs its battery charged soon. A motorist passes by, checks the model number, looks up the robot specs online, and then drives on, knowing this robot doesn't do reciprocity." A nice simple story. Maybe the robot designer should have built in reciprocity. Maybe he will design differently next time. No muss, no fuss, no paradox. I suppose there is not much point continuing to argue about it. Omega strikes me as both wrong and useless, but I am not having much luck convincing others. What I really should do is just shut up on the subject and simply
4timtyler
I don't know for sure - but perhaps a memetic analysis of paradoxes might throw light on the issue: Famous paradoxes are often the ones that cause the most confusion and discussion. Debates and arguments make for good fun and drama - and so are copied around by the participants. If you think about it that way, finding a "paradox" that is confusingly expressed may not be such a surprise. Another example would be: why does the mirror reverse left and right but not up and down? There, the wrong way of looking at the problem seems to be built into the question. (Feynman's answer).
2Lightwave
Because the point is to explain to the robot why it's not getting its battery charged?
2Perplexed
That is either profound, or it is absurd. I will have to consider it. I've always assumed that the whole point of decision theory is to give normative guidance to decision makers. But in this case, I guess we have two decision makers to consider - robot and robot designer - operating at different levels of reduction and at different times. To say nothing of any decisions that may or may not be being made by this Omega fellow. My head aches. Up to now, I have thought that we don't need to think about "meta-decision theory". Now I am not sure.
-4timtyler
Mostly we want well-behaved robots - so the moral seems to be to get the robot maker to build a better robot that has a good reputation and can make credible commitments.
0SilasBarta
Hm, that robot example would actually be a better way to go about it...
0timtyler
I think we discussed that before - if you think you can behave unpredictably and outwit Omega, then to stay in the spirit of the problem you have to imagine you have built a deterministic robot, published its source code - and it will be visited by Omega (or maybe just an expert programmer).
1Perplexed
I am not trying to outwit anyone. I bear Omega no ill will. I look forward to being visited by that personage. But I doubt that your robot problem is really "in the spirit" of the original. Because, if it is, I can't see why the original formulation still exists.
1timtyler
Well, sure - for one thing, in the scenarios here, Omega is often bearing gifts! You are supposed to treat the original formulation in the same way as the robot one, IMO. You are supposed to believe that a superbeing who knows your source code can actually exist - and that you are not being fooled or lied to. If your problem is that you doubt that premise, then it seems appropriate to get you to consider a rearranged version of the problem - where the premise is more reasonable - otherwise you can use your scepticism to avoid considering the intended problem. The robot formulation is more complex - and that is one reason for it not being the usual presentation of the problem. However, if you bear in mind the reason for many people here being interested in optimal decision theory in the first place, I hope you can see that it is a reasonable scenario to consider. FWIW, much the same goes for your analysis of the hitch-hiker problem. There your analysis is even more tempting - but you are still dodging the "spirit" of the problem.
0timtyler
You mean that he predicts future events? That is sometimes possible to do - in cases where they are reasonably well determined by the current situation.
0Pavitra
Isn't this group selectionism? Surely the much more likely explanation is that producing more or better nectar attracts the bee to you over all the other red flowers.
5Perplexed
I would prefer to call it kin selection, but some people might call it group selection. It is one of the few kinds of group selection that actually work. That wasn't part of my scenario, nor (as far as I know) biologically realistic. It is my bright red color that attracts the bee, and in this regard I am competing with my sibs. But the bee has no sense organ that can remotely detect the nectar. It has to actually land and do the pollen transfer bit before it finds out whether the nectar is really there. So, it is important that I don't provide the color before I am ready with nectar and the sexy stuff. Else I have either wasted nectar or pissed off the bee.
1Perplexed
Thx. I'll do that.

There is rarely a stable equilibrium in evolutionary games. When we look at the actual history of evolution, it is one of arms races -- every time a new form of signaling is invented, another organism figures out how to fake it. Any Parfitian filter can be passed by an organism that merely fools Omega. And such an organism will do better than one who actually pays Omega.

1RobinZ
I think this crystallizes what didn't make sense to me about his parenthood example - there is no Omega for that process, no agent to be fooled, even hypothetically.

are limited to using a decision theory that survived past social/biological Parfitian filters.

What really frustrates me about your article is that you never specify a decision theory, list of decision theories, or category of decision theories that would be likely to survive Parfitian filters.

I agree with User:Perplexed that one obvious candidate for such a decision theory is the one we seem to actually have: a decision theory that incorporates values like honor, reciprocity, and filial care into its basic utility function. Yet you repeatedly insist that this is not what is actually happening...why? I do not understand.

5SilasBarta
I thought I did: decision theories that give weight to SAMELs. For the same reason one wouldn't posit "liking Omega" as a good explanation for why someone would pay Omega in the Parfit's Hitchhiker problem.
6Mass_Driver
Sure, but this is borderline tautological -- by definition, Parfitian filters will tend to filter out decision theories that assign zero weight to SAMELs, and a SAMEL is the sort of consideration that a decision theory must incorporate in order to survive Parfitian filters. You deserve some credit for pointing out that assigning non-zero weight to SAMELs involves non-consequentialist reasoning, but I would still like to know what kind of reasoning you have in mind. "Non-zero" is a very, very broad category. Right, but as Perplexed pointed out, humans regularly encounter other humans and more or less never encounter penny-demanding, paradox-invoking superpowers. I would predict (and I suspect Perplexed would predict) that if we had evolved alongside Omegas, we would have developed a capacity to like Omegas in the same way that we have developed a capacity to like other humans.
5SilasBarta
There wouldn't be much point to further constraining the set, considering that it's only the subjunctive action that matters, not the reasoning that leads up to it. As I said on my blog, it doesn't matter whether you would decide to pay because you:
* feel honor-bound to do so;
* feel so grateful to Omega that you think it deserves what it wanted from you;
* believe you would be punished with eternal hellfire if you didn't, and dislike hellfire;
* like to transfer money to Omega-like beings, just for the heck of it;
* or for any other reason.
So if I'm going to list all the theories that win on PH-like problems, it's going to be a long list, as it includes (per the Drescher quote) everyone that behaves as if they recognized the SAMEL, including people who simply feel "grateful". To answer the question of "What did I say that's non-tautological?", it's that a decision theory that is optimal in a self-interested sense will not merely look at the future consequences (not necessarily a redundant term), but will weight the acausal consequences on par with them, bypassing the task of having to single out each intuition and elevate it to a terminal value. Edit: And, to show how this acausal weighting coincides with what we call morality, explaining why we have the category in the first place. And as I said to Perplexed, natural selection was our Omega.
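(To make "weight the acausal consequences on par with them" concrete, here is a toy calculation for the PH decision; the dollar values and the way the subjunctive term is written in are illustrative assumptions, not a claim about the right formalism:)

```python
# Toy comparison of two self-interested evaluations of "pay Omega $0.01":
# one counts only future causal consequences, the other also counts what the
# choice subjunctively entails (Omega's earlier prediction, hence the rescue).

VALUE_OF_BEING_RESCUED = 1_000_000  # surviving the desert (illustrative number)
COST_OF_PAYING = 0.01

def causal_only(action):
    # After waking up, paying can no longer cause the rescue.
    return -COST_OF_PAYING if action == "pay" else 0.0

def with_samel(action):
    # The same future cost, plus the consequence subjunctively entailed by
    # being the kind of agent that takes this action.
    rescued = (action == "pay")
    return causal_only(action) + (VALUE_OF_BEING_RESCUED if rescued else 0.0)

for evaluate in (causal_only, with_samel):
    best = max(["pay", "don't pay"], key=evaluate)
    print(evaluate.__name__, "recommends:", best)
# causal_only recommends: don't pay   (and so never wakes up at all)
# with_samel recommends: pay
```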
1Perplexed
Did you? I'm sorry I missed it. Could you explain it? I can see how NS might be thought of as a powerful psychic capable of discerning our true natures. And I can see, maybe, how NS cannot itself easily be modeled as a rational decision maker making decisions to maximize its own utility. Hence we must treat it as a fairly arbitrary agent with a known decision algorithm. Modeling NS as a variant of Omega is something I had never thought of doing before. Is there anything already written down justifying this viewpoint?
2SilasBarta
This was the point I made in the second section of the article.
2Perplexed
I read the article again, but didn't see the point being made clearly at all. Nevertheless, the point has been made right here, and I think it is an important point. I would urge anyone promoting decision theories of the UDT/TDT family to research the theory of kin selection in biological evolution - particularly the justification of "Hamilton's rule". Also, the difference between the biological ESS version of game theory and the usual "rational agent" approach. I think that it should be possible to cleanly merge these Omega-inspired ideas into standard utility maximization theory by using a theoretical construct something like Hamilton's "inclusive fitness". "Inclusive utility". I like the sound of that.
0SilasBarta
I'm referring to the point I made here: Do you think that did not make clear the similarity between Omega and natural selection?
4Perplexed
No, it did not. I see it now, but I did not see it at first. I think I understand why it was initially obvious to you but not to me. It all goes back to a famous 1964 paper in evolutionary theory by William Hamilton. His theory of kin selection. Since Darwin, it has been taken as axiomatic that parents will care for children. Of course, they do, says the Darwinian. Children are the only thing that does matter. All organisms are mortal; their only hope for genetic immortality is by way of descendants. The only reason the rabbit runs away from the fox is so it can have more children, sometime in the near future. So, as a Darwinian, I saw your attempt to justify parental care using Omega as just weird. We don't need to explain that. It is just axiomatic. Then along came Hamilton with the idea that taking care of descendants (children and grandchildren) is not the whole story. Organisms are also selected to take care of siblings, and cousins and nephews and nieces. That insight definitely was not part of standard received Darwinism. But Hamilton had the math to prove it. And, as Trivers and others pointed out, even the traditional activities of taking care of direct descendants should probably be treated as just one simple case of Hamilton's more general theory. Ok, that is the background. I hope it is now clear if I say that the reason I did not see parental care as an example of a "Parfitian filter" is exactly like the reason traditional Darwinists did not at first see parental care as just one more example supporting Hamilton's theory. They didn't get that point because they already understood parental care without having to consider this new idea.
1SilasBarta
Okay, thanks for explaining that. I didn't intend for that explanation of parental behavior to be novel (I even said it was uncontroversial), but rather, to show it as a realistic example of a Parfitian filter, which motivates the application to morality. In any case, I added a note explicitly showing the parallel between Omega and natural selection.
2Perplexed
Could you expand on this? I'm pretty sure that "liking the driver" was not part of my "solution". I suppose my "honor module" could be called "irrational" .... but, it is something that the hitchhiker is endowed with that he cannot control, no more than he can control his sex drive. And it is evolutionarily a useful thing to have. Or rather, a useful thing to have people believe you have. And people will tend to believe that, even total strangers, if natural selection has made it an observable feature of human nature.

Parenthood doesn't look like a Parfait's Hitchhiker* to me - are you mentioning it for some other reason?

* Err, Parfit's Hitchhiker. Thanks, Alicorn!

Edit: I have updated my position downthread.

5gwern
http://www.newyorker.com/arts/critics/books/2010/03/22/100322crbo_books_kolbert?currentPage=all (Glad I kept this citation; knew at some point I would run into someone claiming parenthood is a joy. Wish I had the one that said parenthood was a net gain in happiness only years/decades later after the memories have been distorted enough.)
9simplicio
The basic idea about parents and hedonic psychology, as I understand it, is that your moment-to-moment happiness is not typically very high when you have kids, but your "tell me a story" medium/long term reflective happiness may be quite high. Neither of those is privileged. Have you ever spent a day doing nothing but indulging yourself (watching movies, eating your favourite foods, relaxing)? If you're anything like me you find that even though most moments during the day were pleasant, the overall experience of the day was nasty and depressing. Basically, happiness is not an integral of moment-to-moment pleasure, so while it's naive to say parenting is an unqualified joy, it's not so bleak as to be only a good thing after the memories are distorted by time.
3a_parent
As a parent I can report that most days my day-wise maximum moment-to-moment happiness is due to some interaction with my child. But then, my child is indisputably the most lovable child on the planet. (welcome thread link not necessary)
1simplicio
Then let me just say, welcome! I'm inclined to believe you, but note that what you said doesn't quite contradict the hypothesis, which is that if you were not a parent, your day-wise maximum (from any source) would probably be higher. Also, beware of attributing more power to introspection than it deserves, especially when the waters are already muddied by the normativity of parents' love for their children. You say your happiest moments are with your child, but a graph of dopamine vs. time might (uninspiringly) show bigger spikes whenever you ate sugar. Or it might not. My point is that I'm not sure how much we should trust our own reflections on our happiness.
6a_parent
Fair point. So let me just state that as far as I can tell, the average of my DWMM2M happiness is higher than it was before my child was born, and I expect that in a counterfactual world where my spouse and I didn't want a child and consequently didn't have one, my DWMM2M happiness would not be as great as in this one. It's just that knowing what I know (including what I've learned from this site) and having been programmed by evolution to love a stupendous badass (and that stupendous badass having been equally programmed to love me back), I find that watching that s.b. unfold into a human before my eyes causes me happiness of a regularity and intensity that I personally have never experienced before.
1gwern
I would mischievously point out things like the oxytocin released after childbirth ought to make us especially wary of bias when it comes to kids. After all, there is no area of our life that evolution could be more concerned about than the kids. (Even your life is worth less than a kid or two, arguably, from its POV.)
4a_parent
That oxytocin &c. causes us to bond with and become partial to our children does not make any causally subsequent happiness less real.
-1gwern
So, then, you would wirehead? It seems to me to be the same position.
6a_parent
I wouldn't: I have preferences about the way things actually are, not just how they appear to me or what I'm experiencing at any given moment.
0gwern
So that use of oxytocin (and any other fun little biases and sticks and carrots built into us) is a 'noble lie', justified by its results? In keeping with the Niven theme, then: you would not object to being tasped by a third party solicitous of your happiness?
3a_parent
Er, what? Please draw a clearer connection between the notion of having preferences over the way things actually are and the notion that our evolutionarily constructed bias/carrot/stick system is a 'noble lie'. I'm not categorically against being tasped by a third party, but I'd want that third party to pay attention to my preferences, not merely my happiness. I'd also require the third party to be more intelligent than the most intelligent human who ever existed, and not by a small margin either.
0gwern
Alright, I'll put it another way. You seem very cavalier about having your utility-function/preferences modified without your volition. You defend a new mother's utility-function/preferences being modified by oxytocin, and in this comment you would allow a third party to tasp you and get you addicted to wireheading. When exactly are such involuntary manipulations permitted?
0a_parent
They are permitted by informed consent. (A new mother may not know in detail what oxytocin does, but would have to be singularly incurious not to have asked other mothers what it's like to become a mother.) No, I wouldn't. I required the third party to pay attention to my preferences, not just my happiness, and I've already stated my preference to not be wireheaded. I can't help but get the feeling that you have some preconceived notions about my personal views which are preventing you from reading my comments carefully. ETA: Well, no, maybe you just believe remote stimulation of the pleasure centers of one's brain to be inherently addicting, whereas I just assumed that a superintelligent being hitting my brain with remote stimulation could avoid causing addiction if it was motivated to do so.
0gwern
Well, I figure wireheading is either intrinsically addicting, by definition (what else could addiction be motivated by but pleasure?) or so close to it as to make little practical difference; there are a number of rat/mice studies which entail sticking electrodes into the pleasure center and gaining complete control, and the researchers don't mention any mouse or rat ever heroically defying the stimulus through sheer force of will, which suggests very bad things for any humans so situated.
5Perplexed
Perhaps the sheer-force-of-will meters were malfunctioning in these experiments. More seriously, let's create a series of thought experiments, all involving actions by "Friendly" AI. (FAI. Those were scare quotes. I won't use them again. You have been warned!) In each case, the question in the thought experiment is whether the FAI behavior described is prima facie evidence that the FAI has been misprogrammed.
Thought experiment #1: The FAI has been instructed to respect the autonomy of the human will, but also to try to prevent humans from hurting themselves. Therefore, in cases where humans have threatened suicide, the FAI offers the alternative of becoming a Niven wirehead. No tasping, it is strictly voluntary.
Thought experiment #2: The FAI makes the wirehead option available to all of mankind. It also makes available effective, but somewhat unpleasant, addiction treatment programs for those who have tried the wire, but now wish to quit.
Thought experiment #3: The request for addiction treatment is irrevocable; once treated, humans do not have the option of becoming rewired.
Thought experiment #4: Practicing wireheads are prohibited from contributing genetically to the future human population. At least part of the motivation of the FAI in the whole wirehead policy is eugenic. The FAI wishes to make happiness more self-actualized in human nature, and less dependent on the FAI and its supplied technologies.
Thought experiment #5: This eugenic intervention is in conflict with various other possible eugenic interventions which the FAI is contemplating. In particular, the goal of making mankind more rational seems to be in irreconcilable conflict with the goal of making mankind more happiness-self-actualized. The FAI consults the fine print of its programming and decides in favor of self-actualized happiness and against rationality.
1timtyler
Please, carry on with the scare quotes. Or maybe don't use a capital F. Apparently: "Friendly Artificial Intelligence" is a term that was coined by researcher Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence as a term of art distinct from the everyday meaning of the word "friendly". However, nobody seems to be terribly clear about exactly what it means. If you were hoping to pin that down using a consensus, it looks as though you may be out of luck.
0cousin_it
As an aside, I wonder how Eliezer's FAI is going to decide whether to use eugenics. Using the equivalent of worldwide vote doesn't look like a good idea to me.
1Perplexed
How about purely voluntary choice of 'designer babies' for your own reproduction, within guidelines set by worldwide vote? Does that sound any more like a good idea? Frankly, it doesn't seem all that scary to me, at least not as compared with other directions that the FAI might want to take us.
0cousin_it
I agree that eugenics is far from the scariest thing FAI could do. Not sure about designer babies, I don't have any gut reaction to the issue, and a serious elicitation effort will likely cause me to just make stuff up.
4a_parent
Yvain wrote:
2gwern
That's interesting. Hadn't seen that. So you are suggesting that addiction as we know it for drugs etc. is going through the 'wanting' circuit, but wireheading would go through the 'liking' circuit, and so wouldn't resemble the former?
0a_parent
Yvain's post suggested it; I just stuck it in my cache.
3pjeby
Wanting is not the same thing as pleasure. The experiments that created the popular conception of wireheading were not actually stimulating the rats' pleasure center, only the anticipation center. Consider that there are probably many things you enjoy doing when you do them, but which you are not normally motivated to do. (Classic example: I live in Florida, but almost never go to the beach.) Clearly, pleasure in the sense of enjoying something is not addictive. If you stimulated the part of my brain that enjoys the beach, it would not result in me perpetually pushing the button in order to continue having the pleasure. Frankly, I suspect that if somebody invented a way to use TMS or ultrasonics to actually stimulate the pleasure center of the brain, most people would either use them once or twice and put them on the shelf, or else just use them to relax a bit after work. Weirdly enough, most true pleasures aren't really addictive, because you need some sort of challenge to seize the interest of your dopamine reward system. Chaotic relationships, skill development (incl. videogames), gambling... these things are addictive precisely because they're not purely pleasurable, and this stimulates the same parts of the brain that get hit by wireheading and some drugs. To put it another way, the rats kept pushing the button not because it gave them pleasure, but simply because it stimulated the part of their brain that made them want to push the button more. The rats probably died feeling like they were "just about to" get to the next level in a video game, or finally get back with their estranged spouse, or some other just-out-of-reach goal, rather than in orgasmic bliss.
3simplicio
Hm... not obviously so. Any reductionist explanation of happiness from any source is going to end up mentioning hormones & chemicals in the brain, but it doesn't follow that wanting happiness (& hence wanting the attendant chemicals) = wanting to wirehead. I struggle to articulate my objection to wireheading, but it has something to do with the shallowness of pleasure that is totally non-contingent on my actions and thoughts. It is definitely not about some false dichotomy between "natural" and "artificial" happiness; after all, Nature doesn't have a clue what the difference between them is (nor do I).
0gwern
Certainly not, but we do need to understand utility functions and their modification; if we don't, then bad things might happen. For example (I steal this example from EY), a 'FAI' might decide to be Friendly by rewiring our brains to simply be really really happy no matter what, and paperclip the rest of the universe. To most people, this would be a bad outcome, and is an intuitive argument that there are good and bad kinds of happiness, and the distinctions probably have something to do with properties of the external world.
3xamdam
I'm not going to claim having children is "rational", but to judge it by the happiness of "caring for children" is about the same as judging the quality of food by the enjoyment of doing the dishes. This is very one-dimensional. Moreover, I actually think it's foolish to use any kind of logical process (such as reading this study) to make decisions in this area except for extreme circumstances such as not having enough money or having genetic diseases. The reason for my attitude is that I think besides the positive upsides to having kids (there are many, if you're lucky) there is a huge aspect of regret minimization involved; it seems to me Nature chose a stick rather than a carrot here. ETA: I should perhaps say a short-term carrot and a long-term stick.
1RobinZ
I wasn't proposing that parenthood is a joy - I may have misunderstood what SilasBarta meant by "utility function places positive weight".
0SilasBarta
"Utility function of agent A places positive weight on X" is equivalent to "A regards X as a terminal value".
3Alicorn
Now I'm trying to figure out how a parfait could drive a car.
7gwern
Deliciously.
3SilasBarta
From the Simpsons: "We would also have accepted 'snacktacularly'." (For our non-native readers: snacktacular = snack + spectacular.)
2Pavitra
Very well, thank you.
2SilasBarta
Natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the "decide to pay"/"decide to care for children" if it had the right decision theory before the "rescue"/"copy to next generation". Does it look similar now?
1RobinZ
I see the parallelism. If you ask me, though, I would say that it's not a Parfitian filter, but a prototypical example of a filter to demonstrate that the idea of a filter is valid.
0SilasBarta
What's the difference?
0RobinZ
Perhaps I am being obtuse. Let me try to articulate a third filter, and get your reasoning on whether it is Parfitian or not. As it happens, there exist certain patterns in nature which may be reliably counted upon to correlate with decision-theory-relevant properties. One example is the changing color of ripening fruit. Now, species with decision theories that attribute significance to these patterns will be more successful at propagating than those that do not, and therefore will be more widespread. This is a filter. Is it Parfitian?
2SilasBarta
No, because a self-interested agent could regard it as optimal to judge based on that pattern by only looking at causal benefits (CaMELs) to itself. In contrast, an agent could only regard it as optimal to care for offspring (to the extent we observe in parents) based on considering SAMELs, or having a utility function contorted to the point that its actions could more easily be explained by reference to SAMELs.
2RobinZ
Let me try to work this out again, from scratch. A Parfit's Hitchhiker scenario involves the following steps in order:
1. Omega examines the agent.
2. Omega offers the agent the deal.
3. The agent accepts the deal.
4. Omega gives the agent utility.
5. The agent gives Omega utility.
Parenthood breaks this chain in two ways: first, the "Omega" in step 2 is not the "Omega" in step 4, and neither of these are the "Omega" in step 5; and second, step 1 never occurs. Remember, "natural selection" isn't an agent - it's a process, like supply and demand, that necessarily happens. Consider, for contrast, division of labor. (Edit: The following scenario is malformed. See followup comment, below.) Let's say that we have Ag, the agent, and Om, the Omega, in the EEA. Om wants to hunt, but Om has children.
1. Om examines Ag and comes to the conclusion that Ag will cooperate.
2. Om asks Ag to watch Om's children while on the hunt, in exchange for a portion of the proceeds.
3. Ag agrees.
4. Ag watches Om's children while Om hunts.
5. Om returns successful, and gives Ag a share of the bounty.
Here, all five steps occur in order, Om is Om throughout and Ag is Ag throughout, and both Om and Ag gain utility (meat, in this case) by the exchange. Does that clarify our disagreement?
2SilasBarta
Somewhat, but I'm confused:
* Why does it matter that the Omegas are different? (I dispute that they are, but let's ignore that for now.) The parallel only requires functional equivalence to "whatever Omega would do", not Omega's identity persistence. (And indeed Parfit's other point was that the identity distinction is less clear than we might think.)
* Why does it matter that natural selection isn't an agent? All that's necessary is that it be an optimization process -- Omega's role in the canonical PH would be no different if it were somehow specified to "just" be an optimization process rather than an agent.
* What is the purpose of the EEA DoL example? It removes a critical aspect of PH and Parfitian filters -- that optimality requires recognition of SAMELs. Here, if Ag doesn't watch the children, Om sees this and can withhold the share of the bounty. If Ag could only consider CaMELs (and couldn't have anything in its utility function that sneaks in recognition of SAMELs), Ag would still see why it should care for the children.
* (Wow, that's a lot of abbreviations...)
0RobinZ
Taking your objections out of order: First: yes, I have the scenario wrong - correct would be to switch Ag and Om, and have:
1. Om examines Ag and comes to the conclusion that Ag will cooperate.
2. Om offers to watch Ag's children while Ag hunts, in exchange for a portion of the proceeds.
3. Ag agrees.
4. Om watches Ag's children while Ag hunts.
5. Ag returns successful, and gives Om a share of the bounty.
In this case, Om has already given Ag utility - the ability to hunt - on the expectation that Ag will give up utility - meat - at a later time. I will edit in a note indicating the erroneous formulation in the original comment. Second: what we are comparing are cases where an agent gives no utility to cooperating with Omega, but uses a decision theory that does so because it boosts the agent's utility (e.g. the prototypical case) and cases where the agent gives positive utility to cooperating with Omega (e.g. if the agent and Omega were the same person and the net change is sufficiently positive). What we need to do to determine if the isomorphism with Parfit's hitchhiker is sufficient is to identify a case where the agent's actions will differ. It seems to me that in the latter case, the agent will give utility to Omega even if Omega never gives utility to the agent. Parfit's hitchhikers do not give money to Nomega, the predictor agent who wasn't at the scene and never gave them a ride - they only give money when the SAMEL is present. Therefore: if a parent is willing to make sacrifices when their parent didn't, the Parfit parallel is poor and Theory 2a is the better fit. Agreed?
0SilasBarta
I'm not sure I understand all the steps in your reasoning, but I think I can start by responding to your conclusion: As best I can understand you, yes. If there's, e.g., a species that does not care for its young, and then one day one of them does, that action would not be best explained by its recognition (or acting as if it had recognition) of a SAMEL (because there was no "AM") -- it would have to be chalked up to some random change in its psychology. However -- and this is the important part -- by making that choice, and passing the genes partly responsible for that choice into the next generation, it opens up the possibility of exploring a new part of the "organism design space": the part which is improved by modifications predicated on some period of parent-child care [1]. If that change, and further moves into that attractor [2], improve fitness, then future generations will care for their children, with the same psychological impetus as the first one. They feel as if they just care about their children, not that they have to act on a SAMEL. However, 2b remains a superior explanation because it makes fewer assumptions (except for the organism to first have the mutation, which is part of the filter); 2b needn't assume that the welfare of the child is a terminal value. And note that the combined phenomena do produce functional equivalence to recognition of a SAMEL. If the care-for-children mode enhances fitness, then it is correct to say, "If the organism in the n-th generation after mutation did not regard it as optimal to care for the (n+1)th generation, it would not be here", and it is correct to say that that phenomenon is responsible for the organism's decision (such as it is a decision) to care for its offspring. Given these factors, an organism that chooses to care for its offspring is acting equivalently to one motivated by the SAMEL. Thus, 2b can account for the same behavior with fewer assumptions. As for the EEA DoL arrangement (if the above remar
0RobinZ
This may not be my true objection (I think it is abundantly clear at this point that I am not adept at identifying my true objections), but I just don't understand your objection to 2a. As far as I can tell, it boils down to "never assume that an agent has terms in its utility function for other agents", but I'm not assuming - there is an evolutionary advantage to having a term in your utility function for your children. By the optimization criteria of evolution, the only reason not to support a child is if you are convinced that the child is either not related or an evolutionary dead-end (at which point it becomes "no child of mine" or some such). In contrast, the Parfit-hitchhiker mechanism involves upholding contracts, none of which your child offered, and therefore seems an entirely unrelated mechanism at the level of the individual organism. (Regarding my hypothetical, I was merely trying to demonstrate that I understood the nature of the hypothetical - it has no further significance.)
1SilasBarta
No, my objection is: "never assume more terminal values (terms in the UF) than necessary", and I've shown how you can get away with not assuming that parents terminally value their children -- just as a theoretical exercise, of course, and not to deny the genuine heartfelt love that parents have for their children.

There is an evolutionary advantage to having a cognitive system that outputs the action "care for children even at cost to self". At a psychological level, this is accomplished by the feelings of "caring" and "love". But is that love due to a utility function weighting, or to a decision theory that (acts as if it recognizes) SAMELs? The mere fact of the psychology, and of the child-favoring acts, does not settle this. (Recall the problem of how an ordering of outcomes can be recast as any combination of utility weightings and probabilities.) You can account for the psychological phenomenon more parsimoniously [1] by assuming the action results from choice-machinery that implicitly recognizes SAMELs -- and on top of that, get a bonus explanation of why a class of reasoning (moral reasoning) feels different -- it's the kind that mustn't be convinced by the lack of a causal benefit to the self.

My version is precisely written to exclude contracts -- the ideal PH inferences still go through, and so natural selection (which I argue is a PF) is sufficiently similar. If parents don't "attach" themselves to a child-favoring decision theory, they simply don't get "rescued" into the n-th generation of that gene's existence. No need to find an isomorphism to a contract.

[1] Holy Shi-ite -- that's three p-words with a different initial consonant sound!
0RobinZ
Why does the cognitive system that identifies SAMELs fire when you have a child? The situation is not visibly similar to that of Parfit's hitchhiker. Unless you are suggesting that parenthood simply activates the same precommitment mechanism that the decision theory uses when Parfit-hitchhiking...?
0SilasBarta
I don't understand the point of these questions. You're stuck with the same explanatory difficulties with the opposite theory: why does the cognitive system that identifies _changes in utility function_ fire when you have a child? Does parenthood activate the same terminal values that a PH survivor does upon waking up?
2Perplexed
A utility function need not change when a child is born. After all, a utility function is a mapping from states-of-the-world to utilities, and the birth of a child is merely a change in the state of the world. Nonetheless, utility mapping functions can change as a result of information which doesn't betoken a change in the state of the world, but merely a change in your understanding of your own desires. For example, your first taste of garlic ice cream. Or, more to the point, new parents sometimes report dramatic changes in outlook simply from observation of their baby's first smile. The world has not changed, but somehow your place within it has.
0SilasBarta
See sibling reply to Robin. How are you showing an explanatory advantage to attributing the behavior to utility functions rather than SAMEL recognition? (Or what were you otherwise trying to establish?)
2Perplexed
I wasn't trying to show an advantage. You asked a question about my preferred explanatory framework. I interpreted the question to be something like, "How does the birth of a child trigger a particular special cognitive function?". My answer was that it doesn't. The birth of a baby is a change in the state of the world, and machinery for this (Bayesian updating) is already built in.

If you insist that I show an explanatory advantage, I would make two (not intended to be very convincing!) points:

* "Occam's razor" suggests that I shouldn't introduce entities (SAMELs, in this case) that I don't really need.
* "Perplexed's tweezers" suggests that I shouldn't put too much trust in explanations (SAMELs, in this case) that I don't really understand.
2SilasBarta
Okay, but if your preferred explanatory framework is strictly worse per the MML formalism (equivalent to the rationalist's Occam's razor), then that would be a reason to prefer my explanation. You claim that my explanation fails by this metric. However, the two theories we're deciding between (2a and 2b) don't explicitly involve SAMELs in either case. [1] The only entity in 2b that is not in 2a is the claim that parents are limited to implementing decision theories capable of surviving natural selection. But as I said in footnote 2, this doesn't penalize it under Occam's Razor, because natural selection must be assumed in both cases, so there's no net penalty for 2b -- implications of existing assumptions do not count toward the complexity/length of your explanation (for reasons I can explain in greater depth if you wish).

But to be honest, I'm losing track of the point being established by your objections (for which I apologize), so I'd appreciate it if you could (for my sake) explicitly put them back in the context of the article and this exchange.

[1] Before you glare in frustration at my apparent sudden attempt to throw SAMELs under the bus: the thesis of the article does involve SAMELs, but at that point, it's either explaining more phenomena (i.e. the psychology of moral intuitions), or showing the equivalence to acting on SAMELs.
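To spell the comparison out schematically (this is only the reasoning above restated; A stands for the natural-selection assumptions shared by both theories, and D for the observed child-favoring behavior):

```latex
\[ \mathrm{MML}(H) = L(H) + L(D \mid H) \]
\[
\mathrm{MML}(H_{2a}) - \mathrm{MML}(H_{2b})
  = \underbrace{L(D \mid H_{2a}) - L(D \mid H_{2b})}_{=\,0\ \text{(same predicted behavior)}}
  + \underbrace{L(A) - L(A)}_{=\,0}
  + L(\text{extra UF term})
  - \underbrace{L(\text{``DTs limited to those surviving selection''} \mid A)}_{\approx\,0}
\]
```

The last term is approximately zero because it is implied by A -- which is exactly the "implications of existing assumptions are free" point.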
1Perplexed
Ok, I accept your argument that Occam is neutral between you and me. SAMELs aren't involved at decision time in 2b, just as "inclusive fitness" and "Hamilton's rule" aren't involved at decision time in 2a. I will point out, though, since we are looking only at the present, that the utility function in 2a can, in principle, be examined using "revealed preference", whereas your purely selfish, child-neutral utility function is a theoretical construct which would be hard to measure, even in principle.

Without Occam, I have to fall back on my second objection, the one I facetiously named "Perplexed's tweezers". I simply don't understand your theory well enough to criticize it. Apparently your decision theory (like my offspring-inclusive utility function) is installed by natural selection. Ok, but what is the decision theory you end up with? I claim that my evolution-installed decision theory is just garden-variety utility maximization. What is your evolution-installed decision theory? If you made this clear already and I failed to pick up on it, I apologize.
0SilasBarta
Hold on -- that's not what I said. I said that it was neutral on the issue of including "they can only use decision theories that could survive natural selection". I claim it is not neutral on the supposition of additional terms in the utility function, as 2a does.

It doesn't matter. They (inclusive fitness and Hamilton's rule) have to be assumed (or implied by something that has to be assumed) anyway, because we're dealing with people, so they'll add the same complexity to both explanations.

As I've explained to you several times, looking at actions does not imply a unique utility function, so you can't claim that you've measured it just by looking at their actions. The utility functions "I care about myself and my child" and "I care about myself" can produce the same actions, as I've demonstrated, because certain (biologically plausible) decision theories can output the action "care for child at expense of self", even in the absence of a causal benefit to the self. It is a class of DTs, the kind that count acausal benefits (SAMELs) on par with causal ones. The SAMELs need not be consciously recognized as such but they do need to feel different to motivate the behavior.

However, I could be more helpful if you asked specific questions about specific passages. Previously, you claimed that after reading it, you didn't see how natural selection is like Omega, even after I pointed to the passage. That made me a sad panda. You more than made up for it with the Parfit's robot idea, though :-)
1Perplexed
We are clearly talking past each other, and it does not seem to me that it would be productive to continue. For example, I have repeatedly responded to your claim (not explanation!) that the 2a utility function is not susceptible to "revealed preference". You have never acknowledged my response, but continue claiming that you have explained it to me.

As for your class of DTs "that count acausal benefits (SAMELs) on par with causal ones": I have to interpret that as a policy of using some other kind of "surgery" for counterfactuals -- something other than the standard kind of surgery used in causal decision theory (CDT). So the obvious questions become, "So, what kind of surgery do you advocate?" and "How do you know when to use this strange surgery rather than the one Pearl suggests?".

Your sentence that SAMELs "need to feel different to motivate the behavior" may mean something to you, but I can't even tell who is doing the feeling, what that feeling is different from, and what (or who) is doing the motivating.

The Parfit's robot wasn't my idea. It was timtyler's. Maybe you will have better luck explaining your ideas to him. He was patient enough to explain the robot to me twice.
0timtyler
Too many SAMELs and CAMELs for me. I didn't even get as far as seeing the analogy between natural selection and Omega. However, unlike you, I thought: this doesn't sound very interesting; I can't be bothered. Retrospectively, I do now get the bit in the summary - if that is what it is all about. I could probably weigh in on how parental care works in mammals - but without absorbing all the associated context, I doubt I would be contributing positively. Thanks for the robot credit. It doesn't feel like my idea either. After some hanging around Yudkowsky, it soon becomes clear that most of the material about decision theory here is partly in the context of a decision theory for machine intelligence - so substituting in a machine seems very natural. Anyway, we don't want you on too different a page - even if it does produce nice stories about the motivations of stranded hitch-hikers.
0SilasBarta
You have certainly posted responses; I don't recall you saying anything responsive, though, i.e. something that would establish that seeing someone's actions suffices to identify a unique (enough) utility function, at least in this case -- and I can show you more of the difficulties of such a task, if you would like. But yes, please point me to where you think you've said something responsive, as I just defined responsive.

Nothing I've described requires doing anything differently than Pearl's kind of counterfactual surgery. For example, see EY's exposition of Timeless Decision Theory, which does standard CF surgery but differs in how it calculates probabilities on results given a particular surgery, for purposes of calculating expected utility.

And that's really the crux of it: The trick in TDT -- and explaining human behavior with SAMELs -- is that you can keep the same (genuinely) terminal values, but have a better chance of achieving them if you change the probability weighting, and change it in a way that assigns more expected utility to SAMEL-based actions. Those probabilities are more like beliefs than values. And as another poster demonstrated a while back, you can take any agent's decision ranking, and claim it was from various different value/belief combinations. For example, if someone reaches for an apple instead of reaching for an orange, you can say, consistently with this observation, that:

* they prefer the apple to the orange, and believe they have 100% chance of getting what they reach for (pure value-based decision)
* are indifferent between the apple and the orange, but believe that they have a higher chance of getting the reached-for fruit by reaching for the apple (pure belief-based decision)
* or anything in between.

TDT, then, doesn't need to posit additional values (like "honor") -- it just changes its beliefs about the probabilities. Agents acting on SAMELs do the same thing, and I claim this leads to a simpler description of behavi…
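A toy version of that apple/orange underdetermination, with made-up numbers:

```python
# Two (value, belief) assignments that both rank "reach for the apple" first,
# illustrating that the observed choice alone doesn't pin down a unique
# utility function. All numbers are arbitrary.

def expected_utility(p_success: float, u_fruit: float) -> float:
    return p_success * u_fruit

# Pure value-based story: success is certain, the apple is simply preferred.
value_story = {
    "apple":  expected_utility(p_success=1.0, u_fruit=2.0),
    "orange": expected_utility(p_success=1.0, u_fruit=1.0),
}

# Pure belief-based story: fruits equally valued, apple judged easier to get.
belief_story = {
    "apple":  expected_utility(p_success=1.0, u_fruit=1.0),
    "orange": expected_utility(p_success=0.5, u_fruit=1.0),
}

for name, story in (("value-based", value_story), ("belief-based", belief_story)):
    print(name, "story chooses:", max(story, key=story.get))   # both choose "apple"
```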
1Perplexed
In my haste to shut this conversation down, I have written a falsehood. Allow me to correct it, and then, please, let us stop. Regarding "revealed preference", you ask where I previously responded to you. Here it is. It is not nearly as complete a response as I had remembered. In any case, as I read through what we each have written regarding "revealed preference", I find that not only do we disagree as to what the phrase means, I suspect we also are both wrong. This "revealed preference" dispute is such a mess that I really don't want to continue it. I apologize for claiming I had corrected you, when actually I had only counter-asserted.
-2thomblake
I like the tweezers, but would like a better name for it.
0RobinZ
As Perplexed said, there is no requirement that the utility function change - and, in fact, no reason to believe that it does not already have positive terms for children before reproduction. A lot of people report wanting children. I'm asking these questions because we clearly have not established agreement, and I want to determine why. I assume that either we are using conflicting data, applying incompatible rules of inference, or simply misreading each other's writing. It was this last possibility I was probing with that last question.
4SilasBarta
Okay, but by the same token, there's no need to assume recognition of the SAMEL (that favors producing and caring for children) changes. (And if it matters, a lot of people report not wanting children, but then wanting to care for their children upon involuntary parenthood.) None of the things you're pointing out seem to differentiate the utility function-term explanation from the SAMEL-recognition explanation.
0RobinZ
That's a test that favors the SAMEL explanation, I think.
0SilasBarta
So you're agreeing with me in this one respect? (I don't mean to sound confrontational, I just want to make sure you didn't reverse something by accident.)
0RobinZ
Right - here's what I've got. The pattern of "not wanting children, but then wanting to spend resources to care for the children" is better explained by a SAMEL pattern than by a utility function pattern. The fact of people wanting children can be sufficiently explained by the reasons people give for wanting children: a desire for a legacy, an expected sense of fulfillment from parenthood, etcetera. Finally, the fact that this is a SAMEL pattern doesn't mean that the adaptation works on SAMEL patterns - the ability of Parfit's hitchhiker to precommit to paying Omega is a separate adaptation from the childrearing instinct.
0SilasBarta
I'm still not following:

* How does "not wanting children, but then wanting to spend resources to care for the children" involve SAMELs in a way that wanting to have children does not?
* Yes, you can explain people's pursuit of goals by the reasons they give. The problem is that this isn't the best explanation. As you keep adding new terminal values to explain the actions, you complicate the explanation. If you can do without these -- and I think I've shown you can -- you're left with a superior explanation.
* The fact that it feels like "pursuing a legacy" on the inside does not favor that being the superior explanation. Remember, the desire to pay Omega in PH feels like gratefulness on the inside -- as though Omega has some otherwise-inherent deservedness of receiving the payment. But in both cases, "If the survivor did not regard it as optimal to pay, the survivor would not be here", and the SAMEL explanation only requires that humans have choice-machinery that favors acting on these (already given) facts.
* There is no pre-commitment on the part of human hitchhikers in the sense that they are inextricably bound to pay -- they are still making a choice, even though selection has been applied on the set of hitchhikers. It is not their precommitment that leads them to pay, but their choice-machinery's having alerted them to the optimality of doing so -- which feels like gratefulness.
0RobinZ
* My tongue detects sweetness, not healthfulness, even though the process that created the adaptation was designed to recognize healthfulness.
* Not everyone wants children, as you said. It is not evolutionarily necessary for people to want children - only for them to want sex. That anyone wants children might be explained by the reasons they actually give for wanting children.
* See above.
* I am not invested in the word "precommitment" - we are describing the same behavior on the part of the hitchhiker.
2SilasBarta
This is the crux of the matter - desire for energy-dense consumables was selected for because quickly gathering energy was adaptive. It feels like sweetness from the inside: this is the region of qualiaspace that corresponds to feeling a motivation to act on that means-ends link. It does not feel like quickly gathering energy. Similarly, being motivated by SAMELs needn't feel like such a recognition -- it feels like an "otherwise-ungrounded inherent deservedness of others of being treated well" (or badly).

Okay, reviewing your point, I have to partially agree -- general desire to act on SAMELs need not be (and probably isn't) the same choice machinery that motivates specific child-bearing acts.

The purpose of the situation was to show how you can account for behavior without complicating the utility function. Rather than additionally positing that someone terminally values their children, we can say that they are self-interested, but that only certain decision theories ever make it to the next generation. In both cases, we have to rely on "if they did not regard it as optimal to care for their children (and given genetic psychological continuity), they would not be there", but only in 2a must we elevate this caring to a terminal value for purposes of explanation.
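As a schematic contrast between the two theories -- the payoffs and the "filter" step below are invented stand-ins, not a real evolutionary model:

```python
# Theory 2a vs. Theory 2b producing the same observed action.
import random

CARE    = {"self": -1.0, "child": +3.0}
NEGLECT = {"self":  0.0, "child": -3.0}

def inclusive_utility(outcome):          # 2a: extra term for the child
    return outcome["self"] + outcome["child"]

def selfish_utility(outcome):            # 2b: no term for the child at all
    return outcome["self"]

# 2a: a garden-variety maximizer with the child term picks CARE directly.
choice_2a = max((CARE, NEGLECT), key=inclusive_utility)

# 2b: the selfish utility by itself ranks NEGLECT higher...
assert selfish_utility(NEGLECT) > selfish_utility(CARE)
# ...but decision theories that output NEGLECT don't propagate, so any
# later-generation agent we actually observe runs a CARE-outputting one.
population = [lambda: CARE, lambda: NEGLECT] * 50
survivors  = [dt for dt in population if dt() is CARE]    # the Parfitian filter
choice_2b  = random.choice(survivors)()

print(choice_2a is CARE, choice_2b is CARE)   # True True: same observed behavior
```

Both models predict the same child-favoring act; only 2a pays for it with an extra term in the utility function.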
3Leonhart
This is good, but is still hiding some confusion (in me, anyway). Why say that it doesn't feel like quickly gathering energy? What would feel like quickly gathering energy? I'm now imagining a sucking-in-lines qualia (warning: tvtropes) lurking in a region of qualia-space only accessible to sentient energy weaponry. And I'm kinda jealous.
4SilasBarta
Getting a nutrient feed via IV doesn't feel like sweetness, but does involve quickly getting energy. If you had a cognitive system that directly recognized any gain in energy, and credited it as good, for that reason, then you would have a quale that is best described as "feeling like gathering energy". But that requires a whole different architecture.
0RobinZ
It sounds like we agree.
-2SilasBarta
Including about my claim that it provides a more parsimonious explanation of parents' actions not to include concern for their children as a terminal value?
0RobinZ
Yes - if you expected concern for children to be a terminal value, you would not expect to see adults of breeding age who do not want children. (That is the specific evidence that convinced me.) I don't think I've quite worked out your position on Parfitian hitchhiking, but I don't see any difference between what you claim and what I claim regarding parenthood.
0RobinZ
I spoke correctly - I didn't express agreement on the broader issue because I don't want to update too hastily. I'm still thinking.
0pjeby
You should put that in the article. (True, it's a causal iteration rather than an acausal prediction. But it'll still make the article clearer.)
0SilasBarta
Thanks for the suggestion, I've added it.
1PhilGoetz
Now I want a parfait.
1Snowyowl
Consider this situation: You are given the choice between personally receiving a small prize or giving your children a much larger prize. Whatever you choose, it is possible that your children will one day face a similar choice. Being your children, they resemble you in many ways and are more likely than not to choose similarly to you. It's not quite a Parfit's Hitchhiker even from your children's perspective - the consequences of their choice are in the past, not the future - but it's close, and the result is the same.
0RobinZ
I see what you mean, but I think the parallel is pretty weak.

This post has its flaws, as has been pointed out, but to add the required nuance would make a book (or at least a LW sequence) out of what is currently a good and provocative post.

The nuances I think are vital:

  1. The explicit consequence-representing/world-modeling parts of our minds exist alongside direct "function calls" like reflexes.
  2. The decision-theory framework only imperfectly describes even that CR/WM part of our minds.
  3. It's quite possible (and indeed virtually certain) that changing the payoff matrix has been done by evolution in certain...
2SilasBarta
Thanks for the comment! I had actually struggled to keep it from ballooning, and ended up leaving off the part where I applied it to Haidt's work, but then decided I wasn't familiar enough with it to do it justice. Re 3 and 4, I thought I made clear that SAMELs are not consciously recognized as such by humans, e.g. in the Drescher quote, and when I mentioned that careful self-reflection came late in evolutionary history. However, they do feel different to act on.

Theory 2a: Parents have a utility function that places positive weight on both themselves and their children.

Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.

Hmm, so basically evolution started out making creatures with a simple selfish utility function and a straightforward causal decision theory. One improvement would have been to make the decision theory "better" (more like Timeless Decisi...

Something just occurred to me - the conclusions you reach in this post and in the version of the post on your blog seem to contradict each other. If moral intuitions really are "the set of intuitions that were selected for because they saw optimality in the absence of a causal link", and if, as you claim on your blog, Parfit's Hitchhiker is a useful model for intellectual property, then why is it that an entire generation of kids... TWO generations really, have grown up now with nearly unanimous moral intuitions telling them there's nothing wrong with "stealing" IP?

2SilasBarta
I'm not entirely clear on where the contradiction is, but I will say that there are both genetic and memetic Parfitian filters, and prevailing memes about IP have kept us from "branching off" into a world we might regard as better, and have thereby been Parfit-filtered out. I don't claim that the content of moral judgments will be the same, of course, just that similar feelings will exist when one makes them, because (for historical reasons) they impose a very different constraint on our reasoning than the kind that purely considers CaMELs. But again, I would need a better explanation of the contradiction to give a better reply.
0kodos96
What I was trying to say was that if optimal behavior on PH-style problems is selected for, and if IP is indeed analogous to PH (with respecting IP rights equivalent to paying Omega), then why hasn't evolution resulted in everyone having the moral intuition to respect IP rights? I suppose the obvious retort is that evolution is slow and hasn't had time to catch up with internet-era IP issues... although I'm not really clear here on whether we're talking about genetic or memetic evolution... I guess I'm not really sure what I'm trying to say.... I think I'm just gonna have to revert to my original reaction, which is that PH is just not an effective intuition pump for IP, hence the confusion.

it means it is far too strict to require that our decisions all cause a future benefit; we need to count acausal “consequences” (SAMELs) on par with causal ones (CaMELs)

OK, so this may be a completely stupid question, as I'm a total newbie to decision theoryish issues... but couldn't you work non-zero weighting of SAMELs into a decision theory, without abandoning consequentialism, by reformulating "causality" in an MWIish, anthropic kind of way in which you say that an action is causally linked to a consequence if it increases the number of w...

1SilasBarta
I think my (at the time misdirected) comment here is most responsive to your question. In short, causality has a narrow, technical definition here, which corresponds with wide (but not universal) usage. I see nothing wrong with regarding SAMELs as consequences, or saying that e.g. one-boxing causes the sealed box to be filled, but this is incorrect for standard game-theoretic usage of the terms.
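A toy expected-utility calculation of the kind being contrasted -- the payoffs are the standard Newcomb numbers, and the 99% predictor accuracy is just an assumed parameter:

```python
# Same counterfactual surgery, different conditional probabilities.
BIG, SMALL = 1_000_000, 1_000

def expected_values(p_full_if_onebox: float, p_full_if_twobox: float):
    one_box = p_full_if_onebox * BIG
    two_box = p_full_if_twobox * BIG + SMALL
    return one_box, two_box

# CDT-ish beliefs: the box's contents are independent of the choice.
print(expected_values(p_full_if_onebox=0.5, p_full_if_twobox=0.5))    # two-boxing wins
# TDT-ish beliefs: the prediction tracks the decision (assumed 99% accurate).
print(expected_values(p_full_if_onebox=0.99, p_full_if_twobox=0.01))  # one-boxing wins
```

Whether you call the second row "one-boxing causes the box to be filled" or just "conditioning on the prediction" is the terminological issue above; the arithmetic is the same either way.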

Sure. Morals = the part of our utility function that benefits our genes more than us. But is this telling us anything we didn't know since reading The Selfish Gene? Or any problems with standard decision theory? There's no need to invoke Omega, or a new decision theory. Instead of recognizing that you can use standard decision theory, but measure utility as gene copies rather than as a human carrier's qualia, you seem to be trying to find a decision theory for the human that will implement the gene's utility function.

7SilasBarta
What I added to decision theory beyond The Selfish Gene's arguments:

* An explanation for the psychological mechanisms of moral intuitions -- i.e. why reasoning about moral issues feels different, and why we have such a category.
* Why you shouldn't take existing and ideal utility functions as being peppered with numerous terminal values (like "honor" and "gratefulness" and "non-backstabbing"), but rather, can view them as having few terminal values, but attached to agents who pursue them by acting on SAMELs.

Thus you have a simpler explanation for existing utility functions, and a simpler constraint to satisfy when identifying your own (or forming your own decision theory, given what you regard as your values).

Thinking of it as being limited to using a specific decision theory is incorrect. Instead, it should simply be seen as using a specific decision theory, or one of many. It's not like evolution and such are here right now, guiding your actions. Evolution acts through our genes, which program us to do a specific thing.

Why do the richest people on Earth spend so much time and money helping out the poorest? Is that what a rational agent with a Parfit-winning decision theory would do?

3taw
It's the rapidly diminishing utility of money. Every extra million is just worth less and less. At some point all these millions are worth so little that you might as well give them away, if you place even very low value on others' well-being. What else could Buffett spend all his money on now?
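A quick illustration of how fast that falls off, assuming -- purely for the sake of the numbers -- a logarithmic utility of wealth:

```python
# Marginal utility of one extra million under log utility of wealth.
# The functional form is an assumption chosen only to illustrate the point.
import math

def marginal_utility_of_extra_million(wealth: float) -> float:
    return math.log(wealth + 1_000_000) - math.log(wealth)

for wealth in (1e6, 1e8, 1e10, 5e10):   # $1M, $100M, $10B, $50B
    print(f"${wealth:>14,.0f}: {marginal_utility_of_extra_million(wealth):.6f}")
```

The increments shrink by orders of magnitude as wealth grows, so even a tiny weight on others' well-being eventually dominates.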
2wnoise
In general, they don't. The ones that do (Gates, Buffett, etc.) get a fair bit of attention for it.
0[anonymous]

Theory 2b: Parents have a utility function that places positive weight on only themselves (!!!); however, they are limited to implementing decision theories capable of surviving natural selection.

I don't think any decision theory that has been proposed by anyone so far has this property. You might want to either change the example, or explain what you're talking about here...

-3taw

Strongly downvoted for using Omega to rationalize your pre-existing political positions.

You use the methods of rationality to fail at the substance of rationality.

4SilasBarta
What simplicio and kodos said.

Yes, you may have a point that I shouldn't have linked the blog post at all, nor implied it was a pre-requisite by calling this a follow-up. I did that because of a) the similarity of many points, and b) encouragement from others who had seen it. However, that blog post is not a true pre-requisite; this article does not have any ambiguities that are only cleared up by going to my blog.

Second of all, if you think my position on IP was somehow pre-existing and that I was just rationalizing something I'd believe no matter what ... then wow, you have a lot of learning to do. I used to be just as anti-IP as any other Stephan Kinsella acolyte, and I can cite you posts from a period of at least 5 years before many gnawing objections kept coming to mind. The blog post explores the intuitions that led me to find the common anti-IP absolutism so troubling -- specifically, its similarity to the reasoning of "hey, I've already been rescued, why should I pay?"

I would be glad to take you through ten years of my internet posts to show that IP isn't something I argue for "no matter what", but something that has inspired tremendous soul-searching. And you'll find that most of my postings aren't even arguing for one side or the other, but pointing out problems in specific arguments, without reference to whether the conclusion is nonetheless valid. The merit of "Hey, I already know your idea now, why should I give you anything?" is something people can debate whether or not they support IP.

Could you instead share your thoughts on the content I've posted specifically on LW?
4simplicio
(a) In this post? No. His blog, maybe... so why downvote this post? (b) I would call that a concern or suspicion, not a certainty. Moreover Silas appears to be mostly arguing descriptively for an etiology of moral intuitions, so it's not too surprising that his moral intuitions remain relatively unchallenged. I can give you my meta-ethic as well; shockingly, developing it did not result in my deciding that torturing babies is the summum bonum of existence.
5kodos96
I hate intellectual property as much as the next internet peanut gallery member (as you can see if you click through to Silas' personal blog post)... but even I would have to say that comparing it to torturing babies is a bit of a stretch. .......a little bit of a stretch anyway ;)
0simplicio
Heh, I got carried away. I was not making a comparison to IP (about which I'm ambivalent), just pointing out that developing moral theories is one case where we want the theories to mostly fit the intuitions, so changing your mind is less expected.
2taw
This moral theory seems designed just for this kind of rationalization.
2simplicio
That is a legitimate concern. Can you think of two mutually contradictory moral positions that could both be plausibly argued using this approach?

/sets timer for 2 min

The best I can do is think of a case where two Parfitian imperatives clash: i.e., a conflict between two of these counterfactual imperatives. Not a very strong objection at all, but then I am sympathetic to the theory evinced here & do not trust my ability to see its flaws.
6Perplexed
Any system that permits the derivation of "ought" from "is" is susceptible to having people with differing experiences regarding "is" come to different conclusions regarding "ought".
4taw
Here's the original form. But it could just as easily be:

* Omega represents the people who are making their work freely available, conditional on whether we will keep derivative works likewise freely available
* The decision to rescue us is the decision to produce those intellectual works
* The decision to pay the $5 represents the decision to make your work freely available, "even though" you could just as well stick a copyright on it and make some money

Can it get more opposite? Full rejection of IP, with a form identical to the supportive argument. You can rationalize anything this way.
2kodos96
Interesting argument, but it should probably have been made on Silas' blog, not here.
2taw
I'm not arguing for or against copyrights on this basis; it was just a convenient example of Parfitian reasoning that I could conveniently twist.
0SilasBarta
Could you instead show us an example of how to twist the Parfitian reasoning that's actually used in the article on this site?
0SilasBarta
Except that you can still make free works available, conditional on derivative works being freely available as well, even in an IP system, whereas you can't make gated works in any sense without IP. Producing works that are good enough to make money under copyright (but released without it) involves a non-trivial cost, unlike the cost of not using a work that only exists because of a creator (and given that the creator was the reason for its existence). And you've broken the role of SAMELs: the want-to-profit creators aren't subjunctively self-defeating their ability to produce works, as they wouldn't be able to use the free ones. (SAMEL is an abbreviation I used on this site but not my blog.) So all the crucial properties are absent in the reverse case. A good attempt, though.
4taw
I can think of a lot of nitpicking applicable to both scenarios. Like this: Copyleft is a very limited tool, especially against patents, while without IP you can produce many works with other forms of funding - like work contracts. It's nearly impossible to produce a work that isn't based on others' freely available works (regardless of whether this kind of derivation counts as legally "derivative work" or not), while sticking IP on trivial things just because you can is commonplace. In a sufficiently strong IP system pretty much nothing would ever be created, because everything would violate far too many other people's IP, so it is indeed self-defeating. I'm sure you can find some other minor differences, and we could go on indefinitely, at least until we figured out that maybe this isn't the best way to reason.

On another level, I have no idea why you used Omega as analogous to IP instead of the far more obvious analogy to plain old legally enforceable contracts. The only defense of IP that makes even the tiniest bit of economic sense is that transaction costs would prevent consumers negotiating with producers. By straightforward Coase-theorem reasoning, for any work that would be profitably produced in an IP-based system, an at-least-as-good or better outcome could be achieved without an IP system if transaction and negotiation costs were zero (plus a few other totally unrealistic assumptions, but none worse than assuming an omniscient Omega).
2SilasBarta
Much as I'd like to reply, I prefer LW's norm, so I'm going to grant you a heckler's veto until you can move these criticisms to my blog.
-1taw
My point isn't about IP, it's about how easy it is to twist this way of reasoning with story and analogy in any direction you want by choosing a different analogy. If your original post had been anti-IP, I'd just twist it into a pro-IP case. Or if you used an aynrandist story about "self-ownership" plus an analogy to capitalism, I'd use a different analogy that makes it strongly oppose capitalism. Or whatever. As long as there's a "let's pick an arbitrary analogy" step anywhere in your reasoning system, it's all infinitely twistable.

The part about the Coase theorem was about how your analogy choice was highly unusual. Not that using a more obvious one would entirely avoid the problem.
2SilasBarta
Where does the article that is on this site make this flaw in reasoning?
4kodos96
Huh? On his personal blog, yes. In this post? No.