ata comments on Desirable Dispositions and Rational Actions - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
There are a few essential questions here:
I'm not convinced that the answer to any of these is "yes", and I don't think you've really argued for them. This post would be stronger and more interesting if you argued that agents with irrational dispositions do tend to be rewarded, and tend to be rewarded enough that being irrational is worth it.
(As for #3, I think there was an Eliezer post on that or a related issue, not sure what it was called...)
Edit: I think I was thinking of Doublethink (Choosing to be Biased).
But I don't believe such claims are true, so why would I attempt to argue for them? My claim is purely theoretical: we need to distinguish, conceptually, between desirable dispositions and rational actions. It seems to me that many on LW fail to make this conceptual distinction, which can lead to mistaken (or at least under-argued) theorizing about rationality. The dispute between one-boxers and two-boxers is interesting and significant even if both sides agree about most "real world" cases.
This is because actions only ever arise from dispositions. Yes, given that Omega has predicted you will one-box, it would (as an abstract fact) be to your benefit to two-box; but in order for you to actually two-box, you would have to execute some instruction in your source code which, had it been present, Omega would have read -- and so Omega would not have predicted that you would one-box.
Hence only dispositions are of interest.
Is this the argument?
Or are you agreeing that you ought to two-box, but claiming that this fact isn't interesting because of premise 1?
At any rate, it seems like a bad argument, since analogous arguments will entail that whenever you have some decisive disposition, it is false that you ought to act differently. (It will entail, for instance, NOT[people who have a decisive loss aversion disposition should follow expected utility theory].)
Yes, if "ought" merely means the outcome would be better, and doesn't imply "can".
As far as I can tell, it would only have that implication in situations where an outcome depended directly on one's disposition (as opposed to one's actions).
I don't think so:
Or, for Newcomb:
Either "ought" applies to dispositions, or actions, but one mustn't equivocate. If "what John ought to do" means "the disposition John should have", then perhaps John ought to maximize expected utility even if he's not currently so disposed. If the outcomes depend on John's disposition only indirectly via his actions, and his current disposition will lead to a suboptimal action, then we may very well say that John "ought" to do something different, meaning that he should have a different disposition.
If, however, John is involved in a Newcomblike problem where there is a causal arrow leading directly from his disposition to the outcome, and his current disposition is optimal with respect to outcome, then one cannot say that he "ought" to do differently, on this (dispositional) usage of "ought".
Everyone agrees about what the best disposition to have is. The disagreement is about what to do. I have uniformly meant "ought" in the action sense, not the dispositional sense. (FYI: this is always the sense in which philosophers (incl. Richard) mean "ought", unless otherwise specified.)
BTW: I still don't understand the relevance of the fact that it is impossible for people with one-boxing dispositions to two-box. If you don't like the arguments that I formalized for you, could you tell me what other premises you are using to reach your conclusion?
That sense is entirely uninteresting, as I explained in my first comment in this thread. It's the sense in which one "ought" to two-box after having been predicted by Omega to one-box -- a stipulated impossibility.
Philosophers who, after having considered the distinction, remain concerned with the "action" sense, would tend to be -- shall we say -- vehemently suspected of non-reductionist thinking; of forgetting that actions are completely determined by dispositions (i.e. the algorithms running in the mind of the agent).
Having said that, if one does use "ought" in the action sense, then there should be no difficulty in saying that one "ought" to two-box in the situation where Omega has predicted you will one-box. That's just a restatement of the assumption that the outcome of (one-box predicted, two-box) is higher in the preference ordering than that of (one-box predicted, one-box).
Normally, the two meanings of "ought" coincide, because outcomes normally depend on actions that happen to be determined by dispositions, not directly on dispositions themselves. Hence it's easy to be deceived into thinking that the action sense is the appropriate sense of "ought". But this breaks down in situations of the Newcomb type. There, the dispositional sense is clearly the right one, because that's the sense in which you ought to one-box; and since the dispositional sense also gives the same answers as the action sense in "normal" situations, we may as well take the dispositional sense to be what "ought" means in general.
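The argument above can be made concrete with a small sketch. The payoff numbers below are the standard illustrative Newcomb amounts (not stated in this thread), and a perfect predictor is assumed; the point is that because the prediction and the action both flow from the same disposition, only the two diagonal cells are ever reachable, and the one-boxing disposition gets the better of them.

```python
# Payoffs for Newcomb's problem, indexed by (predicted one-box?, actually one-box?).
# Amounts are the conventional illustrative ones, assumed for this sketch.
PAYOFFS = {
    (True, True): 1_000_000,    # opaque box filled, taken alone
    (True, False): 1_001_000,   # unreachable given a reliable predictor
    (False, True): 0,
    (False, False): 1_000,
}

def outcome(disposition_one_box: bool) -> int:
    # A perfect predictor's prediction mirrors the disposition, and the
    # action is itself determined by that same disposition -- so only the
    # matching (diagonal) cells of PAYOFFS can actually occur.
    prediction = disposition_one_box
    action = disposition_one_box
    return PAYOFFS[(prediction, action)]

print(outcome(True))   # one-boxing disposition
print(outcome(False))  # two-boxing disposition
```

The off-diagonal cell (predicted one-box, take two boxes) has the highest payoff, which is exactly the "action sense" intuition -- but no disposition routes through it.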
So, you're really interested in this question: what is the best decision algorithm? And then you're interested, in a subsidiary way, in what you ought to do. You think the "action" sense is silly, since you can't run one algorithm and make some other choice.
Your answer to my objection involving the parody argument is that you ought to do something else (not go with loss aversion) because there is some better decision algorithm (that you could, in some sense of "could", use?) that tells you to do something else.
What do you do with cases where it is impossible for you to run a different algorithm? You can't exactly use your algorithm to switch to some other algorithm, unless your original algorithm told you to do that all along, so these cases won't be that rare. How do you avoid the result that you should just always use whatever algorithm you started with? However you answer this objection, why can't two-boxers who care about the "action sense" of ought answer your objection analogously?
Just take causal decision theory and then crank it with an account of counterfactuals whereby there is probably a counterfactual dependency between your box-choice and your early disposition.
Arntzenius called something like this "counterfactual decision theory" in 2002. The counterfactual decision theorist would assign high probability to the dependency hypotheses "if I were to one-box now then my past disposition was one-boxing" and "if I were to two-box now then my past disposition was two-boxing." She would assign much lower probability to the dependency hypotheses on which her current action is independent of her past disposition (these would be the cognitive glitch/spasm sorts of cases).
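The dependency-hypothesis reasoning can be sketched numerically. The credences and payoffs below are made up for illustration (they are not from Arntzenius): the agent puts high probability on her choice counterfactually matching her past disposition, and a small residual probability on the glitch/spasm hypotheses, under which the prediction is modeled as independent of the choice (taken as 50/50 here).

```python
# Assumed credence that the current choice counterfactually covaries with
# the past disposition (and hence with Omega's prediction).
P_DEP = 0.99

def expected_utility(one_box: bool) -> float:
    # Dependency hypothesis: the prediction matches the choice.
    # Glitch hypothesis: prediction independent of choice, 50/50 for the sketch.
    if one_box:
        eu_dep = 1_000_000.0
        eu_glitch = 0.5 * 1_000_000 + 0.5 * 0
    else:
        eu_dep = 1_000.0
        eu_glitch = 0.5 * 1_001_000 + 0.5 * 1_000
    return P_DEP * eu_dep + (1 - P_DEP) * eu_glitch

print(expected_utility(True))   # one-boxing
print(expected_utility(False))  # two-boxing
```

With the dependency hypotheses weighted this heavily, one-boxing dominates by a wide margin, which is the behavior the counterfactual decision theorist wants out of otherwise causal machinery.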
I agree that this fact [you can't have a one-boxing disposition and then two box] could appear as premise in an argument, together with an alternative proposed decision theory, for the conclusion that one-boxing is a bad idea. If that was the implicit argument, then I now understand the point.
To be clear: I have not been trying to argue that you ought to take two boxes in Newcomb's problem.
But I thought this fact [you can't have a one-boxing disposition and then two box] was supposed to be a part of an argument that did not use a decision theory as a premise. Maybe I was misreading things, but I thought it was supposed to be clear that two-boxers were irrational, and that this should be pretty clear once we point out that you can't have the one-boxing disposition and then take two boxes.
By "irrational", do you mean in the sense of "would pay the $100 as Parfit's Hitchhiker"? If so, then the answer to all three questions is yes: there are lots of scenarios in real life where we are called upon to pay debts both positive and negative (repay favors, retaliate against aggression) and we think the benefit to be gained from doing so will be less than the cost. There are enough such scenarios that a disposition to pay debts without stopping to do utility calculations usually pays off handsomely over a lifetime.
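The "pays off handsomely over a lifetime" claim is a back-of-envelope expected-value argument, and can be sketched as such. All numbers below are invented for illustration: the idea is that a visible case-by-case calculator is not offered the favorable deals in the first place, while the reliable debt-payer eats the per-interaction cost but collects the reputational surplus across many interactions.

```python
# Invented illustrative numbers -- not from the thread.
N_INTERACTIONS = 50       # favorable deals offered over a lifetime
COST_OF_PAYING = 100      # immediate cost of honoring each debt
REPUTATION_VALUE = 150    # average long-run value per deal of being known to pay

# A known debt-payer is offered every deal and nets the surplus each time.
payer_total = N_INTERACTIONS * (REPUTATION_VALUE - COST_OF_PAYING)

# A visible case-by-case calculator is not trusted, so no deals materialize.
calculator_total = 0

print(payer_total)
print(calculator_total)
```

The disposition wins not because any single payment is worth it in isolation, but because having the disposition changes which interactions one is offered at all -- which is precisely the structure of the Parfit's Hitchhiker case.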