So I agree. It's lucky I've never met a game theorist in the desert.
Less flippantly: the logic is pretty much the same, yes. But I don't see that as a problem for the point I'm making, which is that the perfect predictor isn't a thought experiment we should worry about.
Elsewhere in this comment thread I've discussed why I think those "rules" are not interesting: basically, because they're impossible to implement.
According to what rules? And anyway I have preferences for all kinds of impossible things. For example, I prefer cooperating with copies of myself, even though I know it would never happen, since we'd both accept the dominance reasoning and defect.
So these alternative decision theories have relations of dependence going back in time? Are they some sort of counterfactual dependence, like "If I were to one-box, Omega would have put the million in the box"? That just sounds like the Evidentialist "news value" account. So it must be some other kind of relation of dependence going backwards in time that rules out the dominance reasoning. I guess I need "Other Decision Theories: A Less Wrong Primer".
See orthonormal's comments and mine on the PD on this post for my view of that.
The point I'm struggling to express is that I don't think we should worry about the thought experiment, because I have the feeling that Omega is somehow impossible. The suggestion is that Newcomb's problem makes a problem with CDT clearer. But I argue that Newcomb's problem creates the problem. The flaw is not with the decision theory, but with the concept of such a predictor. So you can't use CDT's "failure" in this circumstance as evidence that CDT is wrong.
Here's a re...
Aha. So when agents' actions are probabilistically independent, only then does the dominance reasoning kick in?
So the causal decision theorist will say that the dominance reasoning is applicable whenever the agents' actions are causally independent. So do these other decision theories deny this? That is, do they claim that the dominance reasoning can be unsound even when my choice doesn't causally impact the choice of the other?
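For what it's worth, here's a minimal sketch of what I take the dominance reasoning to be, with made-up Prisoner's Dilemma payoffs: hold the other player's action fixed (which is what causal independence is supposed to license) and check that one of my actions does strictly better against every such fixed action.

```python
# Dominance reasoning with the other player's choice held fixed.
# Payoff numbers are made up; only the row player's payoffs matter here.
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def strictly_dominates(a, b, opponent_actions):
    """True if action a beats action b against every fixed opponent action."""
    return all(payoffs[(a, o)] > payoffs[(b, o)] for o in opponent_actions)

print(strictly_dominates("defect", "cooperate", ["cooperate", "defect"]))  # True
```

The question, as I understand it, is whether the other theories reject this step when the choices are causally but not probabilistically independent.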
Given the discussion, strictly speaking the pill reduces Gandhi's reluctance to murder by 1 percentage point, not by 1%.
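To make the difference concrete (the 95% starting point is just an illustrative number): a 1 percentage point reduction subtracts 0.01 from the reluctance, whereas a 1% reduction multiplies it by 0.99.

```python
reluctance = 0.95                          # illustrative starting reluctance: 95%
one_percentage_point = reluctance - 0.01   # 0.94
one_percent_relative = reluctance * 0.99   # 0.9405
print(one_percentage_point, one_percent_relative)
```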
Wouldn't you like to be the type of agent who cooperates with near-copies of yourself? Wouldn't you like to be the type of agent who one-boxes?
Yes, but it would be strictly better (for me) to be the kind of agent who defects against near-copies of myself when they co-operate in one-shot games. It would be better to be the kind of agent who is predicted to one-box, but who then two-boxes once the money has been put in the opaque box.
But the point is really that I don't see it as the job of an alternative decision theory to get "the right" answers to these sorts of questions.
we might ask whether it is preferable to be the type of person who two boxes or the type of person who one boxes. As it turns out it seems to be more preferable to one-box
No. What is preferable is to be the kind of person whom Omega predicts will one-box, and who then actually two-boxes. As long as you "trick" Omega, you get strictly more money. But I guess your point is that you can't trick Omega this way.
Which brings me back to whether Omega is feasible. I just don't share the intuition that Omega is capable of the sort of predictive capacity required of it.
There are a couple of things I find odd about this. First, it seems to be taken for granted that one-boxing is obviously better than two-boxing, but I'm not sure that's right. J.M. Joyce has an argument (in his Foundations of Causal Decision Theory) that is supposed to convince you that two-boxing is the right solution. Importantly, he accepts that you might still wish you weren't a CDT (so that Omega predicted you would one-box). But, he says, in either case, once the boxes are in front of you, whether you are a CDT or an EDT, you should two-box! The domin...
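To spell out both halves of that, here's a quick sketch using the usual Newcomb payoffs ($1,000,000 and $1,000); the 0.99 predictor accuracy is just an assumption for illustration. The dominance comparison holds the prediction fixed, while the Evidentialist "news value" calculation conditions on the action.

```python
# Newcomb's problem: the usual payoffs, with an assumed 99%-accurate predictor.
M, K = 1_000_000, 1_000

def payoff(action, prediction):
    opaque = M if prediction == "one-box" else 0
    transparent = K if action == "two-box" else 0
    return opaque + transparent

# Dominance (the CDT-friendly comparison): fix the prediction, compare actions.
for prediction in ("one-box", "two-box"):
    assert payoff("two-box", prediction) == payoff("one-box", prediction) + K

# "News value" (the EDT-friendly comparison): condition on the action.
accuracy = 0.99
ev_one = accuracy * payoff("one-box", "one-box") + (1 - accuracy) * payoff("one-box", "two-box")
ev_two = accuracy * payoff("two-box", "two-box") + (1 - accuracy) * payoff("two-box", "one-box")
print(ev_one, ev_two)  # 990000.0 vs 11000.0
```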
As I understand what is meant by satisficing, this misses the mark. A satisficer will search for an action until it finds one that is good enough, then it will do that. A maximiser will search for the best action and then do that. A bounded maximiser will search for the "best" action (best according to its bounded utility function) and then do that.
So what the satisficer picks depends on the order in which the possible actions are presented to it, in a way that it doesn't for either kind of maximiser (see the sketch below). Now, if easier options are presented to it first, then I guess your conclusion still follows, as long as we grant the premise that self-transforming will be easy.
But I don't think it's right to identify bounded maximisers and satisficers.
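Here's a minimal sketch of that order-dependence point, with made-up utilities and threshold:

```python
# A satisficer is order-dependent; a maximiser (bounded or not) isn't.
# Utilities and the satisficing threshold are made-up numbers.

def satisfice(actions, utility, threshold):
    """Return the first action whose utility is good enough."""
    for a in actions:
        if utility(a) >= threshold:
            return a
    return None

def maximise(actions, utility):
    """Return the best action, whatever order the actions come in."""
    return max(actions, key=utility)

utility = {"easy_hack": 7, "hard_optimum": 10}.get

print(satisfice(["easy_hack", "hard_optimum"], utility, threshold=6))  # easy_hack
print(satisfice(["hard_optimum", "easy_hack"], utility, threshold=6))  # hard_optimum
print(maximise(["hard_optimum", "easy_hack"], utility))                # hard_optimum, either order

# A bounded maximiser is maximise() run on a capped utility function,
# which is still order-independent; hence it isn't the same thing as a satisficer.
```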
Any logically coherent body of doctrine is sure to be in part painful and contrary to current prejudices
– Bertrand Russell, History of Western Philosophy p. 98
Bertie is a goldmine of rationality quotes.
Also don't confuse "logically coherent" with "true".
P6 entails that there are (uncountably) infinitely many events. It is at least compatible with modern physics that the world is fundamentally discrete, both spatially and temporally. The visible universe is bounded. So it may be that there are only finitely many possible configurations of the universe. It's a big number, sure, but if it's finite, then Savage's theorem is irrelevant: it doesn't tell us anything about what to believe in our world. This is perhaps a silly point, and there's probably a nearby theorem that works for "appropriately large fini...
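For reference, P6 ("small-event continuity") as I remember it, so treat the exact formulation with caution:

```latex
% Savage's P6, roughly: if f is strictly dispreferred to g, then for any
% consequence x there is a finite partition of the state space S whose cells
% are all "small" enough that putting x on any single cell flips neither preference.
\text{P6: If } f \prec g, \text{ then for every consequence } x \text{ there is a partition }
\{B_1,\dots,B_n\} \text{ of } S \text{ such that for all } i:\quad
f_{[x\ \mathrm{on}\ B_i]} \prec g
\quad\text{and}\quad
f \prec g_{[x\ \mathrm{on}\ B_i]}.
```

Since this has to hold even for extremely good or extremely bad x, the cells have to carry arbitrarily little weight, which (as I understand it) is where the atomlessness, and hence the infinitude of events, comes from.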
The greatest challenge to any thinker is stating the problem, in a way that will allow a solution
– Bertrand Russell
Anyone who can handle a needle convincingly can make us see a thread which isn't there
– E.H. Gombrich
Ah I see now. Glad we cleared that up.
Still, I think there's something to the idea that if there is a genuine debate about some claim that lasts a long time, then there might well be some truth on either side. So perhaps Russell was wrong to universally quantify over "debates" (as your counterexamples might show), but I think the claim has something to it.
But why ought the world be such that such a partition exists for us to name? That doesn't seem normative. I guess there's a minor normative element in that it demands "If the world conspires to allow us to have partitions like the ones needed in P6, then the agent must be able to know of them and reason about them" but that still seems secondary to the demand that the world is thus and so.
Er. What? You can call it a false generalisation all you like; that isn't in itself enough to convince me it is false. (It may well be false; that's not what's at stake here.) You seem to be suggesting that merely calling it a generalisation is enough to impugn its status.
And in homage to your unconventional arguing style, here are some non sequiturs: How many angels can dance on the head of a pin? Did Thomas Aquinas prefer red wine or white wine? Was Stalin left-handed? What colour were Sherlock Holmes' eyes?
This thought isn't original to me, but it's probably worth making. It feels like there are two sorts of axioms. I am following tradition in describing them as "rationality axioms" and "structure axioms". The rationality axioms (like the transitivity of the order among acts) are norms on action. The structure axioms (like P6) aren't normative at all. (They're about structure on the world: how bizarre is it to say "The world ought to be such that P6 holds of it"?)
Given this, and given the necessity of the structure axioms for the proof, it feels like Savage's theorem can't serve as a justification of Bayesian epistemology as a norm of rational behaviour.
What the Dutch book theorem gives you are restrictions on the kinds of will-to-wager numbers you can exhibit and still avoid sure loss. It's a big leap to claim that these numbers perfectly reflect what your degrees of belief ought to be.
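For concreteness, the kind of restriction at issue, with toy numbers of my own: post a will-to-wager price of 0.6 for a $1 bet on H and another 0.6 for a $1 bet on not-H, and a bookie can sell you both and pocket 0.2 whatever happens.

```python
# Toy sure-loss illustration (made-up prices): I buy $1 bets on H and on not-H.
price_H, price_not_H = 0.6, 0.6   # incoherent: the prices sum to more than 1

for H in (True, False):
    payout = (1 if H else 0) + (0 if H else 1)  # exactly one of the two bets pays
    print(H, payout - (price_H + price_not_H))  # -0.2 either way: a sure loss
```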
But that's not really what's at issue. The point I was making is that even among imperfect reasoners, there are better and worse ways to reason. We've sorted out the perfect case now. It's been done to death. Let's look at what kind of imperfect reasoning is best.
What do you mean "the statement is affected by a generalisation"? What does it mean for something to be "affected by a generalisation"? What does it mean for a statement to be "affected"?
The claim is a general one. Are general claims always false? I highly doubt that. That said, this generalisation might be false, but it seems like establishing that would require more than just pointing out that the claim is general.
I think this misses the point somewhat. There are important norms on rational action that don't apply only in the abstract case of the perfect Bayesian reasoner. For example, some kinds of nonprobabilistic "bid/ask" betting strategies can be Dutch-booked and some can't. So even if we don't have point-valued will-to-wager prices, there are still sensible and not sensible ways to decide what bets to take.
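A toy version of that contrast, with numbers of my own choosing: an agent who buys $1 bets on H at 0.4 and sells them at 0.7 can't be exploited by the simple buy-and-sell-back trade below, whereas an agent whose buying price exceeds its selling price can.

```python
# Bid/ask sketch (made-up numbers). The agent buys a $1 bet on H at its bid
# price and sells one at its ask price.

def bookie_guaranteed_profit(bid, ask):
    """Bookie sells the agent a bet at `bid` and buys one back at `ask`;
    the H-payouts cancel, leaving the bookie a certain profit of bid - ask."""
    return bid - ask

print(bookie_guaranteed_profit(bid=0.4, ask=0.7))  # -0.3: no sure loss here
print(bookie_guaranteed_profit(bid=0.7, ask=0.4))  #  0.3: the agent is Dutch-booked
```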
If you weaken your will-to-wager assumption and effectively allow your agents to offer bid-ask spreads on bets (I'll buy bets on H for x, but sell them for y), then you get "Dutch book-like" arguments that show that your beliefs conform to Dempster-Shafer belief functions, or Choquet capacities, depending on what other constraints you allow.
Or, if you allow that the world is non-classical – that the function that decides which propositions are true is not a classical logic valuation function – then you get similar results.
Other arguments for havin...
This seems to be orthogonal to the current argument. The Dutch book argument says that your will-to-wager fair betting prices for dollar stakes had better conform to the axioms of probability. Cox's theorem says that your real-valued logic of plausible inference had better conform to the axioms of probability. So you need the extra step of saying that your betting behaviour should match up with your logic of plausible inference before the arguments support each other.
Savage's representation theorem in The Foundations of Statistics starts by assuming neither. He just needs some axioms about preference over acts, some independence concepts, and some pretty darn strong assumptions about the nature of events.
So it's possible to do it without assuming a utility scale or a probability function.
I've had rosewater flavoured ice cream.
I bet cabbage ice cream does not taste as nice.
Sorry I'm new. I don't understand. What do you mean?
I have lots of particular views and some general views on decision theory. I picked on decision theory posts because it's something I know something about. I know less about some of the other things that crop up on this site…
it is clear that each party to this dispute – as to all that persist through long periods of time – is partly right and partly wrong
— Bertrand Russell, History of Western Philosophy (from the introduction, again)
Uncertainty, in the presence of vivid hopes and fears, is painful, but must be endured if we wish to live without the support of comforting fairy tales
— Bertrand Russell, History of Western Philosophy (from the introduction)
Hi. I'll mostly be making snarky comments on decision theory related posts.
The VNM utility theorem implies there is some good we value highest? Where has this come from? I can't see how this could be true. The utility theorem only applies once you've fixed what your decision problem looks like…
Signals by Brian Skyrms is a great book in this area. It shows how signalling can evolve in even quite simple set-ups.