Jonathan_Graehl comments on Decision Theories: A Less Wrong Primer - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
There are a couple of things I find odd about this. First, it seems to be taken for granted that one-boxing is obviously better than two-boxing, but I'm not sure that's right. J.M. Joyce has an argument (in his The Foundations of Causal Decision Theory) that is supposed to convince you that two-boxing is the right solution. Importantly, he accepts that you might still wish you weren't a CDT agent (so that Omega would predict you would one-box). But, he says, in either case, once the boxes are in front of you, whether you are a CDT or an EDT agent, you should two-box! The dominance reasoning works either way, once the prediction has been made and the boxes are in front of you.
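The two lines of reasoning in this dispute can be sketched numerically. This is only an illustration, not anything from Joyce: the payoffs and the predictor's accuracy are assumed values, chosen to show how the EDT-style expected-value calculation and the CDT-style dominance argument pull in opposite directions.

```python
# Assumed parameters for a standard Newcomb setup (illustrative only).
ACCURACY = 0.99          # assumed probability that Omega predicts correctly
BOX_B = 1_000_000        # opaque box: filled iff Omega predicted one-boxing
BOX_A = 1_000            # transparent box: always contains this amount

# EDT-style reasoning: treat your choice as evidence about the prediction.
ev_one_box = ACCURACY * BOX_B
ev_two_box = (1 - ACCURACY) * BOX_B + BOX_A

print(f"EV(one-box) = {ev_one_box:,.0f}")   # 990,000
print(f"EV(two-box) = {ev_two_box:,.0f}")   # 11,000

# CDT-style dominance: hold the already-made prediction fixed; in each
# fixed state of the world, two-boxing yields strictly more.
for box_b_filled in (True, False):
    one = BOX_B if box_b_filled else 0
    two = one + BOX_A
    assert two > one
```

The tension is exactly the one described above: conditioning on your choice favors one-boxing, while holding the prediction fixed makes two-boxing dominate in every state.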
But this leads me on to my second point. I'm not sure how much of a flaw it is in a decision theory that it fails Newcomb's problem, given that the problem relies on the intervention of an alien that can accurately predict what you will do. Let's leave aside the general problem of predicting real agents' actions with that degree of accuracy. If you know that the prediction of your choice affects the success of your choice, I think that reflexivity or self-reference simply makes the prediction meaningless. We're all used to self-reference being tricky, and I think in this case it just undermines the whole setup. That is, I don't see the force of the objection from Newcomb's problem, because I don't think it's a problem we could ever possibly face.
Here's an example of a related kind of "reflexivity makes prediction meaningless". Let's say Omega bets you $100 that she can predict what you will eat for breakfast. Once you accept this bet, you now try to think of something that you would never otherwise think to eat for breakfast, in order to win the bet. The fact that your actions and the prediction of your actions have been connected in this way by the bet makes your actions unpredictable.
Going on to the prisoner's dilemma: again, I don't think that it's the job of decision theory to get "the right" result in PD. Again, the dominance reasoning seems impeccable to me. In fact, I'm tempted to say that I would want any future advanced decision theory to satisfy some form of this dominance principle: it's crazy ever to choose an act that is guaranteed to be worse. All you need to do to "fix" PD is to have the agent attach enough weight to the welfare of others. That's not a modification of the decision theory, that's a modification of the utility function.
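The utility-function point above can be made concrete. In the sketch below, the PD payoffs and the `altruism` weight are assumed values for illustration: with a purely selfish utility, defection strictly dominates; once the agent's utility includes enough weight on the other player's payoff, cooperation dominates instead, with no change to the decision rule itself.

```python
# Assumed one-shot PD payoffs: (my_move, their_move) -> (my_payoff, their_payoff)
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def utility(my_move, their_move, altruism=0.0):
    """My utility, optionally weighting the other player's payoff."""
    mine, theirs = PAYOFFS[(my_move, their_move)]
    return mine + altruism * theirs

# Selfish agent: D strictly dominates C against either opponent move.
for their in ("C", "D"):
    assert utility("D", their) > utility("C", their)

# With enough weight on the other's welfare, C dominates instead --
# the dominance principle is untouched; only the utilities changed.
for their in ("C", "D"):
    assert utility("C", their, altruism=0.8) > utility("D", their, altruism=0.8)
```

Both loops apply the same dominance reasoning; only the utility function differs between them.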
I generally share your reservations.
But as I understand it, proponents of alternative DTs are talking about a conditional PD where you know you face an opponent executing a particular DT. The fancy-DT-users all defect on PD when the prior of their PD-partner being on CDT or similar is high enough, right?
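The conditional policy described here can be sketched roughly. The function name and the probability threshold below are my own illustrative assumptions, not anything specified by the alternative-DT proposals: the idea is just that such an agent cooperates only when it believes its PD partner runs a sufficiently similar decision theory, and defects when the partner is likely a plain CDT defector.

```python
# Hypothetical sketch of a conditional PD policy (assumed names/threshold).
def fancy_dt_move(p_partner_is_cdt, threshold=0.5):
    """Defect iff the partner is likely a CDT-style unconditional defector."""
    return "D" if p_partner_is_cdt > threshold else "C"

print(fancy_dt_move(0.9))  # partner probably defects regardless, so defect
print(fancy_dt_move(0.1))  # partner probably mirrors this reasoning, so cooperate
```

This matches the claim above: against a high-enough prior of facing a CDT partner, the fancy-DT agent defects too.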
Wouldn't you like to be the type of agent who cooperates with near-copies of yourself? Wouldn't you like to be the type of agent who one-boxes? The trick is to satisfy this desire without using a bunch of stupid special-case rules, and show that it doesn't lead to poor decisions elsewhere.
(Yes, you are correct!)
Yes, but it would be strictly better (for me) to be the kind of agent who defects against near-copies of myself when they co-operate in one-shot games. It would be better to be the kind of agent who is predicted to one-box, but then two-box once the money has been put in the opaque box.
But the point is really that I don't see it as the job of an alternative decision theory to get "the right" answers to these sorts of questions.
The larger point makes sense. Those two things you prefer are impossible according to the rules, though.
They're not necessarily impossible. If you have genuine reason to believe you can outsmart Omega, or that you can outsmart the near-copy of yourself in PD, then you should two-box or defect.
But if the only information you have is that you're playing against a near-copy of yourself in PD, then cooperating is probably the smart thing to do. I understand this kind of thing is still being figured out.
According to what rules? And anyway I have preferences for all kinds of impossible things. For example, I prefer cooperating with copies of myself, even though I know it would never happen, since we'd both accept the dominance reasoning and defect.
I think he meant according to the rules of the thought experiments. In Newcomb's problem, Omega predicts what you do. Whatever you choose to do, that's what Omega predicted you would choose to do. You cannot choose to do something that Omega wouldn't predict - it's impossible. There is no such thing as "the kind of agent who is predicted to one-box, but then two-boxes once the money has been put in the opaque box".
Elsewhere on this comment thread I've discussed why I think those "rules" are not interesting. Basically, because they're impossible to implement.
Right. The rules of the respective thought experiments. Similarly, if you're the sort to defect against near-copies of yourself in one-shot PD, then so is your near-copy. (edit: I see now that scmbradley already wrote about that - sorry for the redundancy).