Jonathan_Graehl comments on Decision Theories: A Less Wrong Primer - Less Wrong

69 points · Post author: orthonormal 13 March 2012 11:31PM


Comment author: Jonathan_Graehl 15 March 2012 01:19:05AM 4 points

I generally share your reservations.

But as I understand it, proponents of alternative DTs are talking about a conditional PD where you know you face an opponent executing a particular DT. The fancy-DT-users all defect in PD when the prior that their PD-partner is running CDT or similar is high enough, right?

Wouldn't you like to be the type of agent who cooperates with near-copies of yourself? Wouldn't you like to be the type of agent who one-boxes? The trick is to satisfy this desire without using a bunch of stupid special-case rules, and show that it doesn't lead to poor decisions elsewhere.
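The appeal of "being the type of agent who cooperates with near-copies" can be made concrete with a toy sketch (not any particular decision theory; the payoff numbers are illustrative, only the standard ordering 5 > 3 > 1 > 0 matters): if you and a near-copy run the same deterministic procedure, your moves are linked, so the only reachable outcomes are (C, C) and (D, D).

```python
# Toy model: two agents running the SAME deterministic decision procedure
# in a one-shot Prisoner's Dilemma. (my move, copy's move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play_against_copy(policy):
    """Both players are instances of the same policy, so their moves match."""
    my_move = policy()
    copy_move = policy()  # a near-copy runs the same computation
    return PAYOFF[(my_move, copy_move)]

cooperator = lambda: "C"
defector = lambda: "D"

print(play_against_copy(cooperator))  # 3: outcome (C, C)
print(play_against_copy(defector))    # 1: outcome (D, D); (D, C) is unreachable
```

The tempting (D, C) payoff of 5 never appears: choosing "defector" as your policy changes the copy's move too.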

Comment author: wedrifid 15 March 2012 08:35:04AM 3 points

But as I understand it, proponents of alternative DTs are talking about a conditional PD where you know you face an opponent executing a particular DT. The fancy-DT-users all defect in PD when the prior that their PD-partner is running CDT or similar is high enough, right?

(Yes, you are correct!)

Comment author: scmbradley 16 March 2012 01:54:47PM 0 points

Wouldn't you like to be the type of agent who cooperates with near-copies of yourself? Wouldn't you like to be the type of agent who one-boxes?

Yes, but it would be strictly better (for me) to be the kind of agent who defects against near-copies of myself when they co-operate in one-shot games. It would be better to be the kind of agent who is predicted to one-box, but then two-box once the money has been put in the opaque box.

But the point is really that I don't see it as the job of an alternative decision theory to get "the right" answers to these sorts of questions.

Comment author: Jonathan_Graehl 16 March 2012 09:24:28PM 0 points

The larger point makes sense. Those two things you prefer are impossible according to the rules, though.

Comment author: Giles 17 March 2012 04:45:55PM 0 points

They're not necessarily impossible. If you have genuine reason to believe you can outsmart Omega, or that you can outsmart the near-copy of yourself in PD, then you should two-box or defect.

But if the only information you have is that you're playing against a near-copy of yourself in PD, then cooperating is probably the smart thing to do. I understand this kind of thing is still being figured out.

Comment author: scmbradley 17 March 2012 12:53:24PM 0 points

According to what rules? And anyway I have preferences for all kinds of impossible things. For example, I prefer cooperating with copies of myself, even though I know it would never happen, since we'd both accept the dominance reasoning and defect.
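The dominance reasoning mentioned here can be spelled out in a few lines (payoff numbers are illustrative; only the usual PD ordering matters): against any fixed opponent move, defecting pays strictly more, yet if both players follow that reasoning, each ends up worse off than under mutual cooperation.

```python
# One-shot PD payoffs: (my move, their move) -> my payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Dominance: holding the opponent's move FIXED, "D" always pays more...
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]

# ...yet mutual defection leaves both players worse off than mutual
# cooperation, which is exactly the tension in the comment above.
assert PAYOFF[("D", "D")] < PAYOFF[("C", "C")]
print("D dominates pointwise, but (D,D) is worse for both than (C,C)")
```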

Comment author: APMason 17 March 2012 07:10:24PM 1 point

According to what rules?

I think he meant according to the rules of the thought experiments. In Newcomb's problem, Omega predicts what you do. Whatever you choose to do, that's what Omega predicted you would choose to do. You cannot choose to do something that Omega wouldn't predict - it's impossible. There is no such thing as "the kind of agent who is predicted to one-box, but then two-box once the money has been put in the opaque box".
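The stipulation can be made concrete with a small sketch (dollar amounts are the usual ones from the thought experiment): with a perfect predictor, the prediction is by definition your actual choice, so the box contents are a function of that choice, and "predicted to one-box but actually two-box" is simply not in the outcome space.

```python
def newcomb_payout(choice):
    """Perfect predictor: the prediction IS your actual choice, by stipulation."""
    prediction = choice  # Omega cannot be fooled
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    if choice == "one-box":
        return opaque
    else:  # "two-box": take both boxes
        return opaque + transparent

print(newcomb_payout("one-box"))  # 1000000
print(newcomb_payout("two-box"))  # 1000
# No input yields 1_001_000: that outcome would require prediction != choice.
```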

Comment author: scmbradley 20 March 2012 05:44:34PM 0 points

Elsewhere on this comment thread I've discussed why I think those "rules" are not interesting. Basically, because they're impossible to implement.

Comment author: Jonathan_Graehl 17 March 2012 11:23:24PM *  0 points

Right. The rules of the respective thought experiments. Similarly, if you're the sort to defect against near-copies of yourself in one-shot PD, then so is your near-copy. (Edit: I see now that scmbradley already wrote about that - sorry for the redundancy.)