dankane comments on Causal decision theory is unsatisfactory - LessWrong

Post author: So8res 13 September 2014 05:05PM


Comment author: dankane 14 September 2014 11:36:15PM

I think even TDT says that you should 2-box in Newcomb's problem when the box is full if and only if false (i.e., never).

But more seriously, presumably in your scenario the behavior of a "perfectly rational agent" actually means the behavior of an agent whose behavior is specified by some fixed, known program. In that case, the participant can run that program and determine in advance whether or not the box is full. Thus either the box is always full or the box is always empty, and the participant knows which is the case. If you play Newcomb's problem with the box always full, you 2-box. If you play Newcomb's problem with the box always empty, you 2-box. Therefore you 2-box. Therefore the perfectly rational agent 2-boxes. Therefore the box is always empty.
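The case analysis above can be sketched in code. This is a hypothetical setup (the function and agent names are mine, not from the comment): the agent is a fixed, known program, so the predictor can simply run it, and a deterministic program faces the same box state every time.

```python
# Hypothetical sketch: the agent's behavior is a fixed, known program,
# so the predictor (and the participant) can simply run it.

def payoff(agent):
    """Newcomb payoff when the box is filled by simulating the agent's program."""
    box_full = agent() == "one-box"       # predictor runs the fixed program
    total = 1_000_000 if box_full else 0  # opaque box: $1M iff one-boxing was predicted
    if agent() == "two-box":              # same fixed program, so same answer both times
        total += 1_000                    # transparent box: always $1000
    return total

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

For any fixed program the box state is constant, which is the premise the dominance argument then runs on: holding the box state fixed, 2-boxing always adds $1000.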

OK. OK. OK. You TDT people will say something like "but I am a perfectly rational agent and therefore my actions are non-causally related to whether or not the box is full, thus I should 1-box since doing so makes the box full." On the other hand, if I modify your code to 2-box in this type of Newcomb's problem, you do better, and thus you were never perfectly rational to begin with.
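The modification argument can be sketched as follows. This is again a hypothetical setup of my own: the box is filled by simulating the original (one-boxing) program, but the code that actually plays has since been swapped for a two-boxing one.

```python
# Hypothetical sketch of the modification argument: the prediction is made
# against the ORIGINAL program, while a modified program makes the choice.

def payoff(predicted_program, actual_program):
    box_full = predicted_program() == "one-box"  # prediction from the original code
    total = 1_000_000 if box_full else 0
    if actual_program() == "two-box":            # actual behavior after the swap
        total += 1_000
    return total

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer, one_boxer))  # unmodified agent: 1000000
print(payoff(one_boxer, two_boxer))  # modified to 2-box: 1001000
```

With the prediction held fixed, the modified two-boxer collects $1,001,000 against the unmodified agent's $1,000,000, which is the sense in which "you do better" above.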

On the other hand, if the universe can punish you directly (i.e., not simply via your behavior) for running the wrong program, then the program that does best depends heavily on which universe you are in, and thus there cannot be a "perfectly rational agent" unless you assume a fixed prior over possible universes.