IlyaShpitser comments on Causal decision theory is unsatisfactory - LessWrong

20 Post author: So8res 13 September 2014 05:05PM


Comment author: IlyaShpitser 17 September 2014 03:40:13AM 1 point [-]

This is fighting the hypothetical: Omega can say it will only put the million in if it can find a proof, quickly enough, that you will one-box.

Comment author: Jiro 17 September 2014 02:21:50PM *  1 point [-]

If Omega only puts the million in if it finds a proof fast enough, it is then possible that you will one-box and not get the million.

(And saying "there isn't any such Omega" may be fighting the hypothetical. Saying there can't in principle be such an Omega is not.)

Comment author: nshepperd 17 September 2014 03:18:47PM 2 points [-]

> If Omega only puts the million in if it finds a proof fast enough, it is then possible that you will one-box and not get the million.

Yes, it's possible, and it serves you right for trying to be clever. Deciding whether a program halts isn't actually hard for a large class of programs, including the usual case for an agent in a typical decision problem (i.e., those that in fact do halt quickly enough to make an actual decision about the boxes in less than a day). If you deliberately write a very hard-to-predict program, then of course Omega takes away the money in retaliation, just like it does for the other attempts to "trick" it, such as acting randomly or looking inside the boxes with x-rays.
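The resource-bounded Omega described here can be made concrete with a toy model (a hypothetical sketch of my own, not anything proposed in the thread): represent an agent's decision procedure as a Python generator that yields once per step of deliberation, and have Omega simply run it under a step budget, filling the box only if the procedure finishes in time.

```python
def omega_predict(agent, max_steps=500):
    """A resource-bounded Omega: run the agent's decision procedure
    for at most max_steps steps. Return its decision if it halts in
    time, else None (no proof found, so no million)."""
    gen = agent()
    try:
        for _ in range(max_steps):
            next(gen)
    except StopIteration as done:
        return done.value  # the agent halted with a decision
    return None            # the agent didn't halt in time

def simple_one_boxer():
    """An ordinary agent: deliberates briefly, then one-boxes."""
    yield  # one step of deliberation
    return "one-box"

def stubborn_loop():
    """A deliberately hard-to-predict agent: never halts."""
    while True:
        yield

print(omega_predict(simple_one_boxer))  # 'one-box': box gets the million
print(omega_predict(stubborn_loop))     # None: Omega gives up
```

The point of the sketch is that no halting-problem oracle is needed: for any agent that in fact decides within the budget, direct simulation is the proof, and the pathological non-halting agents are exactly the ones that forfeit the money.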

Comment author: Jiro 17 September 2014 04:01:46PM *  -1 points [-]

The problem requires that Omega always be able to figure out what you will do. If Omega can only do so under a limited set of circumstances, you've changed one of the fundamental constraints of the problem.

You seem to be thinking of this as "the only time someone won't come to a decision fast enough is if they deliberately stall", which is sort of the reverse of fighting the hypothetical: you're deciding that an objection can't apply because it only arises in a situation you consider unlikely.

Suppose that, in order to decide what to do, I simulate Omega in my head as one of the steps of my decision process. That is not intentional delaying, but it could still run into halting-problem considerations. Or do you just say that Omega doesn't give me the money if I try to simulate him?
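The regress Jiro is pointing at can also be made concrete with a toy model (a hypothetical, self-contained sketch, not anything from the thread): model agents as generators that yield once per deliberation step, and a resource-bounded Omega that fills the box only if the agent's procedure halts within its step budget. An agent whose deliberation includes a step-by-step simulation of Omega's simulation of itself never bottoms out, so this bounded Omega finds no proof and leaves the box empty:

```python
def bounded_predict(agent, max_steps=100):
    """A resource-limited Omega: run the agent's decision procedure
    for at most max_steps steps; give up (return None) otherwise."""
    gen = agent()
    try:
        for _ in range(max_steps):
            next(gen)
    except StopIteration as done:
        return done.value
    return None

def simulates_omega():
    """An agent that deliberates by simulating Omega -- but Omega's
    model of the agent is this very procedure, so the nested
    simulation never finishes."""
    inner = simulates_omega()  # Omega's model of me is... me again
    while True:
        yield                  # one step of my own deliberation
        next(inner)            # advance the nested simulation a step

print(bounded_predict(simulates_omega))  # None: no proof in time
```

So in this model the answer to the question is "yes": an agent that insists on simulating Omega exhausts the budget just like a deliberate staller, even though it never intended to stall, and it doesn't get the million.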