Gary_Drescher comments on Discussion for Eliezer Yudkowsky's paper: Timeless Decision Theory - Less Wrong

10 Post author: Alexei 06 January 2011 12:28AM


Comment author: Manfred 06 January 2011 11:33:31AM * 1 point

By "tit for tat" I am referring to the notable strategy in the iterated prisoner's dilemma: cooperate on the first round, then copy whatever the other player did on the previous round. Agents using this strategy keep cooperating as long as the other player cooperates, but answer a defection with a defection of their own. It's an excellent strategy by many measures, beating out more complicated strategies, and we probably have something like it built into our heads.
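The strategy itself is tiny. A minimal sketch (hypothetical function name, with "C"/"D" standing for cooperate/defect):

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round; afterwards, copy the opponent's last move."""
    if not opponent_history:
        return "C"  # open with cooperation
    return opponent_history[-1]  # then mirror whatever the opponent did last
```

Against an unconditional cooperator it cooperates forever; a single defection is answered with exactly one retaliatory defection.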

By analogy, a "tit for tat" strategy in Newcomb's problem with transparent boxes would be to one-box if the Predictor "cooperates" (fills the box), and two-box if the Predictor "defects" (leaves it empty).

But what does the Predictor see when it looks into the future of an agent with this strategy? Either way it chooses, it will have chosen correctly, so the Predictor needs some other, non-decision-determined criterion to decide.

Alternatively, you could think of it as making the agent's decision-type undefined (at the time the Predictor is filling the boxes), which makes it impossible to give the problem any well-defined decision-determined statement.
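The fixed-point trouble can be made concrete. A sketch with hypothetical names, where `fill=True` means the Predictor put the money in the box:

```python
def tft_agent(box_is_full):
    # The "tit for tat" agent: one-box iff the Predictor "cooperated".
    return "one-box" if box_is_full else "two-box"

def filling_is_self_consistent(fill):
    # The Predictor fills the box exactly when it predicts one-boxing,
    # so a filling counts as a correct prediction iff the agent's
    # actual response matches it.
    return (tft_agent(fill) == "one-box") == fill
```

For this agent both fillings are self-consistent, so the criterion "predict the agent correctly" gives the Predictor no basis to choose between them.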

Comment author: Gary_Drescher 11 January 2011 09:38:52PM * 4 points

Just to clarify, I think your analysis here doesn't apply to the transparent-boxes version that I presented in Good and Real. There, the predictor's task is not necessarily to predict what the agent does for real, but rather to predict what the agent would do in the event that the agent sees $1M in the box. (That is, the predictor simulates what--according to physics--the agent's configuration would do, if presented with the $1M environment; or equivalently, what the agent's 'source code' returns if called with the $1M argument.)

If the agent would one-box if $1M is in the box, but the predictor leaves the box empty, then the predictor has not predicted correctly, even if the agent (correctly) two-boxes upon seeing the empty box.
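Drescher's setup can be sketched as calling the agent's code on the $1M input only (hypothetical names; an illustration of the idea, not the book's formalism):

```python
def predictor_fills_box(agent_code):
    # Simulate only the counterfactual in which the agent sees $1M in the
    # box; fill the box exactly when that simulation one-boxes.
    return agent_code(sees_million=True) == "one-box"

def one_boxer(sees_million):
    # An agent whose code returns one-boxing on the $1M argument
    # (and two-boxing on the empty-box argument).
    return "one-box" if sees_million else "two-box"
```

For this agent the predictor must fill the box: leaving it empty would falsify the prediction, even though the agent then correctly two-boxes upon seeing the empty box.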

Comment author: Manfred 12 January 2011 01:12:16AM 0 points

Interesting. This would seem to return it to the class of decision-determined problems, and for an illuminating reason - the algorithm is only run with one set of information - just like how in Newcomb's problem the algorithm has only one set of information no matter the contents of the boxes.

This way of thinking makes Vladimir's position more intuitive. To put words in his mouth, instead of being not decision determined, the "unfixed" version is merely two-decision determined, and then left undefined for half the bloody problem.

Comment author: Gary_Drescher 12 January 2011 02:30:39PM 0 points

and for an illuminating reason - the algorithm is only run with one set of information

That's not essential, though (see the dual-simulation variant in Good and Real).

Comment author: Manfred 12 January 2011 04:05:56PM 0 points

Well, yeah, so long as all the decisions have defined responses.