
Mitchell_Porter comments on Is causal decision theory plus self-modification enough? - Less Wrong Discussion

-4 Post author: Mitchell_Porter 10 March 2012 08:04AM


Comments (52)


Comment author: Mitchell_Porter 11 March 2012 06:47:49AM 0 points

I agree - at least if this CDT agent has the foresight to self-modify before getting "scanned" by Omega.

Could you have a CDT agent that's never thought about Newcomb problems, out for a stroll, when Omega appears and explains the situation, and the CDT agent then reasons its way to one-boxing anyway? Maybe, AIXI-style, it does an exhaustive investigation of the payoffs resulting from various actions, notices that changing itself into a one-boxer is correlated with a higher payoff, and so it performs the act!

Comment author: orthonormal 11 March 2012 07:58:41PM 2 points

It wouldn't work as you've stated it. The action of changing itself to a one-boxer would, according to its current decision theory, increase payoffs for every Newcomb's Problem it would encounter from that moment forward, but not for any in which the Predictor had already made its decision.
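The causal calculation orthonormal is pointing at can be made concrete. The sketch below (my own illustration, not from the thread) assumes the standard Newcomb payoffs: an opaque box holding $1,000,000 if the Predictor foresaw one-boxing (otherwise $0), and a transparent box always holding $1,000. Once the boxes are filled, CDT holds their contents fixed when comparing actions:

```python
def cdt_value(action, opaque_contents):
    """Causal expected value of an action once the Predictor's
    decision is fixed. `opaque_contents` is 0 or 1_000_000."""
    transparent = 1_000
    if action == "one-box":
        return opaque_contents
    return opaque_contents + transparent  # two-boxing also takes the $1,000

# Because the contents are causally independent of the current choice,
# two-boxing dominates in every state the CDT agent considers possible:
for contents in (0, 1_000_000):
    assert cdt_value("two-box", contents) == cdt_value("one-box", contents) + 1_000
```

This is why self-modifying now helps only with Predictors who haven't yet scanned the agent: for boxes already filled, the dominance argument above still recommends two-boxing.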

Seriously, you can work this out for yourself.

Comment author: Mitchell_Porter 11 March 2012 10:26:30PM 0 points

What confuses me here is that a causal model of reality would still tell it that being a one-boxer now will maximize the payoff now, if it examines possible worlds in the right way. It seems to come down to cognitive contingencies - whether its heuristics manage to generate this observation, without it then being countered by a "can't-change-the-past" heuristic.

I may need to examine the decision-theory literature to see what I can reasonably call a "CDT agent" - especially Gibbard & Harper, where the distinction from evidential decision theory is apparently defined.

Comment author: orthonormal 11 March 2012 10:39:59PM 1 point

if it examines possible worlds in the right way

That's the main difference between decision theories like CDT, TDT, and UDT.

Comment author: Will_Newsome 12 March 2012 09:26:03PM 1 point

I think it's the only difference between CDT and TDT: TDT gets a semi-correct causal graph, CDT doesn't. (Only semi-correct because the way Eliezer deals with Platonic nodes, i.e. straightforward Bayesian updating, doesn't seem likely to work in general. This is where UDT seems better than TDT.)

Comment author: Manfred 11 March 2012 08:18:27AM * 1 point

What is this "correlated" you speak of? :P I think that if Omega pops up with already-filled boxes, the standard argument for two-boxing goes through whether the CDT agent is self-modifying or not.