private_messaging comments on Decision Theory FAQ - Less Wrong

52 Post author: lukeprog 28 February 2013 02:15PM




Comment author: incogn 11 March 2013 08:55:27AM *  4 points [-]

Then I guess I will have to leave it to you to come up with a satisfactory example. The challenge is to include Newcomb-like predictive power for Omega, substantiate how Omega achieves this, and still pass your own standard that the subject makes the choice from their own point of view. It is very easy to accidentally create paradoxes in mathematics by assuming mutually exclusive properties for an object, and the best way to discover these is generally to see whether it is possible to construct or find an instance of the object described.

I don't think it is, actually. It just seems so because it presupposes that your own choice is predetermined, which is kind of hard to reason with when you're right in the process of making the choice. But that's a problem with your reasoning, not with the scenario. In particular, the CDT agent has a problem with conceiving of his own choice as predetermined, and therefore has trouble formulating Newcomb's problem in a way that he can use - he has to choose between getting two-boxing as the solution or assuming backward causation, neither of which is attractive.

This is not a failure of CDT, but one of your imagination. Here is a simple, five-minute model which has no problem conceiving of Newcomb's problem without any backwards causation:

  • T=0: Subject is initiated in a deterministic state which can be predicted by Omega.
  • T=1: Omega makes an accurate prediction for the subject's decision in Newcomb's problem by magic / simulation / reading code / infallible heuristics. Denote the possible predictions P1 (one-box) and P2.
  • T=2: Omega sets up Newcomb's problem with appropriate box contents.
  • T=3: Omega explains the setup to the subject and disappears.
  • T=4: Subject deliberates.
  • T=5: Subject chooses either C1 (one-box) or C2.
  • T=6: Subject opens box(es) and receives payoff dependent on P and C.

You can pretend to enter this situation at T=4, as suggested by the standard Newcomb's problem. Then you can use the dominance principle, and you will lose. But this is just using a terrible model. You entered at T=0, because you were needed at T=1 for Omega's inspection. If you did not enter the situation at T=0, then you can freely make a choice C at T=5 without any correlation to P - but that is not Newcomb's problem.

Instead, at T=4 you become aware of the situation, and your decision-making algorithm must return a value for C. If you consider this only from T=4 onward, it is completely uninteresting, because C is already determined. At T=1, P was determined to be either P1 or P2, and the value of C follows directly from this. Obviously, healthy one-boxing code wins and unhealthy two-boxing code loses, but there is no choice being made here, just different code with different return values being rewarded differently - and that is not Newcomb's problem either.
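A minimal sketch of the timeline above, assuming Omega predicts simply by running the subject's own deterministic code (all names here are illustrative, not part of the original problem statement):

```python
def one_boxer():
    return "C1"  # code that always one-boxes

def two_boxer():
    return "C2"  # code that always two-boxes

def run_newcomb(agent):
    prediction = agent()  # T=1: Omega simulates the agent's code
    opaque = 1_000_000 if prediction == "C1" else 0  # T=2: box contents fixed
    choice = agent()  # T=5: same deterministic code, so same value as the prediction
    transparent = 1_000
    return opaque + (transparent if choice == "C2" else 0)  # T=6: payoff

print(run_newcomb(one_boxer))  # 1000000
print(run_newcomb(two_boxer))  # 1000
```

Because P and C are both return values of the same code, the off-diagonal outcomes (P1 with C2, P2 with C1) are simply unreachable, with no backwards causation needed.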

Finally, we will work under the illusion of choice, with Omega as a perfect predictor. We realize that T=0 is the critical moment, since all subsequent T follow directly from it. We work backwards as follows:

  • T=6: My preferences are P1C2 > P1C1 > P2C2 > P2C1.
  • T=5: I should choose either C2 or C1 depending on the current value of P.
  • T=4: This is when all this introspection is happening.
  • T=3: This is why.
  • T=2: I would really like there to be a million dollars present.
  • T=1: I want Omega to make prediction P1.
  • T=0: Whew, I'm glad I could do all this introspection, which made me realize that I want P1 and that the way to achieve this is C1. It would have been terrible if my decision making just worked by the dominance principle. Luckily, the epiphany I just had, C1, was already predetermined at T=0; Omega would have been aware of it at T=1 and made the prediction P1, so (...) and P1C1 = a million dollars is mine.

Shorthand version of all the above: if the decision is necessarily predetermined before T=4, then you should not pretend you make the decision at T=4. Insert a decision-making step at T=0.5, which causally determines the values of P and C. Apply your CDT to this step.
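That move can be sketched in a few lines, assuming a perfect predictor so that P simply equals whatever the code chosen at T=0.5 will return (the names are made up for illustration):

```python
def payoff(code):
    # CDT evaluated at T=0.5: the choice of code causally determines
    # both Omega's prediction P and the later choice C.
    prediction = code  # a perfect predictor's P equals the code's eventual output
    opaque = 1_000_000 if prediction == "C1" else 0
    transparent = 1_000 if code == "C2" else 0
    return opaque + transparent

best_code = max(["C1", "C2"], key=payoff)
print(best_code)  # C1
```

Applied at T=0.5 rather than T=4, plain causal dominance reasoning picks one-boxing, since payoff("C1") is 1,000,000 against payoff("C2") at 1,000.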

This is the only way of doing CDT honestly. It is a little messy, but that is exactly what happens when the problem itself contains a reference to the decision the decision theory is going to make in the future, perfectly correlated with the decision before the decision has overtly been made. This sort of self-reference creates impossibilities out of thin air every day of the week, such as when Pinocchio says "my nose will grow now". The good news is that this way of doing it is a lot less messy than inventing a new, superfluous decision theory, and it also allows you to deal with problems like the psychopath button without any trouble whatsoever.

Comment author: private_messaging 15 March 2013 04:15:22PM *  -1 points [-]

Well, a practically important example is a deterministic agent which is copied, and the copies then play the prisoner's dilemma against each other.

There you have agents that use physics. Those, when evaluating hypothetical choices, use some model of physics in which an agent can model itself as a copyable deterministic process that it can't directly simulate (i.e. it knows that the matter inside its head obeys known laws of physics). In the hypothetical where it cooperates, after processing the physics, it is found that the copy cooperates; in the hypothetical where it defects, it is found that the copy defects.

And then there are philosophers. The worse ones don't know much about causality. They presumably have some sort of ill-specified oracle that we don't know how to construct, which will tell them what is a 'consequence' and what is a 'cause', and they'll only process the 'consequences' of the choice taken as a 'cause'. This weird oracle tells us that the other agent's choice is not a 'consequence' of the decision, so it cannot be processed. It's very silly and not worth spending brain cells on.

Comment author: incogn 15 March 2013 04:37:36PM 0 points [-]

Playing prisoner's dilemma against a copy of yourself is mostly the same problem as Newcomb's. Instead of Omega's prediction being perfectly correlated with your choice, you have an identical agent whose choice will be perfectly correlated with yours - or, possibly, randomly distributed in the same manner. If you can also assume that both copies know this with certainty, then you can do the exact same analysis as for Newcomb's problem.

Whether you have a prediction made by an Omega or a decision made by a copy really does not matter, as long as they both are automatically going to be the same as your own choice, by assumption in the problem statement.
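The equivalence can be made concrete with the usual illustrative payoff numbers (not part of the comment above): under perfect correlation only the diagonal outcomes are reachable, so the comparison collapses to the same one as one-boxing versus two-boxing.

```python
# Standard illustrative prisoner's dilemma payoffs (row player's score).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def twin_payoff(my_move):
    copy_move = my_move  # perfect correlation: the copy chooses as you do
    return PAYOFF[(my_move, copy_move)]

print(twin_payoff("C"), twin_payoff("D"))  # 3 1
```

Just as with Omega, defection "dominates" only if the off-diagonal cells are treated as live possibilities, which the problem statement rules out.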

Comment author: private_messaging 15 March 2013 06:23:06PM *  -1 points [-]

The copy problem is well specified, though, unlike the "predictor". I clarified more in private. The worst part about Newcomb's is that all the ex-religious folks seem to substitute something they formerly knew as 'god' for the predictor. The agent can also be further specified, e.g. as a finite Turing machine made of cogs and levers and tape with holes in it. The agent can't simulate itself directly, of course, but it knows some properties of itself without simulation. E.g. it knows that in the alternative where it chooses to cooperate, its initial state was in set A - the states that result in cooperation; in the alternative where it chooses to defect, its initial state was in set B - the states that result in defection; and that no state is in both sets.
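A toy sketch of that last point, with a made-up deterministic decision rule standing in for the cogs-and-levers machine over a small set of initial states:

```python
def decide(initial_state):
    # Arbitrary deterministic rule, purely for illustration: which choice
    # a machine started in this state ends up making.
    return "cooperate" if initial_state % 2 == 0 else "defect"

STATES = range(8)
A = {s for s in STATES if decide(s) == "cooperate"}  # states that result in cooperation
B = {s for s in STATES if decide(s) == "defect"}     # states that result in defection

assert A & B == set()           # no state is in both sets
assert A | B == set(STATES)     # every state leads to exactly one choice
```

The agent never needs to run `decide` on itself; it only needs to know that the partition of its initial states into A and B exists and is exclusive.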