Vladimir_Nesov comments on Is causal decision theory plus self-modification enough? - Less Wrong

-4 Post author: Mitchell_Porter 10 March 2012 08:04AM




Comment author: Vladimir_Nesov 12 March 2012 03:38:59PM 0 points

If no facts about the nature of the "noise" are specified, then the phrase "probability of correct decision by Omega is 0.9" does not make sense.

That is just what "probability" means: it quantifies possibilities that can't be ruled out, where it's not possible to distinguish those that do take place from those that don't.

Comment author: gRR 12 March 2012 04:01:10PM 0 points

Bayesians say all probabilities are conditional. The question here is what this "0.9" probability is conditioned on.

Comment author: Vladimir_Nesov 12 March 2012 04:29:26PM 0 points

On my having chicken for supper. Unless you can unpack "being conditional" into something more than a bureaucratic hoop that's easily jumped through, it's of no use.

Comment author: gRR 12 March 2012 09:28:23PM 0 points

On reflection, my previous comment was off the mark. Knowing that Omega always predicts "two-box" would be an obvious correlation between a property of the agents and the quality of the prediction. So your correction basically states that the second view is the "natural" one: Omega always predicts correctly and then modifies the answer in 10% of cases.

In that case, the "simulation uncertainty" argument should work the same way as in the "pure" Newcomb's problem, with a correction for the 10% noise (which does not change the answer).

Comment author: gRR 12 March 2012 04:52:24PM 0 points

Oh, come on. According to Jaynes, the marginal probability P(Omega is correct | Omega predicts something) is supposed to be additionally conditioned on everything you know about the situation. If you know that Omega always predicts "two-box", then P(Omega is correct | Omega predicts something) is equal to the relative frequency of two-boxers in the population. If you know that Omega first always predicts correctly and then modifies its answer in 10% of cases, then it's something completely different. If you have no knowledge about whether the first or the second is true, then what can you do? Presumably, try Solomonoff induction; too bad it's uncomputable.
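For illustration (this sketch is not from the original thread), here is a minimal Python simulation of the two models of Omega discussed above. The assumed population mix of 90% two-boxers is a hypothetical parameter chosen so that both models yield the same marginal accuracy of 0.9, even though they diverge completely once you condition on the agent's actual choice:

```python
import random

random.seed(0)

def omega_always_two_box(agent_choice):
    # Model 1: Omega ignores the agent and always predicts "two-box".
    return "two-box"

def omega_noisy_oracle(agent_choice, noise=0.1):
    # Model 2: Omega predicts correctly, then flips its answer in
    # 10% of cases ("noise").
    if random.random() < noise:
        return "one-box" if agent_choice == "two-box" else "two-box"
    return agent_choice

def accuracy(predictor, agents):
    # Fraction of agents whose choice the predictor gets right.
    hits = sum(predictor(a) == a for a in agents)
    return hits / len(agents)

# A population that happens to contain 90% two-boxers (assumption).
population = ["two-box"] * 9000 + ["one-box"] * 1000

print(accuracy(omega_always_two_box, population))   # exactly 0.9
print(accuracy(omega_noisy_oracle, population))     # about 0.9

# The marginals agree, but conditioning on "the agent one-boxes"
# separates the two models sharply:
one_boxers = ["one-box"] * 1000
print(accuracy(omega_always_two_box, one_boxers))   # 0.0
print(accuracy(omega_noisy_oracle, one_boxers))     # about 0.9
```

This is the sense in which "probability of correct decision is 0.9" underdetermines the problem: the two predictors have identical marginal accuracy on this population, yet recommend opposite strategies to a would-be one-boxer.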