Gabriel comments on Decision Theories: A Less Wrong Primer - Less Wrong

69 Post author: orthonormal 13 March 2012 11:31PM


Comment author: cousin_it 12 March 2012 01:29:57PM 1 point

If Omega makes the decision by analyzing the agent's psychological tests taken in childhood, then the agent should two-box.

Sorry, could you explain this in more detail?

Comment author: Gabriel 12 March 2012 03:03:19PM * 7 points

I think the idea is that even if Omega always predicted two-boxing, it could still be said to predict with 90% accuracy if 10% of the human population happened to be one-boxers. And yet you should two-box in that case. So the non-deterministic version of Newcomb's problem isn't specified clearly enough.
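To make this concrete, here is a sketch of that degenerate-predictor scenario, assuming the standard Newcomb payoffs of $1M potentially in the opaque box and $1K in the transparent one (the same figures used elsewhere in this thread):

```python
# Degenerate predictor: Omega always predicts two-boxing, so the opaque box
# is always empty, no matter what you do.
M, K = 1_000_000, 1_000  # assumed payoffs: $1M opaque box, $1K transparent box

one_boxer_fraction = 0.10
# Omega is "right" about everyone who two-boxes, i.e. 90% of the population:
population_accuracy = 1 - one_boxer_fraction

# Your expected payoff, given that the prediction never depends on your choice:
ev_one_box = 0        # you take only the (always empty) opaque box
ev_two_box = 0 + K    # empty opaque box plus the guaranteed $1K

assert abs(population_accuracy - 0.9) < 1e-12
assert ev_two_box > ev_one_box  # so two-boxing is correct in this scenario
```

So a "90% accurate" Omega, measured over the population, is compatible with a predictor whose output carries no information about your individual choice, and against such a predictor two-boxing strictly dominates.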

Comment author: ksvanhorn 13 March 2012 06:47:23PM 2 points

I disagree. To be at all meaningful to the problem, the "90% accuracy" has to mean that, given all the information available to you, you assign a 90% probability to Omega correctly predicting your choice. This is quite different from Omega correctly predicting the choices of 90% of the human population.

Comment author: drnickbone 13 March 2012 07:37:03PM 0 points

I don't think this works in the example given, where Omega always predicts two-boxing. We agree that the correct thing to do in that case is to two-box. And if I've decided to two-box, then I can be more than 90% confident that Omega will predict my personal actions correctly. But this still shouldn't make me one-box.

I've commented on Newcomb in previous threads; in my view it really does matter how Omega makes its predictions, and whether they are perfectly reliable or merely very reliable.

Comment author: jimmy 14 March 2012 06:18:16PM * 0 points

Agreed for that case, but perfect reliability still isn't necessary (consider an Omega that is 99.99% accurate with 10% one-boxers, for example).

What matters is that your uncertainty in Omega's prediction is tied to your uncertainty in your own actions. If you're 90% confident that Omega gets it right conditional on deciding to one-box, and 90% confident that Omega gets it right conditional on deciding to two-box, then you should one-box (0.9 × $1M > $1K + 0.1 × $1M).

Comment author: gRR 12 March 2012 03:33:14PM 1 point

Far better explanation than mine, thanks!

Comment author: orthonormal 12 March 2012 04:22:00PM 0 points

Good point. I don't think this is worth going into within this post, but I introduced a weasel word to signify that the circumstances of a 90% Predictor do matter.

Comment author: cousin_it 12 March 2012 03:09:11PM 0 points

Very nice, thanks!