Kevin comments on Newcomb's Problem and Regret of Rationality - Less Wrong

64 Post author: Eliezer_Yudkowsky 31 January 2008 07:36PM


Comment author: simplicio 06 March 2010 06:15:53AM 0 points

My solution to the problem of the two boxes:

Flip a coin. If heads, both A & B. If tails, only A. (If the superintelligence can predict a coin flip, make it a radioactive decay or something. Eat quantum, Hal.)

In all seriousness, this is a very odd problem (I love it!). Of course two boxes is the rational solution - it's not as if post-facto cogitation is going to change anything. But the problem statement seems to imply that it is actually impossible for me to choose the choice I don't choose, i.e., choice is actually impossible.

Something is absurd here. I suspect it's the idea that my choice is totally predictable. There can be a random element to my choice if I so choose, which kills Omega's plan.

Comment author: Kevin 06 March 2010 08:54:38AM 2 points

> I suspect it's the idea that my choice is totally predictable

At face value, that does sound absurd. The problem is that you are underestimating a superintelligence. Imagine that the universe is a computer simulation, so that a set of physical laws plus a very, very long string of random numbers is a complete causal model of reality. The superintelligence knows the laws and all of the random numbers. You still make a choice, even though that choice ultimately depends on everything that preceded it. See http://wiki.lesswrong.com/wiki/Free_will
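The point about the simulation can be made concrete with a toy sketch (my illustration, not from the original comment; the `choice` function and its inputs are hypothetical stand-ins). If the agent's decision is a deterministic function of the laws plus the fixed string of "random" numbers, then a predictor that knows both inputs can reproduce the decision exactly, even though the agent still goes through the process of choosing:

```python
import hashlib

def choice(physical_laws: str, random_string: str) -> str:
    """Stand-in for the agent's decision process: deterministic in its inputs."""
    digest = hashlib.sha256((physical_laws + random_string).encode()).hexdigest()
    return "one-box" if int(digest, 16) % 2 == 0 else "two-box"

laws = "hypothetical encoding of physical law"
randomness = "hypothetical pre-set string of random numbers"

agent_choice = choice(laws, randomness)       # what "you" end up doing
omega_prediction = choice(laws, randomness)   # Omega runs the same computation

# The prediction matches the choice, by construction, every time.
assert omega_prediction == agent_choice
```

Adding a coin flip does not escape this: under the stated assumption, the coin's outcome is itself part of the random string the predictor already knows.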

I think much of the debate about Newcomb's Problem is really a debate about the definition of superintelligence.