ArisKatsaris comments on Newcomb's Problem and Regret of Rationality - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It is not possible for an agent to make a rational choice between one box and two if both the agent and Omega can be simulated by Turing machines. Proof sketch: Omega predicts the agent's decision by simulating it, which requires Omega to have greater algorithmic complexity than the agent (including the nonzero complexity of the compiler or interpreter). But a rational choice by the agent would in turn require it to simulate Omega, which would require the agent to have the greater algorithmic complexity instead.
In other words, the agent X, with complexity K(X), must model Omega, which has complexity K(X + "put $1 million in box B if X does not take box A"), slightly greater than K(X).
In the AIXI framework, the ideal rational agent guesses that Omega is the shortest program consistent with the interaction observed so far. But it can never guess Omega correctly, because Omega's complexity is greater than its own. Since AIXI is optimal, no other agent can make a rational choice either.
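The regress the argument relies on can be sketched in a few lines of Python. This is only an illustration under the comment's assumption that each party must simulate the other; the function names are invented for the sketch, not part of the original argument:

```python
# Illustrative sketch (hypothetical names): if the agent's rational choice
# requires simulating Omega, and Omega's prediction requires simulating
# the agent, each simulation spawns the other and never bottoms out.

def omega_predicts(depth=0):
    # Omega predicts the agent by simulating the agent's deliberation.
    return agent_chooses(depth + 1)

def agent_chooses(depth=0):
    # A "rational" agent models Omega's prediction before choosing.
    return omega_predicts(depth + 1)

try:
    agent_chooses()
except RecursionError:
    # Python's recursion limit stands in for the fact that the mutual
    # simulation has no base case and cannot terminate.
    print("mutual simulation never terminates")
```

In Python the regress surfaces as a `RecursionError`; in the abstract argument it surfaces as the complexity inequality, since neither program can contain a complete description of the other.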
As an aside, this is also a wonderful demonstration of the illusion of free will.
Not so. I don't need to simulate a hungry tiger in order to stay safely (and rationally) away from it, even though I don't know the exact methods by which its brain will identify me as a tasty treat. If you think that one can't "rationally" stay away from hungry tigers, then we're using the word "rationally" vastly differently.