Vladimir_Nesov comments on Rationality is Systematized Winning - Less Wrong
That's brilliant! (I'm not sure what you mean by "understand", though.)
In other words, Omega does one of two things: it either offers you $1000 + $1, or only $10. It offers the $1000 + $1 only if it predicts that you won't take the $1; otherwise it gives you only $10.
This is a variant of counterfactual mugging, except that there is no chance involved. Your past self prefers to precommit to not taking the $1, while your present self, faced with that situation, prefers to take the $1.
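The payoff structure above can be sketched in a few lines (a minimal illustration, assuming Omega's prediction of your disposition is perfect; the function name is mine, not from the original scenario):

```python
# Omega perfectly predicts your disposition. If you are the kind of agent
# that would take the extra $1, it offers only $10; otherwise it offers
# $1000 plus a $1 that, by assumption, you leave on the table.

def payoff(would_take_the_dollar: bool) -> int:
    if would_take_the_dollar:
        # Omega predicted this, so it offered only $10.
        return 10
    # Omega offered $1000 + $1; you take the $1000 and refuse the $1.
    return 1000

print(payoff(would_take_the_dollar=True))   # 10
print(payoff(would_take_the_dollar=False))  # 1000
```

This makes the precommitment pressure concrete: the refusing disposition nets $1000, while the taking disposition nets $10, even though taking the $1 is locally a free gain.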
Hmmm... It looks like the decision to take the $1 removes from reality the very situation in which you make that decision. Effects of precommitment being restricted to counterfactual branches are a usual thing, but in this problem they stare you right in the face, which is rather daring.
Another variation, playing only on the real/counterfactual distinction, without any payoff motivating the real decision. Omega comes to you and offers $1, if and only if it predicts that you won't take it. What do you do? The problem looks neutral, since the expected gain is zero in both cases. But the decision to take the $1 sounds rather bizarre: if you take the $1, then you don't exist!
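The neutrality claim can be checked in the same style (again a sketch under the perfect-prediction assumption; the helper name is illustrative):

```python
# The $1-only variant: Omega offers the $1 only if it predicts you won't
# take it, so a perfectly predicted agent nets $0 either way.

def payoff(would_take: bool) -> int:
    offered = not would_take  # Omega offers iff it predicts refusal
    if not offered:
        return 0              # no offer is ever made to a taker
    return 0                  # the offer is made, but you refuse it

print(payoff(True))   # 0
print(payoff(False))  # 0
```

Both dispositions yield $0, which is why the decision carries no expected-value pull; the only thing at stake is whether the situation you are reasoning in is real.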
Agents self-consistent under reflection are counterfactual zombies, indifferent to whether they are real or not.
Seems roughly as disturbing as Wikipedia's article on Gaussian adaptation:
If you want your source code to be self-consistent under reflection, you know what you have to do.