Stuart_Armstrong comments on Should logical probabilities be updateless too? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Well, it seems obvious that it's true - but tricky to formalise. Subtle problems like agent-simulates-predictor (where you know more than Omega), and maybe some diagonal agents (who apply diagonal reasoning to you), seem to be relatively believable situations. It's a bit like Gödel's theorem - initially, the only examples were weird and specifically constructed, but then people found more natural examples.
But "do what you would have precommitted to doing" seems to be much better than other strategies, even if it's not provably ideal.
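To make the tension concrete, here's a minimal toy sketch of the standard Newcomb payoffs (the usual $1M / $1k numbers; the function name and structure are just illustrative). It shows why "do what you would have precommitted to" is attractive: against an accurate predictor, one-boxing wins, even though two-boxing dominates for any fixed prediction.

```python
def payoff(choice, prediction):
    """Newcomb payoffs: the opaque box holds $1M iff the predictor
    predicted one-boxing; the transparent box always holds $1k."""
    big = 1_000_000 if prediction == "one-box" else 0
    small = 1_000
    return big + (small if choice == "two-box" else 0)

# Against an accurate predictor (prediction == choice),
# the precommitted one-boxer comes out ahead:
assert payoff("one-box", "one-box") > payoff("two-box", "two-box")

# ...yet for any *fixed* prediction, two-boxing dominates -
# which is exactly the pull the precommitment strategy resists:
for p in ("one-box", "two-box"):
    assert payoff("two-box", p) > payoff("one-box", p)
```

The agent-simulates-predictor case is what happens when the agent can compute `prediction` before choosing, making the "fixed prediction" reasoning feel compelling at exactly the wrong moment.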