hairyfigment comments on Newcomb's Problem and Regret of Rationality - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
While I disagree that one-boxing still wins, I'm most interested in seeing both the "no future peeking" condition and Omega's actual success rate defined as givens. It's important that I can rely on the 99.9% value, rather than wondering whether it is merely inferred from Omega's past 100 correct predictions (which could, with non-negligible probability, have been a fluke).
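A minimal sketch of why the stipulated success rate matters, assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) and the 99.9% figure above; it also computes the chance that 100 correct predictions were a pure coin-flip fluke:

```python
# Expected payoffs in Newcomb's problem, assuming the standard payoffs
# and treating Omega's 99.9% success rate as a given.
SMALL = 1_000        # transparent box A (always yours)
BIG = 1_000_000      # opaque box B (filled iff Omega predicted one-boxing)
p = 0.999            # Omega's stipulated success rate

# One-box: you get box B, which is full iff Omega predicted correctly.
one_box_ev = p * BIG + (1 - p) * 0

# Two-box: you get box A, plus box B iff Omega predicted incorrectly.
two_box_ev = p * SMALL + (1 - p) * (SMALL + BIG)

# Probability that a coin-flipping "Omega" gets 100 predictions right.
fluke = 0.5 ** 100
```

Under these assumptions one-boxing's expected value is $999,000 against $2,000 for two-boxing, and the coin-flip fluke probability is about 8e-31, which is why a literal 1/2 to the power of 100 would indeed be negligible.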
That does indeed seem like the standard version of Newcomb's. (Though I don't understand your last sentence, assuming "non-negligible" does not mean 1/2 to the power of 100.)
Can you spell out what you mean by "if" in this context? Because a lot of us are explicitly talking about the best algorithm to program into an AI.