hairyfigment comments on Newcomb's Problem and Regret of Rationality - Less Wrong

Post author: Eliezer_Yudkowsky | 31 January 2008 07:36PM



Comment author: rstarkov | 29 August 2011 04:38:01PM | 0 points

While I disagree that one-boxing still wins, I'm most interested in seeing the "no future peeking" condition and Omega's actual success rate defined as givens. It's important that I can rely on the 99.9% figure, rather than wondering whether it was perhaps inferred from 100 correct past predictions (which could, with non-negligible probability, have been a fluke).

Comment author: hairyfigment | 29 August 2011 05:04:09PM | 1 point

That does indeed seem like the standard version of Newcomb's. (Though I don't understand your last sentence, assuming "non-negligible" does not mean 1/2 to the power of 100.)
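To put a number on that parenthetical: a minimal sketch, assuming the "fluke" hypothesis means a predictor with no real skill who guesses each of 100 independent predictions with 50/50 odds. The names here are illustrative, not from the original thread.

```python
# Probability that a skill-less predictor (a fair coin flip per trial)
# gets 100 independent predictions right by pure chance.
fluke_prob = 0.5 ** 100
print(f"P(100 correct by chance) = {fluke_prob:.3e}")  # about 7.9e-31
```

At roughly 8 × 10⁻³¹, this is negligible by any ordinary standard, which is presumably why hairyfigment flags that "non-negligible" must be doing some other work in the parent comment (e.g. a fluke relative to a prior over less-than-perfect predictors, rather than chance-level guessing).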

Can you spell out what you mean by "if" in this context? Because a lot of us are explicitly talking about the best algorithm to program into an AI.