
FAWS comments on Counterfactual Mugging - Less Wrong

48 Post author: Vladimir_Nesov 19 March 2009 06:08AM




Comment author: FAWS 06 September 2010 04:16:50AM -1 points [-]

The obvious extensions of the problem to cases with a fallible Omega are:

  1. P($1,000,000) = P(onebox)
  2. Reward = $1,000,000 * P(onebox)
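The two variants above can be sketched as toy payout functions. This is a minimal illustration (the function names are mine, not from the thread): in variant 1 Omega fills the box with probability equal to its prediction P(onebox); in variant 2 it simply pays out the expected value.

```python
import random

def reward_variant_1(p_onebox: float, prize: float = 1_000_000.0) -> float:
    """Variant 1: the $1,000,000 is present with probability P(onebox)."""
    return prize if random.random() < p_onebox else 0.0

def reward_variant_2(p_onebox: float, prize: float = 1_000_000.0) -> float:
    """Variant 2: Omega pays $1,000,000 * P(onebox) directly."""
    return prize * p_onebox
```

Variant 1 keeps the all-or-nothing character of the original problem; variant 2 makes the payout itself a continuous function of Omega's prediction, which is what taw's exploit below turns on.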
Comment author: taw 06 September 2010 04:36:22AM -2 points [-]

In the Bayesian interpretation, P() would be Omega's subjective probability. In the frequentist interpretation the question doesn't make any sense, since you make a single boxing decision, not a large number of tiny boxing decisions. Either way P() is very ill-defined.

Comment author: FAWS 06 September 2010 04:47:18AM *  1 point [-]

> Either way P() is very ill-defined.

No more so than other probabilities. Probabilities about the future decisions of other actors aren't disprivileged; thinking otherwise would be free-will confusion. And are you seriously claiming that the probabilities of a coin flip don't make sense under a frequentist interpretation? That was the context. In the general case it would be the long-run relative frequency with which possible versions of you (similar enough to be indistinguishable to Omega) decide that way, or something like that, if you insisted on using frequentist statistics for some reason.

Comment author: taw 06 September 2010 05:45:32AM 1 point [-]

(this comment assumes "Reward = $1,000,000 * P(onebox)")

You misunderstand the frequentist interpretation: the sample size is 1; you either decide yes or decide no. Generalizing from a single decider requires a prior reference class ("coin tosses"), getting us into Bayesian subjective interpretations. Frequentists don't have any concept of "probability of a hypothesis" at all, only "probability of data given a hypothesis", and the only way to connect the two is with priors. "Frequency among possible worlds" is also a Bayesian notion that weirds frequentists out.

Anyway, if Omega has amazing prediction powers, and P() can be deterministically learned by looking into the box, that is far more valuable than a mere $1,000,000! Let's say I make my decision by randomly generating a string and checking whether it's a valid proof of the Riemann hypothesis: if P() is non-zero, I've made myself $1,000,000 anyway.
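The exploit can be made concrete with a small sketch (assuming the "Reward = $1,000,000 * P(onebox)" variant; the function name is hypothetical). If the decision procedure one-boxes exactly when a randomly generated string happens to be a valid proof, then the exact payout reveals the probability of that event, and in particular whether it is zero:

```python
def exploit_payout(p_valid_proof: float, prize: float = 1_000_000.0) -> float:
    # Decision procedure: one-box iff a randomly generated string is a
    # valid proof of the target statement. Under the probabilistic payout
    # rule, the reward equals prize * P(onebox), so reading the exact
    # payout tells you whether P(onebox) is non-zero, i.e. whether any
    # valid proof exists in the search space at all.
    return prize * p_valid_proof

# If no proof exists, the payout is exactly $0; any non-zero payout
# certifies that a proof exists somewhere in the search space.
```

This is why taw values the information channel more than the money itself: Omega's payout becomes an oracle for the existence of a proof.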

I understand that there's an obvious technical problem if Omega rounds the number to whole dollars, but that's just a minor detail.

And it's actually a lot worse in the popular problem formulation of "if your decision relies on randomness, there will be no million", which tries to work around coin tossing. In that case a person randomly trying to prove a false statement gets the million (since no proof could work, so his decision was reliable), while a person randomly trying to prove a true statement gets $0 (since there's a non-zero chance of him randomly generating a correct proof).

Another fun idea would be measuring both the position and the velocity of an electron: toss a coin to decide which one to measure yourself, then get the other from Omega.

Possibilities are just endless.

Comment author: FAWS 06 September 2010 06:21:04AM 0 points [-]

The issue was whether the formulation makes sense, not whether it makes frequentists freak out (and it's not substantially different from, e.g., drawing from an urn for the first time). In either case P() was the probability of an event, not of a hypothesis.

In these sorts of problems you are supposed to assume that the dollar amounts match your actual utilities. (As you observe, your exploit doesn't work anyway for tests with a probability below 5*10^-9 if Omega rounds to cents; and you could just assume that you have already gained all the knowledge you could gain through such tests, or that Omega possesses exactly the same knowledge as you except about human psychology, or whatever.)
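The rounding threshold is easy to check. A minimal sketch (my own illustration, assuming the $1,000,000 * P(onebox) payout rounded to whole cents): the payout rounds to $0.00 exactly when $1,000,000 * P < $0.005, i.e. when P < 5*10^-9, so any event rarer than that is invisible to the exploit.

```python
def rounded_payout(p: float, prize: float = 1_000_000.0) -> float:
    # Payout under "Reward = $1,000,000 * P(onebox)", rounded to cents.
    # Rounds to $0.00 whenever prize * p < $0.005, i.e. p < 5e-9.
    return round(prize * p, 2)
```

So a proof-search whose success probability is below 5*10^-9 yields a payout indistinguishable from zero, which is exactly the limitation conceded above.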