Manfred comments on Counterfactual Calculation and Observational Knowledge - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
See the edit. If Omega randomly visits a possible world, I can say ahead of time that there is a 99% chance that the calculator result in that particular world is correct (in which case the decision affects 1% of all worlds), and a 1% chance that the result is wrong (in which case the decision affects 99% of all worlds).
So you know a priori that the answer is Even, without even looking at the calculator? That can't be right.
(You're assuming that you know that Omega only arrives in "even" worlds, and updating on observing Omega, even before observing it. But in the same movement, you update on the calculator showing "even". Omega doesn't show up in the "odd" world, so you can't update on the fact that it shows up, other than by observing it, or alternatively observing "even" given the assumption of equivalence of these events.)
Of course not.
No. I'm assuming that either even is correct in all worlds or odd is correct in all worlds (with a 0.5 prior for each). If Omega randomly picks a world, the chance of the calculator being correct is independent of that pick and is 99% everywhere, so there is a 99% chance of the calculator being correct in the particular world Omega arrives in. If odd is correct, Omega is 99% likely to arrive in a world where the calculator says odd; and if the calculator says odd in the particular world Omega arrives in, there is a 99% chance that that's because odd is correct.
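The arithmetic in that paragraph can be checked with a short Bayes computation. This is a hypothetical Python sketch, not from the thread; the 0.5 prior and the 99% calculator reliability are the numbers assumed above:

```python
# Numbers assumed in the thread: 0.5 prior on Odd vs. Even,
# calculator correct with probability 0.99 in every world.
p_odd = 0.5
p_says_odd_given_odd = 0.99   # calculator is right
p_says_odd_given_even = 0.01  # calculator is wrong

# P("odd") by total probability, then Bayes' rule for P(Odd | "odd").
p_says_odd = p_says_odd_given_odd * p_odd + p_says_odd_given_even * (1 - p_odd)
p_odd_given_says_odd = p_says_odd_given_odd * p_odd / p_says_odd

print(p_odd_given_says_odd)  # 0.99
```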
EDIT: If I were assuming that Omega only arrives in worlds where the calculator shows "even", the probability of even being correct would be 50% no matter what, and there would be a 50% chance each of affecting 99% of all worlds or 1% of all worlds.
I seem to agree with all of the above statements. The conditional probabilities are indeed this way. But it's incorrect to use these conditional probabilities (which is to say, probabilities of Odd/Even after updating on observing "even") to compute expected utility for the counterfactual. In a prior comment, you write:
99% is P(Even|Omega,"even"), that is to say, it is the probability of Even updated on the observations (events) that Omega appears and that the calculator shows "even".
No. There is no problem with using conditional probabilities if you use the correct conditional probabilities, that is, the probabilities from the point where the decision happens, not from what you personally encounter. And I never claimed that any of the pieces you were quoting were part of an updateless analysis, just that it made no difference.
I would try to write a Wei Dai style world program at this point, but I know no programming at all and am unsure how drawing at random is supposed to be represented. It would be the same as the program for this game, though:
1 black ball and 99 white balls in an urn. You prefer white balls. You may decide to draw a ball and change all balls of the other color to the color drawn, but must decide before the draw is made. (Or, to make it slightly more complicated: someone else secretly flips a coin to decide whether you get points for black or white balls. You get 99 balls of the color you get points for and one ball of the other color.)
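Since no world program was written, here is one possible sketch of the urn game in Python (hypothetical code, not from the thread; the function name is made up). It simulates many rounds and compares average scores with and without the replace option:

```python
import random

def play_urn_game(replace: bool, rng: random.Random) -> int:
    """One round: 99 white balls (1 point each) and 1 black ball.
    If `replace`, one ball is drawn at random and all balls of the
    other color are changed to the drawn color before scoring."""
    balls = ["white"] * 99 + ["black"]
    if replace:
        drawn = rng.choice(balls)
        balls = [drawn] * 100
    return sum(1 for b in balls if b == "white")

rng = random.Random(0)
trials = 100_000
no_replace = sum(play_urn_game(False, rng) for _ in range(trials)) / trials
with_replace = sum(play_urn_game(True, rng) for _ in range(trials)) / trials
print(no_replace)    # always 99.0
print(with_replace)  # ≈ 99, i.e. 0.99 * 100 + 0.01 * 0
```

Replacing leaves the expected score at 0.99·100 + 0.01·0 = 99, which is the point of the analogy.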
It would help a lot if you just wrote the formulas you use for computing expected utility (or the probabilities you named) in symbols, as in P(Odd|"odd")=0.99,
P(Odd|"odd")*100+P(Even|"odd")*0 = 0.99*100+0.01*0 = 99.
Do you need more than that? I don't see how this could possibly help, but:
N(worlds) = 100
For each world:
  P(correct) = 0.99
  U_world(correct) = 1
  U_world(~correct) = 0
  P(Omega) = 0.01
  P(correct|Omega) = P(correct|~Omega) = 0.99
If choosing to replace:
  correct ∧ Omega ⇒ for all worlds: U_world(~correct) = 1
  ~correct ∧ Omega ⇒ for all worlds: U_world(correct) = 0
This is imprecise in that exactly one world ends up with Omega.
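One way to make this sketch concrete is a small Monte Carlo simulation (hypothetical Python, not from the thread), assuming exactly one of the 100 worlds gets Omega, as noted, and that replacing copies the Omega world's answer into every world:

```python
import random

def total_utility(replace: bool, rng: random.Random, n_worlds: int = 100) -> int:
    """One trial: each world's calculator is independently correct with
    probability 0.99; Omega visits exactly one world chosen at random.
    Replacing copies the Omega world's answer into every world."""
    correct = [rng.random() < 0.99 for _ in range(n_worlds)]
    omega_world = rng.randrange(n_worlds)
    if replace:
        return n_worlds if correct[omega_world] else 0
    return sum(correct)

rng = random.Random(0)
trials = 100_000
u_no = sum(total_utility(False, rng) for _ in range(trials)) / trials
u_yes = sum(total_utility(True, rng) for _ in range(trials)) / trials
print(u_no)   # ≈ 99
print(u_yes)  # ≈ 99
```

Under this model both averages come out near 99, matching the claim that replacing makes no difference in expectation.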
I give up, sorry. Read up on standard concepts/notation for expected utility/conditional probability maybe.
I don't think there is a standard notation for what I was trying to express (if there were, formalizing the simple equivalent game I gave should be trivial, so why didn't you do that?). If you are happy with just the end result, here is another attempt:
P(Odd|"odd") = P(Even|"even") = P("odd"|Odd) = P("even"|Even) = 0.99, P(Odd) = P(Even) = 0.5, P("odd" ∩ Odd) = P("even" ∩ Even) = 0.495
U_not_replace = P("odd" ∩ Odd)*100 + P("even" ∩ Odd)*0 + P("even" ∩ Even)*100 + P("odd" ∩ Even)*0 = 0.495*100 + 0.005*0 + 0.495*100 + 0.005*0 = 99
U_replace = P("odd"|Odd)*(P("odd" ∩ Odd)*100 + P("even" ∩ Odd)*100) + P("even"|Odd)*(P("odd" ∩ Odd)*0 + P("even" ∩ Odd)*0) + P("even"|Even)*(P("even" ∩ Even)*100 + P("odd" ∩ Even)*100) + P("odd"|Even)*(P("even" ∩ Even)*0 + P("odd" ∩ Even)*0) = 0.99*(0.495*100 + 0.005*100) + 0.01*(0.495*0 + 0.005*0) + 0.99*(0.495*100 + 0.005*100) + 0.01*(0.495*0 + 0.005*0) = 99
Probabilities correct; U_not_replace correct; U_replace I don't see what's going on with (what's the first conceptual step that generates that formula?). The correct U_replace is just this:
U_replace_updateless = P("odd" ∩ Odd)*0 + P("even" ∩ Odd)*0 + P("even" ∩ Even)*100 + P("odd" ∩ Even)*100 = 0.495*0 + 0.005*0 + 0.495*100 + 0.005*100 = 50
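The two expected utilities can be checked numerically. This is a hypothetical Python sketch (the dictionary layout and the scoring conditions are my rendering of the formulas, not code from the thread):

```python
# Joint probabilities P(display ∩ truth) used in the thread, e.g.
# P("odd" ∩ Odd) = 0.99 * 0.5 = 0.495.
p = {
    ("odd", "Odd"): 0.495,
    ("even", "Odd"): 0.005,
    ("even", "Even"): 0.495,
    ("odd", "Even"): 0.005,
}

# Not replacing: utility 100 exactly when a world's display matches the truth.
u_not_replace = sum(prob * (100 if display.capitalize() == truth else 0)
                    for (display, truth), prob in p.items())

# Updateless replace, per the formula above: both Odd-branch terms
# contribute 0 and both Even-branch terms contribute 100,
# regardless of the display.
u_replace_updateless = sum(prob * (100 if truth == "Even" else 0)
                           for (display, truth), prob in p.items())

print(u_not_replace)         # ≈ 99
print(u_replace_updateless)  # ≈ 50
```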