We are in the world where the calculator displays even, and we are 99% sure it is the world where the calculator has not made an error. This is Even World, Right Calculator. Counterfactual worlds:
All Omega told us was that in the counterfactual world we are deciding for, the calculator shows Odd. We can therefore eliminate Odd World, Wrong Calculator. Answering the question is, in essence, deciding which world we think we're looking at.
So, in the counterfactual world, we're either looking at Even World, Wrong Calculator or Odd World, Right Calculator. We have an equal prior for the world being Odd or Even - or, we think the number of Odd Worlds is equal to the number of Even Worlds. We know the ratio of Wrong Calculator worlds to Right Calculator worlds (1:99). This is, therefore, 99% evidence for Odd World. The correct decision for the counterfactual you in that world is to decide Odd World. The correct decision for you?
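A quick sketch of that arithmetic, under this comment's assumptions (equal numbers of Odd and Even Worlds, and a 1:99 ratio of Wrong to Right Calculator worlds):

```python
# Posterior on Q's parity in a world whose calculator displays "odd",
# assuming a 50/50 prior on parity and a 1% calculator error rate.
prior_odd = 0.5
p_odd_display_given_odd = 0.99   # Right Calculator
p_odd_display_given_even = 0.01  # Wrong Calculator

posterior_odd = (prior_odd * p_odd_display_given_odd) / (
    prior_odd * p_odd_display_given_odd
    + (1 - prior_odd) * p_odd_display_given_even
)
print(posterior_odd)  # 0.99
```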
Ignoring Bostrom's book on how to deal with observer selection effects (did Omega go looking for a Wrong Calculator wo...
Suppose you believe that 2+2=4, with the caveat that you are aware that there is some negligible but non-zero probability that The Dark Lords of the Matrix have tricked you into believing that.
Omega appears and tells you that in an alternate reality, you believe that 2+2=3 with the same amount of credence, and asks whether this changes your own amount of credence that 2+2=4.
The answer is the same. You ask Omega what rules he's playing by.
If he says "I'm visiting you in every reality. In each reality, I'm selecting a counterfactual where your answe...
In what way, if any, is this problem importantly different from the following "less mathy" problem?
...You have a sealed box containing a loose coin. You shake the box and then set it on the table. There is no a priori reason for you to think that the coin is more or less likely to have landed heads than tails. You then take a test, which includes the question: "Did the coin land heads?" Fortunately, you have a scanning device, which you can point at the box and which will tell you whether the coin landed heads or tails. Unfortunatel...
I suspect that the question sounds confusing because it conflates different counterfactual worlds. Where exactly does the world presented to you by Omega diverge from the actual world, at what point does the intervention take place? If Omega only changes the calculator display, you should say "even". If it fixes an error in the calculator's inner workings, you should say "odd".
What does it even mean to write an answer on a counterfactual test sheet?
Is it correct to interpret this as "if-counterfactual the calculator had shown odd, Omega would have shown up and (somehow knowing what choice you would have made in the "even" world) altered the test answer as you specify"?
Viewing this problem from before you use the calculator, your distribution is P(even) = P(odd) = 0.5. There are various rules Omega could be playing by:
Why does observational knowledge work in your own possible worlds, but not in counterfactuals?
It does not work in this counterfactual. Omega could have specified the counterfactual such that the observational knowledge in the counterfactual was as usable as that in the 'real' world. (Most obviously by flat out saying it is so.)
The reason we cannot use the knowledge from this particular counterfactual is that we have no knowledge about how the counterfactual was selected. The 99% figure (as far as we know) is not at all relevant to how likely it is that ...
This seems easy. Q is most likely even, so in the counterfactual the calculator is most likely in error, and we prefer Omega to write "even". What am I missing?
Consider the following thought experiment
You have a bag with a red and a blue ball in it. You pull a ball from the bag, but don't look at it. What is the probability that it is blue?
Now imagine a counterfactual world. In this other world you drew the red ball from the bag. Now imagine a hippo eating an octopus. What is the probability that you drew the blue ball?
"Why does observational knowledge work in your own possible worlds, but not in counterfactuals?" is the key question here. Perhaps it's easier to parse like this: "Why isn'...
The thing is, the other world was chosen specifically BECAUSE it had the opposite answer, not randomly like the world you're in.
This is the intuition I find helpful: Your decision only matters when the calculator shows odd. There is a 99% chance your decision matters if it's odd and a 1% chance your decision matters if it's not odd. Therefore the situation where your decision matters is strong evidence that it's odd.
In this scenario, we are the counterfactual. The calculator really showed up odd, not even.
Once your calculator returns the result "even", you assign 99% probability to the condition "Q is even". Changing that opinion would require strong bayesian evidence. In this case, we're considering hypothetical bayesian evidence provided by Omega. Based on our prior probabilities, we would say that if Omega randomly chose an Everett branch (I'm going with the quantum calculator, just because it makes vocabulary a bit easier), 99% of the time Omega would choose another Everett branch in which the calculator also read "even". Ho...
Here's a possible argument.
Assume what you do in the counterfactual is equivalent to what you do IRL, with even/odd swapped. Then TDT says that choosing in the counterfactual ALSO chooses for you in the real world. So you should choose odd there so that you can choose even in the real world and get it right.
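A minimal sketch of that argument, under the (hypothetical) assumption that your counterfactual answer and your real answer come from one policy applied with even/odd swapped:

```python
# Evaluate each linked policy by the probability that the real-world sheet
# (where your calculator reads "even") ends up correct.
p_display_correct = 0.99

policies = {
    # writes "even" IRL, "odd" in the counterfactual
    "write what the display shows": p_display_correct,
    # writes "odd" IRL, "even" in the counterfactual
    "write the opposite of the display": 1 - p_display_correct,
}
for name, p_real_sheet_correct in policies.items():
    print(f"{name}: real-world sheet correct with p = {p_real_sheet_correct}")
```

Under that linkage, asking Omega to write "odd" in the counterfactual is just the first policy, which gets the real-world answer right 99% of the time.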
I wonder if the question is sufficiently specified. Naïvely, I would say that Omega will write down "even" with p=0.99, simply because Omega appearing and telling me "consider the counterfactual" is not useful evidence for anything. P(Omega appears|Q even) and P(Omega appears|Q odd) are hard to specify, but I don't see reason to assume that the first probability is greater than the second one, or vice versa.
Of course, the above holds under the assumption that all counterfactual worlds have the same value of Q. I am also not sure how to interpret ...
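A minimal sketch of the update described in this comment, under its assumption that P(Omega appears|Q even) = P(Omega appears|Q odd):

```python
# Start from the 99% posterior after seeing "even", then update on Omega's
# appearance with equal likelihoods under both hypotheses.
p_even = 0.99
p_omega_given_even = 0.3   # any value works, as long as the two are equal
p_omega_given_odd = 0.3

p_even_after = (p_even * p_omega_given_even) / (
    p_even * p_omega_given_even + (1 - p_even) * p_omega_given_odd
)
print(p_even_after)  # ~0.99: a likelihood ratio of 1 changes nothing
```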
My understanding is that the question is about how to do counterfactual math. There is no essential distinction between the two types (observational vs. logical) of knowledge; they are "limiting cases" of each other (you always only observe your mental reasoning, or calculator outputs, or publications on one end; Laplace's demon on the other end).
ETA: my thinking did a U-turn from setting the calculator value without severing the Q->calculator correlation (i.e. treating calculator as an observed variable with a fictional observation), to set...
Consider the counterfactual where the calculator displayed "odd" instead of "even", after you've just typed in the formula Q.
This consists of just reapplying the algorithm or re-reading the previous paragraph with "even" replaced with "odd", so the answer should be 99% odd.
This is based on my understanding of a counterfactual as considering what you would do in some hypothetical alternate branch 'what-if'.
I'm not sure what's supposed to be tricky about this. It's trading off a 99% chance of doing better in 1% of all worlds against a 1% chance of doing worse in 99% of all worlds (if I am in a world where the calculator malfunctioned). Being risk-averse, I prefer being wrong in some small fraction of the worlds to an equally small chance of being wrong in all of them, so I'd want Omega to write "odd" (or, even better, leave it up to the counterfactual me, which should have the same effect but feels better).
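A rough sketch of that tradeoff, under the assumptions that Q's parity is shared across worlds, the calculator errs independently in 1% of them, and the score is the fraction of worlds whose "odd"-display sheet ends up wrong:

```python
# Expected fraction of all worlds that get a wrong answer on the
# "display reads odd" sheet, under each instruction you could give Omega.
p_q_even = 0.99                   # your credence after seeing "even"
frac_odd_display_if_even = 0.01   # only malfunction worlds show "odd"
frac_odd_display_if_odd = 0.99    # almost all worlds show "odd"

# Tell Omega to write "even": wrong exactly when Q is odd.
exp_wrong_even = (1 - p_q_even) * frac_odd_display_if_odd
# Tell Omega to write "odd": wrong exactly when Q is even.
exp_wrong_odd = p_q_even * frac_odd_display_if_even

print(exp_wrong_even, exp_wrong_odd)  # 0.0099 vs 0.0099: equal in expectation
```

The expectations match, so the choice comes down to the spread: "odd" is wrong in a small fraction of worlds with high probability, while "even" risks a small probability of being wrong in nearly all of them, which is where the risk aversion comes in.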
Consider the following thought experiment ("Counterfactual Calculation"):
Should you write "even" on the counterfactual test sheet, given that you're 99% sure that the answer is "even"?
This thought experiment contrasts "logical knowledge" (the usual kind) and "observational knowledge" (what you get when you look at a calculator display). The kind of knowledge you obtain by observing things is not like the kind of knowledge you obtain by thinking yourself. What is the difference (if there actually is a difference)? Why does observational knowledge work in your own possible worlds, but not in counterfactuals? How much of logical knowledge is like observational knowledge, and what are the conditions of its applicability? Can things that we consider "logical knowledge" fail to apply to some counterfactuals?
(Updateless analysis would say "observational knowledge is not knowledge" or that it's knowledge only in the sense that you should bet a certain way. This doesn't analyze the intuition of knowing the result after looking at a calculator display. There is a very salient sense in which the result becomes known, and the purpose of this thought experiment is to explore some of the counterintuitive properties of such knowledge.)