Consider the following thought experiment ("Counterfactual Calculation"):
You are taking a test, which includes a question: "Is Q an even number?", where Q is a complicated formula that resolves to some natural number. There is no a priori reason for you to expect that Q is more likely to be even than odd, and the formula is too complicated to compute the number (or its parity) on your own. Fortunately, you have an old calculator, which you can use to type in the formula and observe the parity of the result on its display. This calculator is not very reliable, and is only correct 99% of the time; furthermore, its errors are stochastic (or even involve quantum randomness), so for any given problem statement it's probably correct but has a chance of making an error. You type in the formula and observe the result (it's "even"). You're now 99% sure that the answer is "even", so naturally you write that down on the test sheet.
Then, unsurprisingly, Omega (a trustworthy all-powerful device) appears and presents you with the following decision. Consider the counterfactual where the calculator displayed "odd" instead of "even", after you've just typed in the (same) formula Q, on the same occasion (i.e. all possible worlds that fit this description). The counterfactual diverges only in the calculator showing a different result (and what follows). You are to determine what is to be written (by Omega, at your command) as the final answer to the same question on the test sheet in that counterfactual (the actions of your counterfactual self who takes the test in the counterfactual are ignored).
Should you write "even" on the counterfactual test sheet, given that you're 99% sure that the answer is "even"?
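(For concreteness, here is a minimal sketch of the update behind the "99% sure" figure, assuming a 50/50 prior over Q's parity and treating the display as an independent 99%-reliable signal of the true parity.)

```python
# Sketch, under the stated assumptions: 50/50 prior on Q's parity,
# calculator correct 99% of the time, errors independent of Q.
prior_even = 0.5
reliability = 0.99

# P(display "even") = P(Even) * P(correct) + P(Odd) * P(error)
p_display_even = prior_even * reliability + (1 - prior_even) * (1 - reliability)

# Posterior that Q is even after seeing "even" on the display:
posterior_even = prior_even * reliability / p_display_even
print(posterior_even)  # 0.99
```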
This thought experiment contrasts "logical knowledge" (the usual kind) and "observational knowledge" (what you get when you look at a calculator display). The kind of knowledge you obtain by observing things is not like the kind of knowledge you obtain by reasoning on your own. What is the difference (if there actually is a difference)? Why does observational knowledge work in your own possible worlds, but not in counterfactuals? How much of logical knowledge is like observational knowledge, and what are the conditions of its applicability? Can things that we consider "logical knowledge" fail to apply to some counterfactuals?
(Updateless analysis would say "observational knowledge is not knowledge", or that it's knowledge only in the sense that you should bet a certain way. This doesn't analyze the intuition of knowing the result after looking at a calculator display. There is a very salient sense in which the result becomes known, and the purpose of this thought experiment is to explore some of the counterintuitive properties of such knowledge.)
This would be correct if Q could be different, but Q is the same in both the counterfactual and the actual world. There is no possibility of the actual world being an Even World and the counterfactual an Odd World.
The possibilities are:
Actual: Even World, Right Calculator (99% of Even Worlds); Counterfactual: Even World, Wrong Calculator (1% of Even Worlds).
Actual: Odd World, Wrong Calculator (1% of Odd Worlds); Counterfactual: Odd World, Right Calculator (99% of Odd Worlds).
The prior probability of either is 50%. If we assume that Omega randomly picks one of you out of 100% of the possible worlds (either 100% of all Even Worlds or 100% of all Odd Worlds) to decide for all possible worlds where the calculator result is different (but the correct answer is the same), then there is a 99% chance that all worlds are Even and your choice affects 1% of all worlds, and a 1% chance that all worlds are Odd and your choice affects 99% of all worlds. The result of the calculator in the counterfactual world doesn't provide any evidence on whether all worlds are Even or all worlds are Odd, since in either case there would be such a world to talk about.
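A small sketch of that accounting (the 50/50 prior, the 99% reliability, and the assumption that Omega's ruling covers every world with the flipped display are the premises being used here):

```python
# Sketch of the first assumption: Omega's ruling applies to every possible
# world where the same formula Q was entered but the display read "odd".
reliability = 0.99

# From your own "even" reading (50/50 prior, 99%-reliable calculator):
posterior_even = 0.99

# Fraction of worlds your counterfactual ruling covers under each hypothesis:
covered_if_even = 1 - reliability  # 1% of Even Worlds (the wrong-calculator ones)
covered_if_odd = reliability       # 99% of Odd Worlds (the right-calculator ones)

# Worlds whose display reads "odd" exist with certainty under either
# hypothesis, so their mere existence is no evidence about Q's parity:
likelihood_ratio = 1.0 / 1.0  # P(such worlds exist | Even) / P(such worlds exist | Odd)
```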
If we assume that Omega randomly visits one world and randomly mentions the calculator result of one other possible world, and it just happened to be the case that in that other world the result was different; or if Omega randomly picks a world, then randomly picks a world with the opposite calculator result, and tosses a coin as to which world to visit and which to mention; then the calculator result in the counterfactual world is equally relevant, and hearing Omega talk about it is just as good as running the calculator twice. In this case you are equally likely to be in an Odd World, and might just as well toss a coin as to which result you fill in yourself.
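Under that reading, Omega's report works like a second, independent run of the same 99%-reliable calculator that happens to disagree with the first; a short sketch of the resulting update (same 50/50 prior and 99% reliability assumed):

```python
# Sketch of the second assumption: your reading ("even") and the
# counterfactual reading ("odd") are two independent 99%-reliable samples.
prior_even = 0.5
r = 0.99  # calculator reliability

# Likelihood of seeing one "even" and one "odd" reading under each parity:
like_if_even = r * (1 - r)  # correct reading, then erroneous reading
like_if_odd = (1 - r) * r   # erroneous reading, then correct reading

posterior_even = prior_even * like_if_even / (
    prior_even * like_if_even + (1 - prior_even) * like_if_odd
)
print(posterior_even)  # 0.5 -- the two readings cancel, back to a coin toss
```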
This doesn't square with my interpretation of the premises of the question. We are unsure of Q's parity; our prior is 50:50 odd, even. We are also unsure of the calculator's trustworthiness; our prior is 99:1 right, wrong. Therefore, on my understanding of counterfactuality, both options for both uncertainties need to be on the table.
I am unconvinced that you can ignore your uncertainty about Q's parity by arguing that it will come out only one way regardless of your uncertainty. The same is true for coin flips in deterministic physics, but that doesn't mean we can't consider the counterfactual where the coin comes up tails.