Consider the following thought experiment ("Counterfactual Calculation"):
You are taking a test, which includes the question: "Is Q an even number?", where Q is a complicated formula that resolves to some natural number. There is no a priori reason for you to expect Q to be more likely even than odd, and the formula is too complicated to compute the number (or its parity) on your own. Fortunately, you have an old calculator, which you can use to type in the formula and observe the parity of the result on its display. This calculator is not very reliable: it is only correct 99% of the time. Furthermore, its errors are stochastic (or even involve quantum randomness), so for any given problem statement it is probably correct, but it has some chance of making an error. You type in the formula and observe the result (it's "even"). You're now 99% sure that the answer is "even", so naturally you write that down on the test sheet.
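The 99% figure is just the usual Bayesian update, given the stated 1/2 prior on the parity and a symmetric 1% error rate:

$$
P(\text{even} \mid \text{display shows ``even''})
= \frac{0.99 \cdot 0.5}{0.99 \cdot 0.5 + 0.01 \cdot 0.5}
= 0.99.
$$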
Then, unsurprisingly, Omega (a trustworthy all-powerful device) appears and presents you with the following decision. Consider the counterfactual where the calculator displayed "odd" instead of "even", after you've just typed in the (same) formula Q, on the same occasion (i.e. all possible worlds that fit this description). The counterfactual diverges only in the calculator showing a different result (and what follows). You are to determine what is to be written (by Omega, at your command) as the final answer to the same question on the test sheet in that counterfactual (the actions of your counterfactual self who takes the test in the counterfactual are ignored).
Should you write "even" on the counterfactual test sheet, given that you're 99% sure that the answer is "even"?
This thought experiment contrasts "logical knowledge" (the usual kind) and "observational knowledge" (what you get when you look at a calculator display). The kind of knowledge you obtain by observing things is not like the kind of knowledge you obtain by thinking for yourself. What is the difference (if there actually is a difference)? Why does observational knowledge work in your own possible worlds, but not in counterfactuals? How much of logical knowledge is like observational knowledge, and what are the conditions of its applicability? Can things that we consider "logical knowledge" fail to apply to some counterfactuals?
(Updateless analysis would say "observational knowledge is not knowledge", or that it's knowledge only in the sense that you should bet a certain way. This doesn't analyze the intuition of knowing the result after looking at a calculator display. There is a very salient sense in which the result becomes known, and the purpose of this thought experiment is to explore some of the counterintuitive properties of such knowledge.)
When you speak of "worlds" here, do you mean the "world-programs" in the UDT1.1 formalism? If that is what you mean, then one of us is confused about how UDT1.1 formalizes probabilities. I'm not sure how to resolve this except to repeat my request that you give your own formalization of your problem in UDT1.1.
For my part, I am going to say some stuff on which I think that we agree. But, at some point, I will slide into saying stuff on which we disagree. Where is the point at which you start to disagree with the following?
(I follow the notation in my write-up of UDT1.1 (pdf).)
UDT1.1 formalizes two different kinds of probability in two very different ways:
One kind of probability is applied to predicates of world-programs, especially predicates that might be satisfied by some of the world-programs while not being satisfied by the others. The probability (in the present sense) of such a predicate R is formalized as the measure of the set of world-programs satisfying R. (In particular, R is supposed to be a predicate such that whether a world-program satisfies R does not depend on the agent's decisions.)
The other kind of probability comes from the probability M(f, E) that the agent's mathematical intuition M assigns to the proposition that the sequence E of execution histories would occur if the agent were to implement input-output map f. This gives us probability measures P_f over sequences of execution histories: Given a predicate T of execution-history sequences, P_f(T) is the sum of the values M(f, E) as E ranges over the execution-history sequences satisfying predicate T.
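In symbols, writing μ for the measure over world-programs (the symbol is mine, chosen for this comment rather than taken from the write-up), the two definitions above are:

$$
\Pr(R) = \mu(\{\, P \,:\, R(P) \,\}),
\qquad
P_f(T) = \sum_{E \,:\, T(E)} M(f, E).
$$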
I took the calculator's 99% correctness rate to be a probability of the first kind. There is a correct calculator in 99% of the world-programs (the "correct-calculator worlds") and an incorrect calculator in the remaining 1%.*
However, I took the probability of 1/2 that Q is even to be a probability of the second kind. It's not as though Q is even in some of the execution histories, while that same Q is odd in some others. Either Q is even in all of the execution histories, or Q is odd in all of the execution histories.** But the agent's mathematical intuition has no idea which is the case, so the induced probability distributions give P_f(even) = 1/2 (for all f), where even is the predicate such that, for all execution-history sequences E, even(E) holds if and only if Q is even.
Likewise, I was referring to the second kind of probability when I wrote that, "according to the agent's mathematical intuition, Omega is just as likely to offer the decision in a correct-calculator world as in the incorrect-calculator world". The truth or falsity of "Omega offers the decision in a correct-calculator world" is a property of an entire execution-history sequence. This proposition is either true with respect to all the execution histories in the sequence, or false with respect to all of them.
The upshot is that, when you write "99% of 'even' worlds are the ones where calculator is correct, while you clearly assign 50% as probability of your event", you are talking about two very different kinds of probabilities.
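To make the contrast concrete, here is a minimal sketch in Python of my own toy model (the names, the 99%/1% split over two world-programs, and the one-flag stand-in for execution-history sequences are all simplifications for this comment, not the formalism from the write-up):

```python
# Toy sketch of the two kinds of probability discussed above.

# First kind: a measure over world-programs. Whether a world-program contains
# a correct calculator does not depend on the agent's decisions.
WORLD_PROGRAM_MEASURE = {
    "correct-calculator world": 0.99,
    "incorrect-calculator world": 0.01,
}

def prob_world(predicate):
    """Measure of the set of world-programs satisfying `predicate`."""
    return sum(w for name, w in WORLD_PROGRAM_MEASURE.items() if predicate(name))

# Second kind: the mathematical intuition M(f, E) over whole execution-history
# sequences E. Here a sequence is summarized by one flag: whether Q is even.
# Crucially, the flag is the same across every world-program within E.
SEQUENCES = ({"Q_is_even": True}, {"Q_is_even": False})

def M(f, E):
    # The agent cannot compute the parity of Q, so its intuition splits its
    # weight 50/50, for every input-output map f.
    return 0.5

def P(f, predicate):
    """P_f(T): sum of M(f, E) over execution-history sequences E satisfying T."""
    return sum(M(f, E) for E in SEQUENCES if predicate(E))

print(prob_world(lambda name: name.startswith("correct")))  # 0.99 (first kind)
print(P(f=None, predicate=lambda E: E["Q_is_even"]))        # 0.5  (second kind)
```

The first number is a measure over world-programs (which world you find yourself in); the second comes entirely from the agent's mathematical intuition about a fixed mathematical fact.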
* Alternatively, this weighting can be incorporated into how the utility function over execution-history sequences responds to an event occurring in one world-program vs. another. If I had used this approach in my UDT1.1 formalization of your problem, I would have had just two world-programs: a correct-calculator world and an incorrect-calculator world. Then, having the correct parity on the answer sheet in the correct-calculator world would have been worth 99 times as much as having the correct parity in the incorrect-calculator world. But this would not have changed my computations. I don't think that this issue is the locus of our present disagreement.
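To spell out why the computations would not change (using c and i as stand-in payoffs for getting the parity right in the correct- and incorrect-calculator world; these symbols are mine, not notation from the write-up):

$$
\tfrac{1}{2}\,(99\,c) + \tfrac{1}{2}\,(1 \cdot i) \;=\; 50\,\bigl(0.99\,c + 0.01\,i\bigr),
$$

so the utility-weighted formulation assigns to every input-output map exactly 50 times the value that the measure-weighted formulation assigns, and the two rank maps identically.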
** You must be disagreeing with me by this point, because I have contradicted your claim that "Omega offers the decision in 'even' worlds, in some of which 'even' is correct, and *in some of which it's not*". (Emphasis added.)
World-programs are a bad model for possible worlds. For all you know, there could be just one world-program (indeed, you can consider an equivalent variant of the theory where this is so: just have that single world-program enumerate all outputs of all possible programs). The element of UDT analogous to possible worlds is execution histories. […]