Consider the following thought experiment ("Counterfactual Calculation"):

You are taking a test, which includes a question: "Is Q an even number?", where Q is a complicated formula that resolves to some natural number. There is no a priori reason for you to expect that Q is more likely to be even than odd, and the formula is too complicated to compute the number (or its parity) on your own. Fortunately, you have an old calculator, which you can use to type in the formula and observe the parity of the result on its display. This calculator is not very reliable: it is correct only 99% of the time, and its errors are stochastic (or even involve quantum randomness), so for any given problem statement it's probably correct, but has a chance of making an error. You type in the formula and observe the result (it's "even"). You're now 99% sure that the answer is "even", so naturally you write that down on the test sheet.

Then, unsurprisingly, Omega (a trustworthy all-powerful device) appears and presents you with the following decision. Consider the counterfactual where the calculator displayed "odd" instead of "even", after you've just typed in the (same) formula Q, on the same occasion (i.e. all possible worlds that fit this description). The counterfactual diverges only in the calculator showing a different result (and what follows). You are to determine what is to be written (by Omega, at your command) as the final answer to the same question on the test sheet in that counterfactual (the actions of your counterfactual self who takes the test in the counterfactual are ignored).

Should you write "even" on the counterfactual test sheet, given that you're 99% sure that the answer is "even"?

This thought experiment contrasts "logical knowledge" (the usual kind) and "observational knowledge" (what you get when you look at a calculator display). The kind of knowledge you obtain by observing things is not like the kind of knowledge you obtain by thinking for yourself. What is the difference (if there actually is a difference)? Why does observational knowledge work in your own possible worlds, but not in counterfactuals? How much of logical knowledge is like observational knowledge, and what are the conditions of its applicability? Can things that we consider "logical knowledge" fail to apply to some counterfactuals?

(Updateless analysis would say "observational knowledge is not knowledge" or that it's knowledge only in the sense that you should bet a certain way. This doesn't analyze the intuition of knowing the result after looking at a calculator display. There is a very salient sense in which the result becomes known, and the purpose of this thought experiment is to explore some of the counterintuitive properties of such knowledge.)

Counterfactual Calculation and Observational Knowledge

We are in the world where the calculator displays even, and we are 99% sure it is the world where the calculator has not made an error. This is Even World, Right Calculator. Counterfactual worlds:

  • Even World, Wrong Calculator (1% of Even Worlds)
  • Odd World, Right Calculator (99% of Odd Worlds)
  • Odd World, Wrong Calculator (1% of Odd Worlds)

All Omega told us was that the counterfactual world we are deciding for, the calculator shows Odd. We can therefore eliminate Odd World, Wrong Calculator. Answering the question is, in essence, deciding which world we think we're looking at.

So, in the counterfactual world, we're either looking at Even World, Wrong Calculator or Odd World, Right Calculator. We have an equal prior for the world being Odd or Even - or, we think the number of Odd Worlds is equal to the number of Even Worlds. We know the ratio of Wrong Calculator worlds to Right Calculator worlds (1:99). This is, therefore, 99% evidence for Odd World. The correct decision for the counterfactual you in that world is to decide Odd World. The correct decision for you?
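
A minimal numeric sketch of the reasoning above (my own illustration, using only the 50/50 prior and the 1:99 wrong:right ratio):

```python
# Relative prior weights of the counterfactual candidates listed above
# (50/50 prior over Even/Odd, calculator wrong 1% of the time).
even_wrong = 0.5 * 0.01   # Even World, Wrong Calculator
odd_right  = 0.5 * 0.99   # Odd World, Right Calculator
odd_wrong  = 0.5 * 0.01   # Odd World, Wrong Calculator (it shows "even", so eliminated)

# Condition on the counterfactual calculator showing "odd":
p_odd_world = odd_right / (even_wrong + odd_right)
print(p_odd_world)  # 0.99, the "99% evidence for Odd World"
```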

Ignoring Bostrom's book on how to deal with observer selection effects (did Omega go looking for a Wrong Calculator wo... (read more)

1Vladimir_Nesov
I believe the above is correct updateless analysis of the thought experiment. (Which is a natural step to take in considering it, but not the point of the post, see its last paragraph.)
0shokwave
Exactly. The correct decision for factual you may be different to the correct decision for counterfactual you.
0lukstafi
Vladimir says that "Omega doesn't touch any calculator". If the counterfactual is entered at the point where the computation starts and Omega tells you that it results in Odd (ETA2: rereading Vladimir's comment, this is not the case), then it is a second observation contributed by Omega running the calculator and should affect both worlds. If on the other hand the counterfactual is just about the display, then the counterfactual Omega will likely write down Odd (ETA3: not my current answer). So I agree with your analysis. I see it this way: real Omegas cannot write on counterfactual paper. ETA: -- the "counterfactual" built as "being in another quantum branch of exactly the same universe" strikes me as being of the sort where Omega does run the calculator again, so it should affect both worlds as another observation. ETA2: I've changed my mind about there being an independent observation.
0MC_Escherichia
Actually, isn't this the very heart of the matter? In my other comment here I assumed Omega would always ask what the correct answer is if the calculator shows The Other Result; if that's not the case everything changes.
0Vladimir_Nesov
The answer does depend on this fact, but since this fact wasn't specified, assume uncertainty (say, Omega always appears when you observe "even" and had pasta for breakfast).
0lukstafi
Not by my understanding (but I decided to address it in a top-level comment). ETA: yes, in my updated understanding.
-2FAWS
This would be correct if Q could be different, but Q is the same both in the counterfactual and the actual world. There is no possibility for the actual world being Even World and the counterfactual Odd World. The possibilities are:

1. Actual: Even World, Right Calculator (99% of Even Worlds); Counterfactual: Even World, Wrong Calculator (1% of Even Worlds).
2. Actual: Odd World, Wrong Calculator (1% of Odd Worlds); Counterfactual: Odd World, Right Calculator (99% of Odd Worlds).

The prior probability of either is 50%. If we assume that Omega randomly picks one you out of 100% of possible worlds (either 100% of all Even Worlds or 100% of all Odd Worlds) to decide for all possible worlds where the calculator result is different (but the correct answer is the same), then there is a 99% chance all worlds are Even and your choice affects 1% of all worlds, and a 1% chance all worlds are Odd and your choice affects 99% of all worlds. The result of the calculator in the counterfactual world doesn't provide any evidence on whether all worlds are Even or all worlds are Odd, since in either case there would be such a world to talk about.

If we assume that Omega randomly visits one world and randomly mentions the calculator result of one other possible world, and it just happened to be the case that in that other world the result was different; or if Omega randomly picks a world, then randomly picks a world with the opposite calculator result and tosses a coin as to which world to visit and which to mention, then the calculator result in the counterfactual world is equally relevant and hearing Omega talk about it is just as good as running the calculator twice. In this case you are equally likely to be in an Odd world and might just as well toss a coin as to which result you fill in yourself.
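
A small sketch of the bookkeeping under the first assumption above (my own illustration; it only reproduces the numbers claimed in this comment and takes no position on whether this is the right model):

```python
# Omega shows up in some world and lets you decide for all worlds whose
# calculator result is different (Q's parity is the same everywhere).
p_correct = 0.99

# Case 1: the calculator in Omega's world is right.
p_case1 = p_correct          # 99% chance
affected1 = 1 - p_correct    # the decision affects the 1% of worlds that err

# Case 2: the calculator in Omega's world is wrong.
p_case2 = 1 - p_correct      # 1% chance
affected2 = p_correct        # the decision affects the 99% of worlds that are right

print(p_case1, affected1, p_case2, affected2)  # approximately 0.99, 0.01, 0.01, 0.99
```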
0shokwave
This doesn't square with my interpretation of the premises of the question. We are unsure of Q's parity. Our prior is 50:50 odd, even. We are also unsure of the calculator's trustworthiness. Our prior is 99:1 right, wrong. Therefore - on my understanding of counterfactuality - both options for both uncertainties need to be on the table. I am unconvinced you can ignore your uncertainty on Q's parity by arguing that it will come out only one way regardless of your uncertainty - this is true for coinflips in deterministic physics, but that doesn't mean we can't consider the counterfactual where the coin comes up tails.
0FAWS
From the original post: Clarified here and here.
0shokwave
We cannot determine Q's parity, except by fallible calculator. When you say Q is the same, you seem to be including "Q's parity is the same". Hmm. Maybe this will help? But consider this situation: These situations are clearly the counterfactuals of each other - that is, when scenario 1 says "the counterfactual world" it is saying "scenario 2", and vice versa. The interpretations given in the second half of each contradict each other - the first scenario attempts to decide for the second scenario and gets it wrong; the second scenario attempts to decide for the first and gets it wrong. Whence this contradiction?
0FAWS
Yes, that would be a counterfactual. But NOT the counterfactual under consideration. The counterfactual under consideration was the calculator result being different but Q (both the number and the formula, and thus their parity) being the same. Unless Nesov was either deliberately misleading or completely failed his intention to clarify anything the comments linked to. If Q is the same formula is supposed to be clear in any way then everything about Q has to be the same. If the representation of Q in the formula was supposed be the same, but the actual value possibly counterfactually different then only answering that the formula is the same is obscuration, not clarification.
0shokwave
I disagree. Recall that I specified this in each case: Q (both the number and the formula, and thus the parity) is the same in both scenarios. The actual value is not counterfactually different - it's the same value in the safe, both times.
0FAWS
If you agree that Q's parity is the same, I'm not sure what you are disagreeing with. It's not possible for Q to be odd in the counterfactual and even in actuality, so if Q is odd in the counterfactual that implies it is also odd in actuality, and vice versa. Thus it's not possible for the calculator to be right in both counterfactual and reality simultaneously, and assuming it to be right in the counterfactual implies that it's wrong in actuality. Therefore you can reduce everything to the two cases I used: Q even/actual calculator right/counterfactual calculator wrong, or Q odd/actual calculator wrong/counterfactual calculator right.
0Vladimir_Nesov
Maybe this could be more enlightening. When you control things, one of the necessary requirements is that you have logical uncertainty about some property of the thing you control. You start with having a definition of the control target, but not knowing some of its properties. And then you might be able to infer a dependence of one of its properties on your action. This allows you to personally determine what is that property of a structure whose definition you already know. See my posts on ADT for more detail.
0shokwave
I have been positing that these two cases are counterfactuals of each other. Before one of these two cases occurs, we don't know which one will occur. It is possible to consider being in the other case.
0FAWS
The problem is symmetrical. You can just copy everything, replace odd with even and vice versa and multiply everything by 0.5; then you also have the worlds where you see odd and Omega offers you to replace the result in counterfactuals where it came up even and where Q has the same parity. That doesn't change that Q is the same in the world that decides and in the counterfactuals that are affected. Omega also transposing your choice to impossible worlds (or predicting what would happen in impossible worlds and imposing that on what happens in real worlds) would be a different problem (one that violates the condition that Q be the same in the counterfactual, but it seems to be the problem you solved).
0FAWS
If someone is sure enough that I'm wrong to downvote all my posts on this, they should be able to tell me where I'm wrong. I would be extremely interested in finding out.
0Perplexed
I don't know why you were downvoted. But I do notice that somewhere on this thread, the meaning of "Even World" has changed from what it was when Shokwave introduced the term. Originally it meant a world whose calculator showed 'Even'.
-1Vladimir_Nesov
You're reasoning about the counterfactual using observational knowledge, i.e. making exactly the error whose nature puzzles me and is the subject of the post. In safely correct (but unenlightening about this error) updateless analysis, on the other hand, you don't update on observations, so shouldn't say things like "there is a 99% chance all worlds are Even".
1FAWS
No. That's completely insubstantial. Replace "even" with "same parity" and "odd" with "different parity" in my argument and the outcome is the same. The decision can be safely made before making any observations at all. EDIT: And even in the formulation given I don't update on personally having seen the even outcome (which is irrelevant, there is no substantial difference between me and the mes at that point), but on Omega visiting me in a world where the calculator result came up even.
0Vladimir_Nesov
Please restate in more detail how you arrived at the following conclusion, and what made it so instead of the prior 50/50 for Even/Odd. It appears that it must be the observation of "even", otherwise what privileged Even over Odd?
1FAWS
See the edit. If Omega randomly visits a possible world I can say ahead of time that there is a 99% chance that in that particular world the calculator result is correct and the decision will affect 1% of all worlds and a 1% chance that the result is wrong and the decision affects 99% of all worlds.
0Vladimir_Nesov
So you know a priori that the answer is Even, without even looking at the calculator? That can't be right. (You're assuming that you know that Omega only arrives in "even" worlds, and updating on observing Omega, even before observing it. But in the same movement, you update on the calculator showing "even". Omega doesn't show up in the "odd" world, so you can't update on the fact that it shows up, other than by observing it, or alternatively observing "even" given the assumption of equivalence of these events.)
1FAWS
Of course not. No. I'm assuming that either even is correct in all worlds or odd is correct in all worlds (0.5 prior for either). If Omega randomly picks a world, the chance of the calculator being correct is independent of that and 99% everywhere, then there is a 99% chance of the calculator being correct in the particular world Omega arrives in. If odd is correct Omega is 99% likely to arrive in a world where the calculator says odd, and if the calculator says odd in the particular world Omega arrives in there is a 99% chance that's because odd is correct. EDIT: If I were the probability of even being correct would be 50% no matter what, and there would be a 50% chance each for affecting 99% of all worlds or 1% of all worlds.
0Vladimir_Nesov
I seem to agree with all of the above statements. The conditional probabilities are indeed this way. But it's incorrect to use these conditional probabilities (which is to say, probabilities of Odd/Even after updating on observing "even") to compute expected utility for the counterfactual. In a prior comment, you write: 99% is P(Even|Omega,"even"), that is to say, it's the probability of Even updated on the observations (events) that Omega appears and that the calculator shows "even".
2FAWS
No. There is no problem with using conditional probabilities if you use the correct conditional probabilities, that is the probabilities from wherever the decision happens, not from what you personally encounter. And I never claimed that any of the pieces you were quoting were part of an updateless analysis, just that it made no difference. I would try to write a Wei Dai style world program at this point, but I know no programming at all and am unsure how drawing at random is supposed to be represented. It would be the same as the program for this game, though: 1 black and 99 white balls in an urn. You prefer white balls. You may decide to draw a ball and change all balls of the other color to balls of the color drawn, and must decide before the draw is made. (or to make it slightly more complicated: Someone else secretly flips a coin whether you get points for black or white balls. You get 99 balls of the color you get points for and one ball of the other color).
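
A minimal sketch of the simple version of that game (my own illustration, taking the payoff to be the number of white balls at the end):

```python
# 1 black and 99 white balls; you prefer white; before the draw you may commit
# to recoloring all balls of the non-drawn color to the drawn color.
n_white, n_black = 99, 1
n = n_white + n_black

ev_pass = n_white  # do nothing: 99 white balls

p_draw_white = n_white / n
ev_recolor = p_draw_white * n + (1 - p_draw_white) * 0  # 0.99*100 + 0.01*0

print(ev_pass, ev_recolor)  # both come out to 99 (up to float rounding)
```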
1Vladimir_Nesov
It would help a lot if you just wrote the formulas you use for computing expected utility (or the probabilities you named) in symbols, as in P(Odd|"odd")=0.99, P(Odd|"odd")*100+P(Even|"odd")*0 = 0.99*100+0.01*0 = 99.
0FAWS
Do you need more than that? I don't see how this could possibly help, but:

N(worlds) = 100
For each world: P(correct) = 0.99, U_world(correct) = 1, U_world(~correct) = 0
P(Omega) = 0.01
P(correct|Omega) = P(correct|~Omega) = 0.99

If choosing to replace:
correct ∧ Omega ⇒ for all worlds: U_world(~correct) = 1
~correct ∧ Omega ⇒ for all worlds: U_world(correct) = 0

This is imprecise in that exactly one world ends up with Omega.
0Vladimir_Nesov
I give up, sorry. Read up on standard concepts/notation for expected utility/conditional probability maybe.
0FAWS
I don't think there is a standard notation for what I was trying to express (if there were, formalizing the simple equivalent game I gave should be trivial, so why didn't you do that?). If you are happy with just the end result, here is another attempt:

P(Odd|"odd") = P(Even|"even") = P("odd"|Odd) = P("even"|Even) = 0.99, P(Odd) = P(Even) = 0.5, P("odd" n Odd) = P("even" n Even) = 0.495

U_not_replace = P("odd" n Odd)*100 + P("even" n Odd)*0 + P("even" n Even)*100 + P("odd" n Even)*0 = 0.495*100 + 0.005*0 + 0.495*100 + 0.005*0 = 99

U_replace = P("odd"|Odd)*(P("odd" n Odd)*100 + P("even" n Odd)*100) + P("even"|Odd)*(P("odd" n Odd)*0 + P("even" n Odd)*0) + P("even"|Even)*(P("even" n Even)*100 + P("odd" n Even)*100) + P("odd"|Even)*(P("even" n Even)*0 + P("odd" n Even)*0) = 0.99*(0.495*100 + 0.005*100) + 0.01*(0.495*0 + 0.005*0) + 0.99*(0.495*100 + 0.005*100) + 0.01*(0.495*0 + 0.005*0) = 99
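
For what it's worth, a numeric re-evaluation of the two expressions above exactly as written (this checks only the arithmetic, not which expected-utility model is appropriate):

```python
# Joint and conditional probabilities as stated in the comment above.
P_odd_Odd = P_even_Even = 0.495   # P("odd" n Odd), P("even" n Even)
P_even_Odd = P_odd_Even = 0.005   # P("even" n Odd), P("odd" n Even)
P_odd_g_Odd = P_even_g_Even = 0.99
P_even_g_Odd = P_odd_g_Even = 0.01

U_not_replace = P_odd_Odd*100 + P_even_Odd*0 + P_even_Even*100 + P_odd_Even*0

U_replace = (P_odd_g_Odd*(P_odd_Odd*100 + P_even_Odd*100)
             + P_even_g_Odd*(P_odd_Odd*0 + P_even_Odd*0)
             + P_even_g_Even*(P_even_Even*100 + P_odd_Even*100)
             + P_odd_g_Even*(P_even_Even*0 + P_odd_Even*0))

print(U_not_replace, U_replace)  # both come out to 99 (up to float rounding)
```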
2Vladimir_Nesov
Probabilities correct, U_not_replace correct; for U_replace I don't see what's going on (what's the first conceptual step that generates that formula?). Correct U_replace is just this:

U_replace_updateless = P("odd" n Odd)*0 + P("even" n Odd)*0 + P("even" n Even)*100 + P("odd" n Even)*100 = 0.495*0 + 0.005*0 + 0.495*100 + 0.005*100 = 50
1FAWS
That seems obviously incorrect to me because as an updateless decision maker you don't know you are in the branch where you replace odds with evens. Your utility is half way between a correct updateless analysis and a correct analysis with updates. Or it is the correct utility if Omega also replaces the result in worlds where the parity of Q is different (so either Q is different or Omega randomly decides whether it's actually going to visit anyone or just predict what you would decide if the situation was different and applies that to whatever happens), in which case you have done a horrible job of miscommunication. I have only a vague idea what exactly required more explanation so I'll try to explain everything. My U_replace is the utility if you act on the general policy of replacing the result in counterfactual branches with the result in the branch Omega visits. It's the average over all imaginable worlds (imaginable worlds where Q is even and those where Q is odd), the probability of a world multiplied with its utility. P("odd"|Odd)*( P("odd" n Odd)*100 + P("even" n Odd)*100) + P("even"|Odd)*( P("odd" n Odd)*0 + P("even" n Odd)*0) is the utility for the half of imaginable worlds where Q is odd (all possible worlds if Q is odd). P("odd"|Odd) is the probability that the calculator shows odd in whatever other possible world Omega visits, conditional on Q being odd (which is correct to use because here only imaginable worlds where Q is odd are considered, the even worlds come later). If that happens the utility for worlds where the calculator shows even is replaced with 100. P("even"|Odd) is the probability that the calculator shows even in the other possible (=odd) world Omega visits. If that happens the utility for possible worlds where the calculator shows odd is replaced with 0. At this point I'd just say replace odd with even for the other half, but last time I said something like that it didn't seem to work so here's it replaced manually: P("even"|eve
0Vladimir_Nesov
Consider expected utility [P("odd" n Odd)*100 + P("even" n Odd)*100)] from your formula. What event and decision is this the expected utility of? It seems to consider two events, ["odd" n Odd] and ["even" n Odd]. For both of them to get 100 utils, the strategy (decision) you're considering must be, always answer-odd (since you can only answer in response to indication on the calculators, and here we have both indications and the same answer necessary for success in both events). But U_replace estimates the expected utility of a different strategy, of strategy where you answer-even on your own "even" branch and also answer-even on the "odd" branch with Omega's help. So you're already computing something different. Then, in the same formula, you have [P("odd" n Odd)*0 + P("even" n Odd)*0]. But to get 0 utils in both cases, you have to answer incorrectly in both cases, and since we're considering Odd, this must be unconditional answer-even. This contradicts the way you did your expected utility calculation in the first terms of the formula (where you were considering the strategy of unconditional answer-odd). Expected utility is computed for one strategy at a time, and values of expected utility computed separately for each strategy are used to compare the strategies. You seem to be doing something else.
1FAWS
I'm calculating for one strategy, the strategy of "fill in whatever the calculator in the world Omega appeared in showed", but I have a probability distribution across what that entails (see my other reply). I'm multiplying the utility of picking "odd" with the probability of picking "odd" and the utility of picking "even" with the probability of picking "even".
0Vladimir_Nesov
So that's what happens when you don't describe what strategy you're computing expected utility of in enough detail in advance. By problem statement, the calculator in the world in which Omega showed shows "even". But even if you expect Omega to appear on either side, this still isn't right. Where's the probability of Omega appearing on either side in your calculation? The event of Omega appearing on one or the other side must enter the model, and it wasn't explicitly referenced in any of your formulas.
0FAWS
But implicitly. P(Omega_in_Odd_world)=P(Omega_in_Even_world)=0.5, but P(Omega_in_Odd_world|Odd)= P(Omega_in_Even_world|Even)=1 And since every summand includes a P(Odd n X) or a P(Even n X) everything is already multiplied with P(Even) or P(Odd) as appropriate. In retrospect it would have been a lot clearer if I had factored that out, but I wrote U_not_replace first in the way that seemed most obvious and merely modified that to U_replace so it never occured to me to do that.
0Vladimir_Nesov
Omega visits either the "odd" world or "even" world, not Odd world or Even world. For example, in Odd world it'd still need to decide between "odd" and "even".
0FAWS
That's what multiplying with P("odd"|Odd) etc was about. (the probability that, given Omega appearing in an Odd world it would appear in an "odd" world). I thought I explained that?
0Vladimir_Nesov
Since you don't know what parity of Q is, you can't refer to the class of worlds where it's "the same" or "different", in particular because it can't be different. So again, I don't know what you describe here. (It's still correct to talk about the sets of possible worlds that rely on Q being either even or odd, because that's your model of uncertainty, and you are uncertain about whether Q is even or odd. But not of sets of possible worlds that have your parity of Q, just as it doesn't make sense to talk of the actual state of the world (as opposed to the current observational event, which is defined by past observations).)
0FAWS
I'm merely trying to exclude a possible misunderstanding that would mean both of us being correct in the version of the problem we are talking about. Here's another attempt. The only difference between the world Omega shows up in and the counterfactual worlds Omega affects regarding the calculator result is whether or not the calculator malfunctioned, you just don't know on which side it malfunctioned. Is that correct?
0Vladimir_Nesov
Sounds right, although when you speak of the only difference, it's easy to miss something.
0Vladimir_Nesov
I don't understand what this refers to. (Which branch is that? What do you mean by "replace"? Does your 'odd' refer to calculator-shows-odd or it's-actually-odd or 'let's-write-"odd"-on-the-test-sheet etc.?) Also, updateless decision-maker reasons about strategies, which describe responses to all possible observations, and in this sense updateless analysis does take possible observations into account. (The downside of long replies and asynchronous communication: it's better to be able to interrupt after a few words and make sure we won't talk past each other for another hour.)
1FAWS
Here's another attempt at explaining your error (as it appears to me): In the terminology of Wei Dai's original post, an updateless agent considers the consequences of a program S(X) returning Y on input X, where X includes all observations and memories, and the agent is updateless with respect to things included in X. For an ideal updateless agent this X includes everything, including the memory of having seen the calculator come up even. So it does not make sense for such an agent to consider the unconditional strategy of choosing even, and doing so does not properly model an updating agent choosing even after seeing even; it models an updating agent choosing even without having seen anything.

An obvious simplification of a (computationally extremely expensive) updateless agent would be to simplify X. If X is made up of the parts X1 and X2 and X1 is identical for all instances of S being called, then it makes sense to incorporate X1 into a modified version of S, S' (more precisely the part of S or S' that generates the world programs S or S' tries to maximize). In that case a normal Bayesian update would be performed (UDT is not a blanket rejection of Bayesianism, see Wei Dai's original post). S' would be updateless with respect to X2, but not with respect to X1. If X1 is indeed always part of the argument when S is called, S' should always give back the same output as S.

Your utility implies an S' with respect to having observed "even", but without the corresponding update, so it generates faulty world programs, and a different utility expectation than the original S or a correctly simplified version S'' (which in this case is not updateless because there is nothing else to be updateless towards).
0Vladimir_Nesov
(This question seems to depend on resolving this first.)
1FAWS
The updateless analogue to the updater strategy "ask Omega to fill in the answer "even" in counterfactual worlds because you have seen the calculator result "even"" is "ask Omega to fill in the answer the calculator gives whereever Omega shows up". As an updateless decision maker you don't know that the calculator showed "even" in your world because "your world" doesn't even make sense to an updateless reasoner. The updateless replacing strategy is a fixed strategy that has a particular observation as parameter. An updateless strategy without parameter would be equivalent to an updater strategy of asking Omega to write in "even" in other worlds before seeing any calculator result.
0Vladimir_Nesov
Updateless strategies describe how you react to observations. You do react to observations in updateless strategies. In our case, we don't even need that, since all observations are fixed by the problem statement: you observe "even", case closed. The strategies you consider specify what you write down on your own "even" test sheet, and what you write on the "odd" counterfactual test sheet, all independently of observations. The "updateless" aspect is in not forgetting about counterfactuals and using prior probabilities everywhere, instead of updated probabilities. So, you use P(Odd n "odd") to describe the situation where Q is Odd and the counterfactual calculator shows "odd", instead of using P(Odd n "odd"|"even"), which doesn't even make sense.
0FAWS
More generally, you can have updateless analysis being wrong on any kind of problem, simply by incorporating an observation into the problem statement and then not updating on it.
0Vladimir_Nesov
Huh? If you don't update, you don't need to update, so to speak. By not forgetting about events, you do take into account their relative probability in the context of the sub-events relevant for your problem. Examples please.
0FAWS
here
0FAWS
Holding observations fixed but not updating on them is simply a misapplication of UDT. For an ideal updateless agent no observation is fixed and everything (every memory and observation) part of the variable input X. See this comment
0Vladimir_Nesov
A misapplication, strictly speaking, but not "simply". Without restricting your attention to particular situations, while ignoring other situations, you won't be able to consider any thought experiments. For any thought experiment I show you, you'll say that you have to compute expected utility over all possible thought experiments, and that would be end of it. So in applying UDT in real life, it's necessary to stipulate the problem statement, the boundary event in which all relevant possibilities are contained, and over which we compute expected utility. You, too, introduced such an event, you just did it a step earlier than what's given in the problem statement, by paying attention to the term "observation" attached to the calculator, and the fact that all other elements of the problem are observations also. (On unrelated note, I have doubts about correctness of your work with that broader event too, see this comment.)
0FAWS
Yes, of course. But you perform normal Bayesian updates for everything else (everything you hold fixed). Holding something fixed and not updating leads to errors. Simple example: an urn with either 90% red and 10% blue balls or 90% blue and 10% red balls (0.5 prior for either). You have drawn a red ball and put it back. What's the updateless expected utility of drawing another ball, assuming you get 1 util for drawing a ball of the same color and -2 utils for drawing a ball of a different color? Calculating as getting 1 util for red balls and -2 for blue, but not updating on the observation of having drawn a red ball, suggests that it's -0.5, when in fact it's 0.46. EDIT: miscalculated the utilities, but the general thrust is the same.

P(RedU) = P(BlueU) = P(red) = P(blue) = 0.5
P(red|RedU) = P(RedU|red) = P(blue|BlueU) = P(BlueU|blue) = 0.9
P(blue|RedU) = P(RedU|blue) = P(BlueU|red) = P(red|BlueU) = 0.1

U_updating = P(RedU|red)*P(red|RedU)*1 + P(BlueU|red)*P(red|BlueU)*1 - P(RedU|red)*P(blue|RedU)*2 - P(BlueU|red)*P(blue|BlueU)*2 = 0.9*0.9 + 0.1*0.1 - 0.9*0.1*2*2 = 0.46

U_semi_updateless = P(red)*1 - P(blue)*2 = -0.5

U_updateless = P(red)*(P(RedU|red)*P(red|RedU)*1 + P(BlueU|red)*P(red|BlueU)*1 - P(RedU|red)*P(blue|RedU)*2 - P(BlueU|red)*P(blue|BlueU)*2) + P(blue)*(P(BlueU|blue)*P(blue|BlueU)*1 + P(RedU|blue)*P(blue|RedU)*1 - P(BlueU|blue)*P(red|BlueU)*2 - P(RedU|blue)*P(red|RedU)*2) = 0.5*(0.9*0.9 + 0.1*0.1 - 0.9*0.1*2*2) + 0.5*(0.9*0.9 + 0.1*0.1 - 0.9*0.1*2*2) = 0.46

(though normally you'd probably come up with U_updateless in a differently factored form)

EDIT3: More sensible/readable factorization of U_updateless:
P(RedU)*(P(red|RedU)*(P(red|RedU)*1 - P(blue|RedU)*2) + P(blue|RedU)*(P(blue|RedU)*1 - P(red|RedU)*2)) + P(BlueU)*(P(blue|BlueU)*(P(blue|BlueU)*1 - P(red|BlueU)*2) + P(red|BlueU)*(P(red|BlueU)*1 - P(blue|BlueU)*2))
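
A short re-computation of this example's numbers (a check of the stated 0.46 and -0.5 only):

```python
# Two possible urn compositions, 50/50 prior; a red ball was drawn and replaced.
p_RedU = p_BlueU = 0.5
p_red_RedU, p_red_BlueU = 0.9, 0.1

# Update on the observed red draw:
p_red = p_RedU * p_red_RedU + p_BlueU * p_red_BlueU   # 0.5
p_RedU_red = p_RedU * p_red_RedU / p_red              # 0.9
p_BlueU_red = 1 - p_RedU_red                          # 0.1

# Draw again: +1 for another red (same color), -2 for blue (different color).
U_updating = (p_RedU_red * (p_red_RedU * 1 - (1 - p_red_RedU) * 2)
              + p_BlueU_red * (p_red_BlueU * 1 - (1 - p_red_BlueU) * 2))

# "Semi-updateless": hold the red observation fixed but keep the 50/50 prior.
U_semi_updateless = 0.5 * 1 - 0.5 * 2

print(U_updating, U_semi_updateless)  # approximately 0.46 and -0.5
```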
0Vladimir_Nesov
No, controlling something and updating it away leads to errors. Fixed terms in expected utility don't influence optimality, you just lose ability to consider the influence of various strategies on them. Here, the strategies under considerations don't have any relevant effects outside the problem statement. (I'll look into your example another time.)
0FAWS
Just to make sure: You mean something like updating on the box being empty in transparent Newcomb's here, right? Not relevant as far as I can see.
0FAWS
I admit that I did not anticipate you replying in this way and even though I think I understand what you are saying I still don't understand why. This is the main source of my uncertainty on whether I'm right at this point. It seems increasingly clear that at least one of us doesn't properly understand UDT. I hope we can clear this up and if it turns out the misunderstanding was on my part I commit to upvoting all comments by you that contributed to enlightening me about that.
0FAWS
Unless I completely misunderstand you that's a completely different context for/meaning of "fixed term" and while true not at all relevant here. I mean fixed in the sense of knowing the utilities of red and blue balls in the example I gave.
0FAWS
Also leads to errors, obviously. And I'm not doing that anyway. Something leading to errors is extremely weak evidence against something else also leading to error, so how is this relevant?
0Vladimir_Nesov
This is the very error which UDT (at least, this aspect of it) is correction for.
1FAWS
That still doesn't make it evidence for something different not being an error. (and formal UDT is not the only way to avoid that error)
0Vladimir_Nesov
Not updating never leads to errors. Holding fixed what isn't can.
0FAWS
Correct (if you mean to say that all errors apparently caused by lack of updating can also be framed as being caused by wrongly holding something fixed) for a sufficiently wide sense of not fixed. The fact that you are considering to replace odd results in counterfactual worlds with even results and not the other way round, or the fact that the utility of drawing a red ball is 1 and for a blue ball -2 in my example (did you get around to taking a look at it?) both have to be considered not fixed in that sense. Basically in the terminology of this comment you can consider anything in X1 fixed and avoid the error I'm talking about by updating. Or you can avoid that error by not holding it fixed in the first place. The same holds for anything in X2 for which the decision will never have any consequences anywhere it's not true (or at least all its implications fully carry over), though that's obviously more dangerous (and has the side effect of splitting the agent into different versions in different environments). The error you're talking about (the very error which UDT is correction for) is holding something in X2 fixed and updating when it does have outside consequences. Sometimes the error will only manifest when you actually update and only holding fixed gives results equivalent to the correct ones. The test to see whether it's allowable to update on x is to check whether the update results in the same answers as an updateless analysis that does not hold x fixed. If an analysis with update on x and one that holds x fixed but does not update disagree the problem is not always with the analysis with update. In fact in all problems CDT and UDT agree (most boring problems) the version with update should be correct and the version that only holds fixed might not be.

Suppose you believe that 2+2=4, with the caveat that you are aware that there is some negligible but non-zero probability that The Dark Lords of the Matrix have tricked you into believing that.

Omega appears and tells you that in an alternate reality, you believe that 2+2=3 with the same amount of credence, and asks whether this changes your own amount of credence that 2+2=4.

The answer is the same. You ask Omega what rules he's playing by.

If he says "I'm visiting you in every reality. In each reality, I'm selecting a counterfactual where your answe... (read more)

2Vladimir_Nesov
You are not asked to update your belief about the answer being "even" upon observing Omega (in any sense of "knowledge" of those discussed in the post). You knew that the other possibility existed all along, you don't need Omega to see that. You are asked to decide what to do in the counterfactual. Consider uncertainty about when Omega visits you part of the problem statement, but clearly if a tricky condition such as "it only visits you when your decision will make it worse for you" was assumed, it would be stated.

In what way, if any, is this problem importantly different from the following "less mathy" problem?

You have a sealed box containing a loose coin. You shake the box and then set it on the table. There is no a priori reason for you to think that the coin is more or less likely to have landed heads than tails. You then take a test, which includes the question: "Did the coin land heads?" Fortunately, you have a scanning device, which you can point at the box and which will tell you whether the coin landed heads or tails. Unfortunatel

... (read more)
0Vladimir_Nesov
I don't think it's any different. You could have a Q in the box, and include a person that types it in a calculator as part of the scanning device. Does your variant evoke different intuitions about observational knowledge? It looks similar in all relevant respects to me.
0Tyrrell_McAllister
No. Our intuitions agree here. When I wrote the comment, I didn't understand what point you were making by having the problem be about a mathematical fact. I wanted to be sure that you weren't saying that the math version was different from the coin version. I'm still not certain that I understand the point you're making. I think you're pointing out that, e.g., a UDT1.1 agent doesn't worry about the probability that it has computed the correct value for the expected utility EU(f) of an input-output map f. In contrast, such an agent does involve probabilities when considering a statement like "Q evaluates to an even number". I'm not sure whether you would agree, but I would say moreover that the agent would involve probabilities when considering the statement "the digit 2, which I am considering as an object of thought in my own mind, denotes an even number." Is that a correct interpretation of your point? The distinction between the way that the agent treats "EU(f)" and "Q" seems to me to be this: The agent doesn't think about the expression "EU(f)" as an object of thought. The agent doesn't look at "EU(f)" and wonder whether it evaluates to greater than or less than some other value EU(f'). The agent just runs through a sequence of states that can be seen, from the outside, as instantiating a procedure that maximizes the function EU. But for the agent to think this way would be like having the agent worry about whether it's doing what it was programmed to do. From the outside, we can worry about whether the agent is in fact programmed to do what we intended to program it to do. But that won't be the agent's concern. The agent will just do what it does. Along the way, it might wonder about whether Q denotes an even number. But the agent won't wonder whether EU(f) > EU(f'), although its ultimate action might certify that fact. FWIW, here is my UDT1.1 analysis of the problem in the OP. In UDT terms, the way I think of it is to suppose that there are 99 world progr
0Vladimir_Nesov
I think considerably more than two things have to go well for your interpretation to succeed in describing this post... I don't necessarily disagree with what you wrote, in that I don't see clear enough statements that I disagree with, and some things seem correct, but I don't understand it well. Also, calculator is correct 99% of the time, so you've probably labeled things in a confusing way that could lead to incorrect solution, although the actual resulting numbers seem fine for whatever reason. The reason I used a logical statement instead of a coin, was to compare logical and observational knowledge, since logical knowledge, in its usual understanding, applies mostly to logical statements, and doesn't care what you reason about using it. This can allow extending the thought experiment, for example, in this way.
0Tyrrell_McAllister
I'm not seeing why that extended thought experiment couldn't have used a coin and two scanners of different reliability.
0Vladimir_Nesov
The point is in showing that having a magical kind of knowledge certified by proofs doesn't help (presumably) in that thought experiment, and hopefully reducing events of possible worlds to logical statements. So I want to use as many logical kinds of building blocks as possible, in order to see the rest in their terms.
0Tyrrell_McAllister
Fair enough. To me it seems more illuminating to see logical facts (like the parity of Q) as physical facts (in this case, a statement about what certain kinds of physical mechanisms would do under certain circumstances.) But, at any rate, we seem to agree that these two kinds of facts ought to be thought of in the same way.
0Tyrrell_McAllister
Indeed. That is because you needed more than two things to go right for your post to succeed in communicating your point ;). My confusion is over this sentence from your post: My difficulty is that everything that I would call knowledge is like what you get when you look at a calculator display. Suppose that the test had asked you whether "2+2" reduced to an even number. Then you would perform certain mental operations on this expression, and you would answer in accordance with how those operations concluded. (For example, you might picture two sets of two dots, one set next to the other, and see whether you can pair off elements in one set with elements in the other. Or you might visualize a proof in Peano arithmetic in your mind, and check whether each line follows from the previous line in accordance with the rules of inference.) At any rate, whatever you do, it amounts to relying on the imperfect wetware calculator that is your brain. If a counterfactual version of you got a different answer with his brain, you would still want his test sheet to match his answer. So, what is the residue left over, after we set aside observational knowledge? What is this "logical knowledge"? Calling it "the usual kind" does not suffice to pick out what you mean for me. My guess was that your "logical knowledge" includes (in your terminology) the "moral arguments" that "the agent can prove" in the "theory it uses". The analogous role in Wei Dai's "brute-force" UDT is served by the agent's computation of an expected utility EU(f) for an input-output map f. Is this a correct interpretation of what you meant by "logical knowledge"? (I know that I may need more than two things to go right to have interpreted you correctly. That is why I am giving you my interpretation of what you said. If I got it right, great. But my main motivation arises in the case where I am wrong. My hope is that you will then restate your claim, this time calibrating for the way that I am evidently primed
2Vladimir_Nesov
In some sense, sure. But you still have to use certain specific reasoning procedure to think about imperfection of knowledge-acquisition methods. That level where you just perform the algorithm is where logic resides. It's not clear to me how to merge these considerations seamlessly. Yes. This theory can include tools for reasoning about observational and logical uncertainty, where logical uncertainty refers to inability to reach the conclusions (explore long enough proofs) rather than uncertainty about whether the reasoning apparatus would do something unintended. I referred to this statement you made: It's not clear what the "Omega offers the decision in a correct-calculator world" event is, since we already know that Omega offers the decision in "even" worlds, in some of which "even" is correct, and in some of which it's not (as far as you know), and 99% of "even" worlds are the ones where calculator is correct, while you clearly assign 50% as probability of your event.
0Tyrrell_McAllister
When you speak of "worlds" here, do you mean the "world-programs" in the UDT1.1 formalism? If that is what you mean, then one of us is confused about how UDT1.1 formalizes probabilities. I'm not sure how to resolve this except to repeat my request that you give your own formalization of your problem in UDT1.1. For my part, I am going to say some stuff on which I think that we agree. But, at some point, I will slide into saying stuff on which we disagree. Where is the point at which you start to disagree with the following? (I follow the notation in my write-up of UDT1.1 (pdf).) UDT1.1 formalizes two different kinds of probability in two very different ways: 1. One kind of probability is applied to predicates of world-programs, especially predicates that might be satisfied by some of the world-programs while not being satisfied by the others. The probability (in the present sense) of such a predicate R is formalized as the measure of the set of world-programs satisfying R. (In particular, R is supposed to be a predicate such that whether a world-program satisfies R does not depend on the agent's decisions.) 2. The other kind of probability comes from the probability M(f, E) that the agent's mathematical intuition M assigns to the proposition that the sequence E of execution histories would occur if the agent were to implement input-output map f. This gives us probability measures P_f over sequences of execution histories: Given a predicate T of execution-history sequences, P_f(T) is the sum of the values M(f, E) as E ranges over the execution-history sequences satisfying predicate T. I took the calculator's 99% correctness rate to be a probability of the first kind. There is a correct calculator in 99% of the world-programs (the "correct-calculator worlds") and an incorrect calculator in the remaining 1%.* However, I took the probability of 1/2 that Q is even to be a probability of the second kind. It's not as though Q is even in some of the execution histor
0Vladimir_Nesov
World-programs are a bad model for possible worlds. For all you know, there could be just one world-program (indeed you can consider an equivalent variant of the theory where it's so: just have that single world program enumerate all outputs of all possible programs). The element of UDT analogous to possible worlds is execution histories. And some execution histories easily indicate that 2+2=5 (if we take execution histories to be enumerations of logical theories, with world-programs axiomatic definitions of theories). Observations, other background facts, and your actions are all elements that specify (sets/events of) execution histories. Utility function is defined on execution histories (and it's usually defined on possible worlds). Probability given by mathematical intuition can be read as naming probability that given execution history (possible world) is an actual one.
0Tyrrell_McAllister
So, you intended that the equivalence

* "Omega offers the decision" <==> "the calculator says 'even' "

be known to the agent's mathematical intuition? I didn't realize that, but my solution still applies without change. It just means that, as far as the agent's mathematical intuition is concerned, we have the following equivalences between predicates over sequences of execution histories:

* "Omega offers the decision in a correct-calculator world" is equivalent to
* "The calculator says 'even' in the 99 correct-calculator worlds",

while

* "Omega offers the decision in an incorrect-calculator world" is equivalent to
* "The calculator says 'even' in the one incorrect-calculator world".

Below, I give my guess at your UDT1.1 approach to the problem in the OP. If I'm right, then we use the UDT1.1 concepts differently, but the math amounts to just a rearrangement of terms. I see merits in each conceptual approach over the other. I haven't decided which one I like best. At any rate, here is my guess at your formalization: We have one world-program. We consider the following one-place predicates over possible execution histories for this program: Given any execution history E,

* CalculatorIsCorrect(E) asserts that, in E, the calculator gives the correct parity for Q.
* "even"(E) asserts that, in E, the calculator says "even". Omega then appears to the agent and asks it what Omega should have written on the test sheet in an execution history in which (1) Omega blocks the agent from writing on the answer sheet and (2) the calculator says "odd".
* "odd"(E) asserts that, in E, the calculator says "odd". Omega then (1) blocks the agent from writing on the test sheet and (2) computes what the agent would have said to Omega in an execution history F such that "even"(F). Omega then writes what the agent would say in F on the answer sheet in E.

Borrowing notation from my last comment, we make the following assumptions about the probability measures P_f. For al
0lukstafi
I "weakly" argue for the 50% probability as well. My argument follows the Pearl-type of counterfactual (Drescher calls it "choice-friendly") -- when you counterfactually set a variable, you cut directed arrows that lead to it, but not directed arrows that lead out or undirected arrows (which in another comment I mistakenly called bi-directed). My intuition is that the "causing" node might possibly be logically established before the "caused" node thus possibly leading to contradiction in the counterfactual, while the opposite direction is not possible (the "caused" node cannot be logically established earlier than the "causing" node). Directly logically establishing the counterfactual node is harmless in that it invalidates the counterfactual straight away, the argument "fears" of the "gap" where we possibly operate by using a contradictory counterfactual.
2Vladimir_Nesov
Pearl's counterfactuals (or even causal diagrams) are unhelpful, as they ignore the finer points of logical control that are possibly relevant here. For example, that definitions (facts) are independent should refer to the absence of logical correlation between them, that is inability to infer (facts about) one from the other. But this, too, is shaky in the context of this puzzle, where the nature of logical knowledge is called into question.
0lukstafi
Is it a trivial remark regarding the probability theory behind Pearl's "causality", or an intuition with regard to future theories that resemble Pearl's approach?
0Vladimir_Nesov
It is a statement following from my investigation of logical/ambient control and reality-as-normative-anticipation thesis which I haven't written much about, but this all is regardless called in question as adequate foundation in light of the thought experiment.
0shokwave
It is no different, as far as I can tell. You can't go from the coin landed the same as it did in your world to only considering worlds where the coin is heads - which is the premise you need if you want to conclude that this is the 1% case of scanner going wrong.
0lukstafi
In my opinion the original post (barring the later comment by the author) does not imply that Q is the same in the real world and in the counterfactual world. Am I wrong here? Then, if Omega is a trustworthy all-powerful device, it would not construct a counterfactual that is straight-out impossible just to play a trick on me. Therefore I conclude that the counterfactual amounts to running an identical scanner another time and getting a different result. But now I no longer think that it is an independent copy of the scanner -- actually it is completely dependent (it is determined to return a different answer), so I no longer think that the conclusion about the coins is fifty-fifty, but that we shouldn't update.
3Tyrrell_McAllister
I have been assuming that Q is the same complicated formula in both worlds.
2lukstafi
Of course it is the same formula. And it is the same calculator as well.
0Vladimir_Nesov
This is correct. Clarified in the post.

I suspect that the question sounds confusing because it conflates different counterfactual worlds. Where exactly does the world presented to you by Omega diverge from the actual world, at what point does the intervention take place? If Omega only changes the calculator display, you should say "even". If it fixes an error in the calculator's inner workings, you should say "odd".

0[anonymous]
-
0Vladimir_Nesov
Calculator is stochastic, Omega doesn't touch any calculator, it assists you with determining what gets written on the counterfactual test sheet (not by counterfactual you, but by Omega personally, counterfactual you is ignored). The worlds diverge at a point where the calculator happens to display different answers to the same question.

I take out a pen and some paper, and work out what the answer really is. ;)

6Vladimir_Nesov
Indeed. Consider a variant of the thought experiment where in the "actual" world you used a very reliable process that's only wrong 1 time in a trillion, while in the counterfactual you're offered to control, you know only of an old calculator that is wrong 1 time in 10 and that indicated a different answer from what you worked out. Updateless analysis says that you still have to go with the old calculator's result. Knowledge seems to apply only to the event that produced it, even "logical" knowledge. Even if you prove something, you can't be absolutely sure, so in the counterfactual you trust an old calculator instead of your proof. This would actually be a good variant of this thought experiment ("Counterfactual Proof"), interesting in its own right, by showing that "logical knowledge" has the same limitations, and perhaps further highlighting the nature of these limitations.
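
One way to make the updateless claim here concrete (my own reading, a sketch assuming a 50/50 prior and the 1-in-10 error rate; the 1-in-a-trillion process matters only as the thing you don't get to update on):

```python
# Inside the counterfactual event "the old calculator shows the opposite answer",
# updateless reasoning weights possibilities by prior probability only; the very
# reliable result in your own branch is an observation it doesn't update on.
prior = 0.5
p_calc_right = 0.9   # the old calculator in the counterfactual branch

w_calculator_right = prior * p_calc_right        # 0.45: Q matches the calculator
w_calculator_wrong = prior * (1 - p_calc_right)  # 0.05: Q matches your proof

print(w_calculator_right, w_calculator_wrong)
# roughly 0.45 vs 0.05, so the counterfactual test sheet gets the calculator's answer
```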
0lukstafi
Do you build counterfactuals the Judea Pearl way, or some other way (for example the Gary Drescher way of chap. 5 "Good and Real")? Or do you think our current formalisms do not "transfer" to handling logical uncertainty (i.e. are not good analogues of a theory of logical uncertainty)?
0Vladimir_Nesov
I don't have a clear enough idea of the way I myself think about counterfactuals to compare. Pearl's counterfactuals are philosophically unenlightening, they stop at explicit definitions, and I still haven't systematically read Drescher's book, only select passages. The idea I use is that any counterfactual/event is a logically defined set (of possible worlds), equipped with necessary structures that allow reasoning about it or its subevents. The definition implies certain properties, such as its expected utility, the outcome, in a logically non-transparent way, and we can use these definitions to reason about dependence of outcome (expected utility, probability, etc.) on action-definition, query-replies, etc., through ambient control.
0lukstafi
Pardon me if I repeat someone. Q causes the answer of the calculator, so if we set calculator's answer counterfactually we lose dependency between Q and the calculator, and so we don't have any knowledge of the counterfactual Q. Whereas if we had a formula R of comparable logical complexity to Q, drawn from a class of formula pairs with 90% correlation of values, then the dependency is bidirectional and counterfactually setting R we gain the knowledge about the counterfactual Q. Does "in the counterfactual you trust an old calculator instead of your proof" mean that you don't agree (with this analysis)? (I have the impression that the problem statement drifted somewhat from "counterfactual" to a more "conditional" interpretation where we don't sever any dependencies.)

What does it even mean to write an answer on a counterfactual test sheet?

Is it correct to interpret this as "if-counterfactually the calculator had shown odd, Omega would have shown up and (somehow knowing what choice you would have made in the "even" world) altered the test answer as you specify"?

Viewing this problem from before you use the calculator, your distribution is P(even) = P(odd) = 0.5. There are various rules Omega could be playing by:

  • Omega always (for some reason uncorrelated to the parity of Q) asks you what to do iff
... (read more)
0Vladimir_Nesov
Yes.

Why does observational knowledge work in your own possible worlds, but not in counterfactuals?

It does not work in this counterfactual. Omega could have specified the counterfactual such that the observational knowledge in the counterfactual was as usable as that in the 'real' world. (Most obviously by flat out saying it is so.)

The reason we cannot use the knowledge from this particular counterfactual is that we have no knowledge about how the counterfactual was selected. The 99% figure (as far as we know) is not at all relevant to how likely it is that ... (read more)

0Vladimir_Nesov
Yes, clearly in some counterfactuals such knowledge works. What do you additionally need to know about the counterfactuals? Where is the ambiguity (what are two examples of possible interpretations that would change the analysis)? What do you mean by "selected"?
0Soki
It may not be what wedrifid meant, but does Omega always appear after you see the result on the calculator? Does Omega always ask: "Consider the counterfactual where the calculator displayed opposite_of_what_you_saw instead of what_you_saw"? If that is true, then I guess it means that what Omega replaces your answer with on the test sheet in the worlds where you see "even" is the answer you write on the counterfactual test sheet in the worlds where you see "odd". And the same with "even" and "odd" exchanged.
0Scott Alexander
I agree with this answer. I believe that in the question as given the answer is probably "even", but if Omega clarifies that the counterfactual world was randomly selected from the pool of all counterfactual worlds and its calculator displayed "odd", then you should assign 50% probability each way. The reason observational evidence works in your world but not in other, non-randomly-selected possible worlds is that if Omega selected the world in any way other than at random, then we're talking about a world that may have been specifically selected for being improbable.
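A quick check of the 50/50 figure above (an editorial sketch, under the reading that "randomly selected" means a world whose calculator run is independent of yours, with the same formula Q, a 0.5 prior on its parity, and the same 99% accuracy): the two observations pull in opposite directions with equal strength and cancel.

```python
# Editorial sketch of the 50/50 claim: my calculator shows "even", an independently
# selected world's calculator shows "odd"; both are 99% accurate, prior on Q is 0.5.
prior = {"even": 0.5, "odd": 0.5}
ACC = 0.99

def p_display(display, q):
    """P(display | Q = q) for a 99%-accurate calculator."""
    return ACC if display == q else 1 - ACC

# Weight of each parity given both observations (independent errors assumed).
joint = {q: prior[q] * p_display("even", q) * p_display("odd", q) for q in prior}
total = sum(joint.values())
print({q: w / total for q, w in joint.items()})   # {'even': 0.5, 'odd': 0.5}
```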
0Vladimir_Nesov
Omega changes the test sheet in all possible worlds where the calculator shows "odd". (A "counterfactual" is an event, not a particular possible world, which is more natural since you name counterfactuals by specifying high-level properties, which is not sufficient to select only one possible world, if that notion even makes sense.) Clarified in the post.

This seems easy. Q is most likely even, so in the counterfactual the calculator is most likely in error, and we prefer Omega to write "even". What am I missing?

3shokwave
Derived from the likelihood of the calculator being in error

You can't conclude this - think about what evidence you have that the calculator is in error!
1Nisan
Oh, you're right. I see.
0lukstafi
You can't conclude this, but for a different reason: changing the value on the display means changing Omega. You cannot have the same Omega and a different value of the same process. (ETA: and calling the name of Everett does not affect my reasoning here. ETA2: meaning that I don't think Q is most likely even in the counterfactual universe.)
-1MC_Escherichia
Yes you can. The real calculator in the real world had a 99% chance of being right. The counterfactual case is (in all probability) the 1% chance where it was wrong.
0Nisan
Nah. See, given that the real calculator says "even", there's a 0.99% chance that it's correct and that, in a repetition of the experiment, it would incorrectly say "odd". There's also a 0.99% chance that the real calculator is incorrect and that, in a repetition of the experiment, it would correctly say "odd". The counterfactual case is just as likely to be the calculator being correct as the calculator being incorrect. ETA: The above is wrong. I was confused about the problem because I wasn't thinking updatelessly. It's like Newcomb's problem.
4MC_Escherichia
I'm not following you. Imagine this scenario happens 10000 times, with different formulae. In 9900 of those cases, the calculator gives the correct answer, and Omega asks what the answer is if the calculator instead gives the incorrect one. In 100 of those cases, the calculator gives the incorrect answer, and Omega asks what the answer is if the calculator instead gives the correct one. So you are more likely to be in the first scenario.
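Tabulating that counting argument explicitly (an editorial sketch, assuming a 0.5 prior on each formula's parity and a 99%-accurate calculator over the 10000 notional repetitions):

```python
# Editorial sketch: tabulate the 10,000 notional repetitions by (parity of Q,
# calculator display), assuming a 0.5 prior on parity and 99% calculator accuracy.
N = 10_000
cases = {}
for parity in ("even", "odd"):
    for display in ("even", "odd"):
        per_parity = N // 2                       # 5000 repetitions of each parity
        hits = 99 if display == parity else 1     # calculator right 99 times in 100
        cases[(parity, display)] = per_parity * hits // 100

print(cases)
# {('even', 'even'): 4950, ('even', 'odd'): 50, ('odd', 'even'): 50, ('odd', 'odd'): 4950}

correct = cases[("even", "even")] + cases[("odd", "odd")]
wrong = cases[("even", "odd")] + cases[("odd", "even")]
print(correct, wrong)   # 9900 100: the split described in the parent comment
```

The same table underlies shokwave's 4950/50 counting further down the thread.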
4AlexMennen
This is a dispute over the premises of the problem (whether Omega's counterfactual is always different than yours, or is correct 99% of the time and independent of yours), not a dispute about how to solve the problem. The actual premise needs to be made clear before the question can be properly answered.
4Manfred
"I have no probability assignment, you haven't told me your motives" is not an allowed answer. Pretend Omega holds a gun to your head and will fire unless you answer in ten seconds. There is always some information, I promise. You can avoid getting shot. EDIT: Upon reflection, this post was too simplistic. If we have some prior information about Omega (e.g. can we ascribe human-like motives to it?), then we would have to use it in making our decision, which would add an element of apparent subjectivity. But I think it's safe to make the simplifying assumption that we can't say anything about Omega, to preserve the intent of the question.
0AlexMennen
If Omega doesn't tell you what premises he's using, then you will have some probability distribution over possible premises. However, that distribution, or the information that led to it, needs to be made explicit for the thought experiment to be useful. If you assume that your prior is 50% that Omega's counterfactual is always different than yours and 50% that it is independent, then updating on the fact that the counterfactual is different in this case gives you a posterior of 99% always different and 1% independent. This means that there is a (99%)^2 + 0.5% chance that your answer is right, and a 1.49% chance that your answer is wrong. Interestingly, since the question is about math rather than a feature of the world, your answer should be the same for real life and the counterfactual, meaning that if you know that the counterfactual calculator is right mod 2 99% of the time and independent of yours, you should be indifferent to writing "even" or "odd" on your real-life paper.
0Manfred
Good point, but I think the following is wrong: "Interestingly, since the question is about math rather than a feature of the world, your answer should be the same for real life and the counterfactual." This does not follow. The correct answer is the same, yes, but the best answer you can give depends on your state of knowledge, not on the unknown true answer. I would argue that you should give the best answer you can, since that's the only way to give an answer at all.
2AlexMennen
The question wasn't "What would you write if the calculator said odd?". It was "Given that you already know your calculator says even, what answer would you like written down in the counterfactual in which the calculator said odd?". This means that you are not obligated to ignore any evidence in either real life or the counterfactual, and the answers are the same in each. Therefore your probability distribution should be the same with regard to the answers in each.
-2Manfred
Omega asks you "what is the true answer?" Vladimir asks you "what does Omega say in the counterfactual that your calculator returned odd?" Since Omega always writes the true answer at the end, the question is equivalent to "what is the true answer if your calculator returned odd?" Since the true answer is not affected by the calculator, this is further equivalent to "What is the true answer?" So it's possible we were just answering different questions.
2FAWS
No, in this problem Omega writes whatever you tell Omega to write, whether it's true or not. (Apparently Omega does not consider that a lie)
0Manfred
Ah, hm, I missed that. I'd just assumed "determine" was meant in the other sense. So there's no effect from being correct or incorrect? This post seems to get less interesting under more analysis.
0AlexMennen
"What is the true answer?" is the question I was trying to answer. What question are you trying to answer?
0Manfred
The same. Ah, wait! By "the answer" in your last sentence (in regards to the answers in each), did you mean the true answer, not your own answer? That would be much more... factually correct, though your second to last sentence still makes it sound like you're counting fictional evidence.
0AlexMennen
Yes, I meant the true answer. And my point was that if Omega took the correct answer into account when creating the counterfactual, the evidence gained from the counterfactual is not fictional.
0Manfred
Yay! It looks like I've managed to understand you then.
1shokwave
Given our prior, 5000 of the times the actual answer is even, and 5000 times the answer is odd. In 4950 of the 5000 Q-is-even cases, the calculator says "even". And in the other 50 cases of Q-is-even, the calculator says "odd". Then, in 4950 of the Q-is-odd cases, the calculator says "odd", and in 50 cases it says "even". Note that we still have 9900 cases of a correct display and 100 cases of an incorrect display. Omega presents you with a counterfactual world that might be one of the 50 cases of Q-is-even, or one of the 4950 cases of Q-is-odd, all with the calculator showing "odd". So you're equally likely (5000:5000) to be in either scenario (Q-is-odd, Q-is-even) for actually writing down the right answer (as opposed to writing down the answer the calculator gave you).
-1MC_Escherichia
I'm still not following. Either the answer is even in every possible world, or it is odd in every possible world. It can't be legitimate to consider worlds where it is even and worlds where it is odd, as if they both actually existed.
3Vladimir_Nesov
If you don't know which is the case, considering such possibly impossible possible worlds is a standard tool. When you're making a decision, all possible decisions except the actual one are actually impossible, but you still have to consider those possibilities, and infer their morally relevant high-level properties, in the course of coming to a decision. See, for example, Controlling Constant Programs.
2shokwave
Which is the case? What do you do if you're uncertain about which is the case?
0MC_Escherichia
Your initial read off your calculator tells you with 99% certainty. Now Omega comes in and asks you to consider the opposite case. It matters how Omega decided what to say to you. If Omega was always going to contradict your calculator, then what Omega says offers no new information. But if Omega essentially had its own calculator, and was always going to tell you the result even if it didn't contradict yours, then the probabilities become 50%.
0Manfred
True, but I'd like to jump in and say that you can still make a probability estimate with limited information - that's the whole point of having probabilities, after all. If you had unlimited information it wouldn't be much of a probability.
0[anonymous]
Yes. You've most likely observed the correct answer, says observational knowledge. The argument in the parent comment doesn't disagree with Nisan's point.

Consider the following thought experiment

You have a bag with a red and a blue ball in it. You pull a ball from the bag, but don't look at it. What is the probability that it is blue?

Now imagine a counterfactual world. In this other world you drew the red ball from the bag. Now imagine a hippo eating an octopus. What is the probability that you drew the blue ball?

"Why does observational knowledge work in your own possible worlds, but not in counterfactuals?" is the key question here. Perhaps it's easier to parse like this: "Why isn'... (read more)

-2[anonymous]
It seems like one answer to "Why isn't anything you can think of evidence?" might be that "anything you can think of" becomes incomputable very quickly. Let's say you were to ask a computer to consider "anything you can think of" with respect to this problem. Imagine each unique hard drive configuration is a thought, and it can process 1 thought per second per hertz. Let's make it a 5 GHz computer. It can think of anything on a 32-bit drive in a bit less than 1 second, since 2^32 is 4,294,967,296, which is less than 5 billion. The problem is, in uncompressed ASCII, where you need 8 bits per character, you can't even fit the thought "32bit" onto a 32-bit hard drive, since it's 5 bytes/40 bits long. If we double the hard drive to 64 bits to give ourselves more room for longer thoughts, our 5 GHz computer goes from being able to calculate all possible thoughts in less than a second to being able to calculate them in around a human lifetime, because of the exponential growth involved. (At least, assuming I've made no math errors.) We actually have computers do this when we try to have them crack passwords with brute force. A computer trying to brute force a password is essentially trying "anything it can think of" to open the password-protected data.
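The arithmetic in that comment checks out; here is an editorial sketch under the same assumptions (one "thought" per hertz on a 5 GHz machine, every distinct drive configuration counted as one thought):

```python
# Editorial check of the parent comment's arithmetic: one "thought" per hertz on a
# 5 GHz machine, enumerating every distinct configuration of a 32-bit or 64-bit drive.
RATE = 5e9                               # thoughts per second at 5 GHz
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (32, 64):
    states = 2 ** bits                   # number of distinct drive configurations
    seconds = states / RATE
    print(f"{bits}-bit drive: {states:,} states, "
          f"{seconds:.2f} s (~{seconds / SECONDS_PER_YEAR:.0f} years)")

# 32-bit drive: 4,294,967,296 states, 0.86 s (~0 years)
# 64-bit drive: 18,446,744,073,709,551,616 states, 3689348814.74 s (~117 years)
```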

The thing is, the other world was chosen specifically BECAUSE it had the opposite answer, not randomly like the world you're in.

[This comment is no longer endorsed by its author]

This is the intuition I find helpful: your decision only matters when the calculator shows odd. There is a 99% chance your decision matters if Q is odd and a 1% chance your decision matters if Q is even. Therefore the situation where you're told it's even is evidence that it's odd.
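Spelling out that counting (an editorial sketch, assuming a 0.5 prior on Q's parity, a 99%-accurate calculator, and that your command only takes effect in the worlds where the calculator shows "odd"):

```python
# Editorial sketch of the parent comment's counting: your command only takes effect
# in worlds where the calculator shows "odd". Prior on Q's parity is 0.5, and the
# calculator is 99% accurate.
prior = {"even": 0.5, "odd": 0.5}
p_matters = {"even": 0.01, "odd": 0.99}   # P(calculator shows "odd" | parity of Q)

weight = {q: prior[q] * p_matters[q] for q in prior}
total = sum(weight.values())
print({q: w / total for q, w in weight.items()})
# roughly {'even': 0.01, 'odd': 0.99}: conditional on your decision mattering at all,
# Q is almost certainly odd, which is the sense in which the parent comment calls the
# situation evidence for "odd" in the counterfactual.
```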

In this scenario, we are the counterfactual. The calculator really showed "odd", not "even".

Once your calculator returns the result "even", you assign 99% probability to the condition "Q is even". Changing that opinion would require strong Bayesian evidence. In this case, we're considering hypothetical Bayesian evidence provided by Omega. Based on our prior probabilities, we would say that if Omega randomly chose an Everett branch (I'm going with the quantum calculator, just because it makes the vocabulary a bit easier), 99% of the time Omega would choose another Everett branch in which the calculator also read "even". Ho... (read more)

Here's a possible argument.

Assume what you do in the counterfactual is equivalent to what you do IRL, with even/odd swapped. Then TDT says that choosing in the counterfactual ALSO chooses for you in the real world. So you should choose odd there so that you can choose even in the real world and get it right.

0Vladimir_Nesov
Cheating.
0Will_Sawin
Create a new version of the problem that eliminates that argument then.
0Vladimir_Nesov
It doesn't apply to my version of the problem.
0Will_Sawin
Elaborate?
0Vladimir_Nesov
"Assume what you do in the counterfactual is equivalent to what you do in IRL, with even/odd swapped" doesn't hold. See this thread.

Is this an attempt to replicate in UDT the problems of TDT?

4Vladimir_Nesov
Huh?

I wonder if the question is sufficiently specified. Naïvely, I would say that Omega will write down "even" with p=0.99, simply because Omega appearing and telling me "consider the counterfactual" is not useful evidence for anything. P(Omega appears|Q even) and P(Omega appears|Q odd) are hard to specify, but I don't see a reason to assume that the first probability is greater than the second one, or vice versa.

Of course, the above holds under the assumption that all counterfactual worlds have the same value of Q. I am also not sure how to interpret ... (read more)

0Vladimir_Nesov
Omega writes the final answer on the counterfactual test sheet; it doesn't rewrite the question. The question is the same, Q, everywhere, as is the process of typing it into the calculators. Omega writes whatever you tell it to write; correctness doesn't matter. Clarified in the post.
2prase
I would probably need a more detailed analysis of why this example is interesting. It seems to me that (if I care about my counterfactual self passing the test) I analyse the probabilities of Omega appearing given the specific correct answer and then update accordingly. But 1) that would be trivial, and 2) you have said in one of your comments that a counterfactual is an event, rather than a possible world, so it may as well be impossible. Also, I'd like to know why I should care about what is counterfactually written by Omega in a counterfactual situation, and not answer "whatever".
0Vladimir_Nesov
See Counterfactual Mugging.
1prase
This doesn't seem the same. In Counterfactual Mugging, my reward depends on my hypothetical behaviour in the counterfactual scenario. Here, you have explicitly ruled out that the counterfactual me can influence something. Suppose a reward of $1000 for passing the test. Let's also assume 100 copies of some person taking the test. If the copies are the sort of people who agree with the calculator no matter what Omega says, 99 of them would obtain $1000, for trivial reasons, and one gets nothing. This justifies the 99% confidence. Even if Omega rewrote the answers of actual copies based on the decisions of other actual copies (I don't think this follows from the description of the problem), it would still be better to stick with the calculator. If the copies knew specifically that Omega appears only to those copies who have received a wrong answer from the calculator, only then would another strategy become justified, but again for trivial reasons. What am I doing wrong?
3datadataeverywhere
I think your final (larger) paragraph is confusing, but your conclusion is correct. That Omega presents you with a counterfactual only provides evidence that Omega is a jerk, not that you chose incorrectly.
0prase
I am pretty sure that I have interpreted the problem wrongly and that the confusingness of the paragraph is the result. (The only non-trivial interpretation which occurred to me yesterday was that Omega is scanning a set of people and changing the actual answers of those who obtained "even" based on instructions given by those who obtained "odd", which was, in hindsight, quite an absurd way to understand it.) See also my last reply to Vladimir Nesov in this thread.
0Vaniver
Upvoted for "Omega is a jerk."
0Vladimir_Nesov
I don't understand this passage. What "actual copies"? What doesn't follow how? What does it mean to "stick with the calculator"? (Which calculator? Who does the "sticking"?)
2prase
Let me try again, then, hopefully more clearly. Suppose that I am asked to precommit to a strategy before I know the result of the calculation (such an assumption removes the potential disagreement with CDT in Counterfactual Mugging). Also, I expect that Omega appears with certainty, no matter what result the calculator gives. So, I know that I will be given the calculator result, which is 99% correct, and asked by Omega to imagine a counterfactual world where the result was the opposite, and that I am free to determine what Omega should write in that counterfactual world. The only reason I should care is if I think that Omega could rewrite my result in the actual world. But I was not sure what algorithm Omega would follow. From the description of the problem it seemed that Omega simply asks the question and "modifies the counterfactual world", which I interpret as "changing Omega's beliefs about the counterfactual world." But anybody can do that; there is no need for Omega's exceptional qualities here, and I am certainly not going to change my beliefs after being asked this question by a janitor in place of Omega. So Omega must be following some distinct algorithm. He may scan my mind and always rewrite the result depending on how I would respond in the counterfactual world. Hence I have asked whether it rewrites the answers of the actual people, rather than only changing its fantasies about the counterfactual. Probably that interpretation was the natural one when Omega was included, but it didn't occur to me after reading the original post. I continue within this interpretation. I have four pure strategies: precommit to tell Omega to write down (in the counterfactual world) 1. the actual calculator output, 2. the counterfactual (i.e. opposite) output, 3. always even, 4. always odd. The first one always leads Omega to rewrite my answer to the opposite, which leaves me with a 99% chance of losing. The second one wins in 99% of cases. The remaining two ar
1Vladimir_Nesov
This could work if you give up control over your own test sheet to the counterfactual you, mediated by Omega (and have your own decision control the counterfactual test sheet through the counterfactual Omega). That's an elegant variant of the problem, with an additional symmetry. (In my thought experiment, the you that observed "odd" doesn't participate in the thought experiment at all, and the test sheet on the "even" side is controlled by the you that observed "even".) I can't parse a significant portion of the rest of what you wrote, but the strategies you consider, and the consequences of their use, are correct for your variant of the thought experiment.
0prase
So, what does Omega do in your experiment? What algorithm does it follow? (If my question sounds repetitive, it is because I am not only confused, but also don't see a way out of the confusion.)
0Vladimir_Nesov
Omega on the "odd" side predicts what the you on "even" side would command to be done with the test sheet on "odd" side, and does that. That's all Omegas do. You could have a janitor ask you the question on "even" side as easily, we only use "trustworthiness" attribute on "even" side, but need "predictive capability" attribute on "odd" side. An Omega always appears on "even" side to ask the question, and always appears on "odd" side to do the answer-writing.
2prase
Thanks, I had automatically assumed that Omega is parity-symmetric. Edit: So, the strategies lead to: 1. If Q is even, I get it right in 99% of cases. If Q is odd, Omega changes my answer, and I get it wrong 99% of the time. Success rate = 0.5. 2. The same reversed. If Q is even, I write down the false answer 99% of the time, but if Q is odd, Omega steps in and changes the answer, leading to 99% success. Overall 0.5. 3. If Q is even, I get it right always, and if Q is odd, the result is wrong always. Success rate = 0.5. 4. If Q is even, I get it wrong always, but if it is odd, I get it right. Also 0.5. Can it be lifted above 0.5? The ability to write "even" on the even side leads to Omega putting "even" on the odd side. It even seems that the randomness of the calculator is not needed to create the effect.

My understanding is that the question is about how to do counterfactual math. There is no essential distinction between the two types of knowledge (observational vs. logical); they are "limiting cases" of each other (you only ever observe your mental reasoning, or calculator outputs, or publications, on one end; Laplace's demon is on the other end).

ETA: my thinking made a U-turn from setting the calculator value without severing the Q->calculator correlation (i.e. treating the calculator as an observed variable with a fictional observation), to set... (read more)

0lukstafi
OK, my final understanding is that the question is whether to build the two world models with a shared Q node or with separate Q nodes. We have separate calculator nodes so by analogy I see no strong reason for there to be a shared Q node, but also no strong reason for separate Q nodes since the counterfactual calculator is severed from the Q node. My inclination is that sharing nodes (as opposed to structure+parameters) between counterfactual worlds is the wrong thing to do, but sharing nodes is a limiting case of sharing structure+parameters... so the "logical" nodes should be shared and I've been the most wrong (by entertaining all other solutions). (But then the "logical" here is defined exactly as what is shared between all legitimate counterfactuals, so it is weaker than the "classically logical"; not all formulas are logical in this sense, but the ones that a mere calculator can compute probably are.)

Consider the counterfactual where the calculator displayed "odd" instead of "even", after you've just typed in the formula Q.

This consists of just reapplying the algorithm or re-reading the previous paragraph with "even" replaced with "odd", so the answer should be 99% odd.

This is based on my understanding of a counterfactual as considering what you would do in some hypothetical "what-if" alternate branch.

2b1shop
This is how I interpreted it as well. I'm assuming something else is going on with the "updateless" part, but I don't know what it is.
0wedrifid
I took that as somewhat of a red herring. No 'updateless' reasoning seems to be required - just careful thinking.
0wedrifid
Vladimir explicitly ruled out caring what your counterfactual self would do:
0jacob_cannell
The fact that he put that in quotes, as if it should relate to the Omega clause, made me ignore it (I couldn't figure out what it could apply to). So perhaps I couldn't parse Vlad.
-10FAWS

I'm not sure what's supposed to be tricky about this. It's trading off a 99% chance of doing better in 1% of all worlds against a 1% chance of doing worse in 99% of all worlds (if I am in a world where the calculator malfunctioned). Being risk averse, I prefer being wrong in some small fraction of the worlds to an equally small chance of being wrong in all of them, so I'd want Omega to write "odd" (or, even better, leave it up to the counterfactual me, which should have the same effect but feels better).

0Vladimir_Nesov
(Apologies for a long string of mutually-contradictory replies I made to this and then deleted. Apparently I'm not in the best shape now, and the parent comment pattern-matches to elements of the correct solution, while still not making sense on further examination. One point that's clearly wrong is that risk-attitude matters for which solution is correct, whatever the other elements of this analysis mean.)
0FAWS
I don't see how you could possibly know that without knowing where the error in my reasoning is, unless you already know with high confidence that in the correct solution the options are either nowhere close to being balanced, or identical in every way anyone with consistent preferences could possibly care about. That would imply that you already know the correct solution and are just testing us. Why don't you simply post it here (at least rot13ed)? Wouldn't that greatly facilitate determining whether other solutions are due to misunderstandings/underspecifications of the problem statement or to errors in reasoning?
0Vladimir_Nesov
That's the case. Updateless analysis is pretty straightforward; see shokwave's comment. Solving the thought experiment is not the question posed by the post, just an exercise. (Although, seeing the difficulty many readers had with interpreting the intended setup of the experiment, including a solution might have prevented such misunderstanding. Anyway, I think the description of the thought experiment is sufficiently debugged now, thanks to feedback in the comments.)
0FAWS
This raised my confidence that I'm right and both of you are wrong (I had updated, based on your previous comment, to 0.3 confidence that I'm right; now I'm back to 0.8). Shokwave's analysis would be correct if Q were different in the counterfactual world. I'm going to reply there in more detail.
0[anonymous]
Correct, assuming you're only talking about the possible worlds included in the counterfactual (I didn't see this assumption at first, so wrote some likely incorrect comments which are now removed). See the disclaimer in the last paragraph. The topic of the post is not how to solve the thought experiment; that must be obvious with UDT. It's about the nature of our apparently somewhat broken intuition of observational knowledge.
0[anonymous]
Still wrong (the 99%/1% figures are incorrect), although maybe starting from a correct intuition. Why has nobody posted a careful UDT analysis yet, just to see what actually goes on in the problem? I expected better, hence didn't include such an analysis myself. The topic of the post is not how to solve the thought experiment; that must be obvious with UDT. It's about the nature of our apparently somewhat broken intuition of observational knowledge. Although at least one should clearly see the UDT analysis first, in order to discuss that.
0[anonymous]
Edit: (Although the 99% correct / 1% wrong figures you give are wrong, so I wonder if I should retract this comment...) Yes, "odd" is the correct answer, and you seem to have arrived at it by an updateless analysis of the decision problem (without making logical assumptions about which answer is correct, only considering possible observations), which I disclaimed in the last paragraph. The question that the post poses, using this thought experiment, is not which answer is correct (we already have the necessary tools to reliably tell), but what the nature of observational knowledge is, which apparently fails in this thought experiment but is a crucial element of most other reasoning, and in what sense logical knowledge is different. (Note that this analysis doesn't face any of the under-specification problems that too many of the other commenters complained about, without clearly explaining what examples of relevant ambiguity remain.)

You are Sokaling us, right?

3Perplexed
Well, from the lack of a reply and the four downvotes, I take it that the question is sincere and that at least four people believe it is meaningful. So, I have two questions: 1. How many of the people who have responded so far seem to have understood the question? 2. Suppose (counterfactually) that counterfactual Omega asked counterfactual you what factual Omega should write in the factual test (ignoring what factual you actually does, of course). Should the answer (the instruction to Omega to write either "even" or "odd") be the opposite in this counterfactual case of what it is in the case you originally presented? I don't understand the problem, but it seems that you think that the result on the calculator affects some kind of objective probability that Q is even - a probability that is the same in both factual and counterfactual worlds. It doesn't, of course. All probability is subjective. Evidence observed in one world has no influence on counterfactual worlds where the evidence did not appear. But since I suspect you already know this, it seems likely that I simply don't have a clue what your question was and why you decided to ask it in that way.
3Perplexed
Two more questions. As in the original scenario, but instead of an unreliable calculator, you have a reliable (so far) theorem prover. Type in a proposition to be proved and hit the "ProveIt" button, and immediately the display shows "Working". Then, an unpredictable amount of time later, the display may change to show either "Proven" or "Disproven". So, the base case here is that you type "Q is even" into the device and hit "ProveIt". You plan to allow only 5 minutes for the device to find a proof, and then to just guess, but fortunately the display changes to "Proven" in 4 minutes. But then, just as you finish writing "Even" on your test paper, Omega appears. 1. This time, Omega asks you to consider the counterfactual world in which the device still shows "Working" after 5 minutes. Should counterfactual Omega still write "Even" on the test? 2. In a different Omega-suggested counterfactual world, a black swan flies in the window after 4 1/2 minutes and the display shows "Disproven". You know that this means that either (a) arithmetic is inconsistent, (b) the theorem prover device is unreliable, or (c) Omega is messing with you. Does thinking about this situation cause you to change your answer to the previous question? My opinion: evidence, counter-evidence, and lack of evidence have no effect on the truth of necessary statements. They only impact the subjective probability of those statements. And subjective probabilities cannot flow backward in time (surviving the erasure of the evidence that produced those subjective probabilities). Even Omega cannot mediate this kind of paradoxical information flow.
0Vladimir_Nesov
It should write whatever you would write if you observed no answer; in this case we have indifference between the answers (betting with confidence 50%). If the device is unreliable, it's unreliable in your own event in the same sense, so your answer could be wrong (just as improbably), and the original solution stands (i.e. you write "odd" in the counterfactual). Even if Omega proves to you that arithmetic is inconsistent, this won't cause you to abandon morality, just to change the way you use arithmetic. Omega is not lying, by the problem statement. We discussed in the other thread how your description of this idea doesn't make sense to me. I have no idea what your statement means, so I can't rule on whether I disagree with it, but I certainly can't agree with what I don't understand.
-1Perplexed
Ok, so we seem to be in agreement regarding everything except my attempt to capture the rules with the (admittedly meaningless if taken literally) slogan "subjective probabilities cannot flow backward in time". It is interesting that neither of us sees any practical difference between necessary facts (the true value of Q) and contingent facts (whether the calculator made a mistake) in this exercise. The reason apparently being that we can only construct counterfactuals on contingent facts (for example, observations). We can't directly go counterfactual on necessary facts - only on observations that provide evidence regarding necessary facts. But it is impossible for observations to provide so much evidence regarding a necessary fact that we are justified in telling Omega that his counterfactual is impossible. But that apparently means that dragging Omega into this problem didn't change anything - his presence just confused people. (I notice that Shokwave - the one person who you claimed had understood the problem - is now saying that the value of Q is different in the counterfactual worlds). I am becoming ever more convinced that allowing Omega into a decision-theory example is as harmful as allowing a GoTo statement into a computer program. But then, as my analogy reveals, I am from a completely different generation.
0Vladimir_Nesov
Yes we can. Omega could offer you to control worlds where Q is actually odd. Link? The value of Q is uncertain, and this holds in considering either possible observation.
0Perplexed
I want to answer "No he can't. Not if I am in a world in which Q is actually even. Not if we are talking about the same arithmetic formula Q in each case." But I'm coming to realize that we may not even be talking the same language. For example, I don't really understand what is meant by "Omega could offer you to control worlds where ___". Are you suggesting that Omega could make the offer, though he might not have to deliver anything should such worlds not exist? I was referring to this
0Vladimir_Nesov
Yes. The offer would be to enact a given property in all possible worlds of the specified event. If there are no possible worlds in that event, this requirement is met by doing nothing.
0shokwave
I wish. If I understood the problem, I would be solving it. As far as I've noticed, he claimed I had the updateless analysis mostly right.
1Vladimir_Nesov
So far, shokwave clearly gets it. Compare it to any other sophisticated question asked in a language you aren't familiar with. Here, you need to be sufficiently comfortable with counterfactuals for the number of times the word is used in a problem statement not to pattern-match as ridiculousness. I don't think that. I don't see how "subjective" helps here. It's not clear what sense of "influence" you intend.
1Perplexed
I fully agree. Which is why I find it surprising that you did not attempt to answer the question.
0Perplexed
I intended to include whatever causes your answer to Omega in this world to make a difference in what counterfactual Omega writes on the paper in the counterfactual world.
2Vladimir_Nesov
As in Newcomb's problem, or Counterfactual Mugging, counterfactual Omega can predict your command (made in "actual" world in response to "actual" observations, including observing "actual" Omega), while remaining in the counterfactual world. It's your decision, which is a logical fact, that controls counterfactual Omega's actions.
0Perplexed
I understand that Omega (before the world-split) can predict what I will do for each possible result from the calculator. As well as predicting my response to all kinds of logic puzzles. And that this ability of Omega to predict is the thing that permits this spooky kind of acausal influence or interaction between possible worlds. But are we also giving Omega the ability to predict the results from the calculator? If so, I think that the whole meaning of the word 'counterfactual' is brought into question.
0Vladimir_Nesov
I don't see when it needs that knowledge. The calculator being deterministic (and so potentially predictable) won't change the analysis (as long as it's deterministic in a way uncorrelated with other facts under consideration), but that's the topic of Counterfactual Mugging, not this post, so I granted even quantum randomness to avoid this discussion.
0Perplexed
My point is that Omega, before the world split, knows what I will do should the calculator return "even". And he knows how I will answer various logical puzzles in that case. But unless he actually knows (in advance) what the calculator will do, there is no way that he can transfer information dependent on the "even" from me in the "even" world to the paper in the "odd" world. Omega is powerless here. His presence is irrelevant to the question. Which is why I originally thought you were Sokaling. One shouldn't multiply Omegas without necessity.
0Vladimir_Nesov
Unpack "transfer information". If Omega in "odd" world knows what you'd answer should the calculator return "even", it can use this fact to control things in its own "odd" world, all of this without it being able to predict whether the calculator displays "even" or "odd". Considering the question in advance of observing the calculator display is not necessary.
0Perplexed
Yes, and Omega in "even" world knows all about what would have happened in "odd" world. But neither Omega knows what "really" happened; that was the whole point of my question; the one in which I apparently used the word 'counterfactual' an excessive number of times. Let me try again by asking this question: What knowledge does the 'odd' Omega need to have so as to write 'odd' on the exam paper? Does he need to know (subject says to write 'odd' & subject sees 'even' on calculator)? Or does he instead need to know (subject says to write 'odd' | subject sees 'even' on calculator)? Because I am claiming that the two are different and that the second is all that Omega has. Even if Omega knows whether Q is really odd or even.
0Vladimir_Nesov
I don't know what the first option you listed means, and agree that Omega follows the second.
0Vladimir_Nesov
I agree, "actuality" is not a property of possible worlds (if we forget about impossible possible worlds for a moment), but it does make sense to talk about "current observational event" (what we usually call actual reality), and counterfactuals located outside it (where one of the observations went differently). These notions would then be referred to from the context of a particular agent.