shokwave comments on Counterfactual Calculation and Observational Knowledge - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
We are in the world where the calculator displays even, and we are 99% sure it is the world where the calculator has not made an error. This is Even World, Right Calculator. Counterfactual worlds:
All Omega told us was that in the counterfactual world we are deciding for, the calculator shows Odd. We can therefore eliminate Odd World, Wrong Calculator. Answering the question is, in essence, deciding which world we think we're looking at.
So, in the counterfactual world, we're either looking at Even World, Wrong Calculator or Odd World, Right Calculator. We have an equal prior for the world being Odd or Even - or, we think the number of Odd Worlds is equal to the number of Even Worlds. We know the ratio of Wrong Calculator worlds to Right Calculator worlds (1:99). This is, therefore, 99% evidence for Odd World. The correct decision for the counterfactual you in that world is to decide Odd World. The correct decision for you?
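As a sanity check, the odds-form Bayes arithmetic behind that 99% (a sketch; the figures are the ones stated above):

```python
from fractions import Fraction

# Prior odds Odd:Even are 1:1. A display of "odd" is 99 times more
# likely if the counterfactual world is Odd (right calculator, 99%)
# than if it is Even (wrong calculator, 1%).
prior_odds = Fraction(1, 1)
likelihood_ratio = Fraction(99, 100) / Fraction(1, 100)  # = 99

posterior_odds = prior_odds * likelihood_ratio  # 99:1 in favour of Odd
p_odd = posterior_odds / (1 + posterior_odds)

print(p_odd)  # 99/100
```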
Ignoring Bostrom's book on how to deal with observer selection effects (did Omega go looking for a Wrong Calculator world and report it? Did Omega go looking for an Odd World to report to you? Did Omega pick at random from all possible worlds? Did Omega roll a three-sided die to determine which counterfactual world to report?), I believe the correct decision is to answer Odd World for the counterfactual world, with 99% certainty if you are allowed to specify as such.
I reason that by virtue of it being a counterfactual world, it is contingent on my not having the observation of my factual world; factual world observations are screened off by the word "counterfactual".
The other possibility (which I tentatively think is wrong) is that our 99% confidence of Even World (from our factual world) comes up against our 99% confidence of Odd World (from our counterfactual) and they cancel out, bringing you back to your prior. So you should flip a coin to decide even or odd. I think this is wrong because 1) I think you could reason from 50% in the counterfactual world to 50% in the factual world, which is wrong, and 2) this setup is identical to punching in the formula, pressing the button and observing "even", then pressing the button again and observing "odd". I don't think you can treat counterfactual worlds as additional observations in this manner.
edit: It occurs to me that with Omega telling you about the counterfactual world, you are receiving a second observation. For this understanding, you would specify Even World with 99% confidence in the factual world and either Even or Odd World depending on how the coin landed for the counterfactual world.
Vladimir says that "Omega doesn't touch any calculator". If the counterfactual is entered at the point where the computation starts and Omega tells you that it results in Odd (ETA2: rereading Vladimir's comment, this is not the case), then it is a second observation contributed by Omega running the calculator and should affect both worlds. If on the other hand the counterfactual is just about the display, then the counterfactual Omega will likely write down Odd (ETA3: not my current answer). So I agree with your analysis. I see it this way: real Omegas cannot write on counterfactual paper.
ETA: the "counterfactual" built as "being in another quantum branch of exactly the same universe" strikes me as being of the sort where Omega does run the calculator again, so it should affect both worlds as another observation.
ETA2: I've changed my mind about there being an independent observation.
Actually, isn't this the very heart of the matter? In my other comment here I assumed Omega would always ask what the correct answer is if the calculator shows The Other Result; if that's not the case everything changes.
The answer does depend on this fact, but since this fact wasn't specified, assume uncertainty (say, Omega always appears when you observe "even" and had pasta for breakfast).
Not by my understanding (but I decided to address it in a top-level comment). ETA: yes, in my updated understanding.
This would be correct if Q could be different, but Q is the same both in the counterfactual and the actual world. There is no possibility of the actual world being Even World and the counterfactual Odd World.
The possibilities are:
Actual: Even World, Right Calculator (99% of Even Worlds); Counterfactual: Even World, Wrong Calculator (1% of Even Worlds).
Actual: Odd World, Wrong Calculator (1% of Odd Worlds); Counterfactual: Odd World, Right Calculator (99% of Odd Worlds).
The prior probability of either is 50%. If we assume that Omega randomly picks one you out of 100% of possible worlds (either 100% of all Even Worlds or 100% of all Odd Worlds) to decide for all possible worlds where the calculator result is different (but the correct answer is the same), then there is a 99% chance all worlds are Even and your choice affects 1% of all worlds, and a 1% chance all worlds are Odd and your choice affects 99% of all worlds. The result of the calculator in the counterfactual world doesn't provide any evidence on whether all worlds are Even or all worlds are Odd, since in either case there would be such a world to talk about.
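The world-counting under this first assumption can be tabulated directly (a Python sketch; the 99%/1% figures are the ones used above, and "gain" here means the fraction of all worlds whose answer the counterfactual decision makes correct):

```python
from fractions import Fraction

p_all_even = Fraction(99, 100)  # chance all worlds are Even, per above
p_all_odd = Fraction(1, 100)    # chance all worlds are Odd

affected_if_even = Fraction(1, 100)   # Wrong Calculator worlds showing "odd"
affected_if_odd = Fraction(99, 100)   # Right Calculator worlds showing "odd"

# Fraction of all worlds made correct by each counterfactual answer:
gain_answer_even = p_all_even * affected_if_even
gain_answer_odd = p_all_odd * affected_if_odd

print(gain_answer_even, gain_answer_odd)  # 99/10000 each: the answers tie
```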
If we assume that Omega randomly visits one world and randomly mentions the calculator result of one other possible world, and it just happened to be the case that in that other world the result was different; or if Omega randomly picks a world, then randomly picks a world with the opposite calculator result and tosses a coin as to which world to visit and which to mention, then the calculator result in the counterfactual world is equally relevant, and hearing Omega talk about it is just as good as running the calculator twice. In this case you are equally likely to be in an Odd world and might just as well toss a coin as to which result you fill in yourself.
This doesn't square with my interpretation of the premises of the question. We are unsure of Q's parity. Our prior is 50:50 odd, even. We are also unsure of calculator's trustworthiness. Our prior is 99:1 right, wrong. Therefore - on my understanding of counterfactuality - both options for both uncertainties need to be on the table.
I am unconvinced you can ignore your uncertainty on Q's parity by arguing that it will come out only one way regardless of your uncertainty - this is true for coinflips in deterministic physics, but that doesn't mean we can't consider the counterfactual where the coin comes up tails.
From the original post:
Clarified here and here.
We cannot determine Q's parity, except by fallible calculator. When you say Q is the same, you seem to be including "Q's parity is the same".
Hmm. Maybe this will help?
But consider this situation:
These situations are clearly the counterfactuals of each other - that is, when scenario 1 says "the counterfactual world" it is saying "scenario 2", and vice versa. The interpretations given in the second half of each contradict each other - the first scenario attempts to decide for the second scenario and gets it wrong; the second scenario attempts to decide for the first and gets it wrong. Whence this contradiction?
Yes, that would be a counterfactual. But NOT the counterfactual under consideration. The counterfactual under consideration was the calculator result being different but Q (both the number and the formula, and thus their parity) being the same. Unless Nesov was either deliberately misleading or completely failed in his intention to clarify anything in the comments linked to. If "Q is the same formula" is supposed to clarify anything, then everything about Q has to be the same. If the representation of Q in the formula was supposed to be the same, but the actual value possibly counterfactually different, then answering only that the formula is the same is obscuration, not clarification.
I disagree. Recall that I specified this in each case:
Q (both the number and the formula, and thus the parity) is the same in both scenarios. The actual value is not counterfactually different - it's the same value in the safe, both times.
If you agree that Q's parity is the same, I'm not sure what you are disagreeing with. It's not possible for Q to be odd in the counterfactual and even in actuality, so if Q is odd in the counterfactual that implies it is also odd in actuality, and vice versa. Thus it's not possible for the calculator to be right in both the counterfactual and reality simultaneously, and assuming it to be right in the counterfactual implies that it's wrong in actuality. Therefore you can reduce everything to the two cases I used: Q even / actual calculator right / counterfactual calculator wrong, or Q odd / actual calculator wrong / counterfactual calculator right.
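The reduction to these two cases can be checked by brute enumeration (a sketch; the variable names are mine):

```python
from itertools import product

def flip(parity):
    """Return the opposite parity label."""
    return "odd" if parity == "even" else "even"

# Enumerate every combination of (Q's parity, actual calculator right?,
# counterfactual calculator right?) and keep only those consistent with
# the setup: the actual display shows "even", the counterfactual "odd",
# and Q is the same in both.
cases = []
for parity, actual_right, cf_right in product(
    ["even", "odd"], [True, False], [True, False]
):
    actual_display = parity if actual_right else flip(parity)
    cf_display = parity if cf_right else flip(parity)
    if actual_display == "even" and cf_display == "odd":
        cases.append((parity, actual_right, cf_right))

print(cases)  # [('even', True, False), ('odd', False, True)]
```

Only the two cases named above survive: Q even with the counterfactual calculator wrong, or Q odd with the actual calculator wrong.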
Maybe this could be more enlightening. When you control things, one of the necessary requirements is that you have logical uncertainty about some property of the thing you control. You start with having a definition of the control target, but not knowing some of its properties. And then you might be able to infer a dependence of one of its properties on your action. This allows you to personally determine what is that property of a structure whose definition you already know. See my posts on ADT for more detail.
I have been positing that these two cases are counterfactuals of each other. Before one of these two cases occurs, we don't know which one will occur. It is possible to consider being in the other case.
The problem is symmetrical. You can just copy everything, replace odd with even and vice versa, and multiply everything by 0.5; then you also have the worlds where you see odd and Omega offers you to replace the result in counterfactuals where it came up even and where Q has the same parity. That doesn't change the fact that Q is the same in the world that decides and in the counterfactuals that are affected. Omega also transposing your choice to impossible worlds (or predicting what would happen in impossible worlds and imposing that on what happens in real worlds) would be a different problem (one that violates the condition that Q be the same in the counterfactual, but seems to be the problem you solved).
If someone is sure enough that I'm wrong to downvote all my posts on this, they should be able to tell me where I'm wrong. I would be extremely interested in finding out.
I don't know why you were downvoted. But I do notice that somewhere on this thread, the meaning of "Even World" has changed from what it was when Shokwave introduced the term. Originally it meant a world whose calculator showed 'Even'.
You're reasoning about the counterfactual using observational knowledge, i.e. making exactly the error whose nature puzzles me and is the subject of the post. In safely correct (but unenlightening about this error) updateless analysis, on the other hand, you don't update on observations, so shouldn't say things like "there is a 99% chance all worlds are Even".
No. That's completely insubstantial. Replace "even" with "same parity" and "odd" with "different parity" in my argument and the outcome is the same. The decision can be safely made before making any observations at all.
EDIT: And even in the formulation given I don't update on personally having seen the even outcome (which is irrelevant, there is no substantial difference between me and the mes at that point) but Omega visiting me in a world where the calculator result came up even.
Please restate in more detail how you arrived at the following conclusion, and what made it so instead of the prior 50/50 for Even/Odd. It appears that it must be the observation of "even", otherwise what privileged Even over Odd?
See the edit. If Omega randomly visits a possible world I can say ahead of time that there is a 99% chance that in that particular world the calculator result is correct and the decision will affect 1% of all worlds and a 1% chance that the result is wrong and the decision affects 99% of all worlds.
So you know a priori that the answer is Even, without even looking at the calculator? That can't be right.
(You're assuming that you know that Omega only arrives in "even" worlds, and updating on observing Omega, even before observing it. But in the same movement, you update on the calculator showing "even". Omega doesn't show up in the "odd" world, so you can't update on the fact that it shows up, other than by observing it, or alternatively observing "even" given the assumption of equivalence of these events.)
Of course not.
No. I'm assuming that either even is correct in all worlds or odd is correct in all worlds (0.5 prior for either). If Omega randomly picks a world, and the chance of the calculator being correct is independent of that and 99% everywhere, then there is a 99% chance of the calculator being correct in the particular world Omega arrives in. If odd is correct, Omega is 99% likely to arrive in a world where the calculator says odd; and if the calculator says odd in the particular world Omega arrives in, there is a 99% chance that's because odd is correct.
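Spelling that step out as a Bayes computation (a sketch, assuming the 0.5 prior and independent 99% reliability as stated):

```python
from fractions import Fraction

p_odd = Fraction(1, 2)       # prior: odd is correct in all worlds
p_right = Fraction(99, 100)  # calculator reliability, independent of parity

# Chance the display reads "odd" in the world Omega happens to visit:
p_show_odd = p_odd * p_right + (1 - p_odd) * (1 - p_right)  # = 1/2

# Chance odd is actually correct, given Omega's world shows "odd":
p_odd_given_show = p_odd * p_right / p_show_odd

print(p_odd_given_show)  # 99/100
```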
EDIT:
If I were
the probability of even being correct would be 50% no matter what, and there would be a 50% chance each for affecting 99% of all worlds or 1% of all worlds.
I seem to agree with all of the above statements. The conditional probabilities are indeed this way. But it's incorrect to use these conditional probabilities (which is to say, probabilities of Odd/Even after updating on observing "even") to compute expected utility for the counterfactual. In a prior comment, you write:
99% is P(Even|Omega,"even"), that is to say, it's the probability of Even updated on the observations (events) Omega and "even".
No. There is no problem with using conditional probabilities if you use the correct conditional probabilities, that is the probabilities from wherever the decision happens, not from what you personally encounter. And I never claimed that any of the pieces you were quoting were part of an updateless analysis, just that it made no difference.
I would try to write a Wei Dai style world program at this point, but I know no programming at all and am unsure how drawing at random is supposed to be represented. It would be the same as the program for this game, though:
1 black and 99 white balls in an urn. You prefer white balls. You may decide to draw a ball and change all balls of the other color to balls of the color drawn, and must decide before the draw is made. (or to make it slightly more complicated: Someone else secretly flips a coin whether you get points for black or white balls. You get 99 balls of the color you get points for and one ball of the other color).
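One possible rendering of that urn game as a program (a sketch only; the exact world-program format would differ, and the names are made up):

```python
from fractions import Fraction

# Urn game from above: 1 black and 99 white balls, you prefer white.
# Before the draw you may commit to repainting all balls of the other
# colour to match whichever colour is drawn.
p_white = Fraction(99, 100)
p_black = Fraction(1, 100)

ev_no_commit = 99  # expected white balls if you do nothing

# If you commit: draw white (99%) -> all 100 white; draw black (1%) -> 0 white.
ev_commit = p_white * 100 + p_black * 0

print(ev_no_commit, ev_commit)  # 99 and 99: committing changes nothing
```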
I believe the above is correct updateless analysis of the thought experiment. (Which is a natural step to take in considering it, but not the point of the post, see its last paragraph.)
Exactly. The correct decision for factual you may be different from the correct decision for counterfactual you.