Followup to: Anthropic Reasoning in UDT by Wei Dai
Suppose that I flip a logical coin - e.g. look at some binary digit of pi unknown to either of us - and depending on the result, either create a billion of you in green rooms and one of you in a red room if the coin came up 1; or, if the coin came up 0, create one of you in a green room and a billion of you in red rooms. You go to sleep at the start of the experiment, and wake up in a red room.
Do you reason that the coin very probably came up 0? Thinking, perhaps: "If the coin came up 1, there'd be a billion of me in green rooms and only one of me in a red room, and in that case, it'd be very surprising that I found myself in a red room."
What is your degree of subjective credence - your posterior probability - that the logical coin came up 1?
There are only two answers I can see that might in principle be coherent, and they are "50%" and "a billion to one against".
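To make the two answers concrete, here is a minimal sketch (mine, not part of the original argument) of where each number comes from. Treating "updating" as conditioning on which room a randomly selected copy of you wakes in is itself an assumption of the sketch, and the function name and `anthropic_update` flag are illustrative only.

```python
# Minimal sketch of the two candidate answers (illustrative, not from the post).
def posterior_coin_is_1(n_majority=10**9, anthropic_update=True):
    """Posterior probability that the logical coin came up 1, given that
    you wake in a RED room. Prior over the coin is 1/2 each way. If the
    coin is 1 there are n_majority copies in green rooms and 1 in a red
    room; if it is 0, the room colors are swapped."""
    if not anthropic_update:
        # "I'm not going to update on it myself": the answer stays at 50%.
        return 0.5
    total = n_majority + 1
    p_red_given_1 = 1 / total            # only 1 of ~a billion copies is in red
    p_red_given_0 = n_majority / total   # nearly every copy is in red
    return (0.5 * p_red_given_1) / (0.5 * p_red_given_1 + 0.5 * p_red_given_0)

print(posterior_coin_is_1(anthropic_update=True))   # ~1e-9: "a billion to one against"
print(posterior_coin_is_1(anthropic_update=False))  # 0.5:   "50%"
```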
Tomorrow I'll talk about what sort of trouble you run into if you reply "a billion to one".
But for today, suppose you reply "50%". Thinking, perhaps: "I don't understand this whole consciousness rigamarole, I wouldn't try to program a computer to update on it, and I'm not going to update on it myself."
In that case, why don't you believe you're a Boltzmann brain?
Back when the laws of thermodynamics were being worked out, there was first asked the question: "Why did the universe seem to start from a condition of low entropy?" Boltzmann suggested that the larger universe was in a state of high entropy, but that, given a long enough time, regions of low entropy would spontaneously occur - wait long enough, and the egg will unscramble itself - and that our own universe was such a region.
The problem with this explanation is now known as the "Boltzmann brain" problem; namely, while Hubble-region-sized low-entropy fluctuations will occasionally occur, it would be far more likely - though still not likely in any absolute sense - for a handful of particles to come together in a configuration performing a computation that lasted just long enough to think a single conscious thought (whatever that means) before dissolving back into chaos. A random reverse-entropy fluctuation is exponentially vastly more likely to take place in a small region than a large one.
So on Boltzmann's attempt to explain the low-entropy initial condition of the universe as a random statistical fluctuation, it's far more likely that we are a little blob of chaos temporarily hallucinating the rest of the universe, than that a multi-billion-light-year region spontaneously ordered itself. And most such little blobs of chaos will dissolve in the next moment.
"Well," you say, "that may be an unpleasant prediction, but that's no license to reject it." But wait, it gets worse: The vast majority of Boltzmann brains have experiences much less ordered than what you're seeing right now. Even if a blob of chaos coughs up a visual cortex (or equivalent), that visual cortex is unlikely to see a highly ordered visual field - the vast majority of possible visual fields more closely resemble "static on a television screen" than "words on a computer screen". So on the Boltzmann hypothesis, highly ordered experiences like the ones we are having now, constitute an exponentially infinitesimal fraction of all experiences.
In contrast, suppose one more simple law of physics not presently understood, which forces the initial condition of the universe to be low-entropy. Then the exponentially vast majority of brains occur as the result of ordered processes in ordered regions, and it's not at all surprising that we find ourselves having ordered experiences.
But wait! This is just the same sort of logic (is it?) that one would use to say, "Well, if the logical coin came up 1, then it's very surprising to find myself in a red room, since the vast majority of people-like-me are in green rooms; but if the logical coin came up 0, then most of me are in red rooms, and it's not surprising that I'm in a red room."
If you reject that reasoning, saying, "There's only one me, and that person seeing a red room does exist, even if the logical coin came up 1," then you should have no trouble saying, "There's only one me, having a highly ordered experience, and that person exists even if all experiences are generated at random by a Boltzmann-brain process or something similar to it." And furthermore, the Boltzmann-brain process is a much simpler process - it could occur with only the barest sort of causal structure, no need to postulate the full complexity of our own hallucinated universe. So if you're not updating on the apparent conditional rarity of having a highly ordered experience of gravity, then you should just believe the very simple hypothesis of a high-volume random experience generator, which would necessarily create your current experiences - albeit with extreme relative infrequency, but you don't care about that.
Now, doesn't the Boltzmann-brain hypothesis also predict that reality will dissolve into chaos in the next moment? Well, it predicts that the vast majority of blobs who experience this moment cease to exist afterward; and that among the few who don't dissolve, the vast majority experience chaotic successors. But there would be an infinitesimal fraction of a fraction of successors who experience ordered successor-states as well. And you're not alarmed by the rarity of those successors, just as you're not alarmed by the rarity of waking up in a red room if the logical coin came up 1 - right?
So even though your friend is standing right next to you, saying, "I predict the sky will not turn into green pumpkins and explode - oh, look, I was successful again!", you are not disturbed by their unbroken string of successes. You just keep on saying, "Well, it was necessarily true that someone would have an ordered successor experience, on the Boltzmann-brain hypothesis, and that just happens to be us, but in the next instant I will sprout wings and fly away."
Now this is not quite a logical contradiction. But the total rejection of all science, induction, and inference in favor of an unrelinquishable faith that the next moment will dissolve into pure chaos, is sufficiently unpalatable that even I decline to bite that bullet.
And so I still can't seem to dispense with anthropic reasoning - I can't seem to dispense with trying to think about how many of me or how much of me there are, which in turn requires that I think about what sort of process constitutes a me. Even though I confess myself to be sorely confused, about what could possibly make a certain computation "real" or "not real", or how some universes and experiences could be quantitatively realer than others (possess more reality-fluid, as 'twere), and I still don't know what exactly makes a causal process count as something I might have been for purposes of being surprised to find myself as me, or for that matter, what exactly is a causal process.
Indeed this is all greatly and terribly confusing unto me, and I would be less confused if I could go through life while only answering questions like "Given the Peano axioms, what is SS0 + SS0?"
But then I have no defense against the one who says to me, "Why don't you think you're a Boltzmann brain? Why don't you think you're the result of an all-possible-experiences generator? Why don't you think that gravity is a matter of branching worlds in which all objects accelerate in all directions and in some worlds all the observed objects happen to be accelerating downward? It explains all your observations, in the sense of logically necessitating them."
I want to reply, "But then most people don't have experiences this ordered, so finding myself with an ordered experience is, on your hypothesis, very surprising. Even if there are some versions of me that exist in regions or universes where they arose by chaotic chance, I anticipate, for purposes of predicting my future experiences, that most of my existence is encoded in regions and universes where I am the product of ordered processes."
And I currently know of no way to reply thusly, that does not make use of poorly defined concepts like "number of real processes" or "amount of real processes"; and "people", and "me", and "anticipate" and "future experience".
Of course confusion exists in the mind, not in reality, and it would not be the least bit surprising if a resolution of this problem were to dispense with such notions as "real" and "people" and "my future". But I do not presently have that resolution.
(Tomorrow I will argue that anthropic updates must be illegal and that the correct answer to the original problem must be "50%".)
If the question were "What odds should you bet at?", it could be answered using your values. Suppose each copy of you has $1000, and copies of you in a red room are offered a bet that costs $1000 and pays $1001 if the Nth bit of pi is 0. Which do you prefer (a rough payoff sketch follows the two options):
To refuse the bet?
To take the bet?
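One way to cash out "answered using your values" - an assumption on my part, since the comment leaves the valuation open - is to sum dollars over every copy of you. A rough sketch:

```python
# Rough payoff sketch for the bet above (my framing; it assumes you value the
# total dollars summed over every copy of you, which the comment leaves open).
def total_dollar_change(bit_of_pi, take_bet):
    """Net change in total dollars across all copies, given the bit's value."""
    if not take_bet:
        return 0
    n_red_rooms = 10**9 if bit_of_pi == 0 else 1   # bit 0 -> a billion red rooms
    per_red_copy = (1001 - 1000) if bit_of_pi == 0 else -1000
    return n_red_rooms * per_red_copy

# With a 50/50 prior over the unknown bit, taking the bet looks strongly
# positive on this valuation: almost all of "you" is in red rooms exactly
# in the case where the bet pays off.
expected = 0.5 * total_dollar_change(0, True) + 0.5 * total_dollar_change(1, True)
print(expected)   # 0.5 * 10**9 - 0.5 * 1000 = 499_999_500.0
```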
But the question is "What is your posterior probability?" This is not a decision problem, so I don't know that it has an answer.
I think it may be natural to ask instead: "Given that your learned cognitive system of rational prediction is competing for influence over anticipations used in making decisions, in a brain which awards influence over anticipation to different cognitive systems depending on the success of their past reported predictions, which probability should your rational prediction system report to the brain's anticipation-influence-awarding mechanisms?"
Suppose you know the following:
This question could be answered using your values. Which would you prefer:
In both green rooms and red rooms, to rationally predict 1:1 probabilities of the experiences of being informed that the Nth bit of pi is 0 or 1?
In red rooms, to rationally predict a 1,000,000,000:1 probability of the experience of being informed that the Nth bit of pi is 0, and in green rooms, to rationally predict a 1,000,000,000:1 probability of the experience of being informed that the Nth bit of pi is 1?
The answer depends on the starting relative influences and on the details of the function from amounts of non-rational anticipation to amounts of harm. But for perspective, the ratio 2:1,000,000,001 can be reversed with 29.9 copies of the ratio 2,000,000,000:1,000,000,001.
If your copies are being merged, the optimal "rational" prediction would depend on the details of the merging algorithm. If the merging algorithm took the arithmetic mean of the updated influences, the optimal prediction would still depend on the starting relative influences and the harm from non-rational anticipations. But if the merging algorithm multiplicatively combined the likelihood ratios from every copy's predictions, then the second prediction rule would be optimal.
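As a sanity check on that last claim, here is a small sketch (my own; it reuses the billion-and-one-copies setup from the post) of the total log-likelihood each prediction rule earns when every copy's reported likelihood for the announced bit is multiplied together:

```python
# Sketch of why the second rule wins under multiplicative combination
# (my own check; for the true bit there are 10**9 copies in the "common"
# room color and 1 copy in the "rare" color).
from math import log

N_COMMON, N_RARE = 10**9, 1

def total_log_likelihood(p_common_copy, p_rare_copy):
    """Sum of log-likelihoods assigned to the true bit across all copies."""
    return N_COMMON * log(p_common_copy) + N_RARE * log(p_rare_copy)

# Rule 1: every copy reports 1:1, so every copy gives the true bit probability 0.5.
rule_1 = total_log_likelihood(0.5, 0.5)

# Rule 2: every copy reports a billion to one in favor of the bit that would
# have produced many copies of its own room color; the common copies are
# right and the one rare copy is wrong.
p_right = N_COMMON / (N_COMMON + 1)
p_wrong = 1 / (N_COMMON + 1)
rule_2 = total_log_likelihood(p_right, p_wrong)

print(f"rule 1 (1:1 everywhere): {rule_1:.3g}")   # about -6.9e8
print(f"rule 2 (1e9:1 update):   {rule_2:.3g}")   # about -21.7
# The higher (less negative) total wins under multiplicative merging: rule 2.
```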
To make decisions about how to value possibly logically impossible worlds, it may help to imagine that the decision problem will be iterated with the (N+1)th bit of pi, the (N+2)th bit, ...
(If the rational prediction system already has complete control of your brain's anticipations, then there may be no reason to predict anything that does not affect a decision.)
Let me suggest that for anthropic reasoning, you are not directly calculating expected utility but actually trying to determine priors instead. And this traces back to Occam's razor and hence complexity measures (complexity prior). Further, it is not probabilities that you are trying to directly manipulate, but degrees of similarity (i.e., which reference class does a given observer fall into? What is the degree of similarity between given algorithms?). So rather than utility and probability, you are actually trying to manipulate something more basic.