Followup to: Anthropic Reasoning in UDT by Wei Dai

Suppose that I flip a logical coin - e.g. look at some binary digit of pi unknown to either of us - and depending on the result, either create a billion of you in green rooms and one of you in a red room if the coin came up 1; or, if the coin came up 0, create one of you in a green room and a billion of you in red rooms.  You go to sleep at the start of the experiment, and wake up in a red room.

Do you reason that the coin very probably came up 0?  Thinking, perhaps:  "If the coin came up 1, there'd be a billion of me in green rooms and only one of me in a red room, and in that case, it'd be very surprising that I found myself in a red room."

What is your degree of subjective credence - your posterior probability - that the logical coin came up 1?

There are only two answers I can see that might in principle be coherent, and they are "50%" and "a billion to one against".
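To make the two candidate answers concrete (a minimal sketch, assuming the anthropic reasoner treats "which copy am I?" as a uniform random draw over the copies that exist):

    P(coin = 1 | I see red) = P(red | 1) P(1) / [ P(red | 1) P(1) + P(red | 0) P(0) ]
                            = [1/(10^9 + 1)] (1/2) / ( [1/(10^9 + 1)] (1/2) + [10^9/(10^9 + 1)] (1/2) )
                            = 1/(10^9 + 1)

which is roughly a billion to one against. The "50%" answer corresponds to refusing the step P(red | 1) = 1/(10^9 + 1), i.e. refusing to treat "which of the copies I am" as something to update on.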

Tomorrow I'll talk about what sort of trouble you run into if you reply "a billion to one".

But for today, suppose you reply "50%".  Thinking, perhaps:  "I don't understand this whole consciousness rigamarole, I wouldn't try to program a computer to update on it, and I'm not going to update on it myself."

In that case, why don't you believe you're a Boltzmann brain?

Back when the laws of thermodynamics were being worked out, there was first asked the question:  "Why did the universe seem to start from a condition of low entropy?"  Boltzmann suggested that the larger universe was in a state of high entropy, but that, given a long enough time, regions of low entropy would spontaneously occur - wait long enough, and the egg will unscramble itself - and that our own universe was such a region.

The problem with this explanation is now known as the "Boltzmann brain" problem; namely, while Hubble-region-sized low-entropy fluctuations will occasionally occur, it would be far more likely - though still not likely in any absolute sense - for a handful of particles to come together in a configuration performing a computation that lasted just long enough to think a single conscious thought (whatever that means) before dissolving back into chaos.  A random reverse-entropy fluctuation is exponentially vastly more likely to take place in a small region than a large one.
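To put rough numbers on "exponentially vastly more likely" (a back-of-the-envelope illustration using the standard fluctuation formula; the specific exponents are only order-of-magnitude guesses): the probability of a spontaneous fluctuation that lowers entropy by ΔS goes as

    P ~ exp(-ΔS / k_B)

and the required ΔS scales roughly with the amount of matter being ordered. Ordering a brain's worth of particles (perhaps 10^26 atoms) already costs something like exp(-10^26) in probability; ordering a Hubble volume's worth costs something more like exp(-10^90). The difference between the two is a double exponential, which is what makes the lone-brain fluctuation so overwhelmingly favored.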

So on Boltzmann's attempt to explain the low-entropy initial condition of the universe as a random statistical fluctuation, it's far more likely that we are a little blob of chaos temporarily hallucinating the rest of the universe, than that a multi-billion-light-year region spontaneously ordered itself.  And most such little blobs of chaos will dissolve in the next moment.

"Well," you say, "that may be an unpleasant prediction, but that's no license to reject it."  But wait, it gets worse:  The vast majority of Boltzmann brains have experiences much less ordered than what you're seeing right now.  Even if a blob of chaos coughs up a visual cortex (or equivalent), that visual cortex is unlikely to see a highly ordered visual field - the vast majority of possible visual fields more closely resemble "static on a television screen" than "words on a computer screen".  So on the Boltzmann hypothesis, highly ordered experiences like the ones we are having now, constitute an exponentially infinitesimal fraction of all experiences.

In contrast, suppose one more simple law of physics not presently understood, which forces the initial condition of the universe to be low-entropy.  Then the exponentially vast majority of brains occur as the result of ordered processes in ordered regions, and it's not at all surprising that we find ourselves having ordered experiences.

But wait!  This is just the same sort of logic (is it?) that one would use to say, "Well, if the logical coin came up 1, then it's very surprising to find myself in a red room, since the vast majority of people-like-me are in green rooms; but if the logical coin came up 0, then most of me are in red rooms, and it's not surprising that I'm in a red room."

If you reject that reasoning, saying, "There's only one me, and that person seeing a red room does exist, even if the logical coin came up 1" then you should have no trouble saying, "There's only one me, having a highly ordered experience, and that person exists even if all experiences are generated at random by a Boltzmann-brain process or something similar to it."  And furthermore, the Boltzmann-brain process is a much simpler process - it could occur with only the barest sort of causal structure, no need to postulate the full complexity of our own hallucinated universe.  So if you're not updating on the apparent conditional rarity of having a highly ordered experience of gravity, then you should just believe the very simple hypothesis of a high-volume random experience generator, which would necessarily create your current experiences - albeit with extreme relative infrequency, but you don't care about that.

Now, doesn't the Boltzmann-brain hypothesis also predict that reality will dissolve into chaos in the next moment?  Well, it predicts that the vast majority of blobs who experience this moment, cease to exist after; and that among the few who don't dissolve, the vast majority of those experience chaotic successors.  But there would be an infinitesimal fraction of a fraction of successors, who experience ordered successor-states as well.  And you're not alarmed by the rarity of those successors, just as you're not alarmed by the rarity of waking up in a red room if the logical coin came up 1 - right?

So even though your friend is standing right next to you, saying, "I predict the sky will not turn into green pumpkins and explode - oh, look, I was successful again!", you are not disturbed by their unbroken string of successes.  You just keep on saying, "Well, it was necessarily true that someone would have an ordered successor experience, on the Boltzmann-brain hypothesis, and that just happens to be us, but in the next instant I will sprout wings and fly away."

Now this is not quite a logical contradiction.  But the total rejection of all science, induction, and inference in favor of an unrelinquishable faith that the next moment will dissolve into pure chaos, is sufficiently unpalatable that even I decline to bite that bullet.

And so I still can't seem to dispense with anthropic reasoning - I can't seem to dispense with trying to think about how many of me or how much of me there are, which in turn requires that I think about what sort of process constitutes a me.  Even though I confess myself to be sorely confused, about what could possibly make a certain computation "real" or "not real", or how some universes and experiences could be quantitatively realer than others (possess more reality-fluid, as 'twere), and I still don't know what exactly makes a causal process count as something I might have been for purposes of being surprised to find myself as me, or for that matter, what exactly is a causal process.

Indeed this is all greatly and terribly confusing unto me, and I would be less confused if I could go through life while only answering questions like "Given the Peano axioms, what is SS0 + SS0?"

But then I have no defense against the one who says to me, "Why don't you think you're a Boltzmann brain?  Why don't you think you're the result of an all-possible-experiences generator?  Why don't you think that gravity is a matter of branching worlds in which all objects accelerate in all directions and in some worlds all the observed objects happen to be accelerating downward?  It explains all your observations, in the sense of logically necessitating them."

I want to reply, "But then most people don't have experiences this ordered, so finding myself with an ordered experience is, on your hypothesis, very surprising.  Even if there are some versions of me that exist in regions or universes where they arose by chaotic chance, I anticipate, for purposes of predicting my future experiences, that most of my existence is encoded in regions and universes where I am the product of ordered processes."

And I currently know of no way to reply thusly, that does not make use of poorly defined concepts like "number of real processes" or "amount of real processes"; and "people", and "me", and "anticipate" and "future experience".

Of course confusion exists in the mind, not in reality, and it would not be the least bit surprising if a resolution of this problem were to dispense with such notions as "real" and "people" and "my future".  But I do not presently have that resolution.

(Tomorrow I will argue that anthropic updates must be illegal and that the correct answer to the original problem must be "50%".)

Comments (72)

Necromancy, but: easy. Boltzmann brains obey little or no causality, and thus cannot possibly benefit from rationality. As such, rationality is wasted on them. Optimize for the signal, not for the noise.

What is your degree of subjective credence - your posterior probability - that the logical coin came up 1?

. . .

(Tomorrow I will argue that anthropic updates must be illegal and that the correct answer to the original problem must be "50%".)

If the question were, "What odds should you bet at?", it could be answered using your values. Suppose each copy of you has $1000, and copies of you in a red room are offered a bet that costs $1000 and pays $1001 if the Nth bit of pi is 0. Which do you prefer:

  • To refuse the bet?

    • With 50% subjective logical probability, the Nth bit of pi will be 0 and you will have $1,000 per copy.
    • With 50% subjective logical probability, the Nth bit of pi will be 1 and you will have $1,000 per copy.
  • To take the bet?

    • With 50% subjective logical probability, the Nth bit of pi will be 0 and you will have $1,000.999999999 per copy.
    • With 50% subjective logical probability, the Nth bit of pi will be 1 and you will have $999.999999 per copy.
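For concreteness, here is the arithmetic behind those per-copy figures (a small Python sketch; the $1000/$1001 stakes and the billion-to-one copy counts are taken from the setup above):

    # Reproduce the per-copy payouts quoted above.
    # Assumes 10^9 copies wake in red rooms when the bit is 0, and 1 copy when the bit is 1.
    N = 10**9
    keep, win = 1000, 1001          # refusing keeps $1000; a winning bet returns $1001

    # Bit is 0: the billion red-room copies take the bet and win; the lone green-room copy keeps $1000.
    total_if_0 = N * win + 1 * keep
    # Bit is 1: the lone red-room copy takes the bet and loses its stake; the billion green-room copies keep $1000.
    total_if_1 = 1 * 0 + N * keep

    copies = N + 1
    print(total_if_0 / copies)      # ~ 1000.999999999
    print(total_if_1 / copies)      # ~ 999.999999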

But the question is, "What is your posterior probability?" This is not a decision problem, so I don't know that it has an answer.

I think it may be natural to ask instead: "Given that y... (read more)

SforSingularity
I agree with Steve; we have to take a step back and ask not for probabilities but for decision algorithms that aim to achieve certain goals, then it all makes sense; it has to - based upon materialism, whatever definition of "you" you try to settle upon, "you" is some set of physical objects that behave according to a certain decision algorithm, and given the decision algorithm, "you" will have a well-defined expected future reward.
[anonymous]
Let me suggest that for anthropic reasoning, you are not directly calculating expected utility but actually trying to determine priors instead. And this traces back to Occam's razor and hence complexity measures (a complexity prior). Further, it is not probabilities that you are trying to directly manipulate, but degrees of similarity (i.e., which reference class does a given observer fall into? - what is the degree of similarity between given algorithms?). So rather than utility and probability, you are actually trying to manipulate something more basic, i.e., complexity and similarity measures.

Suggested analogy: Complexity (is like) Utility; Similarity (is like) Probability.

Let me suggest that rather than trying to 'maximize utility' directly, you should first attempt to 'minimize complexity' using a new generalized form of rationality based on the above analogy (the putative method would be an entirely new type of rationality which subsumes ordinary Bayesian reasoning as a special case). The 'expected complexity' (analogous to 'expected utility') would be based on a 'complexity function' (analogous to 'utility function') that combines similarity measures (similarities between algorithms) with the complexities of given outcomes. The utilities and probabilities would be derived from these calculations (ordinary Bayesian rationality would be derivative rather than fundamental).

* M J Geddes (Black Swan Siren!)

The skeleton of the argument is:

  1. Present a particular thought experiment, intended to provoke anthropic reasoning. There are two moderately plausible answers, "50%" and "a billion to one against".
  2. Assume, for the sake of argument, that the answer to the thought experiment is 50%. Note that the "50%" answer corresponds to ignoring the color of the room - "not updating on it", in Bayesian jargon.
  3. The thought experiment is analogous to the Boltzmann-brain hypothesis. In particular, the color of the room corresponds to the ordered-ness of our experiences.
  4. With the exception of the ordered-ness of our experiences, a stochastic-all-experience-generator would be consistent with all observations.
  5. Occam's Razor: Use the simplest possible hypothesis consistent with observations.
  6. A stochastic-all-experience-generator would be a simple hypothesis.
  7. From 3, 4, 5, and 6, predict that the universe is a stochastic all-experience generator.
  8. From 7, some very unpleasant consequences.
  9. From 8, reject the assumption.

I think the argument can be improved.

According to the minimum description length notion of science, we have a model and a sequence of observations. A &... (read more)

Leoeer
Isn't the argument in step 1 false? If one applies Bayes' theorem with an initial probability of 50% and a new likelihood ratio of a billion to one, don't you get odds of 500,000,000 to one?
Johnicholas
I think you may be sincerely confused. Would you please reword your question? If your question is whether someone (either me or the OP) has committed a multiplication error - yes, it's entirely possible, but multiplication is not the point - the point is anthropic reasoning and whether "I am a Boltzmann brain" is a simple hypothesis.
Christian_Szegedy
I agree very much. It reminds me of one remark of Eliezer's in his diavlog with Scott about the many-worlds interpretation of QM. There he also said something to the effect that Occam's razor is only about the theory, not about the "amount of stuff". I think that was the same fallacy.

When using MDL, you have to give a short description for your actual observation history, or at least give an upper bound for the compressed length. In many-worlds theories these bounds can become very nontrivial, and the observations can easily dominate the description length; therefore Occam's razor cannot be applied without thorough quantitative analysis.

Of course, in that special context it was true that a random state-reduction is not better than a many-worlds hypothesis - in fact, slightly worse. However, one should add, a deterministic (low-complexity) state reduction would be far superior. Regardless: such lighthearted remarks about the "amount of stuff" in Occam's razor are misleading at least.
[anonymous]
"That claim (6) might be true, but it needs more support." Agreed.

I would have answered 1B:1 (and am looking forward to being proved wrong by the second post); however, I think a rational agent should never believe in the Boltzmann brain scenario regardless.

Not because it is not a reasonable hypothesis, but since it negates the agent's capabilities of estimating prior probabilities (it cannot trust even a predetermined portion of its memories) plus it also makes optimizing outcomes a futile undertaking.

Therefore, I'd generally say that an agent has to assume an objective, causal reality as a precondition of using decision theory at all.

But for today, suppose you reply "50%". Thinking, perhaps: "I don't understand this whole consciousness rigamarole, I wouldn't try to program a computer to update on it, and I'm not going to update on it myself."

In that case, why don't you believe you're a Boltzmann brain?

This sounds backwards (sideways?); the reason to (strongly) believe one is a Boltzmann brain is that there are very many of them in some weighting compared to the "normal" you, which corresponds to accepting probability of 1 to the billion in this though... (read more)

Eliezer Yudkowsky
The problem is that if you don't update on the proportions of sentients who have your particular experience, then there are much simpler hypotheses than our current physical model which would generate and "explain" your experiences, namely, "Every experience happens within the dust." To put it another way, the dust hypothesis is extremely simple and explains why this experience exists. It just doesn't explain why an ordered experience instead of a disordered one, when ordered experiences are such a tiny fraction of all experiences. If you think the latter is a non-consideration then you should just go with the simplest explanation.
Vladimir_Nesov
Traditional explanations are for updating; this is probably a relevant tension. If you don't update, you can't explain in the sense of updating. The notion of explanation itself has to be revised in this light.
Nisan
Are the Boltzmann brain hypothesis and the dust hypothesis really simpler than the standard model of the universe, in the sense of Occam's razor? It seems to me that it isn't. I'm thinking specifically about Solomonoff induction here. A Boltzmann brain hypothesis would be a program that correctly predicts all my experiences up to now, and then starts predicting unrelated experiences. Such a program of minimal length would essentially emulate the standard model until output N, and then start doing something else. So it would be longer than the standard model by however many bits it takes to encode the number N.
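In description-length terms (a rough sketch of this argument, not anything from the original post): if the shortest Boltzmann-brain-style program is essentially "run the standard model up to observation N, then output noise", then

    L(BB hypothesis) ≈ L(standard model) + (bits to encode N) ≈ L(standard model) + log2(N)

so under a 2^(-L) prior the dust/BB hypothesis is penalized by roughly a factor of 1/N relative to the plain standard-model hypothesis, and the penalty keeps growing the longer my experiences stay ordered.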

It seems to me that “I’m a Boltzmann brain” is exactly the same sort of useless hypothesis as “Everything I think I experience is a hallucination manufactured by an omnipotent evil genie”. They’re both non-falsifiable by definition, unsupported by any evidence, and have no effect on one’s decisions in any event. So I say: show me some evidence, and I’ll worry about it. Otherwise it isn’t even worth thinking about.

[Rosencrantz has been flipping coins, and all of them are coming down heads]

Guildenstern: Consider: One, probability is a factor which operates within natural forces. Two, probability is not operating as a factor. Three, we are now held within un-, sub- or super-natural forces. Discuss.

Rosencrantz: What?

Rosencrantz & Guildenstern Are Dead, Tom Stoppard

The Boltzmann brain argument was the reason why I had not adopted something along the lines of UDT, despite having considered it and discussed it a bit with others, before the recent LW discussion. Instead, I had tagged it as 'needs more analysis later.' After the fact, that looks like flinching to me.

[anonymous]

Here, let me re-respond to this post.

So if you're not updating on the apparent conditional rarity of having a highly ordered experience of gravity, then you should just believe the very simple hypothesis of a high-volume random experience generator, which would necessarily create your current experiences - albeit with extreme relative infrequency, but you don't care about that.

"A high-volume random experience generator" is not a hypothesis. It's a thing. "The universe is a high-volume random experience generator" is better, but stil... (read more)

pengvado
"My observations are [...], which were output by a high-volume random experience generator". Since the task is to explain my observations, not to predict where I am. This way also makes it more clear that that suffix is strictly superfluous from a Kolmogorov perspective. You mean less likely. i.e. there is no nonnegative monotonic-increasing infinite series whose sum is finite. Also, it need not happen for all large N, just some of them. So I would clarify it as: ∀L ∃N>L ∀M>N (((sum of probabilities of hypotheses of length M) < (sum of probabilities of hypotheses of length N)) or (both are zero)). But you shouldn't take that into account for your example. The theorem applies to infinite sequences of hypotheses, but not to any one finite hypothesis such as the disjunction of a billion green rooms. To get conclusions about a particular hypothesis, you need more than "any prior is Occam's razor with respect to a sufficiently perverse complexity metric".
[anonymous]
You are correct, though I believe your statement is equivalent to mine. Right again; I'll fix my post.

I think we need to reduce "surprise" and "explanation" first. I suggest they have to do with bounded rationality and logical uncertainty. These concepts don't seem to exist in decision theories with logical omniscience.

Surprise seems to be the output of some heuristic that tell you when you may have made a cognitive error or taken a computational shortcut that turns out to be wrong (i.e., you find yourself in a situation where you had previously computed to have low probability) and should go back and recheck your logic. After you've f... (read more)

[anonymous]

Suppose Omega plays the following game (the "Probability Game") with me: You will tell me a number X representing the probability of A. If A turns out to be true, I will increase your utility by ln(X); otherwise, I will increase your utility by ln(1-X). It's well known that the way one maximizes one's expected utility in this game is to report X equal to one's actual subjective probability of A.
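(For completeness, a sketch of that well-known fact: if your actual credence in A is p and you report X, your expected utility is

    E[U] = p ln(X) + (1 - p) ln(1 - X)
    dE[U]/dX = p/X - (1 - p)/(1 - X) = 0   =>   X = p

so the logarithmic payoff is maximized exactly when you report your true credence - it's a proper scoring rule.)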

Presumably, decision mechanisms should be consistent under reflection. Even if not, if I somehow know that Omega's going to split me into 1,000,000,001 copies and do this, I want t... (read more)

(Missing word alert in paragraph 11: "Even [if] a blob of chaos coughs up a visual cortex (or equivalent)...".)

Eliezer Yudkowsky
thx fixed

(Tomorrow I will argue that anthropic updates must be illegal and that the correct answer to the original problem must be "50%".)

Is your intent here to argue both sides of the issue to help, well, lay out the issues, or is it your actual current position that anthropic updates really really are verboten and that 50% is the really really correct answer?

Eliezer Yudkowsky
It's my intent here to lay out my own confusion.

In the criticism of Boltzmann, entropy sounds like a radio dial that someone is tweaking rather than a property of some space. I may be misunderstanding something.

Basically, if some tiny part of some enormous universe happened to condense into a very low-entropy state, that does not mean that it could spontaneously jump to a high-entropy state. It would, with extremely high probability, slowly return to a high-entropy state. It thus seems like we could see what we actually see and not be at risk of spontaneously turning into static. Our current observable ... (read more)

DanArmak
If the low-entropy area of the universe was originally a spontaneous fluctuation in a bigger max-entropy universe, then that is vastly improbable. Such a fluctuation is exponentially more likely for (linearly) smaller volumes of the universe. So the parsimonious explanation for what we see, on this theory, is that the part of the universe that has low entropy is the smallest which is still enough to generate our actual experience.

How small is "smallest"? Well, to begin with, it's not large enough to include stars outside the Solar System; it's vastly more likely that the light en route from those stars to Earth was spontaneously created, than that the stars themselves and all the empty space between (very low entropy!) were created millions of years earlier. So the parsimonious explanation is that any moment now, that light created en route is going to run out and we'll start seeing static (or at least darkness) in the night sky.

Similarly: we have a long historical record in geology, archaeology, even written history. Did it all really happen? The parsimonious explanation says that it's vastly more likely that an Earth with fossils was spontaneously created, than that an Earth with dinosaurs was created, who then became fossils. This is because the past light cone of, say, a billion-year-old Earth is much bigger than the past light cone of a 6000-year-old Earth, and so requires the spontaneous creation of a vastly bigger section of universe.

Finally, it's vastly more likely that you were spontaneously created a second ago complete with all your memories, than that you really lived through what you remember. And it's vastly more likely that the whole spontaneous creation was only a few light-seconds across, and not as big as it seems. In which case it'll stop existing any moment now. That's the experience of a Boltzmann Brain.
abramdemski
I agree. The idea that low-entropy pockets that form are totally immune to a simplicity prior seems unjustified to me. The universe may be in a high-entropy state, but it's still got physical laws to follow! It's not just doing things totally at random; that's merely a convenient approximation. Maybe I am ignorant here, but it seems like the probability of a particular low-entropy bubble will be based on more than just its size.

It's not entirely clear what it means to create a number of "me": my consciousness is only one and cannot be more than one, and I can only feel sensations from one single body. If the idea is just to generate a certain number of physical copies of my body and embed my present consciousness into one of them at random, then the problem is at least clear and determined from a mathematical point of view: it seems to be a simple problem of conditional probability. You are asking what is the probability that an event happened in the past, given some consequence that was possible a priori; it can be easily solved by Bayes' formula, and the probability is about one in a billion.


In that case, why don't you believe you're a Boltzmann brain?

I think a portion of the confusion comes from implicit assumptions about what constitutes "you", and an implicit semantics for how to manipulate the concept. Suppose that there are N (N large) instances of "you" processes that run on Boltzmann Brains, and M (M << N) that run in sensible copies of the world around me. Which one of them is "you"? If "you" is a particular one of the N that run on Boltzmann Brains, then which one is "you, 10 seco... (read more)

ISTM the problem of Boltzmann brains is irrelevant to the 50%-ers. Presumably, the 50%-ers are rational--e.g., willing to update on statistical studies significant at p=0.05. So they don't object to the statistics of the situation; they're objecting to the concept of "creating a billion of you", such that you don't know which one you are. If you had offered to roll a billion-sided die to determine their fate (check your local tabletop-gaming store), there would be no disagreement.

Of course, this problem of identity and continuity has been hash... (read more)

Zaq

"Why did the universe seem to start from a condition of low entropy?"

I'm confused here. If we don't go with a big universe and instead just say that our observable universe is the whole thing, then tracing back time we find that it began with a very small volume. While it's true that such a system would necessarily have low entropy, that's largely because small volume = not many different places to put things.

Alternative hypothesis: The universe began in a state of maximal entropy. This maximum value was "low" compared to present day... (read more)

BBs can't make correct judgements about their reality; their judgements are random. So 50 percent of BBs think that they are in a non-random reality even if they are in a random one. So your experience doesn't provide any information about whether you are a BB or not. Only the prior matters, and the prior is high.

CCC
The quoted figure does not follow. Random, yes; but it's not a coinflip. Given that a Boltzmann Brain can randomly appear with any set of memories, and given that the potential set of random universes is vastly larger than the potential set of non-random universes, I'd imagine that the odds of a randomly-selected Boltzmann Brain thinking it is in a non-random universe are pretty low...
turchin
That would be true if BBs had time to think about their experiences and the ability to come to logical conclusions. But BBs' opinions are also random.
CCC
Hmmm. If the Boltzmann Brain has no time to think and update its own opinions from its own memory, then it is overwhelmingly likely that it has no opinion one way or another about whether or not it is in a random universe. In fact, it is overwhelmingly likely that it does not even understand the question, because its mindspace does not include the concepts of both "random" and "universe"...
turchin
Of course most BBs don't think about whether they are random or not. But among the subset of BBs who have thoughts about it (we can't say they are thinking, as thinking is a longer process), those thoughts are random, and 50 percent of them think that they are not random. So updating on experience about BB probabilities is not strong, but I am still not afraid of being a BB, for two other reasons. 1. Any BB is a copy of a real observer, and so I am real (this depends on how identity is resolved). 2. BBs and real observers are not the dominating class of observers. There is a third class: Boltzmann supercomputers which simulate our reality. They are medium-sized fluctuations which are very effective at creating trillions of observer-moments which are rather consistent. But a small amount of randomness also exists in such simulated universes (it could be experimentally detected). I hope to elaborate the idea in a long post soon.
turchin
Found a similar idea in a recent article about Boltzmann Brains: "What we can do, however, is recognize that it’s no way to go through life. The data that an observer just like us has access to includes not only our physical environment, but all of the (purported) memories and knowledge in our brains. In a randomly-fluctuating scenario, there’s no reason for this “knowledge” to have any correlation whatsoever with the world outside our immediate sensory reach. In particular, it’s overwhelmingly likely that everything we think we know about the laws of physics, and the cosmological model we have constructed that predicts we are likely to be random fluctuations, has randomly fluctuated into our heads. There is certainly no reason to trust that our knowledge is accurate, or that we have correctly deduced the predictions of this cosmological model.” https://arxiv.org/pdf/1702.00850.pdf
entirelyuseless
As I said before about skeptical scenarios: you cannot refute them by argument, by definition, because the person arguing for the skeptical scenario will say, "since you are in this skeptical scenario, your argument is wrong no matter how convincing it seems to you." But we do not believe those scenarios, and that includes the Boltzmann Brain theory, because they are not useful for any purpose. In other words, if you are a Boltzmann Brain, you have no idea what would be good to do, and in fact according to the theory you cannot do anything because you will not exist one second from now.
hairyfigment
I don't think that's descriptively true at all. Regardless of whether or not I see a useful way to address it, I still wouldn't expect to dissolve momentarily with no warning. Now, this may be because humans can't easily believe in novel claims. But "my" experience certainly seems more coherent than one would expect a BB's to seem, and this calls out for explanation.
entirelyuseless
A Boltzmann brain has no way to know anything, reason to any conclusion, or whatever. So it has no way to know whether its experience should seem coherent or not. So your claim that this needs explanation is an unjustified assumption, if you are a Boltzmann brain.
hairyfigment
One man's modus ponens is another man's modus tollens. I don't even believe that you believe the conclusion.
entirelyuseless
Which conclusion? I believe that a Boltzmann brain cannot validly believe or reason about anything, and I certainly believe that I am not a Boltzmann brain. More importantly, I believe everything I said there.
hairyfigment
Seems like you're using a confusing definition of "believe", but the point is that I disagree about our reasons for rejecting the claim that you're a BB. Note that according to your reasoning, any theory which says you're a BB must give us a uniform distribution for all possible experiences. So rationally coming to assign high probability to that theory seems nearly impossible if your experience is not actually random.
entirelyuseless
My reason for rejecting the claim of BB is that the claim is useless -- and I am quite sure that is my reason. I would definitely reject it for that reason even if I had an argument that seemed extremely convincing to me that there is a 95% chance I am a BB. A theory that says I am a BB cannot assign a probability to anything, not even by giving a uniform distribution. A BB theory is like a theory that says, "you are always wrong." You cannot get any probability assignments from that, since as soon as you bring them up, the theory will say your assignments are wrong. In a similar way, a BB theory implies that you have never learned or studied probability theory. So you do not know whether probabilities should sum to 100% (or to any similar normalized result) or anything else about probability theory. As I said, BB theory is useless -- and part of its uselessness is that it cannot imply any conclusions, not even any kind of prior over your experiences.
hairyfigment
1. I'm using probability to represent personal uncertainty, and I am not a BB. So I think I can legitimately assign the theory a distribution to represent uncertainty, even if believing the theory would make me more uncertain than that. (Note that if we try to include radical logical uncertainty in the distribution, it's hard to argue the numbers would change. If a uniform distribution "is wrong," how would I know what I should be assigning high probability to?)
2. I don't think you assign a 95% chance to being a BB, or even that you could do so without severe mental illness. Because for starters:
3. Humans who really believe their actions mean nothing don't say, "I'll just pretend that isn't so." They stop functioning. Perhaps you meant the bar is literally 5% for meaningful action, and if you thought it was 0.1% you'd stop typing?
4. I would agree if you'd said that evolution hardwired certain premises or approximate priors into us 'because it was useful' to evolution. I do not believe that humans can use the sort of pascalian reasoning you claim to use here, not when the issue is BB or not BB. Nor do I believe it is in any way necessary. (Also, the link doesn't make this clear, but a true prior would need to include conditional probabilities under all theories being considered. Humans, too, start life with a sketch of conditional probabilities.)
turchin
META: I made a comment in Discussion about the article and added my considerations there on why it is not bad to be a BB; maybe we could move the discussion there? http://lesswrong.com/r/discussion/lw/ol5/open_thread_feb_06_feb_12_2017/dmmr

If I wake up in a red room after the coin toss, I'm going to assume that there are a billion of us in red rooms, and one in a green room, and vice versa. That way a billion of me are assuming the truth, and one is not. So chances are (a billion out of a billion and one) that this iteration of me is assuming the truth.

We'll each have to accept, of course, the possibility of being wrong, but hey, it's still the best option for me altogether.

Tomorrow I'll talk about what sort of trouble you run into if you reply "a billion to one".

Trouble? We'll take it on together, because every "I" is in this team. [applause]

Uni

Eliezer_Yudkowsky wrote: "I want to reply, "But then most people don't have experiences this ordered, so finding myself with an ordered experience is, on your hypothesis, very surprising."

One will feel surprised by winning a million dollars in the lottery too, but that doesn't mean it would be rational to assume that, just because one won a million dollars in the lottery, most people win a million dollars in the lottery.

Maybe most of us exist only for a fraction of a second, but in that case, what is there to lose by (probably falsely, but m... (read more)

Will_Sawin
How much resources should you devote to the next day vs. the next month vs. the next year? If each additional second of existence is a vast improbability, for simplicity you may assume a few moments of existence, but no longer. If, OTOH, once you live, say, 3 seconds, it's as likely as not that you'll live a few more years - there's some sort of bimodality - then such a stance is justified. Bimodality would only work if there were some sort of theoretical justification.
Uni
If everything that can happen, happens (sooner or later) - which is assumed - there will be continuations (not necessarily at the same spot in spacetime, but somewhere) of whatever brief life I have for a few seconds or Planck times now, and continuations of those continuations too, and so on, without an end, meaning I'm immortal, given that identity is not dependent on the survival of any particular atoms (as opposed to patterns in which atoms, any atoms, are arranged, anywhere).

This means that what I achieve during the short existences that are most common in the universe will only be parts of what I will have achieved in the long run, when all those short existences are "put together" (or thought of as one continuous life). Therefore, I should care about what my life will be like in a few years, in a few centuries, in a few googol years, et cetera, together - that is, my whole infinitely long future - more than I should care about any one short existence at any one place in spacetime. If I can maximize my overall happiness over my infinite life only by accepting a huge lot of suffering for a hundred years beginning now, I should do just that (if I'm a rational egoist).

My life may very well consist predominantly of extremely short-lived Boltzmann brains, but I don't die just because these Boltzmann brains die off one by one at a terrific rate.
Will_Sawin
I said "how much", not "if". My point is that you should care vastly more about the next few seconds than about a few years from now.

I am a Boltzmann brain atheist. ;)

Boltzmann brains are a problem even if you're a 50-percenter. Many fixed models of physics produce lots of BBs. Maybe you can solve this with a complexity prior, i.e. that BBs are less real because they're hard to locate. But having done this, it's not clear to me how this interacts with Sleeping Beauty. It may well be that such a prior also favors worlds with fewer BBs, that is, worlds with fewer observers, but more properly weighted observers.

(ETA: I read the post backwards, so that was a non sequitur, but I do think the application of anthropics to BB is not at all clear. I agree with Eliezer that it looks like it helps, but it might well make it worse.)

Here's a logic puzzle that may have some vague relevance to the topic.

You and two teammates are all going to be taken into separate rooms and have flags put on your heads. Each flag has a 50% chance of being black or being white. None of you can see what color your own flag is, but you will be told what color flags your two teammates are wearing. Before each of you leave your respective rooms, you may make a guess as to what color flag you yourself are wearing. If at least one of you guesses correctly and nobody guesses incorrectly, you all win. If anyone ... (read more)

cousin_it
My attempt at a solution: if you see two flags of the same color, guess the opposite color; otherwise don't guess. This wins 75% of the time.

Lemma 1: it's impossible that everyone chooses not to guess. Proof: some two people have the same color, because there are three people and only two colors.

Lemma 2: the chance of losing is 25%. Proof: by lemma 1, the team can only lose if someone guessed wrong, which implies all three colors are the same, which is 2 out of 8 possible assignments.

This leaves open the question of whether this strategy is optimal. I highly suspect it is, but don't have a proof yet.

UPDATE: here's a proof I just found on the Internet; it's elegant but not easy to come up with. I wonder if there's a simpler one.
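A brute-force check of the claimed 75% (a small Python sketch enumerating all eight flag assignments):

    from itertools import product

    def team_wins(flags):
        # flags: tuple of 3 colors (0 or 1), one per player.
        # Strategy: a player who sees two matching flags guesses the opposite color; otherwise passes.
        correct = wrong = 0
        for i in range(3):
            a, b = [flags[j] for j in range(3) if j != i]
            if a == b:
                if 1 - a == flags[i]:
                    correct += 1
                else:
                    wrong += 1
        return correct >= 1 and wrong == 0

    wins = sum(team_wins(f) for f in product((0, 1), repeat=3))
    print(wins / 8)   # -> 0.75, matching the claimed success rate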

It's a tricky category of question alright - you can make it even trickier by varying the procedure by which the copies are created.

The best answer I've come up with so far is to just maximize total utility. Thus, I choose the billion to one side because it maximizes the number of copies of me that hold true beliefs. I will be interested to see whether my procedure withstands your argument in the other direction.

(And of course there is the other complication that strictly speaking the probability of a logical coin is either zero or one, we just don't know ... (read more)

Well, I don't think the analogy holds up all that well. In the coin flip story we "know" that there was a time before the universe with two equally likely rules for the universe. In the world as it is, AFAIK we really don't have a complete, internally consistent set of physical laws fully capable of explaining the universe as we experience it, let alone a complete set of all of them.

The idea that we live in some sort of low entropy bubble which spontaneously formed in a high entropy greater universe seems pretty implausible for the reasons you describe. But I don't think we can come to a conclusion from this significantly stronger than "there's a lot we haven't figured out yet".

spriteless
Current physics models get around that question anyway. The way our brains work, there is more entropy after a memory is burned than before. Thus, time seems to flow from low to high entropy to us. If entropy were flowing in the other direction, then our brains would think of the other direction as the past. The laws of thermodynamics are a side effect of how our brains process time. Thus we can have low entropy -> high entropy without a shit ton of Boltzmann Brains.
timtyler
The laws of thermodynamics arise in practically any reversible cellular automaton with a temperature - they are not to do with brains.
wnoise
The laws of thermodynamics arise in our analysis of practically any reversible cellular automaton with a temperature.
island

This one always reminds me of flies repeatedly slamming their heads against a closed window rather than face the fact that there is something fundamentally wrong with some of our unproven assumptions about thermodynamics and the big bang.

Nick_Tarleton
...care to explain further why we're wrong?
Vladimir_Nesov
Do you really want to see the answer?

I'd like to be the first to point out that this post doubles as a very long (and very undeserved) response to this post.

[anonymous]

Non-scientific hypothesis: The universe's initial state was a singularity as postulated by the big bang theory, a state of minimal entropy. As per thermodynamics, entropy has been, is, and will be increasing steadily from that point until precisely 10^40 years from the Big Bang, at which point the universe will cease to exist with no warning whatsoever.

Though this hypothesis is very arbitrary (the figure "10^40 years" has roughly 300 bits of entropy), I figure it explains our observations at least 300 bits better than the "vanilla heat death... (read more)