Comment author: simon2 19 October 2008 03:51:46AM 5 points [-]

Just for the sake of devil's advocacy:

4) You want to attribute good things to your ethics, and thus find a way to interpret events that enables you to do so.

Comment author: simon2 07 October 2008 06:11:28PM 2 points [-]

Miguel: it doesn't seem to be a reference to something, but just a word for some experience an alien might have had that is incomprehensible to us humans, analogous to humour for the alien.

Comment author: simon2 06 October 2008 07:13:45AM 0 points [-]

Psy-Kosh, my argument that Boltzmann brains go poof is a theoretical argument, not an anthropic one. Also, if we want to maximize our correct beliefs in the long run, we should commit to ignore the possibility that we are a brain with beliefs not causally affected by the decision to make that commitment (such as a brain that randomly pops into existence and goes poof). This also is not an anthropic argument.

With regard to longer-lived brains, if you expect there to be enough of them that even the ones with your experience are more common than minds in a real civilization with your experience, then you really should rationally expect to be one (although as a practical matter since there's nothing much a Boltzmann brain can reasonably expect to do one might as well ignore it*). If you expect there to be more long lived Boltzmann brains than civilization-based minds in general, but not enough for ones with your experience to outnumber civilization-based minds with your experience, then your experience tips the balance in favour of believing you are not a Boltzmann brain after all.
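The count-based reasoning above can be sketched numerically; all the counts below are invented purely for illustration:

```python
# Toy Bayes count argument (all numbers invented for illustration):
# among all minds that have exactly your experience, what fraction are
# Boltzmann brains?
def p_boltzmann(n_bb_with_experience, n_civ_with_experience):
    """Posterior probability of being a Boltzmann brain, weighting every
    mind that shares your experience equally."""
    return n_bb_with_experience / (n_bb_with_experience + n_civ_with_experience)

# Long-lived Boltzmann brains may outnumber civilization minds overall,
# but if few of them share your particular experience, that experience
# tips the balance toward being a civilization-based mind:
print(p_boltzmann(1, 1000))   # rare among Boltzmann brains -> about 0.001
print(p_boltzmann(1000, 1))   # common among Boltzmann brains -> about 0.999
```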

I think your confusion results from not being consistent about whether you accept self-indication, or perhaps from inconsistency about whether you think of the possible space with Boltzmann brains and no civilizations as additional to, or a substitute for, space with civilizations. Here's what the different choices of assumptions imply:

(I assume throughout that the probability of Boltzmann brains per volume in any space is always lower than the probability of minds in civilizations where they are allowed by physics**)***

Assumptions -> conclusion

self-indication, additional -> our experience is not evidence**** for or against the existence of the additional space (or evidence for its existence if we consider the possibility that we may be unusually order-observing entities in that space)

self-indication, substitute -> our experience is evidence against the existence of the substitute space

instead of self-indication, assume the probability of being a given observer is inversely proportional to the number of observers in the possible universe containing that observer (the most popular alternative to self-indication) -> our experience is evidence against the existence of the additional or substitute space
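The three assumption combinations above can be illustrated with a toy calculation; all the counts and the 0.5 prior are invented for illustration:

```python
# Toy illustration (all counts and the 0.5 prior are invented).
civ_exp = 1000      # civilization minds with our experience
civ_total = 10**6   # all civilization minds
bb_exp = 1          # Boltzmann brains with our experience in the extra space
bb_total = 10**9    # all Boltzmann brains in the extra space

def sia_posterior(n_if_true, n_if_false, prior=0.5):
    """Self-indication: weight each hypothesis by the number of observers
    with our experience that it contains."""
    num = prior * n_if_true
    return num / (num + (1 - prior) * n_if_false)

# "additional" space: essentially no evidence (tiny shift toward existence)
print(sia_posterior(civ_exp + bb_exp, civ_exp))
# "substitute" space: strong evidence against its existence
print(sia_posterior(bb_exp, civ_exp))

def ssa_posterior(frac_if_true, frac_if_false, prior=0.5):
    """Alternative assumption: the likelihood of our experience is the
    *fraction* of observers having it, so extra observers without it
    dilute that fraction."""
    num = prior * frac_if_true
    return num / (num + (1 - prior) * frac_if_false)

# additional space full of differently-experienced observers: evidence against
print(ssa_posterior((civ_exp + bb_exp) / (civ_total + bb_total),
                    civ_exp / civ_total))
```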

*unless the Boltzmann brain, at further exponentially reduced probability, also obtained effective means of manipulating its environment...

** basically, define "allowed" to mean (density of minds with our experience in civ) >> (density of Boltzmann brains with our experience), and "not allowed" to mean the opposite (<<). One would expect the probability of a space with comparable densities to be low enough not to have a significant quantitative or qualitative effect on the conclusions.

***It seems rather unlikely that a space with our current apparent physical laws allows more long-lived B-brains than civilization-based brains. I am too tired to want to think about and write out what would follow if this is not true.

****I am using "evidence" here to mean shifts of probability relative to the outside view prior (conditional on the existence of any observers at all), which means that any experience is evidence for a larger universe (other things being equal) given self-indication, etc.

Comment author: simon2 06 October 2008 05:01:56AM 0 points [-]

Nick, do you use the normal definition of a Boltzmann brain?

It's supposed to be a mind which comes into existence by sheer random chance. Additional complexity - such as would be required for some support structure (e.g. an actual brain), or additional thinking without a support structure - comes with an exponential probability penalty. As such, a Boltzmann brain would normally be very short lived.

In principle, though, there could be so much space uninhabitable for regular civilizations that even long-lived Boltzmann brains which coincidentally have experiences similar to minds in civilizations outnumber minds in civilizations.

It's not clear whether you are worrying about whether you already are a Boltzmann brain, or if you think you are not one but think that if a Boltzmann brain took on your personality it would be 'you'. If the former, I can only suggest that nothing you do as a Boltzmann brain is likely to have much effect on what happens to you, or on anything else. If the latter, I think you should upgrade your notion of personal identity. While the notion that personality is the essence of identity is a step above the notion that physical continuity is the essence of identity, by granting the notion that there is an essence of identity at all it reifies the concept in a way it doesn't deserve, a sort of pseudosoul for people who don't think they believe in souls.

Ultimately what you choose to think of as your 'self' is up to you, but personally I find it a bit pointless to be concerned about things that have no causal connection with me whatsoever as if they were me, no matter how closely they may coincidentally happen to resemble me.

Comment author: simon2 30 September 2008 01:56:00AM 0 points [-]

Let's suppose, purely for the sake of argument of course, that the scientists are superrational.

The first scientist chose the most probable theory given the first 10 experiments. If its predictions were 100% certain, it would still be the most probable after 10 more successful experiments. So, since the second scientist chose a different theory, the predictions must have been uncertain, and the other theory must have assigned an even higher probability to these outcomes.

In reality people are bad at assessing priors (hindsight bias), leading to overfitting. But these scientists are assumed to have assessed the priors correctly, and given this assumption you should believe the second explanation.
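This can be sketched as a toy Bayes update; the priors and likelihoods are invented for illustration:

```python
# Toy Bayes update (priors and likelihoods invented for illustration).
def posterior(prior_a, prior_b, lik_a, lik_b):
    """Normalized posteriors for two rival theories after new data."""
    pa, pb = prior_a * lik_a, prior_b * lik_b
    return pa / (pa + pb), pb / (pa + pb)

# If both theories predict the 10 new results with certainty, the update
# changes nothing: whichever theory led before still leads.
print(posterior(0.6, 0.4, 1.0, 1.0))

# B can overtake A only if B assigned the new outcomes higher probability:
print(posterior(0.6, 0.4, 0.5**10, 0.9**10))
```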

Of course, given more realistic scientists, overfitting may be likely.

Comment author: simon2 30 September 2008 12:56:27AM 0 points [-]

It may be that most minds with your thoughts do in fact disappear after an instant. Of course if that is the case there will be vastly more with chaotic or jumbled thoughts. But the fact that we observe order is no evidence against the existence of additional minds observing chaos, unless you don't accept self-indication.

So, your experience of order is not good evidence for your belief that more of you are non-Boltzmann than Boltzmann. But as I said, in the long term your expected accuracy will rise if you commit to not believing you are a Boltzmann brain, even if you believe that you most likely are one now.

A somewhat analogous situation may arise in AGI - AI makers can rule out certain things (e.g. the AI is simulated in a way that the simulated makers are non-conscious) that the AI cannot. Thus by having the AI rule such things out a priori, the makers can improve the AI's beliefs in ways that the AI itself, however superintelligent, rationally could not.

Comment author: simon2 30 September 2008 12:23:46AM 1 point [-]

Nick and Psy-Kosh: here's a thought on Boltzmann brains.

Let's suppose the universe has vast spaces uninhabited by anything except Boltzmann brains which briefly form and then disappear, and that any given state of mind has vastly more instantiations in the Boltzmann-brain only spaces than in regular civilizations such as ours.

Does it then follow that one should believe one is a Boltzmann brain? In the short run perhaps, but in the long run you'd be more accurate if you simply committed to not believing it. After all, if you are a Boltzmann brain, that commitment will cease to be relevant soon enough as you disintegrate, but if you are not, the commitment will guide you well for a potentially long time.
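A crude way to see the long-run accuracy claim (lifetimes and counts invented for illustration):

```python
# Toy accounting (all numbers invented): total correctly-believed
# mind-moments under each policy. Boltzmann brains are numerous but last
# one moment; civilization minds are few but persist.
n_bb, bb_lifetime = 10**6, 1
n_civ, civ_lifetime = 1, 10**9

def correct_moments(believe_boltzmann):
    """Mind-moments at which the held belief matches what the mind is."""
    if believe_boltzmann:
        return n_bb * bb_lifetime    # right only while a Boltzmann brain exists
    return n_civ * civ_lifetime      # right for a civilization mind's whole life

print(correct_moments(True))    # always believing "I am a Boltzmann brain"
print(correct_moments(False))   # committing to believe "I am not"
```

Even though Boltzmann brains dominate by count, the commitment policy accumulates far more correct belief-moments because only civilization minds persist.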

Comment author: simon2 22 September 2008 06:46:00AM 0 points [-]

And by elementary I mean the 8 different ways W, F, and the comet hit/non-hit can turn out.

Comment author: simon2 22 September 2008 06:39:00AM 0 points [-]

Err... I actually did the math a silly way, by writing out a table of elementary outcomes... not that that's silly itself, but it's silly to get input from the table to apply to Bayes' theorem instead of just reading off the answer. Not that it's incorrect of course.

Comment author: simon2 22 September 2008 06:34:00AM 0 points [-]

Richard, obviously if F does not imply S due to other dangers, then one must use method 2:

P(W|F,S) = P(F|W,S)P(W|S)/P(F|S)

Let's do the math.

A comet is going to annihilate us with probability (1-x) (outside view) if the LHC would not destroy the Earth, but with probability (1-y) if it would (I put in this difference so that it actually has an effect on the final probability).
The LHC has an outside-view probability of failure of z, whether or not W is true.
The universe has a prior probability w of being such that the LHC, if it does not fail, will annihilate us.

Then:
P(F|W,S) = 1
P(F|S) = (ywz+x(1-w)z)/(ywz+x(1-w)z+x(1-w)(1-z))
P(W|S) = (ywz)/(ywz+x(1-w)z+x(1-w)(1-z))

so, P(W|F,S) = ywz/(ywz+x(1-w)z) = yw/(yw+x(1-w))

I leave it as an exercise to the reader to show that there is no change in P(W|F,S) if the chance of the comet hitting depends on whether or not the LHC fails (only the relative probability of outcomes given failure matters).
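As a sanity check, the derivation can be verified by enumerating the joint distribution numerically; the parameter values below are arbitrary:

```python
# Numeric sanity check of the derivation (parameter values are arbitrary):
# W = universe is such that a non-failing LHC annihilates us (prior w)
# F = LHC fails (probability z, independent of W)
# The comet spares us with probability x if not-W, y if W.
w, z, x, y = 0.3, 0.2, 0.6, 0.8

def joint(W, F, S):
    """Joint probability of the elementary outcome (W, F, S)."""
    p = (w if W else 1 - w) * (z if F else 1 - z)
    if W and not F:                      # LHC runs and annihilates us
        return 0.0 if S else p
    survive = y if W else x              # only the comet threat remains
    return p * (survive if S else 1 - survive)

p_WFS = joint(True, True, True)
p_FS = joint(True, True, True) + joint(False, True, True)
print(p_WFS / p_FS)                      # P(W|F,S) by enumeration
print(y * w / (y * w + x * (1 - w)))     # closed form: note that z cancels
```

The two printed values agree for any choice of w, z, x, y, confirming that z drops out of P(W|F,S).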

Really though Richard, you should not have assumed in the first place that I was not capable of doing the math. In the future, don't expect me to bother with a demonstration.

Allan: you're right, I should have thought that through more carefully. It doesn't make your interpretation correct though...

I have really already spent much more time here today than I should have...
