Psy-Kosh:

> Does Tegmark provide any justification for the lower weight thing or is it a flat out "it could work if in some sense higher complexity realities have lower weight"?

It's the same justification as for the Kolmogorov prior: if you use a prefix-free code to generate random objects, less complex objects will come up more frequently. Descriptions of worlds with more tunable parameters must include those parameters, which adds complexity. (But, yes, if complexity/weight/frequency is ignored, there are infinitely more worlds above any complexity bound than below it.)
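The prefix-free-code argument can be made concrete with a toy simulation (my own illustrative sketch, not from the thread): feed fair coin flips into a decoder for a small prefix-free code, and each object comes up with probability 2^-(codeword length), so objects with shorter descriptions dominate.

```python
import random

# Toy prefix-free code: no codeword is a prefix of another,
# so a stream of random bits decodes unambiguously.
# Shorter codewords stand in for "less complex" objects.
CODE = {"0": "A", "10": "B", "110": "C", "111": "D"}

def sample_object(rng):
    """Feed fair coin flips into the decoder until a codeword completes."""
    buf = ""
    while buf not in CODE:
        buf += rng.choice("01")
    return CODE[buf]

rng = random.Random(0)
counts = {}
for _ in range(100_000):
    obj = sample_object(rng)
    counts[obj] = counts.get(obj, 0) + 1

# Each object appears with probability 2^-len(codeword):
# A ~ 1/2, B ~ 1/4, C and D ~ 1/8 each.
```

The same mechanism underlies the 2^-K(x) weighting of the Solomonoff/Kolmogorov prior: a universal machine fed random bits outputs simple objects more often.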

> For that matter, what would it even mean for them to be lower weight?

Good question. With MWI, there's Robin's "mangled worlds" proposal (and maybe others) to generate objective frequencies; I don't know of any such suggestion for Tegmark's multiverse.

> And any thoughts at all on why it seems like I'm not (at least, most of me seemingly isn't) a Boltzmann brain?

From Wikipedia: "Boltzmann proposed that we and our observed low-entropy world are a random fluctuation in a higher-entropy universe. Even in a near-equilibrium state, there will be stochastic fluctuations in the level of entropy. The most common fluctuations will be relatively small...." So we have strong evidence that this is false; there must be some reason to expect large, low-entropy universes to be more common than you would naively predict. Still, I would expect Boltzmann brains to outnumber 'normal' observers even within our universe, because there's only a narrow window of time for 'normal' observers to exist, but an infinity of heat death for Boltzmann brains to arise in, so I'm still confused.
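The "most common fluctuations are small" claim can be made quantitative with the standard fluctuation estimate (a textbook formula, not something stated in the thread): the probability of a spontaneous fluctuation that lowers entropy by $\Delta S$ goes as

```latex
P(\Delta S) \propto e^{-\Delta S / k_B}
```

Since a lone brain-sized fluctuation requires an entropy dip many orders of magnitude smaller than a whole low-entropy universe, this weighting naively favors isolated brains over cosmologies by a staggering exponential factor, which is exactly the puzzle.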

Caledonian:

> Well, the first point is to discard the idea that orderly perceptions are less probable than chaotic ones in the Dust.

Could be, but there doesn't seem to be any prior reason to suppose this. It seems that the dust should generate observer-moments with probability according to their algorithmic complexity, which would produce many more chaotic than normal ones. But it would solve the problem.
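A crude way to see why chaotic observer-moments would swamp orderly ones under uniform sampling is the counting argument: almost no random strings are compressible at all. Here's an illustrative sketch of my own, using zlib compressibility as a very rough stand-in for "has a short description":

```python
import os
import zlib

N = 64        # state size in bytes
TRIALS = 2000

# An "ordered" state (all zeros) has a short description:
# it compresses to far less than its own length.
ordered = bytes(N)
assert len(zlib.compress(ordered)) < N

# Uniformly random ("chaotic") states essentially never compress
# below their own length: low-complexity states are vanishingly
# rare among all states, even though each individual one is more
# probable under a complexity-weighted measure.
compressible = sum(
    1 for _ in range(TRIALS)
    if len(zlib.compress(os.urandom(N))) < N
)
```

Of length-n bit strings, at most a 2^-k fraction can be compressed by k or more bits, so uniform sampling over the dust yields overwhelmingly incompressible (chaotic) configurations.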

> The second is to recognize that probability doesn't matter to the anthropic principle at all. You don't exist in the chaotic perspectives, so you never see them.

For every 'normal' possible world, there exist a huge number exactly like it but with small but glaring anomalies, like I have two sets of inconsistent memories or all coin flips come up heads or.... Observers could still exist in these partially-chaotic perspectives. There are also worlds that are almost entirely chaotic but with an island of order just big enough for one observer.

No one in particular: even if it's possible to account for why the dust wouldn't produce consciousness, the same arguments would still seem to apply to a non-conscious, purely computational Bayesian decision system (it would be surprised to observe order, etc.). I suspect this is actually a doubly wrong question, resulting from confusion about both consciousness and anthropic reasoning.
