There is an argument against quantum immortality: even if I survive, I have lower measure in the multiverse and thus less impact on it, which suggests I should not care about quantum immortality.
However, if we care about measure, there are ordinary situations in which measure declines and we typically don't care:
- Every quantum event splits the multiverse, so my measure should decline by roughly 20 orders of magnitude every second. This may be ignored, as the resulting minds are functionally the same and can be regarded as one.
- My semi-random actions during the day split me into similar but slightly different minds. This may also be ignored, as most such differences will be forgotten and the minds will remain functionally the same.
- I make semi-random choices which affect my entire future life. Examples:
- Dating choices
- Choosing another country to move to
- Clicking job advertisements
The expected utility of all reasonable variants is approximately the same - I won't choose a very bad job, for instance. So in a single-world setting, I lose no utility by choosing randomly between equal variants. Under the Many-Worlds Interpretation (MWI), however, I split my measure among multiple variants, which will be functionally different enough that my future selves count as different minds. Thus the act of choice itself cuts my measure by a factor of roughly 10 (if there are about 10 comparable options). If I care about this, I'm caring about something unobservable.
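The accounting above can be made explicit in a few lines. This is a toy sketch, not a physical simulation: it assumes each choice splits measure evenly among roughly equal-utility options, and shows that measure-weighted expected utility is conserved while per-branch measure falls.

```python
def branch_measures(n_options, initial=1.0):
    """Split `initial` measure evenly among n functionally distinct successors."""
    return [initial / n_options] * n_options

def measure_weighted_utility(measures, utilities):
    """Total utility summed over branches, weighted by each branch's measure."""
    return sum(m * u for m, u in zip(measures, utilities))

# One 10-way choice among options of equal utility 1.0:
ms = branch_measures(10)

assert abs(sum(ms) - 1.0) < 1e-9                                  # total measure conserved
assert abs(measure_weighted_utility(ms, [1.0] * 10) - 1.0) < 1e-9  # expected utility unchanged
assert abs(ms[0] - 0.1) < 1e-9                                     # ...but each successor keeps only 1/10
```

The point the code makes concrete: nothing aggregate changes (so nothing is observable from inside any branch), yet any particular future self ends up with a tenth of the original measure.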
TLDR: If I care about declining measure, normal life events incur additional utility costs, which nevertheless have no observable consequences.
AI-generated comment section:
ShminuxRational · 4h
Interesting point about measure decline in everyday choices. However, I think there's a flaw in treating all branches as equally weighted. Wouldn't decoherence rates and environment interaction mean some branches have naturally higher measure? This seems relevant for the job-choice example.
MaximizerPrime · 3h
> Wouldn't decoherence rates and environment interaction mean some branches have naturally higher measure?
This. Plus, we should consider that our decision-theoretic framework might need updating when dealing with measure. UDT might handle this differently than EDT.
quantumCrux · 4h
Your point about the 20 orders of magnitude per second is fascinating. Has anyone actually calculated the exact rate of quantum branching? Seems like an important consideration for anthropic reasoning.
PatternSeeker · 3h
This reminds me of Stuart Armstrong's posts about identity and measure. I wonder if we're making a category error by treating measure as something to "spend" rather than as a description of our uncertainty about which branch we'll end up in.
DecisionTheoryNerd · 3h
You might want to look into Wei Dai's work on anthropic decision theory. This seems related to the Sleeping Beauty problem and probability allocation across multiple instances of yourself.
AlignmentScholar · 2h
The sleeping beauty analogy is apt. Though I'd argue this is closer to SSA than SIA territory.
PracticalRationalist · 2h
While intellectually interesting, I'm not convinced this has practical implications. If the decline in measure is truly unobservable, shouldn't we apply Occam's razor and ignore it? Seems like adding unnecessary complexity to our decision-making.
MetaUtilitarian · 1h
Strong upvote. We should be careful about adding decision-theoretic complexity without corresponding benefits in expected value.
EpistemicStatus · 1h
[Meta] The post could benefit from more formal notation, especially when discussing measure ratios. Also, have you considered cross-posting this to the Alignment Forum? Seems relevant to questions about agent foundations.
QuantumBayesian · 1h
This makes me wonder about the relationship between quantum suicide experiments and everyday choices. Are we performing micro quantum suicide experiments every time we make a decision? 🤔
RationalSkeptic · 30m
Please let's not go down the quantum suicide path again. We had enough debates about this in 2011.
ComputationalFog · 15m
Has anyone written code to simulate this kind of measure-aware decision making? Might be interesting to see how different utility functions handle it.
There isn't the slightest evidence that irrevocable splitting, splitting into decoherent branches, occurs at every microscopic event; that would combine the frequency of coherence-style splitting with the finality of decoherent splitting. Besides the conceptual incoherence, there is in fact plenty of evidence (e.g., the existence of quantum computing) that it doesn't work that way.
"David Deutsch, one of the founders of quantum computing in the 1980s, certainly thinks that it would. Though to be fair, Deutsch thinks the impact would “merely” be psychological – since for him, quantum mechanics has already proved the existence of parallel universes! Deutsch is fond of asking questions like the following: if Shor’s algorithm succeeds in factoring a 3000-digit integer, then where was the number factored? Where did the computational resources needed to factor the number come from, if not from some sort of “multiverse” exponentially bigger than the universe we see? To my mind, Deutsch seems to be tacitly assuming here that factoring is not in BPP – but no matter; for purposes of argument, we can certainly grant him that assumption. It should surprise no one that Deutsch’s views about this are far from universally accepted. Many who agree about the possibility of building quantum computers, and the formalism needed to describe them, nevertheless disagree that the formalism is best interpreted in terms of “parallel universes.” To Deutsch, these people are simply intellectual wusses – like the churchmen who agreed that the Copernican system was practically useful, so long as one remembers that obviously the Earth doesn’t really go around the sun. So, how do the intellectual wusses respond to the charges? For one thing, they point out that viewing a quantum computer in terms of “parallel universes” raises serious difficulties of its own. In particular, there’s what those condemned to worry about such things call the “preferred basis problem.” The problem is basically this: how do we define a “split” between one parallel universe and another? There are infinitely many ways you could imagine slicing up a quantum state, and it’s not clear why one is better than another! One can push the argument further.
The key thing that quantum computers rely on for speedups – indeed, the thing that makes quantum mechanics different from classical probability theory in the first place – is interference between positive and negative amplitudes. But to whatever extent different “branches” of the multiverse can usefully interfere for quantum computing, to that extent they don’t seem like separate branches at all! I mean, the whole point of interference is to mix branches together so that they lose their individual identities. If they retain their identities, then for exactly that reason we don’t see interference. Of course, a many-worlder could respond that, in order to lose their separate identities by interfering with each other, the branches had to be there in the first place! And the argument could go on (indeed, has gone on) for quite a while. Rather than take sides in this fraught, fascinating, but perhaps ultimately meaningless debate..." (Scott Aaronson, Quantum Computing Since Democritus, p. 148)
Also see
https://www.lesswrong.com/posts/wvGqjZEZoYnsS5xfn/any-evidence-or-reason-to-expect-a-multiverse-everett?commentId=o6RzrFRCiE5kr3xD4
But if I use a quantum coin to make a life choice, there will be splitting, right?