I'm not sure what quantum mechanics has to do with this. Say humanity is spread over 10 planets. Would you rather take a logical 9/10 chance of wiping out humanity, or destroy 9 of the planets with certainty (and also destroy 90% of uninhabited planets to reduce the potential for future growth by the same degree)? Is there any ethically relevant difference between these scenarios?
There are two differences I can see:
The "planets" example admit the MWI is correct. Without MWI, the quantum trigger is exactly a normal random trigger, not killing 9/10th of the worlds, but killing everyone with a 1/10th probability. The thought experiment is a way to force people to quantify their acceptance of MWI.
Communication. There is no communication possible between the many worlds, while there is between the planets, and that will have massive long-term effects.
Good point.
The two scenarios have somewhat different intuition pumps, but are otherwise similar.
It seems like something has gone terribly wrong when our ethical decisions depend on our interpretation of quantum mechanics.
My understanding was that many-worlds is indistinguishable by observation from the Copenhagen interpretation. Has this changed? If not, it frightens me that people would choose a higher chance of the world ending to rescue hypothetical people in unobservable universes.
If anything this seems like a (weak) argument in favour of total utilitarianism, in that it doesn't suffer from giving different answers according to one's choice among indistinguishable theories.
Why should you not have preferences about something just because you can't observe it? Do you also not care whether an intergalactic colony-ship survives its journey, if the colony will be beyond the cosmological horizon?
The departure of an intergalactic colony-ship is an observable event. It's not that the future of other worlds is unobservable, it's that their existence in the first place is not a testable theory (though see army1987's comment on that issue).
To make an analogy (though admittedly an unfair one for being a more complex rather than an arguably less complex explanation): I don't care about the lives of the fairies who carry raindrops to the ground either, but it's not because fairies are invisible (well, to grown-ups anyway).
The departure of an intergalactic colony-ship is an observable event.
By the exact same token, the world-state prior to the "splitting" in a Many Worlds scenario is an observable event.
I think the spirit of the question is basically: In what situations do we give credence to hypotheses which posit systems that we can influence, but which cannot influence us?
By the exact same token, the world-state prior to the "splitting" in a Many Worlds scenario is an observable event.
The falling of raindrops is also observable; you appear to have missed the point of my reply.
To look at it another way, there is strong empirical evidence that sentient beings will continue to exist on the colony-ship after it has left, and I do not believe there is analogous evidence for the continued existence of split-off parallel universes.
The spirit of the question is basically this:
Can the most parsimonious hypothesis ever posit systems that you can influence, but that cannot causally influence you? And if so, what does that mean for your preferences?
No, the spirit of the question in context was to undermine the argument that the untestability of a theory implies it should have no practical implications, a criticism I opened myself up to by talking about observability rather than testability. The answer to the question was redundant to the argument, which was why I clarified my argument rather than answer it.
But since you want an answer, in principle yes I could care about things I can't observe, at least on a moral level. On a personal level it's a strong candidate for "somebody else's problem" if ever I've seen one, but that's a whole other issue. Usually the inability to observe something makes it hard to know the right way to influence it though.
you appear to have missed the point of my reply.
Let's check: "I can only have preferences over things that exist. The ship probably exists, because my memory of its departure is evidence. The parallel worlds have no similar evidence for their existence." Is that correct paraphrasing?
Before the ship leaves, you know that sometime in the future there will be a future-ship in a location where it cannot interact with future-you.
By the same token, you can observe the laws of physics and the present-state of the universe. If, for some reason, your interpretation of those laws involves Many Worlds splitting off from each other, then, before the worlds split, you know that sometime in the future there will be a future-world unable to interact with future you.
For future-you, the existence of the future-ship is not a testable theory, but the fact that you have a memory of the ship leaving counts as evidence.
For future-you, the existence of the Other-Worlds is not a testable theory, but if Many-Worlds is your best model, then your memory of the past-state of the universe, combined with your knowledge of physics, counts as evidence for the existence of certain specific other worlds.
In your Faeries example, the Faeries do not merit consideration because it is impossible to get evidence for their existence. That's not true in the quantum bomb scenario - if we accept Many Worlds, then for the survivors of the quantum bomb, the memory of the existence of a quantum bomb is evidence that there exist many branches with Other Worlds in which everyone was wiped out by the bomb.
So, the actual question should be:
1) Does Many-Worlds fit in our ontology - as in, do universes on other branches constructed in the Many-Worlds format even fit within the definition of "Reality" or not? (For example, if you told me there was a parallel universe which never interacted with us in any way, I'd say that your universe wasn't Real by definition. Many Worlds branches are a gray area because they do interact, but current Other Worlds only interact with the past, and the present only interacts with future Other Worlds, not current ones.)
2a) If we decide that the Other Worlds from Many Worlds qualify as "Real", can Many Worlds ever be a hypothesis which is Parsimonious enough to not be Pascal-Wager-ish? The Faeries qualify as "Real" because they do cause the raindrops to fall, but because of the nature of that hypothesis it can never be parsimonious enough to rise above Pascal-Wager thresholds. Is Many-Worlds the same way? (From your answer, I gathered that your answer is "yes", but I disagreed with your reason - see the paragraph that begins with "In your Faeries example..." - which is why, in my first reply, I pointed out that if you accept Many Worlds then you can have evidence that points to certain sorts of worlds existing.)
2b) If we decide that the other branches do not qualify as Real, can we make a definition of reality that does not exclude light-cone-leaving-spaceships?
3) And how do we construct our preferences, in relation to what we have defined as "Real"? (For example, we could simply say that despite having an ontology that acknowledges all the branches of Many Worlds as Real, our preferences only care about the world that we end up in.)
The spaceship "exists" (I don't really like using exists in this context because it is confusing) in the sense that in the futures where someone figures out how to break the speed of light, I know I can interact with the spaceship. What is the probability that I can break the speed of light in the future?
Then for Many Worlds, what is the probability that I will be able to interact with one of the Other Worlds?
I would not care more about things if I gain information that I can influence them, unless I also gain information that they can influence me. If I gain credence in Many Worlds, then I only care about Other Worlds to the extent that it might be more likely for them to influence my world.
We're assuming you can't break the speed of light or interact with the other worlds.
It's a one-way influence. You can influence the spaceship before it leaves your light cone (you can give them supplies, etc). The MW argument is that you can influence parallel universes before they split off.
Let's check: "I can only have preferences over things that exist. The ship probably exists, because my memory of its departure is evidence. The parallel worlds have no similar evidence for their existence." Is that correct paraphrasing?
No, not really. I mean, it's not that far from something I said, but it's departing from what I meant and it's not in any case the point of my reply. The mistake I'm making is persisting in trying to clarify a particular way of viewing the problem which is not the best way and which is leading us both down the garden path. Instead, please forget everything else I said and consider the following argument.
Theories have two aspects: testable predictions, and descriptive elements. I would (and I think the sequences support me) argue that two theories which make the same predictions are not different theories, they are the same theory with different flavour. In particular, you should never make a different decision under one theory than under the other. Many Worlds is a flavour of quantum mechanics, and if that choice of flavour affects ethical decisions then you are making different decisions according to the flavour rather than the content of the theory, and something has gone wrong.
Everything else I said was intended solely to support that point, but somewhere along the way we got lost arguing about what's observable, what constitutes evidence, and meta-ethics. If you accept that argument then I have no further point to make. If you do not accept it, then please direct comments at that argument directly rather than at anything else I've said.
I'll try to address the rest of your reply with this in mind in the hopes that it's helpful.
If ... your interpretation of those laws involves Many Worlds
You could equally have said "If your interpretation of the physics of raindrops involves fairies". My point is that no-one has any justification for making that assumption. Quantum physics is a whole bunch of maths that models the behaviour of particles on a small scale. Many Worlds is one of many possible descriptions of that maths that help us understand it. If you arbitrarily assume your description is a meaningful property of reality then sure, everything else you say follows logically, but only because the mistake was made already.
You compare Many Worlds to fairies in the wrong place, in particular post-arbitrary-assumption for Many Worlds and pre-arbitrary-assumption for fairies. I'll give you the analogous statements for a correct comparison:
the Faeries do not merit consideration because it is impossible to get evidence for their existence
The people of other worlds do not merit consideration because it is impossible to get evidence of their existence.
if we accept Many Worlds...
If we accept fairies...
... the memory of the existence of a quantum bomb is evidence that there exist many branches with Other Worlds in which everyone was wiped out by the bomb
... the sight of a raindrop falling is evidence that there exists a fairy a short distance away.
Taboo "justification". Justification is essentially a pointer to evidence or inference. After all the inference is said and done, the person who needs to provide more evidence is the person who has the more un-parsimonious hypothesis. You reject fairies based on a lack of justification because it's not parsimonious. You can't reject Many-Worlds on those same grounds, at least not without explaining more.
The difference is that the fairies interpretation of raindrops has different maths than the non-fairy interpretation of raindrops. When the mathematically-rigorous descriptions for two different hypotheses are different, there is a clear correct answer as to which is more parsimonious.
Many-worlds has exactly the same mathematical description as the alternative, so it's hard to say which is more parsimonious. You can't say that Single-World is the default and Many Worlds requires justification. This is why I claim that it is first a question of ontology (a question of what we choose to define as reality), and then maybe we can talk about the epistemology and whether or not the statement is "True" within our definitions... after we clarify our ontology and define the relationship between ontology and parsimony, not before.
It seems like something has gone terribly wrong when our ethical decisions depend on our interpretation of quantum mechanics.
Yes. Someone has hooked up a universe-destroying bomb and is offering to make the outcome quantum. I think that covers it.
'Copenhagen' isn't so much an interpretation as a relatively traditional, relatively authoritative body of physics slogans. Depending on which Copenhagenist you speak to, the interpretation might amount to Objective Collapse, or Operationalism, or Metaphysical Idealism, or Quietism. The latter three aren't so much alternatives to MWI as alternatives to the very practice of mainstream scientific realism; and Objective Collapse is generally empirically distinct from MWI (and, to the extent that it has made testable predictions, these have always been falsified.)
Bohmian Mechanics is an alternative to the MWI family of interpretations that really does look empirically indistinguishable. But it's about as different from Copenhagenism as you can get, and is almost universally dismissed by physicists. Also, it may not solve this problem; I haven't seen discussion of whether the complexity of the BM pilot wave is likely to itself encode an overwhelming preponderance of mental 'ripples' that crowd out the moral weight of our own world. Are particles needed for complex biology-like structure in BM?
My understanding was that many-worlds is indistinguishable by observation from the Copenhagen interpretation. Has this changed?
According to MWI you can put arbitrarily large systems into quantum superposition, whereas according to CI when the system is sufficiently large the wavefunction will collapse.
According to MWI you can put arbitrarily large systems into quantum superposition
Yes and no. According to MWI, there is no theoretical limit to how large a system in quantum superposition can be, yes. But to keep the system in quantum superposition without making two (or more) worlds that will never interact with each other again, you have to keep them from interacting with the rest of the world (in a way that is linked to the kind of superposition). And practically that is very hard to do for large-scale systems. That's an issue with quantum computing, for example: the more qubits you try to add, the harder it is to keep them isolated.
But the point would remain in that case that there is in principle an experiment to distinguish the theories, even if such an experiment has yet to be performed?
Although (and I admit my understanding of the topic is being stretched here) it still doesn't sound like the central issue of the existence of parallel universes with which we may no longer interact would be resolved by such an experiment. It seems more like Copenhagen's latest attempt to define the conditions for collapse would be disproven without particularly necessitating a fundamental change of interpretation.
For Copenhagen, yes, but MWI and Copenhagen aren't the only two interpretations of quantum mechanics worth thinking about.
In truth, you'll find few physicists who treat the Copenhagen Interpretation as anything but convenient shorthand (and not usually as shorthand for MWI).
If we managed to put human-sized systems into superposition, that'd rule out CI AFAICT. And before that, the larger the systems we manage to put into superposition the less likely CI will seem.
Good point. Decoherence makes MWI de facto indistinguishable from CI except that the maximum size of systems you can put into superpositions depends on the temperature rather than gravity/consciousness/whatever.
If anything this seems like a (weak) argument in favour of total utilitarianism, in that it doesn't suffer from giving different answers according to one's choice among indistinguishable theories.
Oh, total utilitarianism has its own problem with indistinguishable theories :-)
See http://lesswrong.com/lw/g9n/false_vacuum_the_universe_playing_quantum_suicide/ and http://lesswrong.com/r/discussion/lw/j3i/another_problem_with_quantum_measure/
Fair point, it sounds like it's a coincidental victory for total utilitarianism in this particular case.
In my understanding of the Many Worlds Interpretation, there should exist some measure in which the supercomputer accidentally computes 0 when the digit is actually nonzero, because enough quantum events happened to hit a bit, and hence the bomb doesn't blow up; and, for that matter, the reverse: enough quantum events hit a bit and turn a correctly computed 0 into some other digit. Presumably these chances are there, and they represent some small, nonzero measure.
But as far as I can tell, the fact that this occurs seems to render a substantial premise of the question moot, because it means neither trigger leaves zero quantum measure, which is what we were postulating paying money to avoid.
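A back-of-the-envelope sketch of that residual measure; both the per-operation flip rate and the operation count below are made-up illustrative numbers, not anything from the post.

```python
import math

# Illustrative only: the quantum measure in which at least one uncorrected
# bit flip changes the supercomputer's output. Both numbers are assumptions.
per_op_flip = 1e-25   # assumed per-operation flip probability
n_ops = 1e20          # assumed number of operations in computing the digit

# P(at least one flip) = 1 - (1 - p)^n, computed stably for tiny p
residual = -math.expm1(n_ops * math.log1p(-per_op_flip))
print(residual)  # ~1e-5 with these made-up numbers: small, but not zero
```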
So to verify this:
1: Am I correct that MWI does imply that there will be some quantum measure left even when using the logical trigger?
2: Does this actually render a substantial part of the original scenario moot, or does it still apply for reasons I don't understand?
That depends on whether you have some kind of "mangled worlds" hypothesis (in short, a hypothesis that worlds with too low a probability will be unstable and collapse due to contamination from "nearby" worlds).
As long as we don't know where the Born rule comes from in MWI, it's hard to say whether all worlds are "real" and how "real" they are, or whether there is some kind of boundary below which a world isn't "real" for practical purposes (like not being stable enough to allow a consciousness to exist in it).
Are we assuming that every inhabited Everett branch has such a doomsday device, and the same decision about the trigger will be made in each branch? If you're only one of a huge number of universes, then 90% of your branch dying vs. a 90% chance of your branch dying isn't going to make much of a difference.
How do you see that? Most consequentialist theories would assume that "parallel universes" that you can't affect have limited impact on your choices in this one.
You could use a quantum random number generator to make your decision. Then you ensure there is an Everett branch in which humanity continues to exist, but you only pay the $5 1% (say) of the time.
That's only true if your utility function is linear. If your utility function is nonlinear, and you care about humanity existing, but you don't care as much about how much humanity exists, then a doomsday device isn't nearly as bad if you know humanity will continue to exist in a parallel universe. I assumed that this is why someone would prefer 90% of the measure of the universe being destroyed to a 90% chance of the whole thing being destroyed. Is there another reason you'd prefer the former?
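To make the nonlinearity concrete, here is a minimal sketch; the particular utility functions are illustrative assumptions of mine, since the thread doesn't commit to any.

```python
# Compare: (A) quantum trigger - 10% of the measure survives for certain,
# versus (B) logical trigger - a 10% chance that all of the measure survives.
# u(x) is the utility of a multiverse in which a fraction x of humanity exists.

def expected_utility(u):
    quantum = u(0.1)                       # (A) certain outcome
    logical = 0.1 * u(1.0) + 0.9 * u(0.0)  # (B) 10%/90% gamble
    return quantum, logical

linear = lambda x: x          # strict total utilitarian: value proportional to measure
concave = lambda x: x ** 0.1  # mostly cares that humanity exists at all (illustrative)

print(expected_utility(linear))   # (0.1, 0.1): indifferent between the triggers
print(expected_utility(concave))  # (~0.79, 0.1): strongly prefers the quantum trigger
```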
Is there another reason you'd prefer the former?
There are some total utilitarians who are (or would like to be) indifferent between the two options - I've chatted with them.
If the dilemma is only taking place in a small portion of the branches, the other branches will survive regardless of the choice, which breaks the argument about many-worlds total extinction risk.
I assumed that, even if most branches don't have the machine, the machine's influence reaches to all branches, so that it can destroy all of them along with ours.
The thought experiment is about eliciting some of the normative content of truth vs. falsity of MWI, in terms that don't assume MWI. The meaning of "destroy all MWI branches" is given in terms of MWI, so this clause wouldn't respect the motivation of the thought experiment.
The thought experiment is about eliciting some of the normative content of truth vs. falsity of MWI, in terms that don't assume MWI.
That is not my reading. Consider this part:
If you treat quantum measure squared exactly as probability, then you shouldn't see any reason to replace the trigger. But if you believe in many worlds quantum mechanics (or think that MWI is possibly correct with non-zero probability), you might be tempted to accept the deal - after all, everyone will survive in one branch.
The post then goes on to argue that there is a dilemma here, that an apparently plausible case can be made for either choice, assuming that MWI is true.
I take the post to be saying, "Here's an interesting dilemma. Well, it's only interesting if there's a possibility that MWI is true. That is, if you know that MWI is false, then the answer is obvious. But, granting the possibility of MWI for the sake of argument, what would you do?"
If many qualitatively different branches are consequentialistically optimizing a common goal, all these branches become better according to that goal, even if the specific situations and actions taken in them are significantly different. On the other hand, if these branches respond to your argument and abandon their optimization due to rarity of their particular situations and possible actions, all the branches would remain poorly optimized.
(More generally, an optimizer doesn't care about the scope of their influence, only about comparison of available choices, however insignificant.)
The thing missing here is that there are plenty of branches where the doomsday device was never built, or a different digit was chosen, and humanity will continue in those.
Would you similarly consider a 90% chance of total extinction and destruction of the universe equivalent to killing 9/10ths of the human race (and 9/10ths of the universe)?
Hm, in that case I guess I would prefer deterministically destroying 90% of the universe. A universe with 10% of the people is a bigger improvement over the empty universe than a universe with 100% of the people is over the 10% universe, so we have a concave utility function and risk aversion. As you write, to be indifferent I guess I would have to subscribe to a "strong total utilitarian" principle, that I value a universe exactly according to the number of people in it.
I take it that your argument is that the same reasoning should apply to multiverses, and we should pick the alternative that leaves a guaranteed 10% remaining Everett branches? That's a neat perspective which I had not appreciated before: the quantum process is more deterministic than the logical process, since it destroys (part of) the multiverse deterministically instead of subjectively-randomly.
I note that for this argument to go through, you need that our utility function over different multiverses really is concave, which you didn't argue for. We choose between different universes every day, so we have relatively well-developed intuitions about which universe we prefer. I rarely think about which multiverse I prefer to live in, so I'm less confident that the utilitarian principle is wrong there. And rejecting it leads to the strange trades.
I feel this discussion doesn't capture my first intuition about the problem, though, which is about subjective versus objective probability. If I knew what the 10^100th digit was, then of course I would have an opinion about whether I wanted the digit or the qubit to decide. But when I don't know either way, it seems weird to care about the exact mechanism used to set off the bomb. So it seems that not only do I value the collection of quantum future worlds as a sum weighted by their measure, I would also extend this to the collection of possible future worlds (where a world is possible if, as far as I know, it could happen).
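One way to write that extension down (this formalization is mine, not something from the post): value the whole collection of futures as

V = Σ_w p(w) · U(w)

where p(w) lumps together ordinary subjective credence (e.g. over the unknown digit of pi) and quantum measure-squared, and U(w) is the value of future world w. Under that weighting the two triggers come out identical - each gives V = 0.1 · U(humanity survives) + 0.9 · U(extinct) - so caring about which trigger is used requires treating the two kinds of weight differently.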
I note that for this argument to go through, you need that our utility function over different multiverses really is concave, which you didn't argue for.
I would agree with that (on grounds of excessive duplication of almost-identical agents, for one), but the point of this post is more to get people's reactions than to push a particular theory.
Child, I'm sorry to tell you that the world is about to end. Most likely. You see, this madwoman has designed a doomsday machine that will end all life as we know it - painlessly and immediately. It is attached to a supercomputer that will calculate the 10^100th digit of pi - if that digit is zero, we're safe. If not, we're doomed and dead.
However, there is one thing you are allowed to do - switch out the logical trigger and replace it with a quantum trigger that instead generates a quantum event which will prevent the bomb from triggering with 1/10th measure squared (in the other cases, the bomb goes off). Are you OK with paying €5 to replace the triggers like this?
If you treat quantum measure squared exactly as probability, then you shouldn't see any reason to replace the trigger. But if you believe in many worlds quantum mechanics (or think that MWI is possibly correct with non-zero probability), you might be tempted to accept the deal - after all, everyone will survive in one branch. But strict total utilitarians may still reject the deal. Unless they refuse to treat quantum measure as akin to probability in the first place (meaning they would accept all quantum suicide arguments), they tend to see a universe with a tenth of measure-squared as exactly equal in value to a 10% chance of a universe with full measure. And they'd even do the reverse, replacing a quantum trigger with a logical one, if you paid them €5 to do so.
Still, most people, in practice, would choose to change the logical bomb for a quantum bomb, if only because they were slightly uncertain about their total utilitarian values. It would seem self-evident that risking the total destruction of humanity is much worse than reducing its measure by a factor of 10 - a process that would be undetectable to everyone.
Of course, once you agree with that, we can start squeezing. What if the quantum trigger only has a 1/20 measure-squared "chance" of saving us? 1/1000? 1/10000? If you don't want to fully accept the quantum immortality arguments, you need to stop - but at what point?
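To make the squeeze concrete, a minimal sketch; the utility functions are illustrative assumptions, since the post doesn't commit to any particular one.

```python
# Compare the quantum trigger (a fraction q of the measure survives for certain)
# with the logical trigger (probability 0.1 that everything survives), as q shrinks.

linear = lambda x: x          # strict total utilitarian: value proportional to measure
concave = lambda x: x ** 0.1  # mostly cares that humanity exists at all (illustrative)

logical_value = 0.1 * 1.0     # same for both utilities here, since u(1)=1 and u(0)=0

for q in [1/10, 1/20, 1/1000, 1/10000]:
    print(q, linear(q) > logical_value, concave(q) > logical_value)
# linear:  never prefers the quantum trigger once q drops below 0.1 (and ties at 0.1)
# concave: still prefers the quantum trigger even at q = 1/10000 (~0.4 vs 0.1)
```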