
Quantum Russian Roulette

6 Post author: Christian_Szegedy 18 September 2009 08:49AM

Quantum Russian roulette is a game with 16 participants. Each is assigned a unique four-bit binary code and deposits $50,000. They are put into deep sleep using some drug. The organizer flips a quantum coin four times. Unlike in Russian roulette, here only the participant whose code matches the flips survives. The others are executed in a completely painless manner. The survivor takes all the money.

Let us assume that none of them has a family or very good friends. Then the only result of the game is that the winner will enjoy a much better quality of life. The others die in his Everett branch, but they live on in others. So everyone's only subjective experience will be that he went into a room and woke up $750,000 richer.

Although the game is extremely spooky to our human intuition, there are hardly any trivial objective reasons to oppose it under the following assumptions:

  1. Average utilitarianism
  2. Near 100% confidence in the Many-Worlds nature of our universe
  3. It is possible to kill someone without invoking any negative experiences.

The natural question arises whether one could somehow check that the method really works, especially that the Many-Worlds hypothesis is correct. At first sight, it looks impossible to convince anybody besides the participant who survived the game.

However, there is a way to convince a lot of people in a few Everett branches. You make a one-time big announcement on the Internet, TV, etc., saying that there is a well-tested quantum coin-flipper, examined by a community consisting of the most honest and trusted members of society. You pick some random 20-bit number and announce that you will flip the equipment 20 times; if the outcome matches the predetermined number, you will take it as million-to-one evidence that the Many-Worlds theory works as expected. Of course, only people in the right branch will be convinced. Nevertheless, they could be convinced enough to think seriously about the viability of quantum Russian roulette type games.

My question is: What are the possible moral or logical reasons not to play such games, from both individual and societal standpoints?

[EDIT] A simpler version (a single-player version of the experiment): The player generates lottery numbers by flipping quantum coins, then sets up equipment that kills him in his sleep if the drawn numbers don't coincide with his. In this way, he can guarantee waking up as a lottery millionaire.
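The branch arithmetic of this single-player version is easy to check with a toy Monte Carlo sketch (10 coin flips instead of the 20 above, purely to keep the run cheap; the ticket and seed are arbitrary). Each trial stands in for one "branch", and the player survives only in branches whose flips match his pre-chosen numbers:

```python
import random

# Toy branch-counting simulation of the single-player quantum lottery.
# N = 10 bits instead of 20 so that surviving branches actually show up
# in a modest number of trials.
random.seed(0)
N = 10
TICKET = [0] * N          # the player's pre-chosen lottery numbers (arbitrary)
TRIALS = 100_000

survivals = sum(
    [random.randrange(2) for _ in range(N)] == TICKET
    for _ in range(TRIALS)
)
print(survivals / TRIALS)  # close to 1/2**10: the player dies in ~99.9% of branches
```

Unconditionally the player almost always dies; conditional on waking up at all, he always wakes up a winner, which is the whole point of the argument.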

 

Comments (55)

Comment author: cousin_it 18 September 2009 02:41:45PM *  9 points [-]

Here's a funny reformulation of your argument: if you live in a quantum world where deaths are swift and painless, it makes sense to bet a lot of money on the assertion that you will stay alive. This incentivizes many other people to bet on your death and try hard to kill you. The market takes this into account, so your payoff for staying alive grows very high. Sounds like a win-win situation all around!

That said, at root it's just a vanilla application of quantum immortality which people may or may not believe in (the MWI doesn't seem to logically imply it). For a really mindblowing quantum trick see the Elitzur-Vaidman bomb tester. For a deeper exploration of the immortality issue, see Quantum suicide reality editing.

Comment author: Vladimir_Nesov 18 September 2009 03:12:30PM *  3 points [-]

For a really mindblowing quantum trick see the Elitzur-Vaidman bomb tester.

Interesting.

Consider a collection of bombs, some of which are duds. The bombs are triggered by a single photon. Usable bombs will absorb the photon and detonate. Dud bombs will not absorb the photon. The problem is how to separate the usable bombs from the duds.
...
A solution is for the sorter to use a mode of observation known as counterfactual measurement
...
In 1994, Anton Zeilinger, Paul Kwiat, Harald Weinfurter, and Thomas Herzog actually performed an equivalent of the above experiment, proving interaction-free measurements are indeed possible.

See also:

Comment author: Jonathan_Graehl 18 September 2009 06:47:01PM 1 point [-]

As I understand it, the only way to have a known-live undetonated bomb in this branch is to cause it to actually detonate in other branches.

Comment author: Vladimir_Nesov 18 September 2009 08:14:22PM 0 points [-]

Sorta, but not quite, as the probability of it actually detonating can be brought as close to 0 as one likes (if I'm not mistaken).

Comment author: Jonathan_Graehl 18 September 2009 09:36:30PM 0 points [-]

Yes - I didn't mean all other branches.

Comment author: Christian_Szegedy 18 September 2009 06:56:46PM *  1 point [-]

The Elitzur-Vaidman bomb tester is really amazing: more sophisticated than my scenario. However, it is quite different and not directly related.

Quantum suicide is much more similar; in fact, my post derives from an almost identical idea that I also posted on Less Wrong. I got that idea independently while reading that thread.

The reason I find the quantum roulette thought experiment interesting is that it is much less speculative. The payoff is clear, and the game can easily be motivated and performed with current technology.

Comment author: cousin_it 18 September 2009 06:59:39PM *  1 point [-]

Yes, that last point is important. Too bad we won't get volunteers from LW, because we're all kinda friends and would miss each other.

Comment author: Aurini 18 September 2009 10:43:00PM 0 points [-]

If we all did it together though, the worlds we left behind would be like some sort of geeky Atlas Shrugged dystopia. Heh.

Comment author: Christian_Szegedy 18 September 2009 08:50:01PM 0 points [-]

:) We could still make the second experiment and if the numbers match, throw a party... ;)

Comment author: Christian_Szegedy 19 September 2009 02:28:36AM 0 points [-]

I don't think my game is a simple reformulation of quantum immortality.

I don't even believe in quantum immortality. At least it is not implied in any way by MWI.

It is perfectly possible that you have increasing amounts of "quantum luck" in a lot of branches, but finally you die in each of them, because the life-supporting miracles extend your life by less and less, and once they hit the resolution of time, you simply run out of them and die for sure.

If you think time is continuous, then think of Zeno's paradox to see why the infinite sum of such pieces can add up to a finite amount of time gained.

Comment author: Yvain 18 September 2009 01:18:44PM 6 points [-]

However, there is a way to convince a lot of people in a few Everett branches. You make a one-time big announcement on the Internet, TV, etc., saying that there is a well-tested quantum coin-flipper, examined by a community consisting of the most honest and trusted members of society. You pick some random 20-bit number and announce that you will flip the equipment 20 times; if the outcome matches the predetermined number, you will take it as million-to-one evidence that the Many-Worlds theory works as expected. Of course, only people in the right branch will be convinced. Nevertheless, they could be convinced enough to think seriously about the viability of quantum Russian roulette type games.

I vaguely remember this being discussed here before, and people deciding it wouldn't work. Before the coin-flipper is run, you have a 1/2^20 chance of seeing your number come up, whether many worlds is true or false. That means that seeing the number come up doesn't tell you anything about whether MW is true or not. It just tells you you're extremely lucky: either lucky enough that the coin-flipper got a very specific number, or lucky enough to have ended up in the very specific universe where the flipper got that number.

Comment author: Christian_Szegedy 18 September 2009 08:33:14PM *  2 points [-]

I don't really buy that argument. It would apply to any measurement scenario. You could say in the two-mirror experiment: "These dots on the screen don't mean a thing, we just got extremely lucky." Which is of course always a theoretical possibility.

Of course you can derive that you were extremely lucky, but also that "someone got extremely lucky" [SGEL]. If you start with arbitrary prior estimates, e.g. P(SWI)=0.5 and P(MWI)=0.5, and update P(MWI) by Bayesian inference using

P(SGEL|SWI) = 1/2^20

P(SGEL|MWI) = 1

you get:

P(MWI|SGEL) = P(SGEL|MWI)P(MWI)/(P(SGEL|SWI)P(SWI) + P(SGEL|MWI)P(MWI)) =

0.5/((1/2^20)·0.5 + 0.5) = 1/(1 + 1/2^20) ≈ 1 − 1/2^20
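This update is easy to check numerically; a minimal sketch (the 0.5 priors are the arbitrary starting estimates from the comment, not measured quantities):

```python
# Bayesian update for P(MWI) after observing "someone got extremely lucky"
# (SGEL), with the priors and likelihoods used in the comment above.
p_mwi = 0.5                    # prior P(MWI)
p_swi = 0.5                    # prior P(SWI)
p_sgel_given_swi = 1 / 2**20   # under single-world, the match is a fluke
p_sgel_given_mwi = 1.0         # under many-worlds, some branch always matches

posterior = (p_sgel_given_mwi * p_mwi) / (
    p_sgel_given_swi * p_swi + p_sgel_given_mwi * p_mwi
)
print(posterior)               # 1/(1 + 1/2**20), i.e. about 1 - 1/2**20
```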

Comment author: saturn 18 September 2009 09:56:37PM 0 points [-]

Well, yes, but we can't peek into other Everett branches to check them for lucky people.

Comment author: Christian_Szegedy 19 September 2009 12:43:03AM 1 point [-]

I don't see why you would want to. You could only increase P(MWI) by finding any there.

Comment author: Eliezer_Yudkowsky 10 November 2012 06:39:33PM 5 points [-]

You pick some random 20-bit number and announce that you will flip the equipment 20 times; if the outcome matches the predetermined number, you will take it as million-to-one evidence that the Many-Worlds theory works as expected.

That doesn't convince anyone else; from their perspective, in Bayesian terms, the experiment has the same million-to-one improbability of producing this result, regardless of whether QTI is true, since they're not dying in the other worlds. From your perspective, you've ended up in a world where you experienced what feels like strong evidence of QTI being true, but you can never communicate this evidence to anyone else. If we hook up a doomsday device to the coinflipper, then in worlds where we survive, we can never convince aliens.

Comment author: Lightwave 18 September 2009 11:05:49AM 5 points [-]

Do you really need Many Worlds for quantum suicide/immortality? How's a Single Big World (e.g. Tegmark's Level 1 universe) different? Since a Single Big World is infinite, there will be an infinite number of people performing quantum suicide experiments, and some of them will seem to be immortal.

It seems to me that there is no practical difference between living in Many Worlds vs a Single Big World.

Comment author: randallsquared 18 September 2009 04:31:36PM 1 point [-]

Well, Many Worlds doesn't actually require infinity, which is a plus in my book. You can have a Many Worlds scenario with "only" (the number of discrete possible actions since the beginning of time) worlds, whereas a Single Big World that wasn't actually infinite would require further explanation.

Comment author: Wei_Dai 18 September 2009 10:26:39AM *  4 points [-]

If you assume 3, average utilitarianism says you should kill anyone who has below average utility, since that raises the average. So in the end you kill everyone except the one person who has the highest utility. There is no need for assumption 2 at all.

BTW, are you aware of any of the previous literature and discussion on quantum suicide and immortality?

See also http://www.acceleratingfuture.com/steven/?p=215

Comment author: thomblake 18 September 2009 06:40:21PM 1 point [-]

If you assume 3, average utilitarianism says you should kill anyone who has below average utility, since that raises the average. So in the end you kill everyone except the one person who has the highest utility.

You really need to also assume killing people doesn't decrease utility for those who are left, which usually doesn't work too well for humans...

Comment author: SforSingularity 18 September 2009 12:54:08PM 1 point [-]

Which is why we do not really believe in average utilitarianism...

Comment author: Psychohistorian 18 September 2009 07:59:45PM 2 points [-]

It is possible to kill someone without invoking any negative experiences.

Average utilitarianism requires more: it requires that it is possible to have a policy of systematically killing most people that does not result in negative experiences. This does not seem meaningfully possible for any agents that are vaguely human, so this is a straw-man objection to average utilitarianism, and a pretty bad one at that.

Comment author: Christian_Szegedy 18 September 2009 06:33:36PM *  0 points [-]

Thanks for the links. They look interesting.

The base idea seems identical to the quantum suicide scenarios. Although I did not know about them, my only contribution is to put up a convincing concrete scenario where suicide offers a benefit to each player.

Comment author: Vladimir_Nesov 18 September 2009 12:05:32PM *  3 points [-]

If the worlds in your MWI experiment are considered independent, you might as well do the same in a single deterministic world. Compare the expected utility calculations for one world and many-worlds: they'll look the same, you just exchange "many-worlds" with "possible worlds" and averaging with expectation. MWI is morally uninteresting, unless you do nontrivial quantum computation. Just flip a logical coin from pi and kill the other guys.

More specifically: when you are saying "everyone survives in one of the worlds", this statement gets intuitive approval (as opposed to doing the experiment in a deterministic world where all participants but one "die completely"), but there is no term in the expected utility calculation that corresponds to the sentiment.

Comment author: SforSingularity 18 September 2009 01:06:04PM 2 points [-]

MWI is morally uninteresting, unless you do nontrivial quantum computation.

I think that the intuition at stake here is something about continuity of conscious experience. The intuition that Christian might have, if I may anticipate him, is that everyone in the experiment will actually experience getting $750,000, because somehow the world-line of their conscious experience will continue only in the worlds where they do not die.

I think that, in some sense, this is a mistake, because it fundamentally rests upon a very very strong intuition that there exists a unique person who I will be in the future. This is an intuition that evolution programmed into us for obvious reasons: we are more likely to act like a good Could-Should-Would agent if we think that the benefits and costs associated with our actions will accrue to us rather than to some vague probability distribution over an infinite set of physically realized future continuations-of-me, with the property that whatever I do, some of them will die, and whatever I do, some of them will be rich and happy.

Comment author: Christian_Szegedy 18 September 2009 08:46:50PM 1 point [-]

You can assign high negative utility to certain death.

Comment author: Vladimir_Nesov 18 September 2009 08:50:58PM *  1 point [-]

You can assign high negative utility to certain death.

You can, but then you should also do so in the expected utility calculation, which is never actually done in most discussions of MWI in this context, and isn't done in this post. The problem is using MWI as rationalization for invalid intuitions.

Comment author: Jack 18 September 2009 10:08:08PM *  1 point [-]

I think there will probably have to be some set of worlds in which the losers of the game are alive but near death (it really is necessary to specify the means by which they die). So this really is a gamble, since participating means there is a very slight chance you will wake up $50,000 poorer and needing an ambulance. To figure out the overall average utility of the game, one would need to include the possible worlds in which the killing mechanism fails. Average utility over the universal wave function would probably still go up, but there would be a few branches where average utility would go down, dramatically. So the answer would depend on whether you were quantifying over the entire wave function or over individual worlds.

Or were you ignoring that as part of the thought experiment?

EDIT: I just thought it all out, and I think the probability of experiencing surviving the killing mechanism might be 5 in 6. See here. Basically, when calculating your future experience, the worlds in which you survive take over the probability space of the worlds in which you don't, so that given about a 5 in 6 chance of death, there is actually about a 5 in 6 chance you experience losing and surviving: in the set of worlds in which you lose, there is a 100% chance of experiencing one of the very few worlds in which you survive. This would make playing quantum Russian roulette considerably stupider than playing regular Russian roulette, unless you have a killing mechanism that can only fail by doing no damage at all.

Comment author: Jordan 18 September 2009 07:05:21PM 1 point [-]

With a similar technique you can solve any NP-complete problem. Actually, you can solve much harder problems: for instance, you can minimize any function you have enough computing power to compute. You could apply this to genetic algorithms, say, and arrive at the globally fittest solution. You could likewise solve for the "best" AI given some constraints, such as: find the program less than 10,000 characters long that performs best on a Turing test.

Comment author: Theist 18 September 2009 11:45:24PM 2 points [-]

If mangled worlds is correct (and I understand it correctly), then sufficiently improbable events fail to happen at all. What kind of limit would this place on the problems you can solve with "quantum suicide voodoo"?

Comment author: Christian_Szegedy 18 September 2009 08:45:11PM 0 points [-]

This is a very interesting point and somehow shakes my belief in the current version of MWI.

What I could imagine is that, since the total information content of the multiverse must be finite, there is some additional quantization going on that makes highly improbable branches "too fuzzy" to be observable. Or something like that.

Comment author: cousin_it 18 September 2009 08:47:31PM *  2 points [-]

Not likely. You're already in a highly improbable branch, and it's getting less probable every millisecond.

Comment author: Christian_Szegedy 19 September 2009 02:15:11AM 0 points [-]

We have seen in the sister topic that mangled worlds theory can in fact account for such information loss. However, MWT has deficiencies similar to single-world theories: non-local action, nonlinearity, discontinuity. That does not mean it can't be true.

Comment deleted 18 September 2009 08:57:09PM *  [-]
Comment author: pengvado 19 September 2009 02:33:55AM 0 points [-]

Why would the information content of a quantum universe be measured in bits, rather than qubits? 2^1000 qubits is enough to keep track of every possible configuration of the Hubble volume, without discarding any low magnitude ones. (Unless of course QM does discard low magnitude branches, in which case your quantum computer would too... but such a circular definition is consistent with any amount of information content.)

Comment author: cousin_it 18 September 2009 08:51:22PM *  -1 points [-]

Actually, you can solve much harder problems. For instance, you can minimize any function you have enough computing power to compute.

Why much harder? If computing the function is in P, minimization is in NP.

EDIT: this statement is wrong, see Wei Dai's comments below for explanation. Corrected version: "minimization is at most polynomially harder than an NP problem".

Comment author: Wei_Dai 18 September 2009 09:30:50PM *  2 points [-]

If computing the function is in P, minimization is in NP.

Why is that? In order for function minimization to be in NP, you have to be able to write a polytime-checkable proof of the fact that some input is a minimum. I don't think that's true in general.

I also don't see how function minimization can be accomplished using quantum suicide. You can compute the value of the function on every possible input in parallel, but how do you know that your branch hasn't found the minimum and therefore should commit suicide?

This seems like a relevant article, although it doesn't directly address the above questions.

ETA: I do see a solution now how to do function minimization using quantum suicide: Guess a random x, compute f(x), then flip a quantum coin n*f(x) times, and commit suicide unless all of them come out heads. Now the branch that found the minimum f(x) will have a measure at least 2^n times greater than any other branch.

ETA 2: No, that's not quite right, since flipping a quantum coin n*f(x) times takes time proportional to f(x) which could be exponential. So I still don't know how to do this.
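The scheme in the first ETA can at least be simulated classically by rejection sampling, ignoring the running-time caveat in ETA 2. Each "branch" guesses an input x, computes f(x), and survives only if n·f(x) coin flips all come up heads, i.e. with probability about 2^(-n·f(x)); surviving measure then concentrates on the minimizer. The objective f below is an arbitrary toy function, not anything from the thread:

```python
import random
from collections import Counter

# Classical rejection-sampling sketch of postselected minimization:
# survival probability 2**(-n*f(x)) per branch.
random.seed(1)

def f(x):
    # toy objective on {0, ..., 15}; its unique minimum (f = 0) is at x = 11
    return ((x - 11) ** 2 % 17) / 17

n = 60
survivors = []
for _ in range(200_000):
    x = random.randrange(16)                       # branch guesses a random input
    flips = round(n * f(x))
    if all(random.randrange(2) == 0 for _ in range(flips)):
        survivors.append(x)                        # all heads: this branch survives

print(Counter(survivors).most_common(1)[0][0])     # 11: the minimizer dominates
```

The branch with f(x) = 0 survives every time, while every other branch is suppressed by a factor of at least 2^(-n·Δf), which is the measure-concentration effect the comment describes.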

Comment author: cousin_it 18 September 2009 11:11:46PM *  0 points [-]

Why is that? In order for function minimization to be in NP, you have to be to write a polytime-checkable proof of the fact that some input is a minimum. I don't think that's true in general.

...That's an interesting way to look at it. For example take the traveling salesman problem: it's traditionally converted to a decision problem by asking if there's a route cheaper than X. Note that only "yes" answers have to have quickly checkable certificates - this is NP, not NP intersected with co-NP. Now if we start asking if X is exactly the minimum instead, would that be equivalent in complexity? And would it be helpful, seeing as it takes away our ability to do binary search?

In short, I'm a little funny in the head right now, but still think my original claim was correct.

Comment author: Wei_Dai 19 September 2009 04:07:20AM *  3 points [-]

This paper (which I found via this Ph.D. thesis) says that for TSP, the question "Is the optimal value equal to k" is D^P-complete. It also says D^P contains both NP and coNP, so assuming NP!=coNP, that problem is not in NP.

Your original claim was "If computing the function is in P, minimization is in NP." Technically, this sentence doesn't make sense because minimization is a functional problem, whereas NP is a class of decision problems. But the ability to minimize a function certainly implies the ability to determine whether a value is a minimum, and since that problem is not in NP, I think your claim is wrong in spirit as well.

As Nesov pointed out, TSP is in FP^NP, so perhaps a restatement of your claim that is correct is "If computing the function takes polynomial time, then it can be minimized in polynomial time given an NP oracle." This still seems to imply that function minimization isn't much harder than NP-complete problems.

Are there any problems in PostBQP that can be said to be much harder than NP-complete problems? I guess you asked that already.

Comment author: cousin_it 19 September 2009 05:20:53AM 2 points [-]

There was a dumb comment here, I deleted it. You're actually completely right, thanks!

Comment author: Jordan 18 September 2009 11:08:19PM 0 points [-]

You can't get the exact answer, but you can approach it exponentially quickly by doing a binary search. So, if you want N digits of accuracy, you're going to need O(N) time. Someone mentioned this elsewhere but I can't find the comment now.

Your method above would work better, actually (assuming the function is valued from 0 to 1). Just randomly guess x, compute f(x), then place a qubit in a superposition such that it is (1 - f(x))^n likely to be 0, and 1 - (1 - f(x))^n likely to be 1. Measure the qubit. If it is 1, quantum suicide; then a branch survives with probability (1 - f(x))^n, which is largest at the minimum. You can do the whole thing a few times, taking n = 10, n = 1000, and so on, until you're satisfied you've actually got the minimum. Of course, there's always the chance there's a slightly smaller minimum somewhere, so we're just getting an approximate answer like before, albeit an incredibly good approximate answer.
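The binary-search reduction mentioned a few comments up can be sketched in a few lines: given only a decision procedure for "is there an x with f(x) <= t?", you recover N bits of the minimum in O(N) decision queries. Here a brute-force check stands in for the NP oracle, and f is a made-up toy function:

```python
# Binary search over the threshold t, using one decision query per bit.
def min_via_decision(f, xs, bits=30):
    lo, hi = 0.0, 1.0                       # assume f maps into [0, 1]
    for _ in range(bits):
        mid = (lo + hi) / 2
        if any(f(x) <= mid for x in xs):    # one "NP oracle" query
            hi = mid                        # the minimum is at most mid
        else:
            lo = mid                        # the minimum exceeds mid
    return hi

xs = range(100)
f = lambda x: ((x * 37) % 100) / 100 + 0.005
print(min_via_decision(f, xs))              # converges to min f = 0.005
```

After `bits` iterations the bracket [lo, hi] has width 2^-bits, so the answer is accurate to about that many binary digits, matching the O(N)-queries-for-N-digits claim.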

Comment author: Vladimir_Nesov 18 September 2009 11:26:46PM *  0 points [-]

It's in FP^NP, not in NP. Only the decision problem for whether a given value C is more than the minimum is in NP, not minimization itself.

Comment author: cousin_it 18 September 2009 11:45:52PM *  0 points [-]

Yes, that's certainly right, thanks. I was careless in assuming that people would just mentally convert everything to decision problems, like I do :-)

ETA: Vladimir, I couldn't find any proof that NP is strictly contained in PP; is it known? Our thread seems to depend on that question only. It should be true because PP is so freaky powerful, but nothing's turned up yet.

Comment author: pengvado 19 September 2009 07:20:16AM *  2 points [-]

P⊆NP⊆PP⊆PSPACE, but we don't even have a proof of P≠PSPACE.

Unconditionally proven strict containments are pretty rare. You'll probably want to settle for some lesser evidence.

Comment author: cousin_it 19 September 2009 12:55:44PM *  0 points [-]

Hah. Thanks, that settles the issue for now: short of Scott Aaronson making an unexpected breakthrough, we have no proof that quantum suicide computing can solve any non-polynomial problems :-)

Comment author: Wei_Dai 19 September 2009 07:39:50AM 0 points [-]

The paper "On the computational power of PP and ⊕P" gives strong evidence that PP is strictly harder than PH, which contains NP.

Comment author: Jordan 19 September 2009 12:59:31AM *  0 points [-]

According to this, FNP is strictly harder than NP.

No idea about PP. Where did PP come up, or how does it relate to FNP?

ETA: Ah, I see. Aaronson's paper here shows that postselection (which is what our suicide voodoo is) gives you PP. Since postselection also gives you FNP, and FNP is harder than NP, we should have that PP is strictly harder than NP.

Comment author: cousin_it 19 September 2009 04:02:10AM *  0 points [-]

Jordan, I'm really sorry to inject unneeded rigor into the discussion again, but the statement "FNP is strictly harder than NP" doesn't work. It makes sense to talk about P=NP because P and NP are both sets of decision problems, but FNP is a set of function problems, so to compare it with NP you have to provide a mapping of some sort. Thus your argument doesn't prove that PP>NP yet.

For me personally, function problems are exotic beasts and I'd prefer to settle the power of quantum voodoo on decision problems first :-)

Comment author: Jordan 19 September 2009 05:56:02PM *  0 points [-]

There's a natural partial mapping in my mind, but a rigorous mapping is beyond me. Check out this. Apparently it can be shown that FNP is strictly harder than NP under certain conditions. I didn't understand the condition, or whether it was reasonable.

Regardless of all this complexity jazz, which has definitely exceeded my grasp, my random dictionary example still demonstrates that certain problems can be solved exponentially faster with postselection than without, even if it doesn't prove that FNP > NP.

ETA: Coming from a numerics background, decision problems seem like exotic beasts. I'd prefer to settle the power of quantum voodoo on function problems first =P

Comment author: Jordan 18 September 2009 09:47:30PM 0 points [-]

Consider the following problem, given N:

Create a list with 2^N entries, where each entry is a random number from 0 to 1. What is the smallest entry?

This is a function minimization problem, where the function takes n and returns the n-th element of the list. The cost of computing the function is O(1). There is no way to minimize it, however, without looking at every value, which is O(2^N). With quantum suicide voodoo, however, we can minimize it in O(1).

Comment author: cousin_it 18 September 2009 10:28:39PM *  1 point [-]

1) The problem of finding the smallest entry in a list is linear-time with respect to the size of the input; calling that size 2^N instead of M doesn't change things.

2) Accessing the nth element of a list isn't O(1), because you have to read the bits of n for chrissake.

3) I'm not sure how you're going to solve this with quantum voodoo in O(1), because just setting up the computation will take time (or space if you're parallel) proportional to the length of the input list.

Comment author: Jordan 18 September 2009 10:44:13PM *  0 points [-]

1) The problem of finding the smallest entry in a list is linear-time with respect to the size of the input; calling that size 2^N instead of M doesn't change things.

You can call it M, if you like. Then individual function evaluations cost log(log(M)). The point is the relative cost difference between function evaluations and function minimization.

2) Accessing the nth element of a list isn't O(1), because you have to read the bits of n for chrissake.

Good point. Function calls will be O(log(N)).

3) I'm not sure how you're going to solve this with quantum voodoo in O(1), because just setting up the computation will take time (or space if you're parallel) proportional to the length of the input list.

Right, the setup of the problem is separate. It could have been handed to you on a flash drive. The point is there exist functions such that we can calculate single evaluations quickly, but can't minimize the entire function quickly.

Comment author: cousin_it 18 September 2009 11:01:39PM *  0 points [-]

The point is the relative cost difference between function evaluations and function minimization.

Well yeah, but the definitions of P and NP involve the size of the input. Your original claim was that we can solve "much harder than NP-complete" problems with quantum voodoo. I don't think you've proved it yet, but if you revise it to somehow talk about "relative cost differences", it does become somewhat more believable.

Comment author: ThirdContact 02 April 2011 10:58:17AM 0 points [-]

I like it.

I'm developing a web game called 'quantum roulette' for the website of my upcoming feature Third Contact, and I happened on this thread.

the game will be at www.thirdcontactmovie.com at some point. You might find the film of interest too. Love the discussion.

Comment author: rwallace 18 September 2009 10:16:56AM 0 points [-]

I hold this to be quite a nice reductio ad absurdum of average utilitarianism. Total utilitarianism handles it fine: there is higher utility in being alive with no money than being dead.