Jordan comments on Quantum Russian Roulette - Less Wrong

6 Post author: Christian_Szegedy 18 September 2009 08:49AM

Comments (55)

Comment author: Jordan 18 September 2009 07:05:21PM 1 point [-]

With a similar technique you can solve any NP-complete problem. Actually, you can solve much harder problems. For instance, you can minimize any function you have enough computing power to compute. You could apply this, for instance, to genetic algorithms and arrive at the globally fittest solution. You could likewise solve for the "best" AI given some constraints, such as: find the program of fewer than 10000 characters that performs best on a Turing test.
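[For concreteness, the recipe above can be caricatured classically as rejection sampling: every "branch" guesses a candidate, and only the branch whose guess passes the optimality check survives. A hedged sketch; the toy objective and names below are illustrative, not from the thread, and the certificate check is done by brute force, which is exactly the work the quantum trick claims to skip:]

```python
import random

def f(x):
    # toy objective standing in for a fitness / Turing-test score
    return (x - 5) ** 2

def is_global_min(x, domain):
    # The certificate check. Classically this is a full scan,
    # precisely the work the quantum-suicide trick claims to avoid.
    return all(f(x) <= f(y) for y in domain)

def postselect_minimize(domain):
    # Simulate 'branches' until one survives postselection:
    # each branch guesses an x; branches with non-minimal guesses 'die'.
    while True:
        x = random.choice(list(domain))
        if is_global_min(x, domain):
            return x

print(postselect_minimize(range(100)))  # -> 5
```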

Comment author: Theist 18 September 2009 11:45:24PM 2 points [-]

If mangled worlds is correct (and I understand it correctly), then sufficiently improbable events fail to happen at all. What kind of limit would this place on the problems you can solve with "quantum suicide voodoo"?

Comment author: Christian_Szegedy 18 September 2009 08:45:11PM 0 points [-]

This is a very interesting point and somehow shakes my belief in the current version of MWI.

What I could imagine is that since the total information content of the multiverse must be finite, there is some additional quantization going on that makes highly improbable branches "too fuzzy" to be observable. Or something like that.

Comment author: cousin_it 18 September 2009 08:47:31PM *  2 points [-]

Not likely. You're already in a highly improbable branch, and it's getting less probable every millisecond.

Comment author: Christian_Szegedy 19 September 2009 02:15:11AM 0 points [-]

We have seen in the sister topic that mangled-worlds theory can in fact account for such information loss. However, MWT has deficiencies similar to those of single-world theories: non-local action, nonlinearity, discontinuity. That does not mean it can't be true.

Comment deleted 18 September 2009 08:57:09PM *  [-]
Comment author: pengvado 19 September 2009 02:33:55AM 0 points [-]

Why would the information content of a quantum universe be measured in bits, rather than qubits? 2^1000 qubits is enough to keep track of every possible configuration of the Hubble volume, without discarding any low magnitude ones. (Unless of course QM does discard low magnitude branches, in which case your quantum computer would too... but such a circular definition is consistent with any amount of information content.)

Comment author: cousin_it 18 September 2009 08:51:22PM *  -1 points [-]

Actually, you can solve much harder problems. For instance, you can minimize any function you have enough computing power to compute.

Why much harder? If computing the function is in P, minimization is in NP.

EDIT: this statement is wrong, see Wei Dai's comments below for explanation. Corrected version: "minimization is at most polynomially harder than an NP problem".

Comment author: Wei_Dai 18 September 2009 09:30:50PM *  2 points [-]

If computing the function is in P, minimization is in NP.

Why is that? In order for function minimization to be in NP, you have to be able to write a polytime-checkable proof of the fact that some input is a minimum. I don't think that's true in general.

I also don't see how function minimization can be accomplished using quantum suicide. You can compute the value of the function on every possible input in parallel, but how do you know that your branch hasn't found the minimum and therefore should commit suicide?

This seems like a relevant article, although it doesn't directly address the above questions.

ETA: I do see a solution now how to do function minimization using quantum suicide: Guess a random x, compute f(x), then flip a quantum coin n*f(x) times, and commit suicide unless all of them come out heads. Now the branch that found the minimum f(x) will have a measure at least 2^n times greater than any other branch.

ETA 2: No, that's not quite right, since flipping a quantum coin n*f(x) times takes time proportional to f(x) which could be exponential. So I still don't know how to do this.
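[The measure-concentration part of the ETA scheme can be simulated classically, setting aside the ETA 2 runtime caveat: each branch guesses x, flips n*f(x) fair coins, and dies unless all come up heads, so a branch survives with probability 2^(-n*f(x)) and the minimizer dominates. A sketch with an illustrative integer-valued objective, not from the thread:]

```python
import random
from collections import Counter

def f(x):
    # toy integer-valued objective; the minimum is at x = 4
    return abs(x - 4)

def run_branch(n=8, domain=range(8)):
    # One 'branch': guess x, flip n*f(x) fair quantum coins,
    # and 'commit suicide' (return None) unless all come up heads.
    x = random.choice(domain)
    for _ in range(n * f(x)):
        if random.random() < 0.5:  # tails: this branch dies
            return None
    return x

# Surviving branches concentrate on the minimizer:
# the measure ratio against any other branch is about 2^(n * (f(x) - f_min)).
survivors = Counter(x for x in (run_branch() for _ in range(100000)) if x is not None)
print(survivors.most_common(1)[0][0])  # -> 4
```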

Comment author: cousin_it 18 September 2009 11:11:46PM *  0 points [-]

Why is that? In order for function minimization to be in NP, you have to be to write a polytime-checkable proof of the fact that some input is a minimum. I don't think that's true in general.

...That's an interesting way to look at it. For example take the traveling salesman problem: it's traditionally converted to a decision problem by asking if there's a route cheaper than X. Note that only "yes" answers have to have quickly checkable certificates - this is NP, not NP intersected with co-NP. Now if we start asking if X is exactly the minimum instead, would that be equivalent in complexity? And would it be helpful, seeing as it takes away our ability to do binary search?

In short, I'm a little funny in the head right now, but still think my original claim was correct.

Comment author: Wei_Dai 19 September 2009 04:07:20AM *  3 points [-]

This paper (which I found via this Ph.D. thesis) says that for TSP, the question "Is the optimal value equal to k" is D^P-complete. It also says D^P contains both NP and coNP, so assuming NP!=coNP, that problem is not in NP.

Your original claim was "If computing the function is in P, minimization is in NP." Technically, this sentence doesn't make sense because minimization is a functional problem, whereas NP is a class of decision problems. But the ability to minimize a function certainly implies the ability to determine whether a value is a minimum, and since that problem is not in NP, I think your claim is wrong in spirit as well.

As Nesov pointed out, TSP is in FP^NP, so perhaps a restatement of your claim that is correct is "If computing the function takes polynomial time, then it can be minimized in polynomial time given an NP oracle." This still seems to imply that function minimization isn't much harder than NP-complete problems.
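[The restated claim, minimization in polynomial time given an NP oracle, can be sketched as binary search over the value range, with one oracle query per threshold. The brute-force oracle below is a hypothetical stand-in for a real NP oracle; names are illustrative:]

```python
def minimize_with_np_oracle(f, domain, lo, hi):
    # Binary-search the minimum of an integer-valued f using only the
    # NP-style decision oracle "is there an x with f(x) <= t?".
    # In FP^NP each threshold query would be a single oracle call;
    # here the oracle is simulated by brute force for illustration.
    def oracle(t):
        return any(f(x) <= t for x in domain)
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(mid):
            hi = mid        # a witness exists at or below mid
        else:
            lo = mid + 1
    return lo               # the minimum value of f on the domain

print(minimize_with_np_oracle(lambda x: x * x + 2, range(-10, 11), 0, 200))  # -> 2
```

Only O(log(hi - lo)) oracle queries are needed, which is why the ability to answer the decision problem puts minimization at most polynomially above NP.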

Are there any problems in PostBQP that can be said to be much harder than NP-complete problems? I guess you asked that already.

Comment author: cousin_it 19 September 2009 05:20:53AM 2 points [-]

There was a dumb comment here, I deleted it. You're actually completely right, thanks!

Comment author: Jordan 18 September 2009 11:08:19PM 0 points [-]

You can't get the exact answer, but you can approach it exponentially quickly by doing a binary search. So, if you want N digits of accuracy, you're going to need O(N) time. Someone mentioned this elsewhere but I can't find the comment now.

Your method above would work better, actually (assuming the function is valued from 0 to 1). Just randomly guess x, compute (1 - f(x))^n, then place a qubit in a superposition such that it is (1 - f(x))^n likely to be 1, and 1 - (1 - f(x))^n likely to be 0. Measure the qubit. If it is 0, quantum suicide. That way a branch survives with probability (1 - f(x))^n, so the branch with the smallest f(x) ends up with exponentially more measure than any other. You can do the whole thing a few times, taking n = 10, n = 1000, and so on, until you're satisfied you've actually got the minimum. Of course, there's always the chance there's a slightly smaller minimum somewhere, so we're just getting an approximate answer like before, albeit an incredibly good approximate answer.

Comment author: Vladimir_Nesov 18 September 2009 11:26:46PM *  0 points [-]

It's in FP^NP, not in NP. Only the decision problem for whether a given value C is more than the minimum is in NP, not minimization itself.

Comment author: cousin_it 18 September 2009 11:45:52PM *  0 points [-]

Yes, that's certainly right, thanks. I was careless in assuming that people would just mentally convert everything to decision problems, like I do :-)

ETA: Vladimir, I couldn't find any proof that NP is strictly contained in PP; is it known? Our thread seems to hinge on that question alone. It should be true because PP is so freaky powerful, but nothing's turned up yet.

Comment author: pengvado 19 September 2009 07:20:16AM *  2 points [-]

P⊆NP⊆PP⊆PSPACE, but we don't even have a proof of P≠PSPACE.

Unconditionally proven strict containments are pretty rare. You'll probably want to settle for some lesser evidence.

Comment author: cousin_it 19 September 2009 12:55:44PM *  0 points [-]

Hah. Thanks, that settles the issue for now: short of Scott Aaronson making an unexpected breakthrough, we have no proof that quantum suicide computing can solve any non-polynomial problems :-)

Comment author: Wei_Dai 19 September 2009 07:39:50AM 0 points [-]

On the computational power of PP and ⊕P gives strong evidence that PP is strictly harder than PH, which contains NP.

Comment author: Jordan 19 September 2009 12:59:31AM *  0 points [-]

According to this, FNP is strictly harder than NP.

No idea about PP. Where did PP come up, or how does it relate to FNP?

ETA: Ah, I see. Aaronson's paper here shows that postselection (which is what our suicide voodoo is) gives you PP. Since postselection also gives you FNP, and since FNP is harder than NP, we should have that PP is strictly harder than NP.

Comment author: cousin_it 19 September 2009 04:02:10AM *  0 points [-]

Jordan, I'm really sorry to inject unneeded rigor into the discussion again, but the statement "FNP is strictly harder than NP" doesn't work. It makes sense to talk about P=NP because P and NP are both sets of decision problems, but FNP is a set of function problems, so to compare it with NP you have to provide a mapping of some sort. Thus your argument doesn't prove that PP>NP yet.

For me personally, function problems are exotic beasts and I'd prefer to settle the power of quantum voodoo on decision problems first :-)

Comment author: Jordan 19 September 2009 05:56:02PM *  0 points [-]

There's a natural partial mapping in my mind, but a rigorous mapping is beyond me. Check out this. Apparently it can be shown that FNP is strictly harder than NP, under certain conditions. I didn't understand the condition, or whether it was reasonable.

Regardless of all this complexity jazz, which has definitely exceeded my grasp, my random dictionary example still demonstrates that certain problems can be solved exponentially faster with postselection than without, even if it doesn't prove that FNP > NP.

ETA: Coming from a numerics background, decision problems seem like exotic beasts. I'd prefer to settle the power of quantum voodoo on function problems first =P

Comment author: Jordan 18 September 2009 09:47:30PM 0 points [-]

Consider the following problem, given N:

Create a list with 2^N entries, where each entry is a random number from 0 to 1. What is the smallest entry?

This is a function minimization problem, where the function takes n and returns the n-th element of the list. The cost of computing the function is O(1). There is no way to minimize it, however, without looking at every value, which is O(2^N). With quantum suicide voodoo, however, we can minimize it in O(1).
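[The gap being pointed at here can be made concrete, purely classically and with illustrative names; note that merely building the table already touches all 2^N entries, and the rejection loop below only mimics what postselection is claimed to buy:]

```python
import random

N = 16
table = [random.random() for _ in range(2 ** N)]  # the 'given' list
                                                  # (building it is already O(2^N) work)
def f(n):
    # one evaluation: a single indexed read (plus reading the bits of n)
    return table[n]

# Classically, finding the minimum touches all 2^N entries:
classical_min = min(f(n) for n in range(2 ** N))

# The quantum-suicide story instead 'prices' only one guess,
# postselected on having hit the minimum (simulated here by rejection):
while True:
    n = random.randrange(2 ** N)
    if f(n) == classical_min:
        break
print(f(n) == classical_min)  # -> True
```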

Comment author: cousin_it 18 September 2009 10:28:39PM *  1 point [-]

1) The problem of finding the smallest entry in a list is linear-time with respect to the size of the input; calling that size 2^N instead of M doesn't change things.

2) Accessing the nth element of a list isn't O(1), because you have to read the bits of n for chrissake.

3) I'm not sure how you're going to solve this with quantum voodoo in O(1), because just setting up the computation will take time (or space if you're parallel) proportional to the length of the input list.

Comment author: Jordan 18 September 2009 10:44:13PM *  0 points [-]

1) The problem of finding the smallest entry in a list is linear-time with respect to the size of the input; calling that size 2^N instead of M doesn't change things.

You can call it M, if you like. Then individual function evaluations cost log(M). The point is the relative cost difference between function evaluations and function minimization.

2) Accessing the nth element of a list isn't O(1), because you have to read the bits of n for chrissake.

Good point. Function calls will be O(N).

3) I'm not sure how you're going to solve this with quantum voodoo in O(1), because just setting up the computation will take time (or space if you're parallel) proportional to the length of the input list.

Right, the setup of the problem is separate. It could have been handed to you on a flash drive. The point is there exist functions such that we can calculate single evaluations quickly, but can't minimize the entire function quickly.

Comment author: cousin_it 18 September 2009 11:01:39PM *  0 points [-]

The point is the relative cost difference between function evaluations and function minimization.

Well yeah, but the definitions of P and NP involve the size of the input. Your original claim was that we can solve "much harder than NP-complete" problems with quantum voodoo. I don't think you've proved it yet, but if you revise it to somehow talk about "relative cost differences", it does become somewhat more believable.