paulfchristiano comments on Who thinks quantum computing will be necessary for AI? - Less Wrong

Post author: ChrisHallquist 28 May 2013 10:59PM


Comment author: paulfchristiano 29 May 2013 10:29:39PM, 3 points

As a pedantic note, if you want to derandomize algorithms it is necessary (and sufficient) to assume that E is not contained in P/poly, i.e. that polynomial size circuits cannot compute every function computable in exponential time. This is a much weaker assumption than P != NP, and is consistent with e.g. P = PSPACE. You don't have to be able to fool an adversary in order to fool yourself.
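To make the paradigm concrete, here is a minimal Python sketch (my illustration, not anything from the thread; `prg` and `randomized_alg` are hypothetical placeholders). The hardness assumption is what buys you a pseudorandom generator; once you have one, derandomization is just enumerating every seed and taking a majority vote:

```python
from itertools import product

def derandomize(randomized_alg, x, prg, seed_len):
    """Deterministically simulate a two-sided-error randomized algorithm.

    Assumes prg(seed) stretches a seed_len-bit seed into enough
    pseudorandom bits to fool randomized_alg on input x.
    """
    votes = 0
    total = 0
    for seed in product([0, 1], repeat=seed_len):
        r = prg(seed)                   # pseudorandom string for this seed
        votes += randomized_alg(x, r)   # algorithm returns 0 or 1
        total += 1
    # If the generator fools the algorithm, the majority over seeds agrees
    # with the majority over truly random strings.
    return votes * 2 > total
```

The loop runs 2^seed_len times, so a seed of length O(log n) gives a polynomial-time simulation; the circuit lower bound is exactly what lets the generator stretch so short a seed.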

This is sometimes sloganized as "randomness never helps unless non-uniformity always helps," since it is obvious that P << E (by the time hierarchy theorem) and generally believed that P/poly is about as strong as P on "uniform" problems. It would be a big shock if P/poly were so much bigger than P.
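For reference, one precise reading of the slogan (the contrapositive of the hardness-to-randomness direction, stated from memory and glossing over infinitely-often vs. almost-everywhere qualifiers):

```latex
% If randomness ever helps, then non-uniform circuits give a
% subexponential speedup over *everything* computable in exponential time.
\[
  \mathrm{BPP} \neq \mathrm{P}
  \;\Longrightarrow\;
  \mathrm{E} \subseteq \mathrm{SIZE}\bigl(2^{\varepsilon n}\bigr)
  \quad \text{for every } \varepsilon > 0.
\]
```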

But of course, in the worlds where you can't derandomize algorithms in the complexity-theoretic sense, you can still look up at the sky and use the whole universe to get your randomness. What this means is that you can exploit much of the stuff going on in the universe to do useful computation without lifting a finger, and since the universe is astronomically larger than the problems we care about, this is normally good enough. General derandomization is extremely interesting and important as a conceptual framework in complexity theory, but useless for actually computing things.
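That is exactly what deployed systems do in practice: they harvest entropy from the physical environment rather than building a generator out of a hardness assumption. A small illustrative sketch (mine, not from the thread), using Python's OS-backed entropy pool to drive the Miller-Rabin primality test:

```python
import random

# SystemRandom draws from the OS entropy pool (hardware noise, interrupt
# timings, ...): randomness from the world, not from a hardness assumption.
rng = random.SystemRandom()

def is_probably_prime(n, rounds=40):
    """Miller-Rabin test; error probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

print(is_probably_prime(2**61 - 1))  # True; 2^61 - 1 is a Mersenne prime
```

Primality happens to be one of the rare cases where an unconditional derandomization is known (AKS puts it in P), yet the randomized test is what everyone actually runs, which is the point.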

Comment author: ciphergoth 30 May 2013 05:59:45AM, 2 points

Are you referring to this result? Doesn't seem to be identical to what you said, but very close.

Comment author: paulfchristiano 30 May 2013 10:21:35AM, 3 points

Yeah, I was using "derandomize" slightly sloppily (to refer to a 2^(n^epsilon) slowdown rather than a poly slowdown). The result you cite is one of the main ones in this direction, but there are others (I think you can find most of them by googling "hardness vs. randomness").

If poly size circuits can't compute E, we can derandomize poly time algorithms in time 2^(m^c) for any c > 0, and if 2^(cm) size circuits can't compute E for some c > 0 (arbitrarily small c suffices), we can derandomize in poly time. Naturally there are other intermediate tradeoffs, but you can't quite get BPP = P just from E not having poly size circuits.
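As I recall them, the two endpoints of that tradeoff look as follows (the low end is in the style of Babai-Fortnow-Nisan-Wigderson, the high end is Impagliazzo-Wigderson; I'm again glossing over infinitely-often qualifiers):

```latex
% Super-polynomial circuit lower bounds for E give subexponential
% derandomization of BPP:
\[
  \mathrm{E} \not\subseteq \mathrm{P/poly}
  \;\Longrightarrow\;
  \mathrm{BPP} \subseteq \bigcap_{c > 0} \mathrm{DTIME}\bigl(2^{n^{c}}\bigr).
\]
% Exponential circuit lower bounds for E give polynomial-time
% derandomization:
\[
  \exists\, c > 0 :\ \mathrm{E} \not\subseteq \mathrm{SIZE}\bigl(2^{c n}\bigr)
  \;\Longrightarrow\;
  \mathrm{BPP} = \mathrm{P}.
\]
```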