
Scott_Aaronson2 comments on The Weighted Majority Algorithm

Post author: Eliezer_Yudkowsky, 12 November 2008 11:19PM

Comment author: Scott_Aaronson2, 14 November 2008 01:26:51AM, 7 points

"I am interested in what Scott Aaronson says to this."

I fear Eliezer will get annoyed with me again :), but R and Stephen basically nailed it. Randomness provably never helps in average-case complexity (i.e., where you fix the probability distribution over inputs) -- since given any ensemble of strategies, by convexity there must be at least one deterministic strategy in the ensemble that does at least as well as the average.
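(A small numerical sketch of that convexity argument -- the toy input distribution, target labels, and 0/1 cost below are my own illustrative choices, not anything from the post: enumerate every deterministic strategy on a tiny input space, form an arbitrary randomized mixture of them, and check that the best deterministic strategy does at least as well on average as the mixture.)

```python
import itertools
import random

# Tiny toy setting (all choices here are illustrative): inputs are 3-bit
# strings under a fixed distribution, a deterministic "strategy" is any
# function from inputs to {0, 1}, and cost is 0/1 loss against some target.
inputs = [''.join(bits) for bits in itertools.product('01', repeat=3)]
input_prob = {x: 1 / len(inputs) for x in inputs}      # fixed input distribution
target = {x: int(x.count('1') >= 2) for x in inputs}   # arbitrary target labels

# Enumerate every deterministic strategy: each assigns a guess to each input.
deterministic = [dict(zip(inputs, guesses))
                 for guesses in itertools.product([0, 1], repeat=len(inputs))]

def avg_cost(strategy):
    """Average-case cost: expected 0/1 loss under the fixed input distribution."""
    return sum(p * (strategy[x] != target[x]) for x, p in input_prob.items())

# A randomized algorithm is just a probability mixture over deterministic ones.
random.seed(0)
weights = [random.random() for _ in deterministic]
mixture = [w / sum(weights) for w in weights]

mixture_cost = sum(p * avg_cost(s) for p, s in zip(mixture, deterministic))
best_cost = min(avg_cost(s) for s in deterministic)

# Convexity: the mixture's cost is a weighted average of deterministic costs,
# so it can never beat the best deterministic strategy in the ensemble.
print(f"randomized mixture cost: {mixture_cost:.4f}")
print(f"best deterministic cost: {best_cost:.4f}")
assert best_cost <= mixture_cost + 1e-12
```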

On the other hand, if you care about the worst-case running time, then there are settings (such as query complexity) where randomness provably does help. For example, suppose you're given n bits, you're promised that either n/3 or 2n/3 of the bits are 1's, and your task is to decide which. Any deterministic strategy to solve this problem requires looking at at least 2n/3 + 1 of the bits: if it examines only 2n/3 of them, an adversary can answer so that the unexamined n/3 bits still leave both counts possible. By contrast, a randomized sampling strategy only has to look at O(1) bits to succeed with high probability.
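(A minimal sketch of that randomized sampling strategy -- the function name, sample count, and test harness are my own illustrative choices: query a constant number of uniformly random positions, independent of n, and guess based on whether more than half of the sampled bits are 1; a Chernoff bound makes the error probability exponentially small in the number of samples.)

```python
import random

def has_two_thirds_ones(bits, samples=60, rng=random):
    """Randomized strategy for the promise problem: the input has either
    n/3 or 2n/3 ones. Query `samples` uniformly random positions -- a
    constant independent of n -- and guess "2n/3" iff more than half of
    the sampled bits are 1. The error probability is exponentially small
    in `samples` by a Chernoff bound."""
    hits = sum(bits[rng.randrange(len(bits))] for _ in range(samples))
    return hits * 2 > samples

# Quick check on the two kinds of promise instances (sizes are arbitrary).
n = 300_000
light = [1] * (n // 3) + [0] * (2 * n // 3)      # n/3 ones
heavy = [1] * (2 * n // 3) + [0] * (n // 3)      # 2n/3 ones
random.shuffle(light)
random.shuffle(heavy)

print(has_two_thirds_ones(light))   # almost always False
print(has_two_thirds_ones(heavy))   # almost always True
```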

Whether randomness ever helps in worst-case polynomial-time computation is the P versus BPP question, which is in the same league as P versus NP. It's conjectured that P=BPP (i.e., randomness never saves more than a polynomial). This is known to be true if really good pseudorandom generators exist, and such PRGs can be constructed if certain problems that seem to require exponentially large circuits really do require them (see this paper by Impagliazzo and Wigderson). But we don't seem close to proving P=BPP unconditionally.
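(To see how a strong enough PRG would give P = BPP: run the randomized algorithm once for every short seed, feeding it the PRG's output in place of true random bits, and take a majority vote; with O(log n)-bit seeds that's only polynomially many deterministic runs. The sketch below is purely illustrative -- the "PRG" is just Python's Mersenne Twister standing in for the kind of generator whose existence is the open question, and the randomized algorithm and all names are my own toy choices.)

```python
import random

def toy_prg(seed, out_len=128):
    """Stand-in PRG: stretch a short seed into out_len "pseudorandom" bits.
    (This is just the Mersenne Twister -- a real derandomization needs a
    generator that provably fools the algorithm, which is exactly what's open.)"""
    rng = random.Random(seed)
    return [rng.randrange(2) for _ in range(out_len)]

def randomized_alg(x, coins):
    """Toy BPP-style procedure: decide whether more than half of x is 1,
    sampling one position of x per 8 supplied coin flips."""
    picks = []
    for i in range(0, len(coins) - 7, 8):
        idx = int(''.join(str(b) for b in coins[i:i + 8]), 2) % len(x)
        picks.append(x[idx])
    return sum(picks) * 2 > len(picks)

def derandomized(x, seed_bits=12, coin_len=128):
    """Run randomized_alg on the PRG's output for *every* seed and take a
    majority vote. With O(log n)-bit seeds, 2**seed_bits is only polynomial,
    so the whole loop is deterministic polynomial time -- provided the PRG's
    output fools the algorithm."""
    votes = sum(randomized_alg(x, toy_prg(seed, coin_len))
                for seed in range(2 ** seed_bits))
    return votes * 2 > 2 ** seed_bits

x = [1] * 170 + [0] * 86   # 256 bits, about two-thirds of them 1
print(derandomized(x))     # True: most seeds lead the algorithm to the right answer
```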