# Daniel_I._Lewis comments on Worse Than Random - Less Wrong

25 11 November 2008 07:01PM


Comment author: Daniel_I._Lewis, 12 November 2008 05:24:18AM, 1 point

Brian: How does the randomness tie in to acquired knowledge, and what is the superior non-random method making better use of that knowledge?

The knowledge in this case is your belief about the distribution of input lists. Let's say, for the sake of argument, that you get sorted lists (forwards or backwards) more often than chance, and the rest of the time you get a random permutation.

On a random list, one choice of pivot is as good as another. On a sorted list, though, the ends are always a bad choice. Picking the first item is thus stupider than average, and you can do better with a random pivot. But you can do even better by picking the middle item: it's always the median when the list is sorted, and it's no worse than random when the list is random.
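The three pivot strategies can be sketched as follows. This is a minimal illustration, not code from the original comment; the `pivot_index` parameter and the strategy names are hypothetical.

```python
import random

def quicksort(xs, pivot_index):
    """Sort a list, choosing each pivot with the given strategy.

    pivot_index: a function mapping a sublist's length to the index
    of the element to use as the pivot.
    """
    if len(xs) <= 1:
        return xs
    pivot = xs[pivot_index(len(xs))]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less, pivot_index) + equal + quicksort(greater, pivot_index)

first = lambda n: 0                   # degenerate O(n^2) on sorted input
middle = lambda n: n // 2             # the median on sorted input; no worse than random otherwise
rand = lambda n: random.randrange(n)  # expected O(n log n) on every input
```

On a sorted list, `middle` splits each sublist exactly in half, while `first` peels off one element per call.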

What if you have an intelligent adversary trying to mount a denial-of-service attack against your quicksort? Then you should expect inputs crafted to make your algorithm do poorly. If you pick pivots randomly, your expected time to sort a list is completely independent of the list's initial ordering. For any particular sequence of pivot choices, the number of permutations that cause degenerate O(n^2) behavior is the same, but the adversary can no longer pick one of them ahead of time. Randomness can defeat an intelligent adversary for the same reason that optimization processes don't work in a completely random universe.
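To see the adversarial point concretely, one can count comparisons on a sorted list, the natural worst case an adversary would feed a first-element pivot. This counting harness is a hypothetical sketch using the standard accounting of len(xs) - 1 comparisons per partition:

```python
import random

def count_comparisons(xs, choose_pivot):
    """Return the number of comparisons quicksort-style partitioning
    makes on xs (assumed distinct), charging len(xs) - 1 per partition."""
    if len(xs) <= 1:
        return 0
    pivot = xs[choose_pivot(len(xs))]
    less = [x for x in xs if x < pivot]
    greater = [x for x in xs if x > pivot]
    return (len(xs) - 1
            + count_comparisons(less, choose_pivot)
            + count_comparisons(greater, choose_pivot))

n = 500
sorted_input = list(range(n))  # the adversary's pick against a first-element pivot

# Deterministic first-element pivot: quadratic on the crafted input.
bad = count_comparisons(sorted_input, lambda k: 0)

# Random pivot: expected ~2 n ln n comparisons on ANY input,
# so the crafted ordering no longer helps the adversary.
good = count_comparisons(sorted_input, lambda k: random.randrange(k))
```

Here `bad` is exactly n(n-1)/2, while `good` will almost certainly land near 2n ln n, regardless of how the input was ordered.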