
Brian_Jaress2 comments on Worse Than Random - Less Wrong

Post author: Eliezer_Yudkowsky, 11 November 2008 07:01PM


Comment author: Brian_Jaress2, 12 November 2008 11:18:23PM, 0 points

GreedyAlgorithm,

"If your source of inputs is narrower than 'whatever people anywhere using my ubergeneral sort utility will input' then you may be able to do better."

That's actually the scenario I had in mind, and I think it's the most common. Usually, when someone does a sort, they do it with a general-purpose library function or utility.

I think most of those are actually implemented as a merge sort, which is usually faster than quicksort, but I'm not clear on how that ties in to the use of information gained during the running of the program.
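For concreteness, here is a toy top-down merge sort of the kind I have in mind. It is only a sketch (the name merge_sort and the details are mine, not any particular library's implementation), but it shows the relevant point: it makes no data-dependent choices at all, random or otherwise.

    def merge_sort(items):
        # Sketch of a general-purpose, stable merge sort.
        # Note there is no pivot to choose, randomly or otherwise.
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Merge the two sorted halves.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged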

What I'm getting at is that the motivation for selecting the pivot randomly, and any speedup from switching to merge sort, don't seem to directly match any of the examples already given.

In his explanation and examples, Eliezer pointed to information gained while the algorithm is running. Choosing the best pivot-selection strategy for a quicksort is based instead on foreknowledge of the data, with random selection seeming best when you have the least foreknowledge.
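To make that contrast concrete, here is a toy quicksort. Again this is just a sketch, and the random_pivot flag is purely illustrative, not anyone's actual library code. With a fixed first-element pivot, an already-sorted input is the quadratic worst case; a random pivot gives expected O(n log n) comparisons on every input, which is exactly the hedge you want when you have no foreknowledge of the data.

    import random

    def quicksort(items, random_pivot=True):
        # Sketch contrasting the two pivot strategies under discussion.
        if len(items) <= 1:
            return list(items)
        if random_pivot:
            # No foreknowledge needed: expected O(n log n) on any input.
            pivot = items[random.randrange(len(items))]
        else:
            # Fixed choice: degenerates to quadratic time on sorted input.
            pivot = items[0]
        less = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        greater = [x for x in items if x > pivot]
        return quicksort(less, random_pivot) + equal + quicksort(greater, random_pivot)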

Likewise, the difference between quicksort and other sorts that may be faster doesn't have an obvious connection to the type of information that would help you choose between different pivot-selection strategies in a quicksort.

I'm not looking for a defense of nonrandom methods. I'm looking for an analysis of random selection in quicksort in terms of the principles that Eliezer is using to support his conclusion.