mkehrt comments on Open Thread: February 2010 - Less Wrong

1 Post author: wedrifid 01 February 2010 06:09AM




Comment author: mkehrt 02 February 2010 01:26:30AM 4 points

A quick glance seems to indicate that they achieve these linear-time algorithms through massive parallelization. This is "cheating" because a linear-time sort of n elements needs O(n) processing units. While they seem to be arguing that this is acceptable because hardware is becoming more and more parallel, the argument breaks down for large n. One can easily use traditional algorithms to sort a billion elements in O(n log n) time on a single processor; for their algorithm to sort such a list in O(n) time, the number of processing units must grow linearly with the input, so a billion elements require (some constant factor times) a billion processing units.
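The trade-off being described, O(n) parallel time at the cost of O(n) processing units, is not specific to the paper; a classic illustration is odd-even transposition sort, a sorting network that finishes in n rounds when each adjacent pair has its own comparator. Here is a minimal sequential simulation of it (this is an illustrative stand-in, not the paper's algorithm):

```python
def odd_even_transposition_sort(a):
    """Simulate odd-even transposition sort on one processor.

    On parallel hardware with ~n/2 comparators, every compare-swap
    in a round happens simultaneously, so the whole sort takes
    O(n) parallel rounds -- but only by spending O(n) processing units.
    """
    a = list(a)
    n = len(a)
    for round_no in range(n):
        # Alternate between even-indexed and odd-indexed pairs.
        start = round_no % 2
        # In a real parallel implementation, this inner loop is one step.
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 3, 1, 4, 2]))
```

Simulated on one processor the inner loop brings the total work back to O(n^2), which is exactly the point: the "linear time" claim only holds if the processor count scales with n.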

I'm also vaguely perplexed by their basic argument. They want programming tools and computational models that are closer to the metal, to take advantage of the features of new machines. This ignores the fact that the current abstractions exist not just for historical reasons, but because they are easy to reason about.

This is all from a fairly cursory read of their paper, however, so take it with a grain of salt.

Comment author: pengvado 02 February 2010 01:53:34AM * 10 points

It takes O(n) memory units just to store a list of size n. Why should computers have asymptotically more memory units than processing units? You don't get to assume an infinitely parallel computer, but O(n)-parallel is only reasonable.

My first impression of the paper is: We can already do this, it's called an FPGA, and the reason we don't use them everywhere is that they're hard to program for.