gwern comments on Dragon Ball's Hyperbolic Time Chamber - Less Wrong Discussion

Post author: gwern, 02 September 2012 11:49PM (35 points)

Comment author: V_V 25 November 2012 09:31:28PM 2 points

Even so-called embarrassingly parallel problems, those whose theoretical performance scales almost linearly with the number of CPUs, in practice scale sublinearly in the amount of work done per dollar: massive parallelization comes with all kinds of overhead, from synchronization to cache contention to network communication costs to distributed storage issues. More mundanely, large data centers have significant heat dissipation problems: they all need active cooling, and many are housed in high-tech buildings specifically designed to address the issue. Many companies even place data centers in northern countries to take advantage of the colder climate, instead of putting them in, say, China, India or Brazil, where labor is much cheaper.

Problems that are not embarrassingly parallel are limited by Amdahl's law: as you increase the number of CPUs, performance quickly reaches an asymptote where the sequential parts of the algorithm dominate.
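To make the saturation concrete, here is a minimal Python sketch of Amdahl's law; the 95% parallelizable fraction is just an assumed illustrative figure, not anything from the discussion:

    # Amdahl's law: speedup on n CPUs when a fraction p of the work is parallelizable.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% of the work parallelizable, speedup saturates near 1/(1-p) = 20x.
    for n in (1, 10, 100, 1000, 10000):
        print(n, round(amdahl_speedup(0.95, n), 1))
    # -> 1.0, 6.9, 16.8, 19.6, 20.0 (approximately)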

> I can't help but think that there being no obvious candidates means the candidates wouldn't be fantastically useful.

Take P-complete problems, for instance. These are problems which can be solved efficiently (in polynomial time) on a sequential computer, but are conjectured to be inherently difficult to parallelize (the NC != P conjecture). This class contains problems of practical interest, notably linear programming and various problems in model checking. Being able to run these tasks overnight instead of in one year would be a significant advantage.
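As a concrete (if toy) instance of the linear programs meant here, a minimal sketch using scipy.optimize.linprog; the library choice and the numbers are illustrative assumptions:

    # Toy linear program: maximize x + 2y subject to x + y <= 4, x <= 3, x, y >= 0.
    # linprog minimizes, so we negate the objective.
    from scipy.optimize import linprog

    result = linprog(c=[-1, -2],
                     A_ub=[[1, 1], [1, 0]],
                     b_ub=[4, 3],
                     bounds=[(0, None), (0, None)])
    print(result.x)        # optimal point, here (0, 4)
    print(-result.fun)     # optimal value, here 8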

Comment author: gwern 25 November 2012 11:45:39PM 1 point

An HTC would come with serious overhead costs too; the cooling is just the flip side of the electricity. An HTC isn't in Iceland, and the obvious interpretation of an HTC as a very small pocket universe means that you have serious cooling issues as well (a year's worth of heat production to eject at each opening).
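A rough back-of-the-envelope for that parenthetical, assuming (purely for illustration) an ordinary ~1 kW machine rather than anything canonical about the HTC:

    # A year's worth of heat from a hypothetical 1 kW machine, to be ejected per opening.
    power_watts = 1_000            # assumed, roughly one heavily loaded server
    seconds_per_year = 365 * 24 * 3600
    joules = power_watts * seconds_per_year
    print(joules / 1e9)            # ~31.5 GJ
    print(joules / 3.6e6)          # ~8,760 kWh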

> Take P-complete problems, for instance. These are problems which can be solved efficiently (in polynomial time) on a sequential computer, but are conjectured to be inherently difficult to parallelize (the NC != P conjecture). This class contains problems of practical interest, notably linear programming and various problems in model checking. Being able to run these tasks overnight instead of in one year would be a significant advantage.

I'm not sure how much of an advantage that would be: there are pretty good approximations for some (most? all?) problems like linear programming (remember Grötschel's report citing a 43-million-fold speedup on a benchmark linear programming problem since 1988), and such stuff tends to asymptote. How much of an advantage is running for a year rather than the otherwise available days or weeks? Is it large enough to pay for a year of premium HTC computing power?
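One way to put those two numbers side by side (simple arithmetic on the figures already cited, nothing more):

    # Extra serial compute from running a year instead of the "otherwise available" time,
    # next to the 43-million-fold historical LP speedup cited above.
    hours_per_year = 365 * 24
    print(hours_per_year / 24)        # vs. one day: 365x
    print(hours_per_year / (24 * 7))  # vs. one week: ~52x
    print(43_000_000 / 365)           # the historical speedup still dwarfs it (~118,000x)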

Comment author: V_V 26 November 2012 03:14:36PM -1 points

> An HTC isn't in Iceland, and the obvious interpretation of an HTC as a very small pocket universe means that you have serious cooling issues as well (a year's worth of heat production to eject at each opening).

Of course, given that the HTC is a fictional device, you can always imagine arbitrary issues that make it uneconomical. I was considering the HTC just as a computer with 365x the serial speed of present-day computers, and asking whether there would be economically interesting batch (~1-day-long) computations to run on it.

> I'm not sure how much of an advantage that would be: there are pretty good approximations for some (most? all?) problems like linear programming (remember Grötschel's report citing a 43-million-fold speedup on a benchmark linear programming problem since 1988), and such stuff tends to asymptote.

These problems have polynomial time complexity; they don't asymptote. Linear programming, for instance, has quadratic worst-case time complexity in the size of the problem instance (and O(n^3.5) time complexity in the number of variables). For problems related to model checking (the circuit value problem, Horn-satisfiability, type inference), approximate solutions don't seem particularly useful.
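Taking these exponents at face value, a quick sketch of what a 365x serial speedup buys in instance size at fixed wall-clock time:

    # With running time growing as n**k, a 365x faster serial machine handles instances
    # about 365**(1/k) times larger in the same wall-clock time (using the exponents above).
    speedup = 365.0
    for k in (2.0, 3.5):
        print(k, round(speedup ** (1.0 / k), 1))
    # -> k=2.0: ~19.1x larger instances; k=3.5: ~5.4x larger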

Comment author: gwern 28 November 2012 01:28:57AM 1 point

> Of course, given that the HTC is a fictional device, you can always imagine arbitrary issues that make it uneconomical. I was considering the HTC just as a computer with 365x the serial speed of present-day computers, and asking whether there would be economically interesting batch (~1-day-long) computations to run on it.

Hm, I wasn't, except in the shift to the upload scenario, where the speedup is not from executing regular algorithms (presumably anything capable of executing emulated brains at 365x realtime will have much better serial performance than current CPUs). As an ordinary computer, there are still heat considerations: how does it take care of putting out 365x a regular computer's heat, even if it's doing 365x the work? And as a pocket universe as specified, heat is an issue; in fact, now that I think about it, Stephen Baxter invented a space-faring alien race in his Ring hard-SF universe which lives inside tiny pocket universes as the ultimate in heat insulation.

> These problems have polynomial time complexity; they don't asymptote. Linear programming, for instance, has quadratic worst-case time complexity in the size of the problem instance (and O(n^3.5) time complexity in the number of variables).

I was referring to the quality of the solutions produced by the approximation algorithms.

> For problems related to model checking (the circuit value problem, Horn-satisfiability, type inference), approximate solutions don't seem particularly useful.

A quick Google search turns up plenty of work on approximate solutions and approaches in model checking; for example, http://cs5824.userapi.com/u11728334/docs/77a8b8880f48/Bernhard_Steffen_Verification_Model_Checking_an.pdf includes a paper whose abstract reads:

> In this paper, we propose an approximation method to verify quantitative properties on discrete Markov chains. We give a randomized algorithm to approximate the probability that a property expressed by some positive LTL formula is satisfied with high confidence by a probabilistic system. Our randomized algorithm requires only a succinct representation of the system and is based on an execution sampling method. We also present an implementation and a few classical examples to demonstrate the effectiveness of our approach.

I'll admit I don't know much about the model checking field, though.