
timtyler comments on Worse Than Random

Post author: Eliezer_Yudkowsky 11 November 2008 07:01PM (25 points)




You are viewing a single comment's thread.

Comment author: timtyler 23 January 2010 10:03:37PM 2 points

That's an "improvement" that's unlikely to happen before universal heat death.

Improving an algorithm by adding randomness is simple in many cases - while improving it further may be totally impractical and ineffectual.
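A standard illustration of that first point (my example, not from the thread): quicksort with a fixed pivot rule degrades to quadratic time on already-sorted input, and the one-line fix of picking the pivot at random restores expected O(n log n) on every input ordering, while the genuinely derandomized fix (e.g. a median-of-medians pivot) is far more work. A minimal Python sketch, with names of my own choosing:

```python
import random
import sys
import time

def quicksort_first_pivot(xs):
    """Deterministic quicksort: always pivots on the first element.
    Degrades to O(n^2) time on already-sorted (or adversarial) input."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort_first_pivot(left) + [pivot] + quicksort_first_pivot(right)

def quicksort_random_pivot(xs):
    """The one-line 'improvement by randomness': choose the pivot uniformly
    at random, giving expected O(n log n) time regardless of input order."""
    if len(xs) <= 1:
        return xs
    pivot = xs[random.randrange(len(xs))]
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return quicksort_random_pivot(left) + mid + quicksort_random_pivot(right)

if __name__ == "__main__":
    sys.setrecursionlimit(4000)        # the fixed-pivot version recurses to depth n
    data = list(range(2000))           # already sorted: worst case for the fixed pivot
    t0 = time.perf_counter(); quicksort_first_pivot(data); t1 = time.perf_counter()
    quicksort_random_pivot(data);                           t2 = time.perf_counter()
    print(f"fixed pivot:  {t1 - t0:.3f}s")
    print(f"random pivot: {t2 - t1:.3f}s")
```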

Comment author: Eliezer_Yudkowsky 23 January 2010 10:27:51PM 2 points

Certainly. That, however, is a mere matter of limited computing power, not of theory.

Comment author: timtyler 23 January 2010 11:09:49PM * 4 points

The uncertainty principle would appear to represent a problem for the thesis. Secure hardware random number generators are "really difficult" to predict. That appears to be a case where you can't - in practice - beat a random strategy, whereas a random strategy beats all kinds of stupid strategies.

The "any algorithm that can be improved by randomization" is redundant - and does no useful conceptual work. Idiotic algorithms that can be improved by adding randomness are two-a-penny.

The "can always be improved further by derandomization" is not true - in practice. Sometimes, you just can't find a way to do any better than the random algorithm.

I think the correct principle in this area is that a deterministic algorithm can always do at least as well as a random one - provided its workings can be kept secret.

The main problem case for this principle occurs when you face an opponent with a lot of resources and cryptographic skills - but with sufficient care you should be able to keep them busy until the universal heat death.
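To make the "provided its workings can be kept secret" principle concrete, here is a toy sketch of my own (none of these names come from the thread): in repeated matching pennies against an opponent that predicts a repeat of your last move, a naively deterministic player is fully exploited, a player drawing from an OS-backed secure generator is not, and a deterministic player driven by a keyed hash - its secret workings - does just as well as the random one while remaining fully reproducible given the key.

```python
import hashlib
import secrets

ROUNDS = 10_000

def pattern_opponent(history):
    """Cheap exploit: predict that the player repeats their previous move."""
    return history[-1] if history else 0

def play(strategy):
    """Repeated matching pennies: the player scores when the opponent fails to match."""
    history, score = [], 0
    for i in range(ROUNDS):
        guess = pattern_opponent(history)
        move = strategy(i, history)
        score += 1 if move != guess else 0
        history.append(move)
    return score / ROUNDS

# 1. "Stupid" deterministic strategy: always play 1 (trivially predictable).
always_one = lambda i, h: 1

# 2. Random strategy, using an OS-backed generator that is hard to predict.
random_strategy = lambda i, h: secrets.randbelow(2)

# 3. Derandomized strategy: a fixed secret key fed through a hash acts as a
#    keyed pseudorandom bit source - deterministic, but opaque to the opponent
#    for as long as the key stays secret.
SECRET_KEY = secrets.token_bytes(16)
def keyed_strategy(i, h, key=SECRET_KEY):
    digest = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return digest[0] & 1

if __name__ == "__main__":
    print("always-one win rate:", play(always_one))       # ~0.0: fully exploited
    print("random win rate:    ", play(random_strategy))  # ~0.5
    print("keyed-hash win rate:", play(keyed_strategy))   # ~0.5, yet deterministic
```

The keyed-hash player is, of course, only as unpredictable as its key is secret - which is exactly the caveat the comment raises about opponents with cryptographic resources.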

Comment author: Tyrrell_McAllister 23 January 2010 10:34:57PM * 5 points

I think that this is a better de-randomization: Presumably there is some set S of numbers that have been observed to pass Diehard tests reliably. The de-randomized algorithm is simply to read off S. Computationally, this is cheaper than generating your own random set and reading it off.

ETA: And, if I'm not mistaken, if S has already been observed to pass Diehard tests reliably, then it is even more likely to pass them in the future than is a newly generated random set.
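A rough sketch of that read-off-S idea (the file name, size, and choice of generator are mine, and actually running the Diehard/Dieharder battery against the stored set is left out): generate a candidate set once, keep it, and on every later call simply replay it.

```python
import os
import secrets

CACHE_PATH = "fixed_set_S.bin"   # hypothetical location of the stored set S

def fixed_set(n_bytes: int = 1_000_000) -> bytes:
    """Return the fixed set S, generating it only on the very first call.

    Later calls merely read the file back, which is cheaper than generating
    (and re-testing) a fresh random set every time.
    """
    if not os.path.exists(CACHE_PATH):
        # Candidate S; vet it with a test battery such as Dieharder before relying on it.
        candidate = secrets.token_bytes(n_bytes)
        with open(CACHE_PATH, "wb") as f:
            f.write(candidate)
    with open(CACHE_PATH, "rb") as f:
        return f.read()
```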