Lumifer comments on The Power of Noise - LessWrong

Post author: jsteinhardt, 16 June 2014 05:26PM




Comment author: jsteinhardt, 17 June 2014 02:39:58AM (2 points)

I doubt that Eliezer would refuse to implement a probabilistic solution on the grounds that it's not pure enough and so no solution at all is better than a version tainted by its contact with the RNG.

Sure, I agree with this. But he still seems to think that, in principle, it is always possible to improve on a randomized algorithm. Doing so, however, requires some knowledge of the distribution over the environment, and that would break modularity.
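To make the trade-off concrete, here is a minimal sketch (not from the thread; the problem setup and function names are my own illustration) of the standard example: finding a 1 in a bit array that is at least two-thirds ones. The randomized version needs no knowledge of the input's structure; the "improved" deterministic version only wins by importing an assumption about where the ones tend to be, coupling the algorithm to its environment.

```python
import random

def find_one_randomized(bits, trials=64):
    """Randomized search: works for ANY input that is >= 2/3 ones,
    with no knowledge of where the ones are concentrated.
    Fails only with probability <= (1/3)**trials."""
    n = len(bits)
    for _ in range(trials):
        i = random.randrange(n)
        if bits[i] == 1:
            return i
    return None

def find_one_informed(bits, likely_positions):
    """Deterministic search that can beat the randomized version on
    average -- but only because `likely_positions` encodes knowledge
    of the environment's distribution. That dependency is exactly the
    loss of modularity under discussion."""
    for i in likely_positions:
        if bits[i] == 1:
            return i
    return None
```

If the assumed distribution is wrong (the ones are not where `likely_positions` says), the informed version can do arbitrarily badly, while the randomized version's guarantee is unchanged.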

Whether or not Eliezer himself is basing this argument on Bayesian grounds, it certainly seems to be the case that many commenters are, e.g.:

Will_Pearson:

However if it is an important problem and you think you might be able to find some regularities, the best bet would be to do bayesian updates on the most likely positions to be ones and preferentially choose those

DonGeddis:

And yet this is no different from a deterministic algorithm. It can also query O(1) bits, and "with high probability" have a certain answer.

And some comments by Eliezer:

Eliezer_Yudkowsky:

Quantum branching is "truly random" in the sense that branching the universe both ways is an irreducible source of new indexical ignorance. But the more important point is that unless the environment really is out to get you, you might be wiser to exploit the ways that you think the environment might depart from true randomness, rather than using a noise source to throw away this knowledge.

Eliezer_Yudkowsky:

I certainly don't say "it's not hard work", and the environmental probability distribution should not look like the probability distribution you have over your random numbers - it should contain correlations and structure. But once you know what your probability distribution is, then you should do your work relative to that, rather than assuming "worst case". Optimizing for the worst case in environments that aren't actually adversarial, makes even less sense than assuming the environment is as random and unstructured as thermal noise.
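Eliezer's point that you should optimize relative to your actual distribution rather than the worst case can be illustrated with a toy calculation (my own sketch; the numbers are hypothetical). Suppose a target sits at one of four positions with known, structured probabilities. Every fixed probe order has the same worst case (four probes), so worst-case analysis cannot distinguish them, but expected cost under the known distribution clearly favors probing the likeliest positions first.

```python
def expected_probes(order, probs):
    """Expected number of probes to find the target, when the target
    is at position i with probability probs[i] and positions are
    probed in the given fixed order."""
    return sum(probs[pos] * (k + 1) for k, pos in enumerate(order))

# A structured (non-uniform) environment distribution -- hypothetical numbers.
probs = [0.5, 0.3, 0.15, 0.05]

# Exploit the structure: probe the most probable positions first.
informed = sorted(range(len(probs)), key=lambda i: -probs[i])  # [0, 1, 2, 3]
# Ignore the structure (here, the worst fixed order).
naive = list(reversed(informed))                               # [3, 2, 1, 0]

# expected_probes(informed, probs) == 1.75
# expected_probes(naive, probs)    == 3.25
# Worst case for BOTH orders is 4 probes.
```

The worst-case criterion rates both orders identically; only working relative to the distribution shows that `informed` is nearly twice as cheap in expectation.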

Comment author: Lumifer, 17 June 2014 02:51:44AM (3 points)

in principle it is always possible to improve over a randomized algorithm

in principle are the key words.

As the old saying goes, in theory there is no difference between theory and practice but in practice there is.

requires having some knowledge of the distribution over the environment, and that would break modularity.

You are using the word "modularity" in a sense that seems odd to me. From my perspective, in a software context "modularity" refers to how pieces of software interact with one another, not to their inputs or to distributions over the environment.

Comment author: jsteinhardt, 17 June 2014 06:15:22PM (2 points)

Based on the discussions with you and trist, I updated the original text of that section substantially. Let me know if it's clearer now what I mean.

Also, thanks for the feedback so far!