
wafflepudding comments on Worse Than Random - Less Wrong

Post author: Eliezer_Yudkowsky, 11 November 2008 07:01PM



Comment author: wafflepudding 31 December 2015 08:14:42AM 0 points

This post is my first experience learning about noise in algorithms, so forgive me if I seem underinformed. Two points occurred to me while reading this comment; some clarification would be great:

First, while it was intriguing to read that input just below the perceptual threshold would half the time be perceived by bumping it above the threshold, it seems to me that input just above the threshold would half the time be knocked below it. So wouldn't noise lead to no gain? Just a loss in acuity?

Second, I'm confused how input below the perceptual threshold is actually input. If a chair moves in front of a camera so slightly that the camera doesn't register a change in position, the input seems to me like zero, and noise loud enough to move zero past the perceptual threshold would not distinguish between movement and stillness, but go off half the time and half the time be silent.

If that doesn't make sense, assume that the threshold is .1 meters, and the camera doesn't notice any movement less than that. Let's say your noise is a random number between -.01 meters and .01 meters. The chair moves .09 meters, and your noise lands on .01 meters. I wouldn't think that would cross the threshold, because the camera can't actually detect that .09 meters if its threshold is .1. So, wouldn't the input just be 0 motion detected + .01 meters of noise = .01 meters of motion? Maybe I'm misunderstanding.

Comment author: gjm 31 December 2015 09:44:52AM *  2 points

Suppose you have a motion-detector that looks once per second and notices a change when the chair moves by 0.1m within a second and is completely blind to smaller changes. Then a chair moving at 0.09m/s won't trigger it at all. Now suppose you add noise of amplitude +-0.01m. Then in most seconds you still won't see anything, but sometimes (I think 1/8 of the time, if that noise is uniformly distributed) the apparent movement will be above the threshold. So now if you do some kind of aggregation of the detector output over time you'll be able to tell that the chair is moving.
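As a rough sketch (not from the original thread), this scenario can be simulated. The key modeling assumption here is that the noise is added to each position reading, so the per-second apparent change is the true motion plus the difference of two independent noise draws:

```python
import random

random.seed(0)

THRESHOLD = 0.1   # detector only registers per-second changes >= 0.1 m
SPEED = 0.09      # chair moves 0.09 m per second: always below threshold
NOISE = 0.01      # noise amplitude, uniform in +-0.01 m
SAMPLES = 100_000

def apparent_motion():
    """Per-second change seen by the detector: true motion plus the
    difference of the noise on two consecutive position readings."""
    return SPEED + random.uniform(-NOISE, NOISE) - random.uniform(-NOISE, NOISE)

# Without noise the detector never fires, since 0.09 < 0.1.
no_noise_hits = sum(SPEED >= THRESHOLD for _ in range(SAMPLES))

# With noise it fires a fraction of the time (about 1/8 under this model,
# since the difference of two uniform draws is triangularly distributed),
# so aggregating detector output over many seconds reveals the motion.
noisy_hits = sum(apparent_motion() >= THRESHOLD for _ in range(SAMPLES))

print(no_noise_hits / SAMPLES)  # 0.0
print(noisy_hits / SAMPLES)     # roughly 0.125
```

The 1/8 figure falls out of the geometry: the noise difference spans ±0.02 m with a triangular density, and the region at or above +0.01 m has probability 1/8.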

Yes, the cost of this is that above the threshold your performance is worse. You'll need to take averages or something of the kind to make up for it. (But: when a detector has a threshold, it usually doesn't give perfectly accurate measurements just above the threshold. You may find that even above the threshold you actually get more useful results in the presence of noise.)

Another example. Suppose you are trying to detect oscillating signals (musical notes, radio waves, ...) via an analogue-to-digital converter. Let's say its resolution is 1 unit. Then a signal oscillating between -0.5 and +0.5 will not show up at all: every time you sample it you'll get zero. And any small change to the signal will make exactly no difference to the output. But if you add enough noise to that signal, it becomes detectable. You'll need to average your data (or do something broadly similar); you'll have some risk of false positives; but if you have enough data you can measure the signal pretty well even though it's well below the threshold of your ADC.
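A minimal sketch of this dithering effect, simplified (my simplification, not gjm's example) to a constant sub-resolution level rather than an oscillating signal, with the ADC modeled as rounding to the nearest unit:

```python
import random

random.seed(1)

def adc(x):
    """Model of a 1-unit-resolution ADC: round to the nearest integer."""
    return round(x)

SIGNAL = 0.3   # constant level, well below the 1-unit resolution
N = 200_000

# Without dither: every sample quantizes to 0 and the signal vanishes.
plain = sum(adc(SIGNAL) for _ in range(N)) / N

# With uniform dither of +-0.5 units added before quantization, the
# average of the digitized samples converges to the true level: the
# sample reads 1 exactly as often as the dithered value crosses 0.5.
dithered = sum(adc(SIGNAL + random.uniform(-0.5, 0.5)) for _ in range(N)) / N

print(plain)     # 0.0
print(dithered)  # close to 0.3
```

The same averaging trick works for an oscillating signal; you just average over windows short compared to the oscillation (or correlate against the expected waveform) instead of over the whole record.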

[EDITED to add:] It may be worth observing that there's nothing super-special about adding random stuff for this purpose. E.g., suppose you're trying to measure some non-varying value using an analogue-to-digital converter, and the value you're trying to measure is smaller than the resolution in your ADC. You could (as discussed above) add noise and average. But if you happen to have the ability to add non-random offsets to your data before measuring, you can do that and get better results than with random offsets.
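Continuing the sketch above: with a deterministic sweep of offsets across one quantization step (a hypothetical but standard choice, sometimes called a ramp dither), a handful of measurements pins the value down exactly, where random dither would need many samples to average out its own variance:

```python
def adc(x):
    """Same 1-unit-resolution ADC model: round to the nearest integer."""
    return round(x)

SIGNAL = 0.3
N = 10  # only a handful of samples needed with a deterministic sweep

# Non-random offsets spaced evenly across one quantization step:
# -0.45, -0.35, ..., +0.45.
offsets = [-0.45 + 0.1 * k for k in range(N)]

# Exactly those offsets that push SIGNAL past 0.5 read as 1; the
# fraction that do so recovers the value with no sampling noise at all.
estimate = sum(adc(SIGNAL + off) for off in offsets) / N
print(estimate)  # exactly 0.3
```

Here three of the ten offsets (+0.25, +0.35, +0.45) push the 0.3 level past the 0.5 rounding boundary, so the estimate is 0.3 on the nose, illustrating the principle in the next paragraph.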

In other words, this is not an exception to the principle Eliezer proposes, that anything you can improve by adding randomness you can improve at least as much by adding something not-so-random instead.