This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
Wow, I'm glad this kind of analysis is showing up in mainstream publications.
Norvig is describing an important insight from information theory: the amount of information you get from learning something is equal to the log of the inverse of the probability you assigned to it (log 1/p). (This value is called the "surprisal" or "self-information".)
So always getting results you expect (i.e., ones you assigned a high p to) means you're getting little information out of your experiments, and you should instead be doing experiments whose results you expect to be less probable.
Therefore, a good experiment is one that maximizes the "expected surprisal", the sum over outcomes of p * log(1/p), which is exactly the entropy, and probably the basis for the method you mention.
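A minimal sketch of the two quantities above (function names are my own, in bits via log base 2):

```python
import math

def surprisal(p):
    """Self-information of an outcome you assigned probability p: log2(1/p)."""
    return math.log2(1 / p)

def entropy(probs):
    """Expected surprisal: sum of p * log2(1/p) over all outcomes."""
    return sum(p * surprisal(p) for p in probs if p > 0)

# An experiment whose result you already expect carries little information:
safe = entropy([0.99, 0.01])    # ~0.08 bits
# One with maximally uncertain outcomes carries the most:
risky = entropy([0.5, 0.5])     # exactly 1 bit
```

The nearly-certain experiment yields about a tenth of a bit on average, while the 50/50 one yields a full bit, which is why the entropy criterion steers you toward experiments whose outcomes you can't already predict.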
Is LW broken for everyone?
ETA: When I wrote this, the "Comments" page was one of the few I could access, hence it being posted in such a strange place.