If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Check immediately before: refresh the list-of-threads page first.)
3. Open Threads should start on Monday and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "…"
Here's an old puzzle:
Alice: How can we formalize the idea of "surprise"?
Bob: I think surprise is seeing an event of low probability.
Alice: This morning I saw a car whose license plate said 3817, and that didn't surprise me at all!
Bob: Huh.
For everyone still wondering about that, here's the correct answer! The numerical measure of surprise is information gain (Kullback-Leibler divergence) from your prior to your posterior over models after updating on the data: D_KL(posterior || prior). That gives the intuitive answer to the puzzle above, as long as none of your models assigned high probability to 3817 in advance: an unremarkable plate barely moves your distribution over models, so the measured surprise is near zero. It also handles the opposite case: if you expected an ordered string but got a random one, or one ordered in a different way, the posterior shifts substantially and the surprise is large.
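To make that concrete, here's a minimal Python sketch of the license-plate case. The model class, likelihoods, and prior weights are toy assumptions of mine, not part of the puzzle: a "random" model under which all 10,000 four-digit plates are equally likely, and a low-prior "pattern" model that only generates repeated-digit plates. Surprise is computed as the KL divergence from the prior over these two models to the posterior after seeing a plate.

```python
import math

def bayes_posterior(prior, likelihoods):
    """Bayes update: P(m | x) is proportional to P(m) * P(x | m)."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def info_gain_bits(post, prior):
    """Surprise as D_KL(posterior || prior), in bits (0 log 0 = 0)."""
    return sum(q * math.log2(q / p) for q, p in zip(post, prior) if q > 0)

# Toy model class (my assumption, purely for illustration):
#   model 0 ("random"):  every 4-digit plate has probability 1/10000
#   model 1 ("pattern"): only the 10 repeated-digit plates (0000, 1111, ...)
prior = [0.99, 0.01]

def plate_likelihoods(plate):
    repeated = len(set(plate)) == 1
    return [1 / 10_000, 0.1 if repeated else 0.0]

for plate in ["3817", "7777"]:
    post = bayes_posterior(prior, plate_likelihoods(plate))
    print(f"{plate}: surprise = {info_gain_bits(post, prior):.2f} bits")

# 3817: surprise = 0.01 bits  (posterior barely moves: no surprise)
# 7777: surprise = 5.61 bits  (mass shifts to the "pattern" model: surprised)
```

An ordinary plate like 3817 leaves the posterior essentially at the prior, so the information gain is tiny, while 7777 shifts most of the probability mass onto the low-prior pattern model, which registers as several bits of surprise.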
This is actually well known; I just wanted to put it on LW.
Wait. If you brought up surprise because you said "update your model based on how surprised you are", you can't then turn around and define surprise as "how much you should update your model". "Update your model based on how much you should update your model" isn't very helpful.