Eliezer_Yudkowsky comments on How to Measure Anything - LessWrong

Post author: lukeprog 07 August 2013 04:05AM

Comment author: Eliezer_Yudkowsky 03 August 2013 01:44:05PM 20 points

A measurement is an observation that quantitatively reduces uncertainty.

A measurement reduces expected uncertainty; some particular measurement results increase uncertainty. E.g., you start out assigning 90% probability that a binary variable landed heads, and then you see evidence with a likelihood ratio of 1:9 favoring tails, sending your posterior to 50-50, the point of maximum uncertainty. However, the expected entropy of your probability distribution after seeing the evidence, evaluated in advance of seeing it, is always lower than the entropy of your current distribution.
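Here's a minimal sketch of that example in Python (the helper function and any numbers beyond those in the comment are mine, not Eliezer's): it computes the posterior entropy on each branch and checks that the expected posterior entropy, evaluated before the observation, comes out below the prior entropy even though one branch raises it.

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a binary distribution (p, 1 - p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

p_heads = 0.9  # prior: 90% that the variable landed heads

# Evidence E with likelihood ratio 1:9 favoring tails:
# P(E | heads) = 0.1, P(E | tails) = 0.9.
p_e_heads, p_e_tails = 0.1, 0.9

p_e = p_heads * p_e_heads + (1 - p_heads) * p_e_tails  # marginal P(E) = 0.18

post_e = p_heads * p_e_heads / p_e                  # 0.5  -> entropy rises to 1 bit
post_not_e = p_heads * (1 - p_e_heads) / (1 - p_e)  # ~0.988 -> entropy falls

expected = p_e * entropy(post_e) + (1 - p_e) * entropy(post_not_e)

print(f"prior entropy:              {entropy(p_heads):.4f} bits")  # 0.4690
print(f"posterior entropy given E:  {entropy(post_e):.4f} bits")   # 1.0000 (went up)
print(f"expected posterior entropy: {expected:.4f} bits")          # 0.2579 (below prior)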

Comment author: lukeprog 03 August 2013 07:33:20PM 15 points

Just FYI, I think Hubbard knows this and wrote "A measurement is an observation that quantitatively reduces uncertainty" because he was trying to simplify and avoid clunky sentences. E.g. on p. 146 he writes:

It is even possible for an additional sample to sometimes increase the size of the [confidence] interval... before the next sample makes it narrower again. But, on average, the increasing sample size will decrease the size of the [confidence] interval.
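As a quick illustration of Hubbard's point, here's a simulation sketch (the normal population, the z-based 90% interval, and the seed are all illustrative assumptions, not from the book). The interval width 2 · 1.645 · s / √n usually shrinks as samples accumulate, but a single extreme sample can inflate the sample standard deviation s enough to widen the interval until later samples narrow it again:

```python
import math
import random
import statistics

random.seed(0)
Z90 = 1.645  # two-sided 90% z-score (normal approximation)

samples = [random.gauss(100, 15) for _ in range(2)]
prev_width = None
for _ in range(3, 31):
    samples.append(random.gauss(100, 15))
    n = len(samples)
    s = statistics.stdev(samples)       # sample standard deviation
    width = 2 * Z90 * s / math.sqrt(n)  # full width of the 90% CI for the mean
    wider = "  <- wider than the step before" if prev_width and width > prev_width else ""
    print(f"n={n:2d}  CI width = {width:6.2f}{wider}")
    prev_width = width
```

On most runs the width falls with n overall, punctuated by occasional steps where it grows.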

Comment author: lukeprog 29 September 2013 08:31:05PM 3 points

I'm reminded also of Russell's comment:

A book should have either intelligibility or correctness; to combine the two is impossible, but to lack both is to be unworthy.

Comment author: roland 30 August 2015 11:19:15AM 0 points

The technical term for this is conditional entropy.

The conditional entropy will always be lower than the prior entropy unless the evidence is independent of your hypothesis (in which case the conditional entropy equals the prior entropy).
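In symbols (the standard information-theoretic statement, added here for reference, with $X$ the hypothesis and $E$ the evidence):

$$H(X \mid E) \;=\; \sum_{e} P(E = e)\, H(X \mid E = e) \;\le\; H(X),$$

with equality if and only if $X$ and $E$ are independent.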