TheOtherDave comments on Calibrate your self-assessments - LessWrong

68 Post author: Yvain 09 October 2011 11:26PM




Comment author: TheOtherDave 10 October 2011 04:33:32AM 3 points [-]

I meant within the set of your 50 test scores, assuming they're normalized to a common range.

To pick an extreme example: if all your test scores fall between 92% and 98%, it becomes less remarkable that your estimations of your test scores all fall within 3% of your actual test scores... anyone else could do about as well, given that fact about the data set. So it seems that knowing something about the distribution is helpful in reasoning about the causes of the differences in the accuracy of your judgments.

Comment author: KPier 11 October 2011 12:56:47AM *  0 points [-]

Oh, that makes sense.

Nope, still a big difference. For example, here are my scores from the last few weeks:

Predicted/Actual: 98/100 72/72.5 94/94 85/86 82.5/87.5 90/92
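To make TheOtherDave's point concrete with these numbers, here is a quick sketch (a hypothetical illustration, not anything from the thread) comparing the error of KPier's predictions to a naive baseline that always guesses the mean of the actual scores. If the actual scores were tightly clustered, the baseline would do almost as well; a large gap suggests the predictions carry real information.

```python
# Scores from the comment above (predicted vs. actual).
predicted = [98, 72, 94, 85, 82.5, 90]
actual = [100, 72.5, 94, 86, 87.5, 92]

# Mean absolute error of the actual predictions.
mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Baseline: always predict the mean actual score, which is roughly
# the best a constant guesser exploiting only the distribution can do.
mean_actual = sum(actual) / len(actual)
baseline_mae = sum(abs(mean_actual - a) for a in actual) / len(actual)

print(f"prediction MAE: {mae:.2f}")       # 1.75
print(f"baseline MAE:   {baseline_mae:.2f}")  # 6.67
```

On this small sample the predictions (MAE 1.75) clearly beat the distribution-only baseline (about 6.67), which is consistent with KPier's "still a big difference."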

Comment author: Luke_A_Somers 11 October 2011 02:26:42PM 0 points [-]

Interesting that there were no too-high predictions.