CarlShulman comments on 2012 Survey Results - Less Wrong

80 Post author: Yvain 07 December 2012 09:04PM




Comment author: jimrandomh 29 November 2012 03:48:00PM 26 points [-]

The calibration question is an n=1 sample along one of the two axes that matter (those axes being who is answering, and what question they are answering). Give a question that's harder than it looks, and people will come out overconfident on average; give a question that's easier than it looks, and they'll come out underconfident on average. Getting rid of this effect requires a pool of questions, so that question-specific difficulty averages out.
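This point can be illustrated with a toy simulation (everything here is hypothetical and not from the survey data): suppose every respondent is perfectly calibrated at 70% confidence, but each question carries a hidden "harder than it looks" offset. A single hard question makes the whole pool look overconfident; a balanced pool of questions recovers the true calibration.

```python
import random

random.seed(0)

def simulate(offsets, n_people=10_000):
    """Each person reports 70% confidence; a question's hidden offset
    shifts their true chance of being right.  Returns (mean confidence,
    mean accuracy) over all (person, question) pairs."""
    confidence = 0.70
    correct = 0
    total = 0
    for d in offsets:
        p_correct = min(max(confidence - d, 0.0), 1.0)
        for _ in range(n_people):
            correct += random.random() < p_correct
            total += 1
    return confidence, correct / total

# One question that is harder than it looks: everyone appears overconfident.
conf, acc = simulate([0.20])
print(f"single hard question: confidence {conf:.2f}, accuracy {acc:.2f}")

# A pool of questions whose offsets average out: calibration looks fine.
conf, acc = simulate([-0.20, -0.10, 0.0, 0.10, 0.20])
print(f"balanced pool:        confidence {conf:.2f}, accuracy {acc:.2f}")
```

With one mis-judged question, the measured gap between confidence and accuracy reflects the question, not the respondents; averaging over offsets that cancel out isolates the respondents' actual calibration.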

Comment author: CarlShulman 01 December 2012 09:18:50PM *  5 points [-]

I have often pondered this problem with respect to some of the traditional heuristics and biases studies, e.g. the "above-average driver" effect. If people consult their experiences of subjective difficulty at doing a task, and then guess they are above average for the ones that feel easy, and below average for the ones that feel hard, this will to some degree track their actual particular strengths and weaknesses. Plausibly a heuristic along these lines gives overall better predictions than guessing "I am average" about everything.
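A toy simulation of the claim above (the model and numbers are my own assumptions, not from any study): let true skill relative to average be a standard normal draw, and let "feels easy" be that skill plus noise. Guessing "above average" exactly when the task feels easy beats a coin flip whenever the felt-difficulty signal carries any information.

```python
import random

random.seed(1)

def heuristic_accuracy(noise_sd, n=100_000):
    """Fraction of the time the 'feels easy => above average' heuristic
    correctly classifies a person as above or below average."""
    hits = 0
    for _ in range(n):
        s = random.gauss(0, 1)                # true skill relative to average
        felt = s + random.gauss(0, noise_sd)  # noisy subjective sense of ease
        guess_above = felt > 0
        hits += guess_above == (s > 0)
    return hits / n

print(f"noisy sense (sd=1):  {heuristic_accuracy(1.0):.2f}")    # well above 0.5
print(f"pure noise (sd=100): {heuristic_accuracy(100.0):.2f}")  # about 0.5
```

With noise equal in size to the skill spread, the heuristic is right about 75% of the time, versus 50% for guessing "I am average" about everything.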

However, if we focus on activities that happen to feel unusually easy or unusually hard in general, then we can make the heuristic look bad by showing only its failures and not its successes. The name "heuristics and biases" does reflect this notion: we have heuristics because they usually work, but they produce biases in some cases as an acceptable cost.