
cousin_it comments on Open Thread, Jul. 27 - Aug 02, 2015 - Less Wrong Discussion

5 Post author: MrMind 27 July 2015 07:16AM




Comment author: D_Malik 27 July 2015 04:43:20PM  11 points

About that survey... Suppose I ask you to guess the result of a biased coin that comes up heads 80% of the time. I ask you to guess 100 times, of which ~80 times the right answer is "heads" (these are the "easy" or "obvious" questions) and ~20 times the right answer is "tails" (these are the "hard" or "surprising" questions). Then the correct strategy, if you aren't told whether a given question is "easy" or "hard", is to guess heads with 80% confidence on every question. But then you're underconfident on the "easy" questions, because you guessed heads with 80% confidence and heads came up 100% of the time, and you're overconfident on the "hard" questions, because you guessed heads with 80% confidence and heads came up 0% of the time.

So you can get apparent under/overconfidence on easy/hard questions respectively, even if you're perfectly calibrated, if you aren't told in advance whether a question is easy or hard. Maybe the effect Yvain is describing does exist, but his post does not demonstrate it.
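The arithmetic behind this comment can be checked directly. The sketch below (my setup, not part of the original comment) grades a perfectly calibrated guesser who always says "heads" with 80% confidence against the two question types separately:

```python
# A perfectly calibrated guesser facing a coin that is heads 80% of the time.
# "Easy" questions are those whose answer is heads; "hard" ones, tails.
n_easy, n_hard = 80, 20
confidence = 0.8  # the guesser always guesses heads with 80% confidence

# Accuracy of the fixed guess "heads" on each question type:
acc_easy = 1.0  # heads guessed, heads correct  -> 100% right
acc_hard = 0.0  # heads guessed, tails correct  ->   0% right

# Overall accuracy across all 100 questions:
overall = (n_easy * acc_easy + n_hard * acc_hard) / (n_easy + n_hard)

print(f"easy: {acc_easy:.0%} right at {confidence:.0%} confidence -> looks underconfident")
print(f"hard: {acc_hard:.0%} right at {confidence:.0%} confidence -> looks overconfident")
print(f"overall: {overall:.0%} right at {confidence:.0%} confidence -> calibrated")
```

Binning by question difficulty after the fact is what manufactures the apparent miscalibration: within each bin the realized accuracy differs from the stated confidence, even though across the whole set they match exactly.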

Comment author: cousin_it 27 July 2015 06:54:34PM  4 points

Wow, that's a great point. We can't measure anyone's "true" calibration by asking them a specific set of questions, because we're not drawing questions from the same distribution as nature! That's up there with the obvious-in-retrospect observation that the measured placebo effect gets stronger or weaker depending on the size of the placebo group in the experiment. Good work :-)