Jonathan_Graehl comments on PredictionBook.com - Track your calibration - Less Wrong

Post author: Eliezer_Yudkowsky 14 October 2009 12:08AM

Comment author: Jonathan_Graehl 18 August 2011 06:00:56AM 0 points

Your point regarding the overconfidence of most domain experts is a strong one. I've updated :) This is not quite the opposite of Dunning-Kruger, in which the least competent most overestimate their percentile competence.

I was merely imagining, without evidence, that some calibration training would be general and some would be domain-specific. Certainly you'd learn to calibrate in general; you just wouldn't automatically be calibrated in all domains. Obviously, if you've been selected for your expertise in a domain (or worse: for getting credit for a single bold overconfident guess), then I don't expect you to have optimized your calibration for that domain. In fact, I have only a weak opinion about whether domain experts should be better or worse calibrated on average in their natural state. I'm guessing they overly signal confidence (to their professional and status benefit) more so than that they're really overconfident (when it comes to betting their own money).

Comment author: gwern 18 August 2011 04:31:01PM 0 points

Fortunately, Dunning-Kruger does not seem to be universal (not that anyone who would understand or care about calibration would also be in the stupid-enough quartiles in the first place).

Certainly you'd learn to calibrate, in general. You just wouldn't automatically be calibrated in all domains.

Again, I don't see why I couldn't. All I need is a good understanding of what I know, and then anytime I run into predictions on things I don't know about, I should be able to estimate my ignorance and adjust my predictions closer to 50% as appropriate. If I am mistaken, well, in some areas I will be underconfident and in some overconfident, and they balance out.
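The adjustment described here, pulling a prediction toward 50% in proportion to one's estimated ignorance of the domain, could be sketched like this (the function name and the 0-to-1 ignorance scale are my own illustrative assumptions, not anything from the thread):

```python
def shrink_toward_half(p: float, ignorance: float) -> float:
    """Pull probability p toward 0.5 in proportion to `ignorance`.

    ignorance = 0.0: full domain knowledge, keep p unchanged;
    ignorance = 1.0: total ignorance, report the maximum-entropy 0.5.
    """
    return 0.5 + (p - 0.5) * (1.0 - ignorance)

# A 90% prediction in a domain I judge myself half-ignorant of
# shrinks to 70%; a prediction in a fully familiar domain is untouched.
adjusted = shrink_toward_half(0.9, 0.5)
unchanged = shrink_toward_half(0.8, 0.0)
```

This is only a linear shrinkage rule; it captures the "adjust closer to 50% as appropriate" intuition, not any particular calibration method.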

Comment author: Jonathan_Graehl 18 August 2011 09:23:46PM 1 point

If there's a single thing mainly responsible for making people poor estimators of their numerical certainty (judged against reality), then you're probably right. For example, it makes sense for me to be overconfident in my pronouncements if I want people to listen to me and there's little chance of my being caught in my overconfidence. This motivation is strong and universal. But I can learn to recognize that I'm effectively lying (everyone does it, so maybe I should persist in most arenas) and to report more honestly and accurately, if only to myself, after just a little practice in the skill of eliciting the right numbers for my level of information about the proposition I'm judging.

I have no data, so I'll disengage until I have some.