One issue I've noticed in discussions on Less Wrong is that I'm much less certain about the likely answers to specific questions than some other people here are. But the questions where this seems most pronounced are mathematical questions close to my area of expertise (such as whether P = NP). In areas outside my expertise, my confidence is apparently often higher. For example, at a recent LW meet-up I expressed a much lower probability estimate that cold fusion is real than others in the conversation did. This suggests that I may be systematically overestimating my confidence in areas that I don't study as much, essentially a variant of the Dunning-Kruger effect. Have other people here experienced the same pattern with their own confidence estimates?


the questions where this seems to be most pronounced are mathematical questions that are close to my area of expertise (such as whether P = NP)

On a tangential note, exactly how close is this to your area of expertise? In my experience it tends to be mathematicians in related areas who don't actually work on complexity theory directly who insist on being agnostic about P vs. NP; almost all complexity theorists are pretty much completely convinced of even stronger complexity-theoretic assumptions (e.g., I bet Scott Aaronson would give pretty good odds on BQP != NP).

I'm not entirely sure how tangential this is, as it seems to suggest that there may be some sort of sweet spot of expertise (at least on this question): any layman would take my word for it that P != NP, most non-CS mathematicians would refuse to have an opinion, and most complexity theorists are convinced of its truth for their own reasons. I guess this might be something unique to mathematics, with its insistence on formal proof as a standard of truth. Can anyone think of anything similar in other fields?

That's a valid point. My own area is algebraic number theory, but I have some interest in complexity theory, probably more than most number theorists, and some of my work has certainly touched on complexity issues. I'm not at all agnostic about P vs. NP: I assign about a 95% chance that P != NP, which I think is outside the "agnostic" zone by most standards.

  1. Not "almost all are completely convinced"; according to this poll, 61 supposed experts "thought P != NP" (which does not imply that they would bet their house on it), 9 thought the opposite and 22 offered no opinion (the author writes that he asked "theorists", partly people he knew, but also partly by posting to mailing lists - I'm pretty sure he filtered out the crackpots and that enough of the rest are really people working in the area)

  2. Even that case wouldn't raise the likelihood of P != NP to 1 - epsilon, as experts have been wrong in the past, and their greater confidence could stem from reinforcement through groupthink, or from greater exposure to things they simply misunderstand, rather than from a better overview. Somewhere in Eliezer's posts, a study is referenced in which events an expert calls 99% certain happen only 70% of the time; in another referenced study, greater exposure to an issue raised people's subjective confidence vastly more than it actually changed their minds. This means that an expert's confidence doesn't prove much more than the confidence of a non-expert who has had only light exposure to the issue; see the rough sketch after this list.
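Here is a back-of-the-envelope version of both numbers in Python, treating the poll counts and the 99%/70% calibration figures above as given; the discounting rule at the end is purely an illustrative assumption, not anything from the referenced study:

    # Poll counts quoted above: is 61 out of 92 really "almost all"?
    thought_p_ne_np = 61
    thought_p_eq_np = 9
    no_opinion = 22

    total = thought_p_ne_np + thought_p_eq_np + no_opinion
    print(f"Answered P != NP: {thought_p_ne_np / total:.0%}")  # ~66% of 92

    # Calibration figures quoted above: claims stated with 99% confidence
    # held up only 70% of the time in the referenced study.
    stated_confidence = 0.99
    observed_hit_rate = 0.70

    # Illustrative assumption: treat the observed hit rate as a ceiling on
    # how much weight a confidence stated at that level should carry.
    calibrated = min(stated_confidence, observed_hit_rate)
    print(f"Discounted confidence: {calibrated:.0%}")

So about two thirds of respondents answered P != NP - a clear majority, but well short of "almost all" - and under this crude discount a stated 99% is worth no more than 70%.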

Perhaps a person tends to make more assumptions in areas that aren't thoroughly familiar: short-cutting comprehension to save time or effort where the cost of further study outweighs the benefits, especially in the short term. Not so much a cognitive effect as a necessary memory expedient.

Wouldn't that be a perfect example of a cognitive effect?

It seems likely. If you know a field well, you know all the wrinkles and the skeletons hidden in the closet; you're much less likely to believe people's strident opinions on what is and isn't possible. You've also got much more experience being wrong in that field: you had to mistake your way up to being experienced. When evaluating something distant, you don't have that history to rely on.

For the cold fusion example specifically, framing might have a lot to do with it. "Were F&P frauds?" is a very different question from "can fusion happen at low temperatures?", so you can confidently answer "yes!" to the first without having any idea about the second. If you know the field better, you've heard of things like polywells, which are sort of low-temperature fusion (but entirely different from what the cold fusion folks think works).

I suppose I should have been clearer that by "cold fusion" in this context we explicitly meant the Pons-Fleischmann sort of set-up. I don't actually think that Pons and Fleischmann were frauds; I'm more inclined to believe it was a case of poor controls and wishful thinking. I'm aware of other types of low-energy fusion, such as fusors and polywells, but know that I don't have anywhere near the expertise to evaluate them.

In any event, your first paragraph argues strongly that I should discount my confidence estimates for other fields much more than I do.

a much lower probability estimate that cold fusion is real

Is "is cold fusion real?" the right sort of question? It sounds kind of like we are considering some kind of magic. Perhaps "is cold fusion possible?"