Nick_Tarleton comments on Dissenting Views - Less Wrong

19 Post author: byrnema 26 May 2009 06:55PM


Comment author: JGWeissman 27 May 2009 01:50:07AM 0 points

Perhaps there are intuitive notions of "less wrong" that are different from "more right", but in a technical sense, they seem to be the same:

At this point it may occur to some readers that there's an obvious way to achieve perfect calibration - just flip a coin for every yes-or-no question, and assign your answer a confidence of 50%. You say 50% and you're right half the time. Isn't that perfect calibration? Yes. But calibration is only one component of our Bayesian score; the other component is discrimination.

Suppose I ask you ten yes-or-no questions. You know absolutely nothing about the subject, so on each question you divide your probability mass fifty-fifty between "Yes" and "No". Congratulations, you're perfectly calibrated - answers for which you said "50% probability" were true exactly half the time. This is true regardless of the sequence of correct answers or how many answers were Yes. Over the ten questions you said "50%" on twenty occasions - you said "50%" to Yes-1, No-1; Yes-2, No-2; .... On ten of those occasions the answer was correct, the occasions: Yes-1; No-2; No-3; .... And on ten of those occasions the answer was incorrect: No-1; Yes-2; Yes-3; ...

Now I give my own answers, putting more effort into it, trying to discriminate whether Yes or No is the correct answer. I assign 90% confidence to each of my favored answers, and my favored answer is wrong twice. I'm more poorly calibrated than you. I said "90%" on ten occasions and I was wrong two times. The next time someone listens to me, they may mentally translate "90%" into 80%, knowing that when I'm 90% sure I'm right about 80% of the time. But the probability you assigned to the final outcome is 1/2 to the tenth power, 0.001 or 1/1024. The probability I assigned to the final outcome is 90% to the eighth power times 10% to the second power, (0.9^8)*(0.1^2), which works out to 0.004 or 0.4%. Your calibration is perfect and mine isn't, but my better discrimination between right and wrong answers more than makes up for it. My final score is higher - I assigned a greater joint probability to the final outcome of the entire experiment. If I'd been less overconfident and better calibrated, the probability I assigned to the final outcome would have been 0.8^8 * 0.2^2, 0.006.

Accounting for the uncertainty in your own mind only gets you so far, to a certain minimum of wrongness. To do better, to be less wrong, you have to actually be right about the rest of the universe outside your mind.
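The arithmetic in the quoted example is easy to check numerically. Here is a minimal sketch (the function name and the particular sequences of right/wrong answers are illustrative, not from the thread): the joint probability a forecaster assigns to an observed sequence is the product of p on the questions they got right and (1 - p) on the questions they got wrong.

```python
def joint_probability(confidences, correct):
    """Joint probability assigned to the observed outcome sequence:
    multiply p when the favored answer was right, (1 - p) when wrong."""
    prob = 1.0
    for p, hit in zip(confidences, correct):
        prob *= p if hit else (1 - p)
    return prob

# The perfectly calibrated guesser: 50% on all ten questions.
# (Which answers were "right" doesn't matter; every path has the same probability.)
guesser = joint_probability([0.5] * 10, [True] * 5 + [False] * 5)

# The overconfident forecaster: 90% on all ten, wrong twice.
overconfident = joint_probability([0.9] * 10, [True] * 8 + [False] * 2)

# The same discrimination with better calibration: 80%, wrong twice.
calibrated = joint_probability([0.8] * 10, [True] * 8 + [False] * 2)

print(round(guesser, 6))        # 0.000977, i.e. 1/1024
print(round(overconfident, 6))  # 0.004305, i.e. 0.9^8 * 0.1^2
print(round(calibrated, 6))     # 0.006711, i.e. 0.8^8 * 0.2^2
```

The ordering matches the quoted text: the overconfident discriminator beats the perfectly calibrated guesser by a factor of about four, and fixing the calibration (90% → 80%) improves the score further still.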

Comment author: Nick_Tarleton 27 May 2009 07:52:31AM 2 points

Perhaps there are intuitive notions of "less wrong" that are different from "more right", but in a technical sense, they seem to be the same:

True but irrelevant; this is psychology, not probability theory. Intuitively, to a first approximation, beliefs are either affirmed or not, and there's a difference between affirming fewer false beliefs and more true ones.

Comment author: JGWeissman 27 May 2009 10:04:58PM 1 point

The fact that psychology can explain how the phrase "less wrong" can be misunderstood does not mean that the misunderstanding is the correct way to interpret that phrase when used by an online community that uses psychology, as well as probability theory, to inform the development of rationality. It does not make sense to interpret the title of our site with the very naivety that we seek to overcome.

Comment author: pjeby 28 May 2009 01:09:38AM 2 points

It does not make sense to interpret the title of our site with the very naivety that we seek to overcome.

That's what I've been saying, actually. Except that the naivety in question is the belief that brains do probability or utility, when it's well established that humans can have both utility and disutility, that they're not the same thing, and that human behavior about them is different. You know, all that loss/win framing stuff?

It's not rational to expect human beings to treat "less wrong" as meaning the same thing (in behavioral terms) as "more right". Avoiding wrongness has different emotional affect and different prioritization of behavior and thought than approaching rightness. Think "avoiding a predator" versus "hunting for food".

The idea that we can simultaneously have approach and avoidance behaviors, and that they motivate us differently, is backed by a (yes, peer-reviewed) concept called affective asynchrony. Strong negative or strong positive emotions can switch off the other system, but for the most part, they operate independently. And mistake-avoidance motivation reduces creativity, independence, risk-taking, etc.

Heck, I'd be willing to bet some actual cash money that a controlled experiment would show significant behavioral differences between people primed with the terms "less wrong" and "more right", no matter how "rational" they rate themselves to be.