gwern comments on [Link] FreakoStats and CEV - Less Wrong

1 Post author: Filipe 06 June 2012 03:21PM

Comment author: Viliam_Bur 06 June 2012 04:40:03PM *  3 points [-]

1) More generally, what if more intelligent people are more resistant to some biases, but equally prone to other biases? Then in the opinions of more intelligent people we would see less of the former biases, but perhaps more of the latter biases; and also more of the correct answers. The exact values would depend on the exact numbers in the model.

Example model: Imagine that a person must first avoid an error A, then an error B, to reach the correct conclusion C. The chance of making the error A is 70% for an average person, 50% for an intelligent person; the chance of making the error B is 90% for an average person, 80% for an intelligent person.

Results for average people: 70% A, 27% B, 3% C. Results for intelligent people: 50% A, 40% B, 10% C. Possible interpretation: B is the correct answer, because here the difference is largest: 13%. (C is obviously a small minority even among intelligent people, so we can explain it away e.g. by signalling.)
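The arithmetic behind those percentages can be checked directly; the 70/50 and 90/80 figures are the toy numbers from the example above, not data:

```python
def outcome_rates(p_error_a, p_error_b):
    """Chance of stopping at error A, stopping at error B, or reaching
    conclusion C, when B is only encountered after avoiding A."""
    a = p_error_a
    b = (1 - p_error_a) * p_error_b
    c = (1 - p_error_a) * (1 - p_error_b)
    return a, b, c

# Average person: 70% chance of error A, then 90% chance of error B.
print([round(x, 2) for x in outcome_rates(0.70, 0.90)])  # [0.7, 0.27, 0.03]
# Intelligent person: 50% chance of error A, then 80% chance of error B.
print([round(x, 2) for x in outcome_rates(0.50, 0.80)])  # [0.5, 0.4, 0.1]
```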

2) Intelligence can correlate with something, e.g. education, which may be a source of new errors. Not necessarily new kinds of biases, just new ways to apply the same old biases. For example, "quantum mysterious consciousness" explanations will be more popular among more educated people; the less educated will instead use the words "spirits" and "magic" to explain the same concept.

3) An intelligent person can easily confuse "opinions of me and my friends" with "opinions of intelligent people". Because how do most intelligent-and-proud-of-it people judge the intelligence of others? In my experience, usually by similarity of opinions.

EDIT: Does the author really give questionnaires and IQ tests to large enough samples of randomly selected people? In other words, even if we trust the author's premises, should we trust his specific results too?

Comment author: gwern 06 June 2012 05:38:42PM 2 points [-]

1) More generally, what if more intelligent people are more resistant to some biases, but equally prone to other biases? Then in the opinions of more intelligent people we would see less of the former biases, but perhaps more of the latter biases; and also more of the correct answers. The exact values would depend on the exact numbers in the model.

For what it's worth (and as I've commented previously on that blog), in reading on heuristics & biases, I've encountered biases whose inverse correlation with intelligence is minimal, like sunk cost, but I don't believe I have seen any biases which correlated with increasing intelligence.

EDIT: Does the author really give questionnaires and IQ tests to large enough samples of randomly selected people?

How large is 'large enough'? Think of political polling - how many samples do they need to extrapolate to the general population?

Comment author: Viliam_Bur 06 June 2012 07:55:38PM 0 points [-]

I don't believe I have seen any biases which correlated with increasing intelligence.

My guess would be reversing stupidity, and searching for a difficult solution when a simple one exists. Both are related to signalling intelligence. On the other hand, I guess many intelligent people don't self-diagnose as intelligent, so perhaps those biases would only be strong in Mensa and similar places.

But I was thinking more about one bias appearing stronger when a bias in another direction is eliminated. For example, bias X makes people think A and bias Y makes people think B; if a person is under the influence of both biases, the answer is randomly A or B. In such a case, eliminating bias X leads to an increase of answer B.
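A toy simulation makes the mechanism concrete; the coin-flip tie-breaking rule when both biases are active is my assumption, not anything stated in the comment:

```python
import random

def answer(has_bias_x, has_bias_y, rng):
    """Answer given by a person subject to opposed biases X (toward A)
    and Y (toward B); both at once resolve to a coin flip."""
    if has_bias_x and has_bias_y:
        return rng.choice(["A", "B"])
    if has_bias_x:
        return "A"
    if has_bias_y:
        return "B"
    return "C"  # unbiased: correct answer

rng = random.Random(0)
n = 10_000
# Fraction answering B while under both biases: about one half.
both = sum(answer(True, True, rng) == "B" for _ in range(n)) / n
# Fraction answering B after bias X is eliminated: all of them.
only_y = sum(answer(False, True, rng) == "B" for _ in range(n)) / n
print(both, only_y)
```

Removing X leaves Y unchanged, yet the measured rate of answer B roughly doubles, which is exactly the confound described above.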

How large is 'large enough'?

That depends on what certainty of answer is required. Before convincing people "you should believe X, because this is what smart people believe", I would like to be at least 95% certain, because this kind of argument is rather offensive towards opponents.

Comment author: gwern 06 June 2012 09:34:54PM *  2 points [-]

But I was thinking more about one bias appearing stronger when a bias in another direction is eliminated. For example, bias X makes people think A and bias Y makes people think B; if a person is under the influence of both biases, the answer is randomly A or B. In such a case, eliminating bias X leads to an increase of answer B.

Biases often don't have clear 'directions'. If you are overconfident on a claim P, that's just as accurate as saying you were underconfident on the claim ~P. Similarly for anchoring or priming: if you anchor on a random number generator while estimating the number of African nations, whether you land "over" or "under" is going to depend on whether the RNG was spitting out 1-50 or 100-200, perhaps.

I would like to be at least 95% certain

And what does that mean? If you just want to know 'what do smart people in general believe versus normal people', you don't need large samples if you can get a random selection and your questions are each independent. For example, in my recent Wikipedia experiment I removed only 100 links and 3 were reverted; when I put that into a calculator for a Bernoulli distribution, I get 99% certainty that the true reversion rate is 0-7%. So to simplify considerably, if you sampled 100 smart people and 100 dumb people and they differ by 14%, is that enough certainty for you?
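An interval like that can be reproduced with a standard binomial confidence interval. The sketch below uses the Wilson score interval, one common choice; the comment doesn't say which calculator was used, so the exact bounds here may differ somewhat from the quoted 0-7%:

```python
import math

def wilson_interval(successes, n, z):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 3 reverted links out of 100 removals; z ≈ 2.576 for 99% confidence.
lo, hi = wilson_interval(3, 100, 2.576)
print(f"99% interval for the true reversion rate: {lo:.1%} to {hi:.1%}")
```

The interval is narrow despite only 100 trials, which is the point being made: modest random samples already pin down a proportion reasonably well.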

Comment author: Viliam_Bur 07 June 2012 08:01:52AM *  0 points [-]

So to simplify considerably, if you sampled 100 smart people and 100 dumb people and they differ by 14%, is that enough certainty for you?

I am not good at statistics, but I guess yes. Especially if those 100 people are really randomly selected, which in the given situation they were.