VincentYu comments on 2013 Survey Results - Less Wrong

74 Post author: Yvain 19 January 2014 02:51AM


Comments (558)


Comment author: jkaufman 19 January 2014 04:12:22PM 23 points [-]

The IQ numbers have time and time again answered every challenge raised against them and should be presumed accurate.

What if the people who have taken IQ tests are on average smarter than the people who haven't? My impression is that people mostly take IQ tests when they're somewhat extreme: either low and trying to qualify for assistive services, or high and trying to get "gifted" treatment. If we figure LessWrong draws mostly from the high end, then we should expect the IQ among test-takers to be higher than what we would get if we tested random people who had not previously been tested.

The IQ Question read: "Please give the score you got on your most recent PROFESSIONAL, SCIENTIFIC IQ test - no Internet tests, please! All tests should have the standard average of 100 and stdev of 15."

Among the subset of people making their data public (n=1480), 32% (472) put an answer here. Those 472 reports average 138, in line with past numbers. But 32% is low enough that we're pretty vulnerable to selection bias.
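The summary arithmetic above is easy to reproduce. A toy sketch (the survey column itself isn't reproduced here, so the hypothetical `iq` array stands in for it, with blank answers recorded as NaN):

```python
import numpy as np

# Hypothetical stand-in for the survey's IQ column; NaN = question left blank.
iq = np.array([138.0, 142.0, np.nan, 135.0, np.nan, 141.0])

answered = iq[~np.isnan(iq)]
response_rate = answered.size / iq.size   # fraction who answered at all
mean_iq = answered.mean()                  # mean among respondents only
print(f"{response_rate:.0%} answered; mean reported IQ {mean_iq:.0f}")
```

The key point is that the mean is computed only over respondents, which is exactly why a low response rate leaves room for selection bias.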

(I've never taken an IQ test, and left this question blank.)

Comment author: VincentYu 20 January 2014 03:01:31PM *  28 points [-]

What if the people who have taken IQ tests are on average smarter than the people who haven't? My impression is that people mostly take IQ tests when they're somewhat extreme: either low and trying to qualify for assistive services, or high and trying to get "gifted" treatment. If we figure LessWrong draws mostly from the high end, then we should expect the IQ among test-takers to be higher than what we would get if we tested random people who had not previously been tested.

This sounds plausible, but from looking at the data, I don't think this is happening in our sample. In particular, if this were the case, then we would expect the SAT scores of those who did not submit IQ data to be different from those who did submit IQ data. I ran an Anderson–Darling test on each of the following pairs of distributions:

  • SAT out of 2400 for those who submitted IQ data (n = 89) vs SAT out of 2400 for those who did not submit IQ data (n = 230)
  • SAT out of 1600 for those who submitted IQ data (n = 155) vs SAT out of 1600 for those who did not submit IQ data (n = 217)

The p-values came out as 0.477 and 0.436, respectively, so the Anderson–Darling test finds no evidence that the distributions in either pair differ at any conventional significance level.
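For readers who want to run the same kind of check, here is a sketch using SciPy's k-sample Anderson–Darling implementation on synthetic data (the survey columns aren't reproduced here, so both samples are drawn from one distribution, and the test should accordingly find nothing):

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the two SAT-out-of-1600 groups, with the same
# sample sizes as above; both come from one distribution by construction.
rng = np.random.default_rng(0)
sat_with_iq = rng.normal(1450, 110, size=155)
sat_without_iq = rng.normal(1450, 110, size=217)

res = stats.anderson_ksamp([sat_with_iq, sat_without_iq])
print(res.statistic, res.significance_level)
```

One caveat: SciPy's default interpolated p-value is floored at 0.001 and capped at 0.25 (with a warning), so p-values like 0.477 must come from a permutation-based p-value (available via the `method` argument in recent SciPy versions) or from another implementation.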

As I did for my last plot, I've once again computed for each distribution a kernel density estimate with bootstrapped confidence bands from 999 resamples. From visual inspection, I tend to agree that there is no clear difference between the distributions. The plots should be self-explanatory:

(More details about these plots are available in my previous comment.)

Edit: Updated plots. The kernel density estimates are now fixed-bandwidth using the Sheather–Jones method for bandwidth selection. The density near the right edge is bias-corrected using an ad hoc fix described by whuber on stats.SE.
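The plotting procedure described above can be sketched roughly as follows. This is not the author's actual code: SciPy has no Sheather–Jones bandwidth selector, so Scott's rule (the `gaussian_kde` default) stands in, the data is synthetic, and the bias correction at the right edge is omitted.

```python
import numpy as np
from scipy import stats

# Rough sketch: Gaussian KDE with pointwise bootstrap confidence bands.
rng = np.random.default_rng(0)
scores = rng.normal(1450, 110, size=372)   # stand-in for an SAT column
grid = np.linspace(1000, 1800, 200)

density = stats.gaussian_kde(scores)(grid)

n_boot = 999                               # 999 resamples, as above
boot = np.empty((n_boot, grid.size))
for i in range(n_boot):
    resample = rng.choice(scores, size=scores.size, replace=True)
    boot[i] = stats.gaussian_kde(resample)(grid)

# 95% pointwise confidence band from the bootstrap percentiles.
band_lo, band_hi = np.percentile(boot, [2.5, 97.5], axis=0)
```

The bands are pointwise, not simultaneous: at each grid point separately, 95% of bootstrap densities fall inside the band.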

Comment author: jkaufman 20 January 2014 10:53:41PM 4 points [-]

Thanks for digging into this! Looks like the selection bias isn't significant.