2012 Survey Results
Thank you to everyone who took the 2012 Less Wrong Survey (the survey is now closed. Do not try to take it.) Below the cut, this post contains the basic survey results, a few more complicated analyses, and the data available for download so you can explore it further on your own. You may want to compare these to the results of the 2011 Less Wrong Survey.
Part 1: Population
How many of us are there?
The short answer is that I don't know.
The 2011 survey ran 33 days and collected 1090 responses. This year's survey ran 23 days and collected 1195 responses. The average number of new responses during the last week was about five per day, so even if I had kept this survey open as long as the last one I probably wouldn't have gotten more than about 1250 responses. That means at most a 15% year-on-year growth rate, which is pretty abysmal compared to the 650% growth over two years that we saw last time.
About half of these responses were from lurkers; over half of the non-lurker remainder had commented but never posted to Main or Discussion. That means there were only about 600 non-lurkers.
But I am skeptical of these numbers. I hang out with some people who are very closely associated with the greater Less Wrong community, and a lot of them didn't know about the survey until I mentioned it to them in person. I know some people who could plausibly be described as focusing their lives around the community who just never took the survey for one reason or another. One lesson of this survey may be that the community is no longer limited to people who check Less Wrong very often, if at all. One friend didn't see the survey because she hangs out on the #lesswrong channel more than the main site. Another mostly just goes to meetups. So I think this represents only a small sample of people who could justly be considered Less Wrongers.
The question of "how quickly is LW growing" is also complicated by the high turnover. Over half the people who took this survey said they hadn't participated in the survey last year. I tried to break this down by combining a few sources of information, and I think our 1200 respondents include 500 people who took last year's survey, 400 people who were around last year but didn't take the survey for some reason, and 300 new people.
As expected, there's lower turnover among regulars than among lurkers. Of people who have posted in Main, about 75% took the survey last year; of people who only lurked, about 75% hadn't.
This view of a very high-turnover community and lots of people not taking the survey is consistent with Vladimir Nesov's data (http://lesswrong.com/lw/e4j/number_of_members_on_lesswrong/77xz) showing 1390 people who have written at least ten comments. But the survey includes only about 600 people who have at least commented; 800-ish of Vladimir's accounts are either gone or didn't take the census.
Part 2: Categorical Data
SEX:
Man: 1057, 89.2%
Woman: 120, 10.1%
Other: 2, 0.2%
No answer: 6, 0.5%
GENDER:
M (cis): 1021, 86.2%
F (cis): 105, 8.9%
M (trans f->m): 3, 0.3%
F (trans m->f): 16, 1.3%
Other: 29, 2.4%
No answer: 11, 0.9%
ORIENTATION:
Heterosexual: 964, 80.7%
Bisexual: 135, 11.4%
Homosexual: 28, 2.4%
Asexual: 24, 2%
Other: 28, 2.4%
No answer: 14, 1.2%
RELATIONSHIP STYLE:
Prefer monogamous: 639, 53.9%
Prefer polyamorous: 155, 13.1%
Uncertain/no preference: 358, 30.2%
Other: 21, 1.8%
No answer: 12, 1%
NUMBER OF CURRENT PARTNERS:
0: 591, 49.8%
1: 519, 43.8%
2: 34, 2.9%
3: 12, 1%
4: 5, 0.4%
6: 1, 0.1%
7: 1, 0.1% (and this person added "really, not trolling")
Confusing or no answer: 20, 1.8%
RELATIONSHIP STATUS:
Single: 628, 53%
Relationship: 323, 27.3%
Married: 220, 18.6%
No answer: 14, 1.2%
RELATIONSHIP GOALS:
Not looking for more partners: 707, 59.7%
Looking for more partners: 458, 38.6%
No answer: 20, 1.7%
COUNTRY:
USA: 651, 54.9%
UK: 103, 8.7%
Canada: 74, 6.2%
Australia: 59, 5%
Germany: 54, 4.6%
Israel: 15, 1.3%
Finland: 15, 1.3%
Russia: 13, 1.1%
Poland: 12, 1%
These are all the countries with greater than 1% of Less Wrongers, but other, more exotic locales included Kenya, Pakistan, and Iceland, with one user each. You can see the full table here.
This data also allows us to calculate Less Wrongers per capita:
Finland: 1/366,666
Australia: 1/389,830
Canada: 1/472,972
USA: 1/483,870
Israel: 1/533,333
UK: 1/603,883
Germany: 1/1,518,518
Poland: 1/3,166,666
Russia: 1/11,538,462
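The per-capita figures above are just each country's population divided by its respondent count. A minimal sketch of the calculation, using rough 2012 population estimates (the population numbers are my own assumptions, not from the survey):

```python
# Per-capita calculation: country population divided by the number of
# survey respondents. Populations are rough 2012 estimates.
populations = {
    "Finland": 5_500_000,
    "Israel": 8_000_000,
    "USA": 315_000_000,
}
respondents = {"Finland": 15, "Israel": 15, "USA": 651}

for country, pop in populations.items():
    ratio = pop // respondents[country]
    print(f"{country}: 1/{ratio:,}")
```

With these population figures the output matches the table above (e.g. Finland 1/366,666); small differences for other countries come down to which population estimate you plug in.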
RACE:
White, non-Hispanic 1003, 84.6%
East Asian: 50, 4.2%
Hispanic 47, 4.0%
Indian Subcontinental 28, 2.4%
Black 8, 0.7%
Middle Eastern 4, 0.3%
Other: 33, 2.8%
No answer: 12, 1%
WORK STATUS:
Student: 476, 40.7%
For-profit work: 364, 30.7%
Self-employed: 95, 8%
Unemployed: 81, 6.8%
Academics (teaching): 54, 4.6%
Government: 46, 3.9%
Non-profit: 44, 3.7%
Independently wealthy: 12, 1%
No answer: 13, 1.1%
PROFESSION:
Computers (practical): 344, 29%
Math: 109, 9.2%
Engineering: 98, 8.3%
Computers (academic): 72, 6.1%
Physics: 66, 5.6%
Finance/Econ: 65, 5.5%
Computers (AI): 39, 3.3%
Philosophy: 36, 3%
Psychology: 25, 2.1%
Business: 23, 1.9%
Art: 22, 1.9%
Law: 21, 1.8%
Neuroscience: 19, 1.6%
Medicine: 15, 1.3%
Other social science: 24, 2%
Other hard science: 20, 1.7%
Other: 123, 10.4%
No answer: 27, 2.3%
DEGREE:
Bachelor's: 438, 37%
High school: 333, 28.1%
Master's: 192, 16.2%
Ph.D: 71, 6%
2-year: 43, 3.6%
MD/JD/professional: 24, 2%
None: 55, 4.6%
Other: 15, 1.3%
No answer: 14, 1.2%
POLITICS:
Liberal: 427, 36%
Libertarian: 359, 30.3%
Socialist: 326, 27.5%
Conservative: 35, 3%
Communist: 8, 0.7%
No answer: 30, 2.5%
You can see the exact definitions given for each of these terms on the survey.
RELIGIOUS VIEWS:
Atheist, not spiritual: 880, 74.3%
Atheist, spiritual: 107, 9.0%
Agnostic: 94, 7.9%
Committed theist: 37, 3.1%
Lukewarm theist: 27, 2.3%
Deist/Pantheist/etc: 23, 1.9%
No answer: 17, 1.4%
FAMILY RELIGIOUS VIEWS:
Lukewarm theist: 392, 33.1%
Committed theist: 307, 25.9%
Atheist, not spiritual: 161, 13.6%
Agnostic: 149, 12.6%
Atheist, spiritual: 46, 3.9%
Deist/Pantheist/Etc: 32, 2.7%
Other: 84, 7.1%
RELIGIOUS BACKGROUND:
Other Christian: 517, 43.6%
Catholic: 295, 24.9%
Jewish: 100, 8.4%
Hindu: 21, 1.8%
Traditional Chinese: 17, 1.4%
Mormon: 15, 1.3%
Muslim: 12, 1%
Raw data is available here.
MORAL VIEWS:
Consequentialism: 735, 62%
Virtue Ethics: 166, 14%
Deontology: 50, 4.2%
Other: 214, 18.1%
No answer: 20, 1.7%
NUMBER OF CHILDREN
0: 1044, 88.1%
1: 51, 4.3%
2: 48, 4.1%
3: 19, 1.6%
4: 3, 0.3%
5: 2, 0.2%
6: 1, 0.1%
No answer: 17, 1.4%
WANT MORE CHILDREN?
No: 438, 37%
Maybe: 363, 30.7%
Yes: 366, 30.9%
No answer: 16, 1.4%
LESS WRONG USE:
Lurkers (no account): 407, 34.4%
Lurkers (with account): 138, 11.7%
Posters (comments only): 356, 30.1%
Posters (comments + Discussion only): 164, 13.9%
Posters (including Main): 102, 8.6%
SEQUENCES:
Never knew they existed until this moment: 99, 8.4%
Knew they existed; never looked at them: 23, 1.9%
Read < 25%: 227, 19.2%
Read ~ 25%: 145, 12.3%
Read ~ 50%: 164, 13.9%
Read ~ 75%: 203, 17.2%
Read ~ all: 306, 24.9%
No answer: 16, 1.4%
Dear 8.4% of people: there is this collection of old blog posts called the Sequences. It is by Eliezer, the same guy who wrote Harry Potter and the Methods of Rationality. It is really good! If you read it, you will understand what we're talking about much better!
REFERRALS:
Been here since Overcoming Bias: 265, 22.4%
Referred by a link on another blog: 23.5%
Referred by a friend: 147, 12.4%
Referred by HPMOR: 262, 22.1%
No answer: 35, 3%
BLOG REFERRALS:
Common Sense Atheism: 20 people
Hacker News: 20 people
Reddit: 15 people
Unequally Yoked: 7 people
TV Tropes: 7 people
Marginal Revolution: 6 people
gwern.net: 5 people
RationalWiki: 4 people
Shtetl-Optimized: 4 people
XKCD fora: 3 people
Accelerating Future: 3 people
These are all the sites that referred at least three people in a way that was obvious to disentangle from the raw data. You can see a more complete list, including the long tail, here.
MEETUPS:
Never been to one: 834, 70.5%
Have been to one: 320, 27%
No answer: 29, 2.5%
CATASTROPHE:
Pandemic (bioengineered): 272, 23%
Environmental collapse: 171, 14.5%
Unfriendly AI: 160, 13.5%
Nuclear war: 155, 13.1%
Economic/Political collapse: 137, 11.6%
Pandemic (natural): 99, 8.4%
Nanotech: 49, 4.1%
Asteroid: 43, 3.6%
The wording of this question was "which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?"
CRYONICS STATUS:
No, don't want to: 275, 23.2%
No, still thinking: 472, 39.9%
No, procrastinating: 178, 15%
No, unavailable: 120, 10.1%
Yes, signed up: 44, 3.7%
Never thought about it: 46, 3.9%
No answer: 48, 4.1%
VEGETARIAN:
No: 906, 76.6%
Yes: 147, 12.4%
No answer: 130, 11%
For comparison, 3.2% of US adults are vegetarian.
SPACED REPETITION SYSTEMS
Don't use them: 511, 43.2%
Do use them: 235, 19.9%
Never heard of them: 302, 25.5%
Dear 25.5% of people: spaced repetition systems are nifty, mostly free computer programs that allow you to study and memorize facts more efficiently. See for example http://ankisrs.net/
HPMOR:
Never read it: 219, 18.5%
Started, haven't finished: 190, 16.1%
Read all of it so far: 659, 55.7%
Dear 18.5% of people: Harry Potter and the Methods of Rationality is a Harry Potter fanfic about rational thinking written by Eliezer Yudkowsky (the guy who started this site). It's really good. You can find it at http://www.hpmor.com/.
ALTERNATIVE POLITICS QUESTION:
Progressive: 429, 36.3%
Libertarian: 278, 23.5%
Reactionary: 30, 2.5%
Conservative: 24, 2%
Communist: 22, 1.9%
Other: 156, 13.2%
ALTERNATIVE ALTERNATIVE POLITICS QUESTION:
Left-Libertarian: 102, 8.6%
Progressive: 98, 8.3%
Libertarian: 91, 7.7%
Pragmatist: 85, 7.2%
Social Democrat: 80, 6.8%
Socialist: 66, 5.6%
Anarchist: 50, 4.1%
Futarchist: 29, 2.5%
Moderate: 18, 1.5%
Moldbuggian: 19, 1.6%
Objectivist: 11, 0.9%
These are the only ones that had more than ten people. Other responses notable for their unusualness were Monarchist (5 people), fascist (3 people, plus one who was up for fascism but only if he could be the leader), conservative (9 people), and a bunch of people telling me politics was stupid and I should feel bad for asking the question. You can see the full table here.
CAFFEINE:
Never: 162, 13.7%
Rarely: 237, 20%
At least 1x/week: 207, 17.5%
Daily: 448, 37.9%
No answer: 129, 10.9%
SMOKING:
Never: 896, 75.7%
Used to: 105, 8.9%
Still do: 51, 4.3%
No answer: 131, 11.1%
For comparison, about 28.4% of the US adult population smokes.
NICOTINE (OTHER THAN SMOKING):
Never used: 916, 77.4%
Rarely use: 82, 6.9%
>1x/month: 32, 2.7%
Every day: 14, 1.2%
No answer: 139, 11.7%
MODAFINIL:
Never: 76.5%
Rarely: 78, 6.6%
>1x/month: 48, 4.1%
Every day: 9, 0.8%
No answer: 143, 12.1%
TRUE PRISONERS' DILEMMA:
Defect: 341, 28.8%
Cooperate: 316, 26.7%
Not sure: 297, 25.1%
No answer: 229, 19.4%
FREE WILL:
Not confused: 655, 55.4%
Somewhat confused: 296, 25%
Confused: 81, 6.8%
No answer: 151, 12.8%
TORTURE VS. DUST SPECKS
Choose dust specks: 435, 36.8%
Choose torture: 261, 22.1%
Not sure: 225, 19%
Don't understand: 22, 1.9%
No answer: 240, 20.3%
SCHRODINGER EQUATION:
Can't calculate it: 855, 72.3%
Can calculate it: 175, 14.8%
No answer: 153, 12.9%
PRIMARY LANGUAGE:
English: 797, 67.3%
German: 54, 4.5%
French: 13, 1.1%
Finnish: 11, 0.9%
Dutch: 10, 0.9%
Russian: 15, 1.3%
Portuguese: 10, 0.9%
These are all the languages with ten or more speakers, but we also have everything from Marathi to Tibetan. You can see the full table here.
NEWCOMB'S PROBLEM
One-box: 726, 61.4%
Two-box: 78, 6.6%
Not sure: 53, 4.5%
Don't understand: 86, 7.3%
No answer: 240, 20.3%
ENTREPRENEUR:
Don't want to start business: 447, 37.8%
Considering starting business: 334, 28.2%
Planning to start business: 96, 8.1%
Already started business: 112, 9.5%
No answer: 194, 16.4%
ANONYMITY:
Post using real name: 213, 18%
Easy to find real name: 256, 21.6%
Hard to find name, but wouldn't bother me if someone did: 310, 26.2%
Anonymity is very important: 170, 14.4%
No answer: 234, 19.8%
HAVE YOU TAKEN A PREVIOUS LW SURVEY?
No: 559, 47.3%
Yes: 458, 38.7%
No answer: 116, 14%
TROLL TOLL POLICY:
Disapprove: 194, 16.4%
Approve: 178, 15%
Haven't heard of this: 375, 31.7%
No opinion: 249, 21%
No answer: 187, 15.8%
MYERS-BRIGGS
INTJ: 163, 13.8%
INTP: 143, 12.1%
ENTJ: 35, 3%
ENTP: 30, 2.5%
INFP: 26, 2.2%
INFJ: 25, 2.1%
ISTJ: 14, 1.2%
No answer: 715, 60%
This includes all types with greater than 10 people. You can see the full table here.
Part 3: Numerical Data
Except where indicated otherwise, all the numbers below are given in the format:
mean+standard_deviation (25% level, 50% level/median, 75% level) [n = number of data points]
INTELLIGENCE:
IQ (self-reported): 138.7 + 12.7 (130, 138, 145) [n = 382]
SAT (out of 1600): 1485.8 + 105.9 (1439, 1510, 1570) [n = 321]
SAT (out of 2400): 2319.5 + 1433.7 (2155, 2240, 2320)
ACT: 32.7 + 2.3 (31, 33, 34) [n = 207]
IQ (on iqtest.dk): 125.63 + 13.4 (118, 130, 133) [n = 378]
I am going to harp on these numbers because in the past some people have been pretty quick to ridicule this survey's intelligence numbers as completely useless and impossible and so on.
According to IQ Comparison Site, an SAT score of 1485/1600 corresponds to an IQ of about 144. According to Ivy West, an ACT of 33 corresponds to an SAT of 1470 (and thence to an IQ of 143).
So if we consider self-report, SAT, ACT, and iqtest.dk as four measures of IQ, these come out to 139, 144, 143, and 126, respectively.
All of these are pretty close except iqtest.dk. I ran a correlation between all of them and found that self-reported IQ is correlated with SAT scores at the 1% level and iqtest.dk at the 5% level, but SAT scores and IQTest.dk are not correlated with each other.
Of all these, I am least likely to trust iqtest.dk. First, it's a random Internet IQ test. Second, it correlates poorly with the other measures. Third, a lot of people have complained in the comments to the survey post that it exhibits some weird behavior.
But iqtest.dk gave us the lowest number! And even it said the average was 125 to 130! So I suggest that we now have pretty good, pretty believable evidence that the average IQ for this site really is somewhere in the 130s, and that self-reported IQ isn't as terrible a measure as one might think.
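For anyone re-running these correlations on the released data, the pairwise checks can be sketched like so. The data here is synthetic so the snippet is self-contained, and the variable names are my own placeholders, not the actual CSV headers:

```python
from itertools import combinations

import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-ins for the survey columns; with the real data you
# would pull these arrays out of the released CSV instead.
rng = np.random.default_rng(0)
iq_self = rng.normal(138, 13, 300)
sat = 800 + 5 * iq_self + rng.normal(0, 60, 300)  # built to correlate
iqtest_dk = rng.normal(126, 13, 300)              # built independent

measures = {"iq_self": iq_self, "sat": sat, "iqtest_dk": iqtest_dk}
for a, b in combinations(measures, 2):
    r, p = pearsonr(measures[a], measures[b])
    print(f"{a} vs {b}: r = {r:+.2f}, p = {p:.3g}")
```

Each pair gets a Pearson r and a p-value; the "correlated at the 1% level" claims above correspond to p < 0.01 on the real columns.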
AGE:
27.8 + 9.2 (22, 26, 31) [n = 1185]
LESS WRONG USE:
Karma: 1078 + 2939.5 (0, 4.5, 136) [n = 1078]
Months on LW: 26.7 + 20.1 (12, 24, 40) [n = 1070]
Minutes/day on LW: 19.05 + 24.1 (5, 10, 20) [n = 1105]
Wiki views/month: 3.6 + 6.3 (0, 1, 5) [n = 984]
Wiki edits/month: 0.1 + 0.8 (0, 0, 0) [n = 984]
PROBABILITIES:
Many Worlds: 51.6 + 31.2 (25, 55, 80) [n = 1005]
Aliens (universe): 74.2 + 32.6 (50, 90, 99) [n = 1090]
Aliens (galaxy): 42.1 + 38 (5, 33, 80) [n = 1081]
Supernatural: 5.9 + 18.6 (0, 0, 1) [n = 1095]
God: 6 + 18.7 (0, 0, 1) [n = 1098]
Religion: 3.8 + 15.5 (0, 0, 0.8) [n = 1113]
Cryonics: 18.5 + 24.8 (2, 8, 25) [n = 1100]
Antiagathics: 25.1 + 28.6 (1, 10, 35) [n = 1094]
Simulation: 25.1 + 29.7 (1, 10, 50) [n = 1039]
Global warming: 79.1 + 25 (75, 90, 97) [n = 1112]
No catastrophic risk: 71.1 + 25.5 (55, 80, 90) [n = 1095]
Space: 20.1 + 27.5 (1, 5, 30) [n = 953]
CALIBRATION:
Year of Bayes' birth: 1767.5 + 109.1 (1710, 1780, 1830) [n = 1105]
Confidence: 33.6 + 23.6 (20, 30, 50) [n= 1082]
MONEY:
Income/year: 50,913 + 60644.6 (12000, 35000, 74750) [n = 644]
Charity/year: 444.1 + 1152.4 (0, 30, 250) [n = 950]
SIAI/CFAR charity/year: 309.3 + 3921 (0, 0, 0) [n = 961]
Aging charity/year: 13 + 184.9 (0, 0, 0) [n = 953]
TIME USE:
Hours online/week: 42.4 + 30 (21, 40, 59) [n = 944]
Hours reading/week: 30.8 + 19.6 (18, 28, 40) [n = 957]
Hours writing/week: 7.9 + 9.8 (2, 5, 10) [n = 951]
POLITICAL COMPASS:
Left/Right: -2.4 + 4 (-5.5, -3.4, -0.3) [n = 476]
Libertarian/Authoritarian: -5 + 2 (-6.2, -5.2, -4)
BIG 5 PERSONALITY TEST:
Big 5 (O): 60.6 + 25.7 (41, 65, 84) [n = 453]
Big 5 (C): 35.2 + 27.5 (10, 30, 58) [n = 453]
Big 5 (E): 30.3 + 26.7 (7, 22, 48) [n = 454]
Big 5 (A): 41 + 28.3 (17, 38, 63) [n = 453]
Big 5 (N): 36.6 + 29 (11, 27, 60) [n = 449]
These scores are in percentiles, so LWers are more Open, but less Conscientious, Agreeable, Extraverted, and Neurotic than average test-takers. Note that people who take online psychometric tests are probably a pretty skewed category already, so this tells us nothing. Also, several people got confusing results on this test or found them inconsistent with other tests they had taken, and I am pretty unsatisfied with it and don't trust the results.
AUTISM QUOTIENT
AQ: 24.1 + 12.2 (17, 24, 30) [n = 367]
This test says the average control subject got 16.4 and 80% of those diagnosed with autism spectrum disorders get 32+ (which of course doesn't tell us what percent of people above 32 have autism...). If we trust them, most LWers are more autistic than average.
CALIBRATION:
Reverend Thomas Bayes was born in 1701. Survey takers were asked to guess this date within 20 years, so anyone who guessed between 1681 and 1721 was recorded as getting a correct answer. The percent of people who answered correctly is recorded below, stratified by the confidence they gave of having guessed correctly and with the number of people at that confidence level.
0-5: 10% [n = 30]
5-15: 14.8% [n = 183]
15-25: 10.3% [n = 242]
25-35: 10.7% [n = 225]
35-45: 11.2% [n = 98]
45-55: 17% [n = 118]
55-65: 20.1% [n = 62]
65-75: 26.4% [n = 34]
75-85: 36.4% [n = 33]
85-95: 60.2% [n = 20]
95-100: 85.7% [n = 23]
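The table above can be reproduced mechanically: bin answers by stated confidence and compute the fraction correct in each bin. A minimal sketch with toy data (the bin edges and answers here are illustrative, not the survey's):

```python
import numpy as np

def calibration_table(confidence, correct, edges):
    """Fraction correct within each [lo, hi) confidence bin."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidence >= lo) & (confidence < hi)
        n = int(mask.sum())
        acc = float(correct[mask].mean()) if n else float("nan")
        rows.append((lo, hi, n, acc))
    return rows

# Toy data: 50%-confident guessers who are right only 1 time in 4.
conf = [50, 50, 50, 50, 90, 90]
hits = [True, False, False, False, True, True]
for lo, hi, n, acc in calibration_table(conf, hits, [45, 55, 100]):
    print(f"{lo}-{hi}: {acc:.0%} correct [n = {n}]")
```

Plotting accuracy against bin midpoint gives exactly the calibration chart described below: the diagonal is perfect calibration, and points falling under it indicate overconfidence.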

Here's a classic calibration chart. The blue line is perfect calibration. The orange line is you guys. And the yellow line is average calibration from an experiment I did with untrained subjects a few years ago (which of course was based on different questions and so not directly comparable).
The results are atrocious; when Less Wrongers are 50% certain, they have only about a 17% chance of being correct. On this problem, at least, they are at least as prone to overconfidence bias as the general population.
My hope was that this was the result of a lot of lurkers who don't know what they're doing stumbling upon the survey and making everyone else look bad, so I ran a second analysis. This one used only the numbers of people who had been in the community at least 2 years and accumulated at least 100 karma; this limited my sample size to about 210 people.
I'm not going to post exact results, because I made some minor mistakes which means they're off by a percentage point or two, but the general trend was that they looked exactly like the results above: atrocious. If there is some core of elites who are less biased than the general population, they are well past the 100 karma point and probably too rare to feel confident even detecting at this kind of a sample size.
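For anyone wanting to re-run this subgroup analysis on the released data, the filter itself is straightforward; a sketch assuming pandas, with placeholder column names and toy rows (the released CSV's actual headers may differ):

```python
import pandas as pd

# Toy frame standing in for the released survey CSV; the column names
# "karma" and "months_in_community" are placeholders of my own.
df = pd.DataFrame({
    "karma": [0, 150, 3000, 50, 400],
    "months_in_community": [3, 30, 48, 36, 25],
})

# "At least 2 years in the community and at least 100 karma"
veterans = df[(df["months_in_community"] >= 24) & (df["karma"] >= 100)]
print(len(veterans))
```

Running the calibration binning on `veterans` instead of the full frame gives the elite-subgroup numbers discussed above.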
I really have no idea what went so wrong. Last year's results were pretty good - encouraging, even. I wonder if it's just an especially bad question. Bayesian statistics is pretty new; one would expect Bayes to have been born in rather more modern times. It's also possible that I've handled the statistics wrong on this one; I wouldn't mind someone double-checking my work.
Or we could just be really horrible. If we haven't even learned to avoid the one bias that we can measure super well and which is most susceptible to training, what are we even doing here? Some remedial time at PredictionBook might be in order.
HYPOTHESIS TESTING:
I tested a few of the hypotheses that were proposed in the survey design threads.
Are people who understand quantum mechanics more likely to believe in Many Worlds? We perform a t-test, checking whether one's probability of the MWI being true depends on whether or not one can solve the Schrodinger Equation. People who could solve the equation assigned on average a 54.3% probability to MWI, compared to 51.3% among those who could not. The p-value is 0.26; a difference this large would arise by chance about 26% of the time even if understanding the equation made no difference. Therefore, we fail to establish that people's probability of MWI varies with understanding of quantum mechanics.
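The test described above is an ordinary two-sample t-test; a sketch with synthetic data (with the real CSV you would split the MWI-probability column by the Schrodinger-equation answer instead):

```python
import numpy as np
from scipy.stats import ttest_ind

# Synthetic stand-ins for P(MWI) in the two groups, using the group
# means and sizes reported in the post and an assumed spread.
rng = np.random.default_rng(1)
can_solve = rng.normal(54.3, 31, 175).clip(0, 100)
cannot_solve = rng.normal(51.3, 31, 855).clip(0, 100)

# Welch's t-test (no equal-variance assumption)
t, p = ttest_ind(can_solve, cannot_solve, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2f}")
```

With the large within-group spread of probability answers, a 3-point difference in means is easy to produce by chance, which is why the real test comes back non-significant.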
Are there any interesting biological correlates of IQ? We run a correlation between self-reported IQ, height, maternal age, and paternal age. The correlations are in the expected direction but not significant.
Are there differences in the ways men and women interact with the community? I had sort of vaguely gotten the impression that women were proportionally younger, newer to the community, and more likely to be referred via HPMOR. The average age of women on LW is 27.6 compared to 27.7 for men; obviously this difference is not significant. 14% of the people referred via HPMOR were women compared to about 10% of the community at large, but this difference is pretty minor. Women were on average newer to the community - 21 months vs. 39 for men - but to my surprise a t-test was unable to declare this significant. Maybe I'm doing it wrong?
Does the amount of time spent in the community affect one's beliefs in the same way as in previous surveys? I ran some correlations and found that it does. People who have been around longer continue to be more likely to believe in MWI, less likely to believe in aliens in the universe (though not in our galaxy), and less likely to believe in God (though not religion). There was no effect on cryonics this time.
In addition, the classic correlations between different beliefs continue to hold true. There is an obvious cluster of God, religion, and the supernatural. There's also a scifi cluster of cryonics, antiagathics, MWI, aliens, and the Simulation Hypothesis, and catastrophic risk (this also seems to include global warming, for some reason).
Are there any differences between men and women in regards to their belief in these clusters? We run a t-test between men and women. Men and women have about the same probability of God (men: 5.9, women: 6.2, p = .86) and similar results for the rest of the religion cluster, but men have much higher beliefs in for example antiagathics (men 24.3, women: 10.5, p < .001) and the rest of the scifi cluster.
DESCRIPTIONS OF LESS WRONG
Survey users were asked to submit a description of Less Wrong in 140 characters or less. I'm not going to post all of them, but here is a representative sample:
- "Probably the most sensible philosophical resource avaialble."
- "Contains the great Sequences, some of Luke's posts, and very little else."
- "The currently most interesting site I found ont the net."
- "EY cult"
- "How to think correctly, precisely, and efficiently."
- "HN for even bigger nerds."
- "Social skills philosophy and AI theorists on the same site, not noticing each other."
- "Cool place. Any others like it?"
- "How to avoid predictable pitfalls in human psychology, and understand hard things well: The Website."
- "A bunch of people trying to make sense of the wold through their own lens, which happens to be one of calculation and rigor"
- "Nice."
- "A font of brilliant and unconventional wisdom."
- "One of the few sane places on Earth."
- "Robot god apocalypse cult spinoff from Harry Potter."
- "A place to converse with intelligent, reasonably open-minded people."
- "Callahan's Crosstime Saloon"
- "Amazing rational transhumanist calming addicting Super Reddit"
- "Still wrong"
- "A forum for helping to train people to be more rational"
- "A very bright community interested in amateur ethical philosophy, mathematics, and decision theory."
- "Dying. Social games and bullshit now >50% of LW content."
- "The good kind of strange, addictive, so much to read!"
- "Part genuinely useful, part mental masturbation."
- "Mostly very bright and starry-eyed adults who never quite grew out of their science-fiction addiction as adolescents."
- "Less Wrong: Saving the world with MIND POWERS!"
- "Perfectly patternmatches the 'young-people-with-all-the-answers' cliche"
- "Rationalist community dedicated to self-improvement."
- "Sperglord hipsters pretending that being a sperglord hipster is cool." (this person's Autism Quotient was two points higher than LW average, by the way)
- "An interesting perspective and valuable database of mental techniques."
- "A website with kernels of information hidden among aspy nonsense."
- "Exclusive, elitist, interesting, potentially useful, personal depression trigger."
- "A group blog about rationality and related topics. Tends to be overzealous about cryogenics and other pet ideas of Eliezer Yudkowsky."
- "Things to read to make you think better."
- "Excellent rationality. New-age self-help. Worrying groupthink."
- "Not a cult at all."
- "A cult."
- "The new thing for people who would have been Randian Objectivists 30 years ago."
- "Fascinating, well-started, risking bloat and failure modes, best as archive."
- "A fun, insightful discussion of probability theory and cognition."
- "More interesting than useful."
- "The most productive and accessible mind-fuckery on the Internet."
- "A blog for rationality, cognitive bias, futurism, and the Singularity."
- "Robo-Protestants attempting natural theology."
- "Orderly quagmire of tantalizing ideas drawn from disagreeable priors."
- "Analyze everything. And I do mean everything. Including analysis. Especially analysis. And analysis of analysis."
- "Very interesting and sometimes useful."
- "Where people discuss and try to implement ways that humans can make their values, actions, and beliefs more internally consistent."
- "Eliezer Yudkowsky personality cult."
- "It's like the Mormons would be if everyone were an atheist and good at math and didn't abstain from substances."
- "Seems wacky at first, but gradually begins to seem normal."
- "A varied group of people interested in philosophy with high Openness and a methodical yet amateur approach."
- "Less Wrong is where human algorithms go to debug themselves."
- "They're kind of like a cult, but that doesn't make them wrong."
- "A community blog devoted to nerds who think they're smarter than everyone else."
- "90% sane! A new record!"
- "The Sequences are great. LW now slowly degenerating to just another science forum."
- "The meetup groups are where it's at, it seems to me. I reserve judgment till I attend one."
- "All I really know about it is this long survey I took."
- "The royal road of rationality."
- "Technically correct: The best kind of correct!"
- "Full of angry privilege."
- "A sinister instrument of billionaire Peter Thiel."
- "Dangerous apocalypse cult bent on the systematic erasure of traditional values and culture by any means necessary."
- "Often interesting, but I never feel at home."
- "One of the few places I truly feel at home, knowing that there are more people like me."
- "Currently the best internet source of information-dense material regarding cog sci, debiasing, and existential risk."
- "Prolific and erudite writing on practical techniques to enhance the effectiveness of our reason."
- "An embarrassing Internet community formed around some genuinely great blog writings."
- "I bookmarked it a while ago and completely forgot what it is about. I am taking the survey to while away my insomnia."
- "A somewhat intimidating but really interesting website that helps refine rational thinking."
- "A great collection of ways to avoid systematic bias and come to true and useful conclusions."
- "Obnoxious self-serving, foolish trolling dehumanizing pseudointellectualism, aesthetically bankrupt."
- "The cutting edge of human rationality."
- "A purveyor of exceedingly long surveys."
PUBLIC RELEASE
That last commenter was right. This survey had vastly more data than any previous incarnation; although there are many more analyses I would like to run I am pretty exhausted and I know people are anxious for the results. I'm going to let CFAR analyze and report on their questions, but the rest should be a community effort. So I'm releasing the survey to everyone in the hopes of getting more information out of it. If you find something interesting you can either post it in the comments or start a new thread somewhere.
The data I'm providing is the raw data EXCEPT:
- I deleted a few categories that I removed halfway through the survey for various reasons
- I deleted 9 entries that were duplicates of other entries, ie someone pressed 'submit' twice.
- I deleted the timestamp, which would have made people extra-identifiable, and sorted people by their CFAR random number to remove time order information.
- I removed one person whose information all came out as weird symbols.
- I numeralized some of the non-numeric data, especially on the number of months in community question. This is not the version I cleaned up fully, so you will get to experience some of the same pleasure I did working with the rest.
- I deleted 117 people who either didn't answer the privacy question or who asked me to keep them anonymous, leaving 1067 people.
Here it is: Data in .csv format, Data in Excel format
Comments (640)
On the reason for the absence of multiple comparison correction in my various quick tests here: http://lesswrong.com/lw/h56/the_universal_medical_journal_article_error/8q28
A question arose on #lesswrong as to whether female LWers might be more likely to find LW through MoR than not. There is an imbalance in MoR referrals by gender, but it's not sufficiently extreme to hit significance in the limited survey dataset (need moar women).
Doesn't need to hit an arbitrary (if historically established) 0.05 to be significant. 0.1048 still means a (EDIT:) higher probability that you've found something than not.
(Thanks for the correction.)
That is not what p-values mean.
About 25% of cis women who answered the question are vegetarian, compared to about 12.5% of cis men. This is much less extreme than among people I've met in person (only 2 men that I can remember, vs. at least 10 women).
I wouldn't necessarily read too much into your calibration question, given that it's just one question, and there was something of a gotcha.
One thing I learned from doing calibration exercises is that I tended to be much too tentative with my 50% guesses.
When I answered the calibration question, I used my knowledge of other math that either had to have come before Bayes or couldn't have, to narrow the possible window of his birth down to about 200 years. Random chance would then give me about a 20% shot. I thought I had somewhat better information than random chance within that window, so I estimated my guess (IIRC) at 30%. I was, alas, wrong, but I'm pretty confident that I would get around 30% of problems with a similar profile correct. If this problem was tricky, then it is more likely than average to be a problem that people get wrong in a large set. But this will be balanced by problems which are straightforward.
Not to suggest that this result isn't evidence of LW's miscalibration. In fact, it's strong enough evidence for me to throw into serious doubt the last survey's finding that we were better calibrated than a normal population. OTOH neither bit of evidence is terribly strong. A set of 5-10 different problems would make for much stronger evidence one way or the other.
How many of us are there:
A couple of months ago, I asked Trike, the company that manages the website, for a complete list of LessWrong registration dates in order to make a growth chart. I received it on 08-23-2012. The data shows that LessWrong has 13,727 total users, not including spammers and accounts that were deleted.
See also: LessWrong Growth Bar Graph (in the thread "Preventing discussion from being watered down by an "endless September" user influx.")
I didn't do this myself because I didn't trust my statistical ability enough, and I forgot to mention it on the original post, but...
Can someone check for birth order effects? Whether Less Wrongers are more likely to be first-borns than average? Preferably someone who's read Judith Rich Harris' critique of why most birth order effect analyses are hopelessly wrong? Or Gwern? I would trust Gwern on this.
I don't know Harris's critique, but here are some numbers.
Out of survey respondents who reported that they have 1 sibling (n=453), 76% said that they were the oldest (i.e., 0 older siblings). By chance, you'd expect 50% to be oldest.
Of those with 2 siblings, 50% are the oldest (vs. 33% expected by chance), n=240.
Of those with 3 siblings, 45% are the oldest (vs. 25% expected by chance), n=120.
Of those with 4 or more siblings, 50% are the oldest (vs. under 20% expected by chance), n=58.
Of those with 0 siblings, 100% are the oldest (vs. 100% expected by chance), n=163.
Overall, 69% of those who answered the "number of older siblings" question are the oldest.
Those look like big effects, unlikely to be explained by whatever artifacts Harris has found.
There are a handful of people who left the number of older siblings blank but did report a total number of siblings, or who reported a non-integer number of siblings (half-siblings), but they are too few to make much difference in the numbers.
This doesn't seem to vary by degree of involvement in LW; overall 71% of those in the top third of LW exposure (based on sequence-reading, karma, etc.) are the oldest. Here is a little table with the breakdown for them; it shows the percent of people who are the oldest, by number of siblings, for all respondents vs. the highest third in LW exposure.
siblings   all   high-LW
0          100%  100%
1           76%   80%
2           50%   45%
3           45%   51%
4+          50%   62%
That 62% is 8/13, so not very meaningful.
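The one-sibling comparison above (76% oldest out of 453, vs. 50% expected by chance) can be sanity-checked with a quick normal-approximation binomial test. This is a sketch, not the analysis the commenter actually ran:

```python
from math import sqrt, erf

def binomial_z_test(successes, n, p_null):
    """One-sample z-test: is the observed fraction above the chance rate p_null?"""
    p_hat = successes / n
    se = sqrt(p_null * (1 - p_null) / n)  # standard error under the null
    z = (p_hat - p_null) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided, via the normal CDF
    return z, p_value

# One-sibling group: 76% of 453 respondents are oldest vs. 50% expected.
z, p = binomial_z_test(round(0.76 * 453), 453, 0.5)  # z is around 11
```

A z-score that large puts the effect far outside anything chance (or most plausible artifacts) would produce, which matches the "big effects" reading above.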
There seems to be a pretty big potential confounder: age. Many respondents' younger siblings are too young to be contributing to this site, while no one's older siblings are too old (unless they're dead, but since ~98% of the community is under age 60 that's not a significant concern).
Can somebody redo the analysis by controlling for age?
You're saying that if we randomly picked 22-31 year-olds, a disproportionate number would be eldest children? For that to work, there'd have to be more eldest children in that age-range than youngest. Given the increase in population, that is certainly plausible. You would expect more younger families than older families, which means that within an age range there would be a disproportionate number of older siblings (unless it's so young that not all of the younger siblings have been born yet), but it doesn't seem like it would be nearly that significant.
The fact that most of the respondents are eldest children is a confounder for this.
In that case, wouldn't people over 60 also be too old?
I don't know anything about birth order effects, sorry.
Not a survey response but too good to omit:
http://www.onislam.net/english/ask-about-islam/islam-and-the-world/worldview/460333-fiction-depiction-allegory-.html
Another analysis: t-test/logistic regression does not indicate a relationship between getting the first CFAR logic puzzle right and having answered more survey questions than those who got the logic question wrong. (Tim Tyler suggested that there might be a commitment/rushing effect.)
These are the results of the CFAR questions; I have also posted this as its own Discussion section post.
SUMMARY: The CFAR questions were all adapted from the heuristics and biases literature, based on five different cognitive biases or reasoning errors. LWers, on the whole, showed less bias than is typical in the published research (on all 4 questions where this was testable), but did show clear evidence of bias on 2-3 of those 4 questions. Further, those with closer ties to the LW community (e.g., those who had read more of the sequences) showed significantly less bias than those with weaker ties (on 3 out of 4-5 questions where that was testable). These results all held when controlling for measures of intelligence.
METHOD & RESULTS
Being less susceptible to cognitive biases or reasoning errors is one sign of rationality (see the work of Keith Stanovich & his colleagues, for example). You'd hope that a community dedicated to rationality would be less prone to these biases, so I selected 5 cognitive biases and reasoning errors from the heuristics & biases literature to include on the LW survey. There are two possible patterns of results which would point in this direction: LWers showing less bias than the participants in the original studies, and LWers with more LW exposure showing less bias than those with less.
The 5 biases were selected in part because they can be tested with everyone answering the same questions; I also preferred biases that haven't been discussed in detail on LW. On some questions there is a definitive wrong answer and on others there is reason to believe that a bias will tend to lead people towards one answer (so that, even though there might be good reasons for a person to choose that answer, in the aggregate it is evidence of bias if more people choose that answer).
This is only one quick, rough survey. If the results are as predicted, that could be because LW makes people more rational, or because LW makes people more familiar with the heuristics & biases literature (including how to avoid falling for the standard tricks used to test for biases), or because the people who are attracted to LW are already unusually rational (or just unusually good at avoiding standard biases). Susceptibility to standard biases is just one angle on rationality. Etc.
Here are the question-by-question results, in brief. The comment below contains the exact text of the questions, and more detailed explanations.
Question 1 was a disjunctive reasoning task, which had a definitive correct answer. Only 13% of undergraduates got the answer right in the published paper that I took it from. 46% of LWers got it right, which is much better but still a very high error rate. Accuracy was 58% for those high in LW exposure vs. 31% for those low in LW exposure. So for this question, that's:
1. LWers biased: yes
2. LWers less biased than others: yes
3. Less bias with more LW exposure: yes
Question 2 was a temporal discounting question; in the original paper about half the subjects chose money-now (which reflects a very high discount rate). Only 8% of LWers did; that did not leave much room for differences among LWers (and there was only a weak & nonsignificant trend in the predicted direction). So for this question:
1. LWers biased: not really
2. LWers less biased than others: yes
3. Less bias with more LW exposure: n/a (or no)
Question 3 was about the law of large numbers. Only 22% got it right in Tversky & Kahneman's original paper. 84% of LWers did: 93% of those high in LW exposure, 75% of those low in LW exposure. So:
1. LWers biased: a bit
2. LWers less biased than others: yes
3. Less bias with more LW exposure: yes
Question 4 was based on the decoy effect aka asymmetric dominance aka attraction effect (but missing a control condition). I don't have numbers from the original study (and there is no correct answer), so I can't really answer 1 or 2 for this question, but there was a difference based on LW exposure: 57% vs. 44% selecting the answer less associated with bias.
1. LWers biased: n/a
2. LWers less biased than others: n/a
3. Less bias with more LW exposure: yes
Question 5 was an anchoring question. The original study found an effect (measured by slope) of 0.55 (though it was less transparent about the randomness of the anchor; transparent studies w. other questions have found effects around 0.3 on average). For LWers there was a significant anchoring effect but it was only 0.14 in magnitude, and it did not vary based on LW exposure (there was a weak & nonsignificant trend in the wrong direction).
1. LWers biased: yes
2. LWers less biased than others: yes
3. Less bias with more LW exposure: no
One thing you might wonder: how much of this is just intelligence? There were several questions on the survey about performance on IQ tests or SATs. Controlling for scores on those tests, all of the results about the effects of LW exposure held up nearly as strongly. Intelligence test scores were also predictive of lower bias, independent of LW exposure, and those two relationships were almost the same in magnitude. If we extrapolate the relationship between IQ scores and the 5 biases to someone with an IQ of 100 (on either of the 2 IQ measures), they are still less biased than the participants in the original study, which suggests that the "LWers less biased than others" effect is not based solely on IQ.
MORE DETAILED RESULTS
There were 5 questions related to strength of membership in the LW community which I standardized and combined into a single composite measure of LW exposure (LW use, sequence reading, time in community, karma, meetup attendance); this was the main predictor variable I used (time per day on LW also seems related, but I found out while analyzing last year's survey that it doesn't hang together with the others or associate the same way with other variables). I analyzed the results using a continuous measure of LW exposure, but to simplify reporting, I'll give the results below by comparing those in the top third on this measure of LW exposure with those in the bottom third.
There were 5 intelligence-related measures which I combined into a single composite measure of Intelligence (SAT out of 2400, SAT out of 1600, ACT, previously-tested IQ, extra credit IQ test); I used this to control for intelligence and to compare the effects of LW exposure with the effects of Intelligence (for the latter, I did a similar split into thirds). Sample sizes: 1101 people answered at least one of the CFAR questions; 1099 of those answered at least one LW exposure question and 835 of those answered at least one of the Intelligence questions. Further details about method available on request.
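The composite construction described above (standardize each component, then average whatever components a respondent actually answered) might be sketched like this; the component names and data are placeholders, not the survey's:

```python
from statistics import mean, stdev

def zscore_composite(rows):
    """rows: list of dicts mapping component name -> value (None/absent = missing).
    Standardize each component across respondents, then average each
    respondent's available z-scores into one composite score."""
    names = {k for row in rows for k in row}
    stats = {}
    for name in names:
        vals = [row[name] for row in rows if row.get(name) is not None]
        stats[name] = (mean(vals), stdev(vals))
    composites = []
    for row in rows:
        zs = [(row[name] - m) / s
              for name, (m, s) in stats.items()
              if row.get(name) is not None and s > 0]
        composites.append(mean(zs) if zs else None)
    return composites

# Hypothetical respondents with two exposure components:
rows = [{"karma": 0, "seq": 1}, {"karma": 10, "seq": 3}, {"karma": 20, "seq": 5}]
composites = zscore_composite(rows)  # [-1.0, 0.0, 1.0]
```

Averaging available z-scores (rather than requiring all five components) is one reasonable way to handle respondents who skipped a question; the survey analysis may have handled missingness differently.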
Here are the results, question by question.
Question 1: Jack is looking at Anne, but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person?
This is a "disjunctive reasoning" question, which means that getting the correct answer requires using "or". That is, it requires considering multiple scenarios. In this case, either Anne is married or Anne is unmarried. If Anne is married then married Anne is looking at unmarried George; if Anne is unmarried then married Jack is looking at unmarried Anne. So the correct answer is "yes". A study by Toplak & Stanovich (2002) of students at a large Canadian university (probably U. Toronto) found that only 13% correctly answered "yes" while 86% answered "cannot be determined" (2% answered "no").
On this LW survey, 46% of participants correctly answered "yes"; 54% chose "cannot be determined" (and 0.4% said "no"). Further, correct answers were much more common among those high in LW exposure: 58% of those in the top third of LW exposure answered "yes", vs. only 31% of those in the bottom third. The effect remains nearly as big after controlling for Intelligence (the gap between the top third and the bottom third shrinks from 27% to 24% when Intelligence is included as a covariate). The effect of LW exposure is very close in magnitude to the effect of Intelligence; 60% of those in the top third in Intelligence answered correctly vs. 37% of those in the bottom third.
original study: 13%
weakly-tied LWers: 31%
strongly-tied LWers: 58%
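The disjunctive structure can be checked mechanically by enumerating Anne's two possible marital states, the "or" that the question requires:

```python
def married_looking_at_unmarried(anne_married):
    """Is some married person looking at some unmarried person?"""
    married = {"Jack": True, "Anne": anne_married, "George": False}
    looking_at = {"Jack": "Anne", "Anne": "George"}
    return any(married[a] and not married[b] for a, b in looking_at.items())

# Try both possible worlds: Anne married, Anne unmarried.
results = [married_looking_at_unmarried(status) for status in (True, False)]
# -> [True, True]: the answer is "yes" in every case, even though
#    Anne's own status cannot be determined.
```

The trap is that "cannot be determined" feels right because Anne's status is unknown; the enumeration shows the conclusion is invariant to it.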
Question 2: Would you prefer to receive $55 today or $75 in 60 days?
This is a temporal discounting question. Preferring $55 today implies an extremely (and, for most people, implausibly) high discount rate, is often indicative of a pattern of discounting that involves preference reversals, and is correlated with other biases. The question was used in a study by Kirby (2009) of undergraduates at Williams College (with a delay of 61 days instead of 60; I took it from a secondary source that said "60" without checking the original), and based on the graph of parameter values in that paper it looks like just under half of participants chose the larger later option of $75 in 61 days.
LW survey participants almost uniformly showed a low discount rate: 92% chose $75 in 61 days. This is near ceiling, which didn't leave much room for differences among LWers, and in fact there were not statistically significant differences. For LW exposure, top third vs. bottom third was 93% vs. 90%, and for Intelligence it was 96% vs. 91%.
original study: ~47%
weakly-tied LWers: 90%
strongly-tied LWers: 93%
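A rough sketch of why preferring $55 now implies an extremely high discount rate: at indifference between $55 today and $75 in 60 days, the continuously-compounded annual rate solves 55 = 75·exp(-r·60/365).

```python
from math import log, exp

# Implied continuously-compounded annual discount rate at indifference
# between $55 now and $75 in 60 days.
r = log(75 / 55) * 365 / 60   # ~1.89, i.e. ~189% per year (continuous)
annual_factor = exp(r)        # equivalent annual growth factor, ~6.6x
```

So anyone choosing $55 now is implicitly demanding better than a ~560% annual return elsewhere, which is why the choice is taken as evidence of bias rather than prudent finance (modulo the zero-income caveat raised elsewhere in this thread).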
Question 3: A certain town is served by two hospitals. In the larger hospital, about 45 babies are born each day. In the smaller one, about 15 babies are born each day. Although the overall proportion of girls is about 50%, the actual proportion at either hospital may be greater or less on any day. At the end of a year, which hospital will have the greater number of days on which more than 60% of the babies born were girls?
This is a statistical reasoning question, which requires applying the law of large numbers. In Tversky & Kahneman's (1974) original paper, only 22% of participants correctly chose the smaller hospital; 57% said "about the same" and 22% chose the larger hospital.
On the LW survey, 84% of people correctly chose the smaller hospital; 15% said "about the same" and only 1% chose the larger hospital. Further, this was strongly correlated with strength of LW exposure: 93% of those in the top third answered correctly vs. 75% of those in the bottom third. As with #1, controlling for Intelligence barely changed this gap (shrinking it from 18% to 16%), and the measure of Intelligence produced a similarly sized gap: 90% for the top third vs. 79% for the bottom third.
original study: 22%
weakly-tied LWers: 75%
strongly-tied LWers: 93%
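The law-of-large-numbers intuition here can be made exact with binomial tail probabilities (assuming each birth is an independent 50/50 draw, as the question intends):

```python
from math import comb

def p_day_over_60pct_girls(n_births):
    """Probability that strictly more than 60% of a day's births are girls,
    with each birth an independent fair coin flip."""
    cutoff = int(0.6 * n_births)  # need at least cutoff + 1 girls
    return sum(comb(n_births, k)
               for k in range(cutoff + 1, n_births + 1)) / 2 ** n_births

p_small = p_day_over_60pct_girls(15)   # ~0.151
p_large = p_day_over_60pct_girls(45)   # much smaller, ~0.07
expected_days = (round(365 * p_small), round(365 * p_large))
```

Over a year, the small hospital expects roughly twice as many such days: extreme proportions are far more likely in small samples.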
(continued below, due to restrictions on comment length)
Yes, that would be interesting. Perhaps in a top-level post as Morendil suggests.
IIRC I had read the exact same question on LW before, so it might just be that plenty of LWers taking the survey also had.
How many of the people taking the $55 today have zero income?
A hint that this analysis is worth a top-level post, perhaps?
I think you're right; I've posted it to the discussion section (I guess I'll leave it here too).
(more detailed results, continued)
Question 4: Imagine that you are a doctor, and one of your patients suffers from migraine headaches that last about 3 hours and involve intense pain, nausea, dizziness, and hyper-sensitivity to bright lights and loud noises. The patient usually needs to lie quietly in a dark room until the headache passes. This patient has a migraine headache about 100 times each year. You are considering three medications that you could prescribe for this patient. The medications have similar side effects, but differ in effectiveness and cost. The patient has a low income and must pay the cost because her insurance plan does not cover any of these medications. Which medication would you be most likely to recommend?
This question is based on research on the decoy effect (aka "asymmetric dominance" or the "attraction effect"). Drug C is obviously worse than Drug B (it is strictly dominated by it) but it is not obviously worse than Drug A, which tends to make B look more attractive by comparison. This is normally tested by comparing responses to the three-option question with a control group that gets a two-option question (removing option C), but I cut a corner and only included the three-option question. The assumption is that more-biased people would make similar choices to unbiased people in the two-option question, and would be more likely to choose Drug B on the three-option question. The model behind that assumption is that there are various reasons for choosing Drug A and Drug B; the three-option question gives biased people one more reason to choose Drug B but other than that the reasons are the same (on average) for more-biased people and unbiased people (and for the three-option question and the two-option question).
Based on the discussion on the original survey thread, this assumption might not be correct. Cost-benefit reasoning seems to favor Drug A (and those with more LW exposure or higher intelligence might be more likely to run the numbers). Part of the problem is that I didn't update the costs for inflation - the original problem appears to be from 1995 which means that the real price difference was over 1.5 times as big then.
I don't know the results from the original study; I found this particular example online (and edited it heavily for length) with a reference to Chapman & Malik (1995), but after looking for that paper I see that it's listed on Chapman's CV as only a "published abstract".
49% of LWers chose Drug A (the one that is more likely for unbiased reasoners), vs. 50% for Drug B (which benefits from the decoy effect) and 1% for Drug C (the decoy). There was a strong effect of LW exposure: 57% of those in the top third chose Drug A vs. only 44% of those in the bottom third. Again, this gap remained nearly the same when controlling for Intelligence (shrinking from 14% to 13%), and differences in Intelligence were associated with a similarly sized effect: 59% for the top third vs. 44% for the bottom third.
original study: ??
weakly-tied LWers: 44%
strongly-tied LWers: 57%
Question 5: Get a random three digit number (000-999) from http://goo.gl/x45un and enter the number here.
Treat the three digit number that you just wrote down as a length, in feet. Is the height of the tallest redwood tree in the world more or less than the number that you wrote down?
What is your best guess about the height of the tallest redwood tree in the world (in feet)?
This is an anchoring question; if there are anchoring effects then people's responses will be positively correlated with the random number they were given (and a regression analysis can estimate the size of the effect to compare with published results, which used two groups instead of a random number).
Asking a question with the answer in feet was a mistake which generated a great deal of controversy and discussion. Dealing with unfamiliar units could interfere with answers in various ways so the safest approach is to look at only the US respondents; I'll also see if there are interaction effects based on country.
The question is from a paper by Jacowitz & Kahneman (1995), who provided anchors of 180 ft. and 1200 ft. to two groups and found mean estimates of 282 ft. and 844 ft., respectively. One natural way of expressing the strength of an anchoring effect is as a slope (change in estimates divided by change in anchor values), which in this case is 562/1020 = 0.55. However, that study did not explicitly lead participants through the randomization process like the LW survey did. The classic Tversky & Kahneman (1974) anchoring question did use an explicit randomization procedure (spinning a wheel of fortune; though it was actually rigged to create two groups) and found a slope of 0.36. Similarly, several studies by Ariely & colleagues (2003) which used the participant's Social Security number to explicitly randomize the anchor value found slopes averaging about 0.28.
There was a significant anchoring effect among US LWers (n=578), but it was much weaker, with a slope of only 0.14 (p=.0025). That means that getting a random number that is 100 higher led to estimates that were 14 ft. higher, on average. LW exposure did not moderate this effect (p=.88); looking at the pattern of results, if anything the anchoring effect was slightly higher among the top third (slope of 0.17) than among the bottom third (slope of 0.09). Intelligence did not moderate the results either (slope of 0.12 for both the top third and bottom third). It's not relevant to this analysis, but in case you're curious, the median estimate was 350 ft. and the actual answer is 379.3 ft. (115.6 meters).
Among non-US LWers (n=397), the anchoring effect was slightly smaller in magnitude compared with US LWers (slope of 0.08), and not significantly different from the US LWers or from zero.
original study: slope of 0.55 (0.36 and 0.28 in similar studies)
weakly-tied LWers: slope of 0.09
strongly-tied LWers: slope of 0.17
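The anchoring slope reported above is just an ordinary least-squares regression of estimates on the random anchors. A sketch with made-up (anchor, estimate) pairs, not the real survey data:

```python
def ols_slope(x, y):
    """Least-squares slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Hypothetical data: random anchors and the redwood-height estimates
# they elicited. These invented numbers happen to give a slope of 0.14,
# the value reported for US LWers.
anchors = [100, 300, 500, 700, 900]
estimates = [330, 340, 390, 400, 440]
slope = ols_slope(anchors, estimates)  # 0.14: +100 on the anchor -> +14 ft
```

A slope of 0 would mean estimates are independent of the anchor (no bias); a slope of 1 would mean estimates simply parrot the anchor.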
If we break the LW exposure variable down into its 5 components, every one of the five is strongly predictive of lower susceptibility to bias. We can combine the first four CFAR questions into a composite measure of unbiasedness, by taking the percentage of questions on which a person gave the "correct" answer (the answer suggestive of lower bias). Each component of LW exposure is correlated with lower bias on that measure, with r ranging from 0.18 (meetup attendance) to 0.23 (LW use), all p < .0001 (time per day on LW is uncorrelated with unbiasedness, r=0.03, p=.39). For the composite LW exposure variable the correlation is 0.28; another way to express this relationship is that people one standard deviation above average on LW exposure got 75% of CFAR questions "correct" while those one standard deviation below average got 61% "correct". Alternatively, focusing on sequence-reading, the accuracy rates were:
75% Nearly all of the Sequences (n = 302)
70% About 75% of the Sequences (n = 186)
67% About 50% of the Sequences (n = 156)
64% About 25% of the Sequences (n = 137)
64% Some, but less than 25% (n = 210)
62% Know they existed, but never looked at them (n = 19)
57% Never even knew they existed until this moment (n = 89)
Another way to summarize is that, on 4 of the 5 questions (all but question 4 on the decoy effect) we can make comparisons to the results of previous research, and in all 4 cases LWers were much less susceptible to the bias or reasoning error. On 1 of the 5 questions (question 2 on temporal discounting) there was a ceiling effect which made it extremely difficult to find differences within LWers; on 3 of the other 4 LWers with a strong connection to the LW community were much less susceptible to the bias or reasoning error than those with weaker ties.
REFERENCES
Ariely, Loewenstein, & Prelec (2003), "Coherent Arbitrariness: Stable demand curves without stable preferences"
Chapman & Malik (1995), "The attraction effect in prescribing decisions and consumer choice"
Jacowitz & Kahneman (1995), "Measures of Anchoring in Estimation Tasks"
Kirby (2009), "One-year temporal stability of delay-discount rates"
Toplak & Stanovich (2002), "The Domain Specificity and Generality of Disjunctive Reasoning: Searching for a Generalizable Critical Thinking Skill"
Tversky & Kahneman (1974), "Judgment under Uncertainty: Heuristics and Biases"
Okay, now I'm confused. When I did this question, I remember I ignored C as being strictly dominated by B and pulled out a calculator. When I saw this question in the analysis, I did the same thing before scrolling down. Here's what I got:
Drug A saves you from 70 headaches at $350/yr, for a cost of $5 per averted headache. Drug B saves you from 50 headaches at a cost of $100/yr, for a cost of $2 per averted headache.
This seems to contradict your statement "Cost-benefit reasoning seems to favor Drug A". Drug A has a higher cost per prevented headache according to my calculations, which would make Drug B the better one. Am I failing at basic arithmetic, or misunderstanding the question, or what? Please help.
EDIT: I was solving the wrong problem, and a bunch of people showed me why. Thanks for the explanations! I'm glad I got to learn where I was wrong.
You're right about the cost per averted headache, but we aren't trying to minimize the cost per averted headache; otherwise we wouldn't use any drug. We're trying to maximize utility. Unless avoiding several hours of a migraine is worth less to you than $5 (which a basic calculation using minimum wage would indicate that it is not, even excluding the unpleasantness of migraines -- and as someone who gets migraines occasionally, I'd gladly pay a great deal more than $5 to avoid them), you should get Drug A.
Since each drug only reduces the number of headaches to a certain number, cost per headache isn't the right way to look at it. Compare a drug that reduces the headaches to 99/year and costs $0, to a drug that eliminates the headaches completely for $1.
Instead of comparing the cost per headache, it's better to assign a value to time, and calculate the net benefit or harm of each drug. If we assume one hour of time is valued at $7.25, or the US minimum wage, and using the stated information that each headache lasts three hours, the free drug nets you 1*3*7.25 - 0 = $21.75, drug A nets 70*3*7.25 - 350 = $1172.50, and drug B nets 50*3*7.25 - 100 = $987.50.
That's not a good way of looking at severe pain. People often will do long hours of mind-numbing tasks in order to prevent real or imaginary future short-term discomfort, like working out to get in shape for a one-time event.
You're right; I was generalizing from my experiences with migraines, where the pain goes away if I'm lying in a quiet, dark room.
Assuming I did the math right, it seems that folks valuing their time at more than $4.16 an hour should prefer drug A, and those valuing it at less should prefer drug B. To really make this unambiguous, "low income" needs to be defined; assuming it's at least minimum wage, drug A wins pretty clearly...
I think I did the wrong math ($ per headache saved) when taking the actual survey, sadly...
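The break-even calculation the commenters above are doing can be written out explicitly. Valuing an averted 3-hour headache at 3·v dollars (v = hourly value of time), the two drugs tie where their net benefits cross:

```python
# Net benefit of each drug, given hourly value-of-time v:
#   Drug A: 70 averted headaches * 3 hours * v - $350
#   Drug B: 50 averted headaches * 3 hours * v - $100
# Break-even: 210*v - 350 = 150*v - 100  =>  v* = 250/60
v_star = 250 / 60  # ~$4.17/hour; value time above this and Drug A wins

def net_benefit(averted, cost, hourly_value):
    return averted * 3 * hourly_value - cost

# At the US minimum wage of $7.25/hour, Drug A comes out ahead:
at_min_wage = (net_benefit(70, 350, 7.25), net_benefit(50, 100, 7.25))
# -> (1172.5, 987.5)
```

This matches the ~$4.16 figure above (250/60 rounded down); the marginal framing is equivalent: the extra 20 averted headaches cost $250, i.e. $12.50 per 3-hour headache.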
I think this might just be due to the fact that the meme that “time is money” has been repeatedly expounded on LW, rather than long-time LWers being less prone to the decoy effect. All the rot13ed discussions about that question immediately identified Drug C as a decoy and focused on whether a low-income person should be willing to pay $12.50 to be spared a three-hour headache, with a sizeable minority arguing that they shouldn't. I'd look at the income and country of people who chose each drug -- I guess the main effect is what each respondent took “low income” to mean.
"time is money" seems to me a pretty common and natural way to think if you live in a society whose workers tend to be paid hourly, whether you're new to LW or not.
Even people nominally paid hourly often cannot freely choose how many and which hours to work. (With unemployment rates as high as there are now in much of the western world, employers have more bargaining power than workers, etc.) It's not like if I got a headache this evening, I could say “rather than having a three-hour headache, I'll take this $12.50 drug which will stop it, work two hours and earn $20, and then have fun for one hour”.
Exactly. In South Africa that $350 could represent 16% or more of a possible yearly salary in some of our poorer areas.
I had no knowledge of such a survey. These might be more efficient if they were posted in a blatantly obvious manner, like on the banner.
10 people said "Drug C: reduces the number of headaches per year from 100 to 60. It costs $100 per year" over "Drug B: reduces the number of headaches per year from 100 to 50. It costs $100 per year" on CFAR question #4...
I said "Drug A: reduces the number of headaches per year from 100 to 30. It costs $350 per year" personally. I think there's a case for B, maybe, but who picks C?
Another result: no correlation between autism score and consequentialism endorsement.
I wonder whether consequentialism endorsement and possibly some of the probability questions correlate with the two family background questions.
Two? I see FamilyReligion but I dunno what your other one is. But to test family & MoralViews: I wondered if maybe the levels were screwing things up, even though they're in a logical order which should show any correlation if it exists, so I binned all the results into just binary 'atheist' and 'theist' (as it were), and looked at a chi-squared:
I am a little surprised. Maybe I messed up somehow.
The one about which religion.
That's FamilyReligion then... I don't see why there'd be two such questions about family religion as you seem to think.
I meant RELIGIOUS BACKGROUND.
That field has 41 levels, oy gevalt (I particularly like the religious background "Mother: Jewish; Fat"). Someone else can figure out that analysis!
;-D
(Yvain should use larger text fields the next time.)
The lesson I have drawn from the survey is that free-response text fields are the devil and no one is to be trusted with them.
Now that I think about that, lumping Protestants and Orthodoxes together and keeping Catholics separate is about as bizarre as it gets.
Some Bayesian analysis using the BEST MCMC library for normal two-group comparisons:
The results are interesting and not quite the same as a t-test:
the difference in means estimate is sharper than the t-test: Yvain's t-test gave a p-value of 0.26 (he makes the classic error when he says "there is a 26% probability this occurs by chance" - no, there's a 26% chance of seeing a difference this large by chance if one assumes the null hypothesis is true, which says absolutely nothing about whether this happened by chance).
We, however, by using Bayesian techniques can say that given the difference in mean beliefs: there is a 7.2% chance that the null hypothesis (equal belief) or the opposite hypothesis (lower belief) is true in this sample.
We also get an effect-size for free from the difference in means. -0.132 (mode) isn't too impressive, but it's there.
However, both BEST and the t-test are normal tests. The histograms look like the data may be a bimodal distribution: a hump of skeptics at 10%, a hump of believers in the 70%s - and the weirdly low 40s in both groups is just a low point in both? I don't know how much of an issue this is.
For what it's worth, I interpreted his "there is a 26% probability this occurs by chance" exactly as "if there's no real difference, there's a 26% probability of getting this sort of result by chance alone" or equivalently "conditional on the null hypothesis Pr(something at least this good) = 26%". I'd expect that someone who was making the classic error would have said "there is a 26% probability this occurred by chance".
IQ Trend Analysis:
The self-reported IQ results on these surveys have been, to use Yvain's wording, "ridiculed" because they'd mean that the average LessWronger is gifted. Various other questions were added to the survey this time, giving us things to check against, and the results of these other questions have made the IQ figures more believable.
Summary:
LessWrong has lost IQ points on the self-reported scores every year, for a total of 7.18 IQ points in 3.7 years, or about 2 points per year. If LessWrong began with 145.88 IQ points in May 2009, then LessWrong has lost over half of its giftedness (using IQ 132 as the definition, explained below).
The self-reported figures for each year:
IQ on 03/12/2009: 145.88
IQ on 00/00/2010: Unknown*
IQ on 12/05/2011: 140
IQ on 11/29/2012: 138.7
IQ points lost each year:
2.94 IQ point drop for 2010 (Estimated*)
2.94 IQ point drop for 2011 (Estimated*)
1.30 IQ point drop for 2012
Analysis:
Average IQ points lost per year: 1.94
Total IQ points lost: 7.18 in 3.7 years
Total IQ points LessWrong had above the gifted line: 13.88 (145.88 - 132*)
Percent less giftedness on the last survey result: 52% (7.18 / 13.88)
Footnotes:
* Unknown 2010 figures: There was no 2010 survey. The first line of the 2011 survey proposition mentions that.
* Estimated IQ point drops for 2010 and 2011: I divided the 5.88-point drop between 2009 and 2011 by 2 and assigned half to each of 2010 and 2011.
* IQ 132 significance: IQ 132 is the top 2% (This may vary a little bit from one IQ test to another) which would qualify one as gifted by every IQ-based definition I know of. It is also (roughly) Mensa's entrance requirement (depending on the test) though Mensa does not dictate the legal or psychologist's definitions of giftedness. They are a club, not a developmental psychology authority.
This comment is relevant; we have a dataset of users who both took the Raven's test and self-reported IQ. The means of the group that did both were rather close to the means of the groups that did each separately, but the correlation between the tests was low, at .2. If you looked just at responders with positive karma, the correlation increased to a more respectable .45; if you looked just at responders without positive karma, the correlation was -.11. This was a small fraction of responders as a whole, and the average IQ is already tremendously inflated by nonresponse. (If we assumed that, on average, people who didn't self-report an IQ were IQ 100, then the LW average would be only 112!)
As I mentioned previously, and judging from the graphs, the standard deviations of the IQs are obviously mixed up, because they were not specified in the questionnaire, and the people who answered are probably not educated about them either. Mixing IQs in s.d. 24 with those in s.d. 16 and 15 is bound to inflate the average IQ. The top scores in that graph, or at least some of them, are in s.d. 24, which means they would be a lot lower in s.d. 15. IQ 132 is the top-2% cutoff for s.d. 16, while s.d. 15 is the one most adopted in recent scientific literature. For s.d. 24, the cutoff is 148. Mensa, and often people in the press, like to use s.d. 24 to sound more impressive to amateurs.
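Converting between s.d. conventions just means holding the z-score (the rarity) fixed; a minimal sketch:

```python
def convert_iq(iq, sd_in, sd_out, mean=100):
    """Convert an IQ score between standard-deviation conventions
    by holding the z-score (percentile rarity) fixed."""
    z = (iq - mean) / sd_in
    return mean + z * sd_out

# The comment's cutoffs: the ~top-2% line is at z = 2.
print(convert_iq(148, 24, 15))  # 130.0: a "148" on the s.d.-24 scale
print(convert_iq(132, 16, 15))  # 130.0: a "132" on the s.d.-16 scale
```

Both quoted cutoffs land on the same s.d.-15 score, which is the point: the nominal numbers differ only by convention.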
This probably makes tests like the SAT more reliable as an estimate, because they apply the same standard to everyone who submitted scores, although in this case a ceiling effect would become apparent: perfect or nearly-perfect scores can't distinguish IQs above a certain point.
Ooh, you bring up good points. These are a source of noise, for sure.
Now I'm wondering if there are any clever ways to compensate for any of these and remove that noise from the survey...
Error bars, please!
The summary data:
The basic formula for a confidence interval of a population is:
mean ± (z-score of confidence × (standard deviation / √n)); for a 95% confidence level, z = 1.96. Alternatively, run the usual t-tests and look at the confidence interval they calculate for the difference; for 2009 & 2012, the 95% CI for the difference in mean IQ is 3.563–10.578.
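A minimal sketch of that formula; the mean, s.d., and n below are illustrative placeholders rather than the survey's actual summary statistics:

```python
import math

def ci(mean, sd, n, z=1.96):
    """Normal-approximation confidence interval for a population mean:
    mean +/- z * (sd / sqrt(n))."""
    half = z * sd / math.sqrt(n)
    return (mean - half, mean + half)

# Illustrative numbers only -- plug in the survey's actual mean/sd/n.
low, high = ci(mean=138.7, sd=13.6, n=300)
print(round(low, 1), round(high, 1))
```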
To add a linear model (for those unfamiliar, see my HPMoR examples) which will really just recapitulate the simple averages calculation:
Note that Epiphany dates the 2009 survey to around March, while the other two surveys happened around November, so inputting the survey dates just as years lowballs the time gap between the first & second surveys. Your linear trend'll be a bit exaggerated.
I've fixed it as appropriate.
Before, the slope per year was -2.25 (minus 2.25 points a year); now the slope comes out as -0.00519, but if I'm understanding my changes right, the unit has switched from per year to per day, and 365.25 × -0.00519 IQ points per day is -1.896 per year.
2.25 vs 1.9 is fairly different.
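The unit conversion above is easy to check:

```python
# Converting the per-day slope back to a per-year slope.
slope_per_day = -0.00519
days_per_year = 365.25
print(round(slope_per_day * days_per_year, 3))  # -1.896 IQ points per year
```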
I was lazy and ignored all non-numerical IQ comments, so I got slightly different numbers. But my 95% confidence intervals are:
This is one question where the results really surprised me. Combining natural and engineered pandemics, almost a third of respondents picked them as the top near-term x-risk, which was almost twice as many as the next highest risk. I wonder if the x-risk discussions we tend to have may be somewhat misallocated.
Note that x-risks as defined by that question are not the same as x-risks as defined by Bostrom. In principle, a catastrophe might kill 95% of the population but humanity could later recover and colonize the galaxy, or a different type of catastrophe might only kill 5% of the population but permanently prevent humans from creating extraterrestrial settlements, thereby setting a ceiling on economic growth forever.
So, if extraterrestrial settlements are unlikely to be ever created regardless of any catastrophe, the point is moot.
I think that the likes of Bostrom would consider anything that would prevent us from establishing extraterrestrial settlements to be a catastrophe itself, even though it's ‘business as usual’.
Then the 'catastrophe' could be quite possibly intrinsic in the laws of physics and the structure of the solar system.
Many are.
I think I went for political/economic collapse, but with no very great certainty. This is probably a question which could lead to some interesting discussion.
Wiping out 90% or so of the human race without killing everyone seems unlikely in general. It wasn't on the list, but I'd probably go for infrastructure disaster-- something which could include more than one of the listed items.
Less likely than killing 100% of the human race? Why?
Remember that humanity went through bottlenecks where the total population was reduced to tens of thousands scattered in pockets of hundreds to thousands. Humanity survived the Toba super eruption in prehistoric times, and would probably survive the Chicxulub impact if it happened today.
Other than an impact powerful enough to sterilize the biosphere, I don't see many things capable of obliterating the human species in the foreseeable future. Pandemics don't have a 100% kill rate (at least the natural ones; maybe an engineered one could, but who would be foolish enough to create such a thing?)
So many people.
A disgruntled microbiologist?
I'm not an expert, but I don't think that a single individual, or even a small team, could do that.
The genetic variety created and maintained by sexual reproduction pretty much ensures that no single infection mechanism is effective on all individuals: key components such as the cell surface proteins and the immune system show a large phenotypic variability even among people of common ancestry living in a small geographic region (that's also the reason why finding compatible organs for transplants is difficult).
Even for the most infectious pathogens, there is always a sizeable part of the population that is completely or partially immune.
In order to create an artificial pathogen capable of infecting and killing everybody, you would have to engineer multiple redundant infection mechanisms tailored to every relevant phenotypic variation, including the rare ones. Even if your pathogen killed 99.99% of the human population, far more than any natural pathogen ever did, there would be 700,000 people left, more than enough to repopulate the planet.
Is this actually true? Of course, few diseases would actually have good odds of infecting everyone, but surely that's more a matter of exposure. [EDIT: or how you define "partial immunity".]
By "partial immunity" I mean that you catch the disease, but only in an attenuated form, maybe even subclinical or asymptomatic, and usually develop full immunity afterwards. This happened even with highly infectious diseases such as the medieval Black Death (Yersinia pestis), malaria, and smallpox, and now happens with HIV.
AFAIK, a superbug capable of infecting and killing everyone doesn't seem to be biologically plausible, at least without extensive genetic engineering.
Well, genetic engineering is a common part of scenarios like this.
However, it was my understanding that not all natural diseases grant immunity to survivors. I'm not an expert, of course.
Tetanus doesn't grant immunity if you actually get it and survive. They are soil/intestinal bacteria normally and they don't grow within you to a high enough number that your immune system can get a good look at them, their toxin is just potent enough that even at low concentrations it kills you.
There are also protist pathogens which express vast quantities of a particular coat protein on their surface, such that when you form an adaptive immune response against them it is almost certainly against that protein. Then, in something like one in 10^9 cell divisions, their DNA rearranges so that they start expressing a different coat protein and evade the last immune response their host managed to raise, resetting them back to no immunity.
Aha, I knew it!
That's really interesting, actually.
I've been led to understand that this was usually the other way around, or that the mechanism that allowed their survival in the first place was "change something in the immune system, see if it works, repeat until it does". Through some magical process of biology or chemistry afterwards, the found solution is then "remembered" and ready to be deployed again if the disease returns. I'm not quite sure whether anyone understands the exact mechanism behind this magic, but I certainly don't (yet). *
By "the other way around", I mean a selection effect; they survived because they were already more resistant and had the right biological configuration ready to become immune to it or somesuch. I'm not clear on the details, this is all second-hand (but from people who knew what they were talking about, or so it seemed at the time).
* ETA: Got curious. Looks like there's a pretty good understanding of the matter in the field after all. +1 esteem for immunology and +0.2 for scientific medicine in general. And those are some really great wikipedia articles.
Oh, yeah, I know about that. I understood that it didn't work on everything, though. (Well, it doesn't work on the common cold, for a start, although I'm not sure if that kind of constant low-level mutation is feasible for more ... powerful ... diseases.)
EDIT: turns out it is.
I don't know about 90% of the human race, but after the recent tunnel collapse in Japan, I think infrastructure disaster is looking a lot more likely, or possibly slow, grinding infrastructure failure.
You could make a case that too much is taken by elites, or that too much is given away, but I think the big problem is that building is fun and maintenance is boring.
Note that the question on the survey was not about existential risks:
I answered bio-engineered pandemics, but would have answered differently for x-risks.
Now I wish I had written a funnier description...so many of these are silly~
I'd like to note that my suggestion as I offered it didn't include an "Other" option -- you added that one by yourself, and it ended up being selected by more people than "Reactionary" "Conservative" and "Communist" combined. My suggested question would have forced the current "Others" to choose between the five options provided or not answer at all.
Alternate Explanations for LW's Calibration Atrociousness:
Maybe a lot of the untrained people simply looked up the answer to the question. If you did not rule that out with your study methods, then consider seeing whether a suspiciously large number of them entered the exact right year?
Maybe LWers were suffering from something slightly different from the overconfidence bias you're hoping to detect: difficulty admitting that they have no idea when Thomas Bayes was born because they feel they should really know that.
The mean was 1768, the median 1780, and the mode 1800. Only 169 of 1006 people who answered the question got an answer within 20 years of 1701. Moreover, the three people that admitted to looking it up (and therefore didn't give a calibration) all gave incorrect answers: 1750, 1759, and 1850. So it seems like your first explanation can't be right.
After trying a bunch of modifications to the data, it seems like the best explanation is that the poor calibration happened because people didn't think about the error margin carefully enough. If we change the error margin to 80 years instead of 20, then the responses seem to look roughly like the untrained example from the graph in Yvain's analysis.
Another observation is that after we drop the 45 people who gave confidence levels >85% (and in fact, 89% of them were right), the remaining data is absolutely abysmal: the remaining answers are essentially uncorrelated with the confidence levels.
This suggests that there were a few pretty knowledgeable people who got the answer right and that was that. Everyone else just guessed and didn't know how to calibrate; this may correspond to your second explanation.
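The margin-sensitivity check described above can be sketched as follows; the response pairs here are made up for illustration, not taken from the survey data:

```python
# Sketch of the error-margin analysis, on made-up (year, confidence) pairs.
# Thomas Bayes was born in 1701.
TRUE_YEAR = 1701
responses = [(1750, 0.6), (1701, 0.9), (1800, 0.3), (1690, 0.5), (1850, 0.2)]

def hit_rate(responses, margin):
    """Fraction of answers within +/- margin years of the true date."""
    hits = [abs(year - TRUE_YEAR) <= margin for year, _ in responses]
    return sum(hits) / len(hits)

print(hit_rate(responses, 20))  # fraction right under the original margin
print(hit_rate(responses, 80))  # fraction right under the looser margin
```

Comparing the two hit rates against the stated confidence levels is the core of the "people didn't think about the error margin" diagnosis.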
Another thing I have noticed is that I tend to pigeonhole stuff into centuries; for example, once in a TV quiz there was a question “which of these pairs of people could have met” (i.e. their lives overlapped), I immediately thought “It can't be Picasso and van Gogh: Picasso lived in the 20th century, whereas van Gogh lived in the 19th century.” I was wrong. Picasso was born in 1881 and van Gogh died in 1890. If other people also have this bias, this can help explain why so many more people answered 17xx than 16xx, thereby causing the median answer to be much later than the correct answer.
I hate the nth century convention because it doesn't match up with the numbers used for the dates, so I always refer to the dates.... but that actually tends to confuse people.
I was going to say “the 1700s”, but that's ambiguous as in principle it could refer either to a century or to its first decade. (OTOH, it would be more accurate, as my mental pigeonholes lump the year 1700 together with the following 99 years, not with the previous.)
Good points, Kindly, thank you. New alternate explanation idea:
When these people encounter this question, they're slogging through this huge survey. They're not doing an IQ test. This is more casual. They're being asked stuff like "How many partners do you have?" By the time they get down to that question, they're probably in a casual answering mode, and they're probably a little tired and looking for an efficient way to finish. When they see the Bayes question, they're probably not thinking "This question is so important! They're going to be gauging LessWrong's rationality progress with it! I had better really think about this!" They're probably like "Output answer, next question."
If we really want to test them, we need to make it clear that we're testing them. And if we want them to be serious about it, we have to make it clear that it's important. I hypothesize that if we were to do a test (not a survey) and explain that it's serious because we're gauging LessWrong's progress, and also make it short so that the person can focus a lot of attention onto each question, we'd see less atrocious results.
In hindsight, I wonder why I didn't think about the effects of context before. Yvain didn't seem to either; he thought something might be wrong with the question. This seems like one of those things that is right in front of our faces but is hard to see.
I think that people may be rationing their mental stamina, and may not be going through all the steps it takes to answer this type of question.
My first thought about this is that people's rationality 'in real life' is totally determined by how likely they are to notice a Bayes question in an informal setting, where they may be tired and feeling mentally lazy. In Keith Stanovich's terms, rationality is mostly about the reflective mind: someone's capacity and habit of re-computing a problem's answer, using the algorithmic mind, rather than accepting the intuitive default answer that their autonomous mind spits out.
IQ tests tend to be formal; it's very obvious that you're being tested. They don't measure rationality in the sense that most LWers mean it; the ability to apply thinking techniques to real life in order to do better.
It might still be valuable to know how LWers do on a more formal test of probability-related knowledge; after all, most people in the general public don't know Bayes' theorem, so it'd be neat to see how good LW is at increasing "rationality literacy". But that's not the ultimate goal. There are reasons why you might want to measure a group's ability to pick out unexpected rationality-related problems and activate the correct mindware. If your Bayesian superpowers only activate when you're being formally tested, they're not all that useful as superpowers.
I don't think you could really apply any 'algorithmic' method to that question (other than looking it up, but that would be cheating). It was a test on how much confidence you put in your heuristics. (BTW, It seems that I've underestimated mine, or I've been lucky, since I've got the date off by one year but estimated my confidence at 50% IIRC). Still, it was a valuable test, since most of human reasoning is necessarily heuristic.
Really? What probability do you assign to that statement being true? :D
I'm under the impression that Bayes' theorem is included in the high school math programs of most developed countries, and I'm certain it is included in any science and engineering college program.
It was in my high school curriculum (in Italy, in the mid-2000s), but the teacher spent probably only 5 minutes on it, so I would be surprised if a nontrivial number of my classmates who haven't also heard of it somewhere else remember it from there. IIRC it was also briefly mentioned in the part about probability and statistics of my "introduction to physics" course in my first year of university, but that's it. I wouldn't be surprised if more than 50% of physics graduates remember hardly anything about it other than its name.
I'm pretty sure Ireland doesn't have it on our curriculum, not sure how typical we are.
There are national and international surveys of quantitative literacy in adults. The U.S. does reasonably well in these, but in general the level of knowledge is appalling to math teachers. See this pdf (page 118 of the pdf; the in-text page number is "Section III, 93") for the quantitative literacy questions, and the percentage of the general population attaining each level of skill. Less than a fifth of the population can handle basic arithmetic operations to perform tasks like this:
People who haven't learned and retained basic arithmetic are not going to have a grasp of Bayes' theorem.
I assign about 80% probability to less than 25% of adults knowing Bayes theorem and how to use it. I took physics and calculus and other such advanced courses in high school, and graduated never having heard of Bayes' Theorem. I didn't learn about it in university, either–granted, I was in 'Statistics for Nursing', it's possible that the 'Statistics for Engineering' syllabus included it.
Must be a problem of the American school system, I suppose.
Did they teach you about conditional probability? Usually Bayes' theorem is introduced right after the definition of conditional probability.
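For readers who haven't seen it since school: Bayes' theorem is the one-line consequence of the definition of conditional probability, P(A|B) = P(B|A)·P(A)/P(B). The disease-test numbers below are the standard textbook illustration, not anything from the survey:

```python
# Classic Bayes' theorem example: a positive result on an imperfect test.
p_disease = 0.01              # base rate of the disease
p_pos_given_disease = 0.95    # test sensitivity
p_pos_given_healthy = 0.05    # false-positive rate

# Total probability of testing positive.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.161, despite the "95% accurate" test
```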
Only 80%?
In the USA, about 30% of adults have a bachelor's degree or higher, and about 44% of those have done a degree where I can slightly conceive that they might possibly meet Bayes' theorem (those in the science & engineering and science- & engineering-related categories (includes economics), p. 3), i.e. as a very loose bound 13% of US adults may have met Bayes' theorem.
Even bumping the 30% up to the 56% who have "some college" and using the 44% as an estimate of the true ratio of possible-Bayes'-knowledge, that's still only just 25% of the US adult population.
(I've no idea how this extends to the rest of the world, the US data was easiest to find.)
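The back-of-envelope bounds above check out:

```python
# Checking the two bounds quoted in the comment.
relevant_degree_fraction = 0.44  # science/engineering(-related) degrees

lower = 0.30 * relevant_degree_fraction  # bachelor's-or-higher population
upper = 0.56 * relevant_degree_fraction  # "some college" population
print(round(lower, 2), round(upper, 2))  # 0.13 and 0.25
```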
You did your research and earned your confidence level. I didn't look anything up, just based an estimate on anecdotal evidence (the fact that I didn't learn it in school despite taking lots of sciences). Knowing what you just told me, I would update my confidence level a little–I'm probably 90% sure that less than 25% of adults know Bayes' Theorem. (I should clarify: by adults I mean adults living in the US, Canada, Britain, and other countries with similar school systems. The percentage for the whole world is likely significantly lower.)
I hear Britain's school system is much better than the US's.
The UK high school system does not cover Bayes Theorem.
Once you control for demographics, the US public school system actually performs relatively well.
It's not great by international standards, but I have heard that the US system is particularly bad for an advanced country.
Well, it's certainly not included in the US high school curriculum.
I can see why you'd criticize someone for saying "the problem is that the setting wasn't formal enough" but that's not exactly what I was getting at. What I was getting at is that there's a limit to how much thinking that one can do in a day, everyone's limit is different, and a lot of people do things to ration their brainpower so they avoid running out of it. This comment on mental stamina explains more.
My point was, more clearly worded: It would be a very rare person who possesses enough mental stamina to be rational in literally every single situation. That's a wonderful ideal, but the reality is that most people are going to ration brainpower. If your expectation is that rationalists should never ration brainpower and should be rational constantly, this is an unrealistic expectation. A more realistic expectation is that people should identify the things they need to think extra hard about, and correctly use rational thinking skills at those times. Therefore, testing for the skills when they're trying is probably the only way to detect a difference. There are inevitably going to be times when they're not trying very hard, and if you catch them at one of those times, well, you're not going to see rational thinking skills. It may be that some of these things can be ingrained in ways that don't use up a person's mental stamina, but to expect that rationality can be learned in such a way that it is applied constantly strikes me as an unreasoned assumption.
Now I wonder if the entire difference between the control group's results and LessWrong's results was that Yvain asked the control group only one question, whereas LessWrong had answered 14 pages of questions prior to that.
Agreed that rationality is mentally tiring...I went back and read your comment, too. However:
To me, rationality is mostly the ability to notice that "whew, this is a problem that wasn't in the problem-set of the ancestral environment, therefore my intuitions probably won't be useful and I need to think". The only way a rationalist would have to be analytical all the time is if they were very BAD at doing this, and had to assume that every situation and problem required intense thought. Most situations don't. In order to be an efficient rationalist, you have to be able to notice which situations do.
Any question on a written test isn't a great measure of real-life rationality performance, but there are plenty of situations in everyday life when people have to make decisions based on some unknown quantities, and would benefit from being able to calibrate exactly how much they do know. Some people might answer better on the written test than if faced with a similar problem in real life, but I think it's unlikely that anyone would do worse on the test than in real life.
Uh, what? The point of LessWrong is to make people better all the time, not just better when they think "ah, now it's time to turn on my rationality skills." If people aren't applying those skills when they don't know they're being tested, that's a very serious problem, because it means the skills aren't actually ingrained on the deep and fundamental level that we want.
You know that, Katydee, but do all the people who are taking the survey think that way? The majority of them haven't even finished the sequences. I agree with you that it's ideal for us to be good rationalists all the time, but mental stamina is a big factor.
Being rational takes more energy than being irrational. You have to put thought into it. Some people have a lot of mental energy. To refer to something less vague and more scientific: there are different levels of intelligence and different levels of intellectual supersensitivity (a term from Dabrowski that refers to how excitable certain aspects of your nervous system are). Long story short: some people cannot analyze constantly because it's too difficult for them to do so. They run out of juice.

Perhaps you are one of those rare people who has such high stamina for analysis that you rarely run into your limit. If that's the case, it probably seems strange to you that anybody wouldn't attempt to maintain a state of constant analysis. Most people with unusual intellectual stamina seem to view others as lazy when they observe that those other people aren't doing intellectual things all the time. It frequently does not occur to them to consider that there may be an intellectual difference.

The sad truth is that most people have much lower limits on how much intellectual activity they can do in a day than "constant". If you want to see evidence of this, you can look at Ford's studies where he shows that 40 hours a week is the optimum number of hours for his employees to work. Presumably, they were just doing factory work assembling car parts, which (if it fits the stereotype of factory work being repetitive) was probably pretty low on the scale of what's intellectually demanding, but he found that if they tried to work 60 hours for two weeks in a row, their output would dip below the amount he'd normally get from 40 hours. This is because of mistakes. You'd think that the average human brain could do repetitive tasks constantly, but evidently even that tires the brain.
So in reality, the vast majority of people are not capable of the kind of constant meta-cognitive analysis that is required to be rational all the time. You use the word "ingrained" and I have seen Eliezer talk about how patterns of behavior can become habits (I assume he means that the thoughts are cached) and I think this kind of habit / ingrained response works beautifully when no decision-making is required and you can simply do the same thing that you usually do. But whenever one is trying to figure something out (like for instance working out the answers to questions on a survey) they're going to need to put additional brainpower into that.
I had an experience where, due to unexpected circumstances, I developed some vitamin deficiencies. I would run out of mental energy very quickly if I tried to think much. I had, perhaps, a half an hour of analysis available to me in a day. This is very unusual for me because I'm used to having a brain that loves analysis and seems to want to do it constantly (I hadn't tested the actual number of hours for which I was able to analyze, but I would feel bored if I wasn't doing something like psychoanalysis or problem-solving for the majority of the day). When I was deficient, I began to ration my brainpower. That sounds terrible, but that is what I did. I needed to protect my ability to analyze to make sure I had enough left over to be able to do all the tasks I needed to do each day. I could feel that slipping away while I was working on problems and I could observe what happened to me after I fatigued my brain. (Vegetable like state.)
As I used my brainpower rationing strategies, it dawned on me that others ration brainpower, too. I see it all the time. Suddenly, I understood what they were doing. I understood why they kept telling me things like "You think too much!" They needed to change the subject so they wouldn't become mentally fatigued. :/
Even if the average IQ at LessWrong is in the gifted range, that doesn't give everyone the exact same abilities, and doesn't mean that everyone has the stamina to analyze constantly. Human abilities vary wildly from person to person. Everyone has a limit when it comes to how much thinking they can do in a day. I have no way of knowing exactly what LessWrong's average limit is, but I would not be surprised if most of them use strategies for rationing brainpower and have to do things like prioritize answering survey questions lower on their list of things to "give it their all" on, especially when there are a lot of them, and they're getting tired.
I'm not sure I can reliably recognize what mental fatigue feels like. I'd like to be able to diagnose it in myself (because I suspect that I have less mental energy than I used to), so do you know of any reasonably quick way to induce something that feels like mental fatigue, e.g. alcohol?
Alcohol doesn't induce mental fatigue in me; high temperatures and dehydration do. YMMV.
EDIT: So does not eating enough sugars.
Whatever your worst subject is, do a whole bunch of exercises in it until you start making so many mistakes it is not worth continuing. No need for alcohol, might as well wear out your brain.
It would be interesting to see if you'd get different types of fatigue from doing different kinds of activities. For instance, if I do three hours of math problems, I have trouble speaking after that - it's like my symbol manipulation circuitry is fried. (I have dyslexia, so that's probably related.) If I wear out my verbal processor (something that I think only started happening to me after I developed some unexpected vitamin deficiencies) this results in irritation. I can't explain myself very well, so people jump on me for mistakes, and it's really hard to tell them what I meant instead, so I get frustrated.
So, exercising each area of mental abilities might yield different fatigue symptoms.
If you decide to experiment on yourself I'm definitely curious about your results!
That happens to me, too.
What are your fatigue symptoms? How much can you do of each activity before becoming fatigued?
If I've been reading/studying too long, I find much harder to concentrate and am more easily distracted by stray thoughts.
If I've been writing computer code/doing maths too long, I make the kind of trivial mistakes that screw up the results but are hard to locate way more often.
It depends -- usually between 20 minutes and 3 hours.
Re the problem of having to think all the time: a good start is to develop a habit of rejecting certainty about judgments and beliefs that you haven't examined sufficiently. That is, if your intuition shouts at you that something is quite clear, but you haven't thought about it for a few minutes, ignore that intuition (and mark it as a potential bug) unless you understand a reliable reason not to ignore it in that case. If you don't have the stamina or incentives to examine such beliefs/judgments in more detail, that's all right, as long as you remain correspondingly uncertain, and realize that the decisions you make might be suboptimal for that reason (which should suitably adjust your incentives for thinking harder, depending on the importance of the decisions).
The process of choosing a probability is not quite that simple. You're not just making a boolean decision about whether you know enough to know, you're actually taking the time to distinguish between 10 different amounts of confidence (10%, 20%, 30%, etc), and then making ten more tiny distinctions (30%, 31%, 32% for instance)... at least that's the way that I do it. (More efficient than making enough distinctions to choose between 100 different options.) When you are wondering exactly how likely you are to know something in order to choose a percentage, that's when you have to start analyzing things. In order to answer the question, my thought process looked like this:
Bayes. I have to remember who that is. Okay, that's the guy that came up with Bayesian probability. (This was instant, but that doesn't mean it took zero mental work.)
Do I have his birthday in here? Nothing comes to mind.
Digs further: Do I have any reason to have read about his birthday at any point? No. Do I remember seeing a page about him? I can't remember anything I read about his birthday.
Considers whether I should just go "I don't know" and put a random year with a 0% probability. Decides that this would be copping out and I should try to actually figure this out.
When was Bayesian probability invented? Let's see... at what point in history would that have occurred?
Try to brainstorm events that may have required Bayesian probability, or that would have suggested it didn't exist yet.
Try to remember the time periods for when these events happened.
Defines a vague section of time in history.
Considers whether there might be some method of double-checking it.
Considers the meaning of "within 20 years either way" and what that means for the probability that I'm right.
Figures out where in my vague section of time the 40 year range should be fit.
Figures out which year is in the middle of the 40 year range and types it in.
Consider how many years Bayes would likely have to have lived for before giving his theorems to the world and adjust the year to that.
Considers whether it was at all possible for Bayesian probability to have existed before or after each event.
If possible, consider how likely it was that Bayesian probability existed before/after each event.
Calculate how many 40-year ranges there are in the vague section of time between the events where Bayes could not have been born.
Calculate the chance that I chose the correct 40-year section out of all the possible sections, if odds are equal.
Compare this to my probabilities regarding how likely it was for Bayes' theorem to have existed before and after certain events.
Adjust my probability figure to take all that into account.
My answer to this question took at least twenty steps, and that doesn't even count all the steps I went through for each event, nor does it count all the sub steps I went through for things that I sort of hand-waved like "Adjust my probability figure to take all that into account".
If you think figuring out stuff is instant, you underestimate the number of steps your brain does in order to figure things out. I highly recommend doing meditation to improve your meta-cognition. Meta-cognition is awesome.
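The window-counting steps above are ultimately simple arithmetic; a minimal sketch, using an illustrative 200-year span rather than the commenter's actual figure:

```python
# A minimal sketch of the window-counting arithmetic in the steps above.
# The 200-year span is an illustrative assumption, not the commenter's figure.

def base_rate(span_years: int, window_years: int = 40) -> float:
    """Chance of picking the right window if every window is equally likely."""
    n_windows = span_years / window_years
    return 1 / n_windows

# If historical reasoning narrows Bayes' birth down to a 200-year span, there
# are five disjoint 40-year windows, so a uniform guess is right 20% of the time.
print(base_rate(200))  # 0.2
```

The later steps then adjust this uniform base rate up or down using whatever the surrounding events say about which windows are more plausible.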
(I was commenting on a skill/habit that might be useful in the situations where you don't/can't make the effort of explicitly reasoning about things. Don't fight the hypothetical.)
Is it your position that there is a thinking skill that is actually accurate for figuring stuff out without thinking about it?
I expect you can improve accuracy in the sense of improving calibration, even when you are not considering questions in detail, by reducing estimated precision and avoiding unwarranted overconfidence. This helps if your intuitive estimation has an overconfidence problem, which seems to be common (most annoying in the form of "The solution is S!" for some promptly confabulated arbitrary S, when quantifying uncertainty isn't even on the agenda).
(I feel the language of there being "positions" has epistemically unhealthy connotations of encouraging status quo bias with respect to beliefs, although it's clear what you mean.)
The straightforward interpretation of your words evaluates as a falsity, as you can't estimate informal beliefs to within 1%.
I'd put it more in terms of decibels of log-odds than percentages of probability. Telling 98% from 99% (i.e. +17 dB from +20 dB) sounds easier to me than telling 50% from 56% (i.e. 0 dB from +1 dB).
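The conversion being used here is dB = 10·log10(p/(1−p)); a small sketch that reproduces the figures in the comment:

```python
import math

def prob_to_db(p: float) -> float:
    """Convert a probability to log-odds in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

print(round(prob_to_db(0.50)))  # 0  dB
print(round(prob_to_db(0.56)))  # 1  dB
print(round(prob_to_db(0.98)))  # 17 dB
print(round(prob_to_db(0.99)))  # 20 dB
```

On this scale, equal-sized steps multiply the odds by a constant factor, which is why 98% vs 99% (a 3 dB gap) is a larger move than 50% vs 56% (1 dB).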
Well, you can, but it would be a waste of time.
No, I'm pretty certain you can't. You can't even formulate truth conditions for correctness of such an evaluation. Only in very special circumstances would getting to that point be plausible: when a conclusion is mostly determined by data received in an explicit form, or when you work with a formalizable specification of a situation, as in probability theory problems. This is not what I meant by "informal beliefs".
The point is to make these things automatic so that one doesn't have to analyze all the time. I definitely don't feel like I "maintain a state of constant analysis," even when applying purportedly advanced rationality techniques. It basically feels the same as thinking about things normally, except that I am right more often.
I don't believe that your claim is true, but if it is I think LessWrong is doomed as a concept. I frankly do not think people will be able to accurately evaluate when they need to apply thinking skills to their decisions, so if we cannot teach skills on this level-- teach habits, as you say-- I do not think LessWrong will ever accomplish anything of real worth.
One example of a skill that I have taken on on this level is reference class forecasting. If I need to estimate how long something will take, my go-to method is to take the outside view. I am so used to this that it is now the automatic response to questions of estimating times.
I don't use "brainpower rationing" because I frankly have never felt the need to do so. I have told people that they "think too much" under certain circumstances (most notably when thinking is impeding action), and the thought of "brainpower rationing" has never come to mind until I saw this post.
What do you make of this?
Maybe I misinterpreted here but it sounds like you're saying you don't believe in mental stamina limits? Maybe you mean that you don't think rationality requires much brainpower?
I don't think we'd be doomed, and there are a few reasons for that:
There are people in existence who really can analyze pretty much constantly. THOSE people would theoretically have a pretty good chance of being rational all the time.
People who cannot analyze anywhere near constantly can simply choose their battles. If they're aware of their mental stamina limits, they can work with them. Realizing you don't know stuff and that you don't have enough mental stamina to figure it out right now is kind of sad but it is still perfectly rational, so perhaps rationalists with low mental stamina can still be good rationalists that way.
There are things that decrease mental fatigue, for instance taking 15-minute breaks every 90 minutes (the book "The Power of Full Engagement: Manage Energy, Not Time" talks about this). We could do experiments on ourselves to find out what other things reduce or prevent mental fatigue. There may be low-hanging fruit we're totally unaware of.
Okay, so you've learned to instantly go to a certain method. I can believe that this does not take much brainpower. However, how much brainpower does it take to execute the outside view method, on average, for the types of things you use it for? How many times can you execute the outside view in a day? Have you ever tried to reach your mental stamina limit?
Do you ever get home from work and feel relieved that you can relax now, and then do something that's not mentally taxing? Do you ever find that you're starting to hate an activity, and notice you're making more and more mistakes? Do you ever feel lazy and can't be bothered to do anything useful? I bet you do experience mental fatigue but don't recognize it as such. A lot of people just berate themselves for being unproductive, and don't consciously recognize that they've hit a real limit.
I think mental stamina is an important concept.
I'll add mental exuberance (not an ideally clear word, but I don't have a better one)-- how much people feel an impulse to think.
Nancy, there is already a term for this. It's "intellectual overexcitability" or "intellectual supersensitivity". These are terms from Dabrowski. Look up the "Theory of Positive Disintegration" to learn more.
Those terms seem like pathologizing-- which is not surprising, considering that Dabrowski puts emphasis on the difficulties of the path. I was thinking more of the idea that some people like thinking more than others, just as some people like moving around more than others, which is something much less intense.
I was wondering whether Dabrowski was influenced by Gurdjieff, and it turns out that he was.
My method of doing the same calculation was:
The more difficult part was the probability estimate. But using the heuristics taught to me by this book, this took only a few calculations. And the more I do these types of calculations, the faster and more calibrated I become. Eventually I hope to make them automatic at the 8 + 4 = 12 level.
If I were doing the calculation "for real" and not on a survey my algorithm would be much easier:
I know they exist on some level thanks to my experience with dual n-back, but I've yet to encounter any practical situation that imposes them (aside from "getting tired", which is different), and if I did I'm sure I could train my way out, just as I trained my way out of certain physical stamina limits. For example, it was once hard for me to maintain my endurance throughout a full fencing bout, but after some training I can do several in a row without becoming seriously fatigued. I'm sure better fencers than me can do even more.
LessWrong and CFAR, in my view, should provide the mental equivalent of that training if it is indeed necessary for the practice of rationality. I'm not, however, convinced that it is.
Immeasurably small (no perceived effort and takes less time than the alternative)/indeterminate/not in this respect. Most of the effort was involved in correctly identifying situations in which the method was useful, not in actually executing the method, but once the method became sufficiently ingrained that too went away.
No. My work is generally fun.
Not really. Sometimes I get bored, does that count?
Negative.
Fascinating!
It's making me realize why my summer project, which was to read Eat That Frog by Brian Tracy, was such a failure. The book is meant to be applied to work, preferably in an office environment, i.e. during your 40 productive work-hours. I was already working 40 hours a week at my extremely stimulating job as a nurse's aide at the hospital, where I had barely any time to sit down and think about anything, and I certainly didn't have procrastination problems. Then I would get home, exhausted, with my brain about to explode from all the new interesting stuff I'd been seeing and doing all day, and try to apply Brian Tracy's productivity methods to the personal-interest projects I was doing in my spare time.
This was a very efficient way to make these things not fun, make me feel guilty about being a procrastinator, etc. It gave me an aversion to starting projects, because the part of my brain that likes and needs to do something easy and fun after work knew it would be roped into doing something mentally tiring, and that it would be made to feel guilty over not wanting to do it.
I'm hoping that once I'm graduated and work as a nurse for a year or two, so that I have a chance to get accustomed to a given unit and don't have to spend so much mental effort, I'll have more left over for outside interests and can start reading about physics and programming for fun again. (Used to be able to do this in first and second year, definitely can't now.)
I'm glad you seem to have benefited from my explanation. If you want to do mentally draining reading, maybe weekends or later on in the evenings after you've rested would be a good time for that? If you've rested first, you might be able to scrape up a little extra juice.
Of course everyone has their own mental stamina limit, so nobody can tell you whether you do or don't have enough stamina to do additional intellectual activities after work. And it may vary day to day, as work is not likely to demand the exact same amount of brainpower every day.
An interesting experiment would be to see if there's anything that restores your stamina, like a bath, a 20-minute nap after work, meditation, watching TV, or playing a fun game. Simply lying down in a dark, quiet place does wonders for me if I am stressed out or fatigued. I would love to see someone log their mental stamina over time and correlate it with different activities that might restore stamina.
There are also stress reduction techniques that may help prevent you from losing stamina in the first place that could be interesting to experiment with.
And if you're not taking 15 minute breaks every 90 minutes during work, you might be "over-training" your brain. Over-training might result in an amplification of fatigue. "The Power of Full Engagement: Manage Energy Not Time" is likely to be of interest.
If you decide to do mental stamina experiments, definitely let me know!
I hadn't actually thought of that before...but it's an awesome idea! I will let you know if I get around to it.
Woo-hoo! (:
I've also found that pouring lots of cold water on my face helps me squeeze out the last drops of stamina I have left, allowing me to work twenty more minutes or so. (It doesn't actually restore stamina, so it doesn't work if I do it more than a couple of times in a row.)
Hmmm. That might be one or a combination of the following:
Enjoying physical sensation. (Enjoyment seems to restore stamina for me, perhaps that's because the brain uses neurotransmitters for processing, and triggering pleasure involves increasing the amount of certain neurotransmitters.)
Fifteen-minute breaks are supposed to be optimal; if you maximized pleasure during your break, I wonder how much stamina that would restore.
Probably 2. -- the break actually lasts about one minute.
Thanks for the details. If I remember correctly, I was running out of the ability to care by the time I got to the Bayes question.
What were the vitamin deficiencies?
If you'll excuse the expression, I'm suspicious of your sudden epiphany. That is, I accept your suggestion as a possible explanation (although I'm not convinced, mainly because this doesn't describe the way I answered the question; I don't know about anyone else). But I think saying "Oh gosh! The true answer has been staring us in the face all along!" is premature.
I am not sure why you took "a new explanation" so seriously. I guess I have to be really careful on LessWrong to distinguish ideas from actual beliefs. I do not think it's "The True Answer". I just think it's a rather obvious alternate explanation that should have occurred to me immediately, and didn't, and I'm surprised about that, and about the fact that it didn't seem to occur to Yvain either. I reworded some things to make it more obvious that I am not trying to present this as "The True Answer" but just as an idea.
Thank you, I appreciate that.
Would you mind trying to avoid jumping to the conclusion that I'm acting stupid in the future, Kindly? I definitely don't mind being told "Your statement could be interpreted as such-and-such stupid behavior, so you may want to change it." but it's a little frustrating when people speak to me as if they really believe I am as confused as your "The True Answer" interpretation would imply.
I'm not sure why you're accusing me of this. I often disagree with people, but I usually don't assume the people I disagree with are stupid. This is especially true when we disagree due to a misunderstanding.
(I don't intend to continue this line of conversation.)
Only if you took the SAT before 1994. Here are the percentiles for SATs taken in 2012; someone who was 97th percentile would get ~760 on math and ~730 on critical reading, adding up to 1490 (leaving alone the writing section to keep it within 1600), and 97th percentile corresponds to an IQ of 128.
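The percentile-to-IQ conversion mentioned here is just an inverse-normal lookup on a scale with mean 100 and SD 15; a minimal sketch:

```python
from statistics import NormalDist

def percentile_to_iq(pct: float) -> float:
    """Map a population percentile to an IQ score (mean 100, SD 15)."""
    z = NormalDist().inv_cdf(pct / 100)  # standard-normal z for that percentile
    return 100 + 15 * z

print(round(percentile_to_iq(97)))  # 128
```

Note this mapping only holds to the extent the underlying test score is actually a good IQ proxy, which is exactly what's in dispute for the post-1994 SAT.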
An important part of the calibration chart (for people) is the frequency of times that they provide various calibrations. Looking at your table, I would focus on the large frequency between 10% and 30%.
I'll also point out that fixed windows are a pretty bad way to do elicitation. I tend to come at the calibration question from the practical side: how do we get useful probabilities out of subject-matter experts without those people being experts at calibration? Adopting those strategies seems more useful than making people experts at calibration.
I previously mentioned that item non-response might be a good measure of Conscientiousness. Before doing anything fancy with non-response, I first checked that there was a correlation with the questionnaire reports. The correlation is zero:
I am completely surprised. The results in the economics paper looked great and the rationale is very plausible. Yet... The two sets of data here have the right ranges, there's plenty of variation in both dimensions, I'm sure I'm catching most of the item non-responses or NAs given that there are non-response counts as high as 34, there are a lot of datapoints, and it's not that the correlation is in the opposite direction, which might indicate a coding error, but that there's none at all. Yvain questions the Big Five results, but otherwise they look exactly as I would've predicted before seeing them: low C, E, and A; high O; medium N.
There may be something very odd about LWers and Conscientiousness; when I try C vs Income, there's an almost-zero correlation again:
I guess the next step is a linear model on income vs age, Conscientiousness, and IQ:
So all of them combined don't explain much, and most of the work is being done by the age variable... There are many high-income LWers, supposedly (in this subset of respondents reporting age, income, IQ, and Conscientiousness, the max is 700,000), so I'd expect a cumulative r^2 of more than 0.173 for all 3 variables; if those aren't governing income, what is? Maybe everyone working with computers is rich and the others poor? Let's look at everyone who submitted salary and profession and see whether the practical computer people are making bank:
Wow. Just wow. 76k vs 43k. I mean, maybe this would go away with enough fiddling (eg. cost-of-living) but it's still dramatic. This suggests a new theory to me: maybe Conscientiousness does correlate with income at its usual high rate for everyone but computer people who are simply in so high demand that lack of Conscientiousness doesn't matter:
So for the CS people the correlation is small and non-statistically-significant, for non-CS people the correlation is almost 3x larger and statistically-significant.
I am also surprised by this. I wonder about the effect of "I'm taking this survey so I don't have to go to bed / do work / etc.," but I wouldn't have expected that to be as large as the diligence effect.
Also, perhaps look at nonresponse by section? I seem to recall the C part being after the personality test, which might be having some selection effects.
What do you mean? I can't compare non-response with anyone who didn't supply a C score, and there were plenty of questions to non-response on after the personality test section.
It seems to me that other survey non-response may be uncorrelated with C once you condition on taking a long personality survey, especially if the personality survey doesn't allow nonresponse. (I seem to recall taking all of the optional surveys and considering the personality one the most boring. I don't know how much that generalizes to other people.) The first way that comes to mind to gather information for this is to compare the nonresponse of people who supplied personality scores and people who didn't, but that isn't a full test unless you can come up with another way to link the nonresponse to C.
I was thinking it might help to break down the responses by section, and seeing if nonresponse to particular sections was correlated with C, but the result could only be that some sections are anticorrelated if a few are correlated. So that probably won't get you anything.
Why would the strong correlation go away after adding a floor? That would simply restrict the range... if that were true, we'd expect to see a cutoff for all C scores but in fact we see plenty of very low C scores being reported.
Yes. You'd expect, by definition, that people who answered the personality questions would have fewer non-responses than the people who didn't... That's pretty obvious and true:
Were you expecting that people with high C would or wouldn't skip questions? I can see arguments either way. Conscientious people might skip questions they don't have answers to or that they aren't willing to put the time into to give a good answer, or they might put in the work to have answers they consider good to as many questions as possible.
Is it feasible to compare wrong sort of answer with C?
Is it possible that the test for C wasn't very good?
Wouldn't; that was the claim of the linked paper.
Not really, if it wasn't caught by the no-answer check or the NA check.
As I said, it came out as expected for LW as a whole, and it did correlate with income once the CS salaries were removed... Hard to know what ground-truth there could be to check the scores against.
There is a correlation of 0.13 between non-responses and N.
Of course, there's also a correlation of -0.13 between C and the random number generator.
People who had seen the RNG give a large number were primed to feel unusually reckless when taking the Big 5 test. Duh. (Just kidding.)
On IQ Accuracy:
As Yvain says, "people have been pretty quick to ridicule this survey's intelligence numbers as completely useless and impossible and so on" because if they're true, it means that the average LessWronger is gifted. Yvain added a few questions to the 2012 survey, including the ACT and SAT questions and the Myers-Briggs personality type question that I requested (I'll explain why this is interesting), which gives us a few other things to check against and has made the figures more believable. The ridicule may be an example of the "virtuous doubt" that Luke warns about in Overconfident Pessimism, so it makes sense to "consider the opposite":
The distribution of Myers-Briggs personality types on LessWrong replicates the Mensa pattern. This is remarkable since the patterns of personality types here are, in many significant ways, the exact opposite of what you'd find in the regular population. For instance, the introverted rationalists and idealists are each about 1% of the population. Here, they are the majority and it's the artisans and guardians who are relegated to 1% or less of our population.
Mensa's personality test results were published in the December 1993 Mensa Bulletin. Their numbers.
So, if you believe that most of the people who took the survey lied about their IQ, you also need to believe all of the following:
That most of these people also realized they needed to do IQ correlation research and fudge their SAT and ACT scores in order for their IQ lie to be believable.
Some explanation as to why the average of lurkers' IQ scores would come out so close to the average of posters' IQ scores. The lurkers don't have karma to show off, and there's no known incentive good enough to get so many lurkers to lie about their IQ score. Vaniver's figures.
Some explanation for why the personality type pattern at LessWrong is radically different from the norm and yet very similar to the personality type pattern Mensa published and also matched my predictions. Even if they had knowledge of the Mensa personality test results and decided to fudge their personality type responses, too, they somehow managed to fudge them in such a way that their personality types accidentally matched my predictions.
That they decided not to cheat when answering the Bayes birthday question even though they were dishonest enough to lie on the IQ question, motivated to look intelligent, and it takes a lot less effort to fudge the Bayes question than the intelligence and personality questions. (This was suggested by ArisKatsaris).
That both posters and lurkers had some motive strong enough to justify spending 20+ minutes doing the IQ correlation research and fudging personality test questions while probably bored of ticking options after filling out most of a very long survey.
It's easier just to put the real number in the IQ box than do all that work to make it believable, and it's not like the liars are likely to get anything out of boasting anonymously, so the cost-benefit ratio is just not working in favor of the liar explanation.
If you think about it in terms of Occam's razor, what is the better explanation? That most people lied about their IQ, and fudged their SAT, ACT and personality type data to match, or that they're telling the truth?
Summary of criticism:
Possible Motive to Lie: The desire to be associated with a "gifted" group:
In response to this post, NonComposMentis argued that a potential motive to lie is that if the outside world perceives LessWrong as gifted, then anyone having an account on LessWrong will look high-status. In rebuttal:
I figure that lurkers would not be motivated to fudge their results because they don't have a bunch of karma on their account to show off, and anybody can claim to read LessWrong, so fudging your IQ just to claim that the site you read is full of gifted people isn't likely to be motivating. I suggested that we compare the average IQs of lurkers and others. Vaniver did the math and they are very, very close.
I argued, among other things, that it would be falling for a Pascal's mugging to believe that investing the extra time (probably at least $5 worth of time for most of us) into fudging the various different survey questions is likely to contribute to a secret conspiracy to inflate LessWrong's average IQ.
Did the majority avoid filling out intelligence related questions, letting the gifted skew the results?
Short answer: 74% of people answered at least one intelligence-related question, and since most people filled out only one or two, the fact that the self-report, ACT, and SAT score averages are so similar is remarkable.
I realized, while reading Vaniver's post, that if only 1/3 of the survey participants filled out the IQ question, this may have skewed the results toward the gifted range: for instance, if more gifted people had been given IQ tests for school placement (and the others didn't post an IQ score because they didn't know it), or if the amount of pride one takes in one's IQ score significantly influences whether one reports it.
So I went through the data and found that most of the people who answered the IQ question did not fill out all the others. That means that 804 people (74%, not 33%) answered at least one intelligence-related question. As we have seen, the IQ estimates derived from the IQ, SAT, and ACT questions were very close to each other (unsurprisingly, it looks like something's up with the internet test; removing those, 63% of survey participants answered an intelligence-related question). It's remarkable in and of itself that each category of test scores generated an average IQ so similar to the others, considering that different people filled them out. If 1/3 of the population had filled out all of the questions and the other 2/3 none, we could say "maybe the 1/3 did IQ correlation research and fudged these." But if most of the population fills out one or two, and the averages for each category come out close to the averages for the other categories, why is that? How would that happen if they were fudging?
It does look to me like people gave whatever test scores they had and that not all the people had test scores to give but it does not look to me like a greater proportion of the gifted people provided an intelligence related survey answer. Instead it looks like most people provided an intelligence related survey answer and the average LessWronger is gifted.
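The counting rule used here (a respondent counts if any intelligence question is non-blank) can be sketched with hypothetical rows:

```python
# Hypothetical sketch of the "answered at least one intelligence question"
# count. Each respondent row holds optional test entries (None = skipped);
# the rows below are invented, not the survey data.
respondents = [
    {"iq": 135, "sat": None, "act": None},
    {"iq": None, "sat": 1490, "act": None},
    {"iq": None, "sat": None, "act": None},   # answered nothing
    {"iq": None, "sat": None, "act": 33},
]

answered = sum(1 for r in respondents if any(v is not None for v in r.values()))
print(answered, len(respondents), answered / len(respondents))  # 3 4 0.75
```

Applied to the real data, this union-of-columns count is what turns the "only 1/3 reported an IQ" figure into "74% reported at least one intelligence-related score."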
Exploration of personality test fudging:
Erratio and I explored how likely it is that people could successfully fudge their personality tests and why they might do that.
There are a lot of questions on the personality test that have an obvious intelligence component, so it's possible that people chose the answer they thought was most intelligent.
There are also intelligence related questions where it's not clear which answer is most intelligent. I listed those.
The intelligence questions would mostly influence the sensing/intuition dichotomy and the thinking/feeling dichotomy. This does not explain why the extraversion/introversion and perceiving/judging results were similar to Mensa's.
Looking at Groups of IQs:
I acknowledge that the sample for the highest IQ groups is, of course, rather small, but that's all we've got. What's been happening with the numbers for the highest IQ groups, if indicative of what's really happening, is not encouraging. The highest two groups have decreased in numbers while the lowest two have increased. It also looks like the prominence of each group has shifted over time, such that the highest group went from about 1/5 of respondents to 1/20, while the moderately gifted and normal groups have grown substantially.
Exceptionally Gifted Respondents (Self-Reported IQ)
(Defined as having an IQ of 160 or more)
2009: 11 (7%)
2011: 27 (3%)
2012: 22 (2%) (Decreased)
Highly Gifted Respondents (Self-Reported IQ)
(Defined as having an IQ between 145-159)
2009: 17 (11%)
2011: 88 (9%)
2012: 81 (7%) (Decreased)
Moderately Gifted Respondents (Self-Reported IQ)
(Defined as having an IQ between 132-144)
2009: 22 (14%)
2011: 125 (13%)
2012: 149 (11%) (Increased)
Normal Respondents (Self-Reported IQ)
(Defined as having an IQ between 100-131)
2009: 11 (7%)
2011: 91 (10%)
2012: 94 (9%) (Increased)
Each Group as a Percentage of Total IQ Respondents, by Year:
2009 Group IQ Distribution (As a percentage of 61 total IQ respondents)
18% Exceptionally Gifted
28% Highly Gifted
36% Moderately Gifted
18% Normal IQ
2011 Group IQ Distribution (As a percentage of 331 total IQ respondents)
8% Exceptionally Gifted
27% Highly Gifted
38% Moderately Gifted
28% Normal IQ
2012 Group IQ Distribution (As a percentage of 346 total IQ respondents)
6% Exceptionally Gifted
23% Highly Gifted
43% Moderately Gifted
27% Normal IQ
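As a consistency check, the 2012 shares can be recomputed from the raw counts listed above:

```python
# Recompute the 2012 group shares from the raw counts given above.
counts = {"Exceptionally Gifted": 22, "Highly Gifted": 81,
          "Moderately Gifted": 149, "Normal": 94}
total = sum(counts.values())  # 346 total IQ respondents

for label, n in counts.items():
    print(label, f"{100 * n / total:.0f}%")  # 6%, 23%, 43%, 27%
```

The same arithmetic on the 2009 and 2011 counts reproduces the other two distributions.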
I don't find it that hard to see why Lesswrong and Mensa would both select for introverted personalities. Do you?
I think most sensible people can deduce that IQ is positively correlated with SAT and ACT scores, and that all of them are positively correlated with "status". I agree that the SAT and ACT are more difficult to fudge, though. I haven't taken either of them. Can they easily be retaken several times? Do (smart) people in the US talk freely about their scores?
Many people take IQ tests of varying quality several times and could just remember or report the best result they've gotten. There are different levels of dishonesty; "lying" is a bit crude.
I don't think anyone on Less Wrong has lied about their IQ. (addendum: not enough to seriously alter the results, anyway.) If you come up with a "valuing the truth" measure, LessWrong would score pretty highly on that considering the elaborate ways people who post here go about finding true statements in the first place. To lie about your IQ would mean you'd have to know to some degree what your real IQ is, and then exaggerate from there.
However, I do think it's more likely than you mention that most people on LessWrong self-reporting IQ simply don't know what their IQ is with any certainty, since to know your adult IQ you'd have to see a psychometrician and take an administered IQ test. iqtest.dk is normed by Mensa Denmark, so it's far more reliable than self-reports. You don't know where the self-reported IQ figures are coming from -- they could be from a psychometrician measuring adult IQ, or they could be from somewhere far less reliable. It could be that they know their childhood IQ was measured at somewhere around 135, for example, and are going by memory. Or they could know by memory that their SAT is 99th percentile and spent a minute looking up what 99th percentile corresponds to in IQ, not knowing it's not a reliable proxy. Or they might have taken an online test somewhere that gave ~140 and are recalling that number. Who knows? Either way, I consider "don't attribute to malice what you can attribute to cognitive imperfection" a good mantra here.
126 is actually higher than a lot of people think. As an average for a community, that's really high -- probably higher than that of any group I can think of except math professors, physics professors, and psychometricians themselves. It's certainly higher than the averages for MIT and Harvard, anyway.
About the similarity between self-reported IQ and SAT scores: post-1994 SAT scores (which most of the scores here are likely to be) are not reliable as IQ test proxies; Mensa no longer accepts them. This is because the modern test is much easier to game. I tutor the SAT, and when I took it before applying to a tutoring company my reading score was 800, though in high school it was only in the mid-600s. SAT reading scores are heavily influenced by (1) your implicit understanding of informal logic, and (2) your familiarity with English composition and how arguments/passages may be structured. Considering the SAT has contained these kinds of questions since the mid-90s, I am inclined to throw its value as a proxy IQ test out the window, and I don't think you can draw conclusions about LessWrong's real collective IQ from the reported SAT scores.
The iqtest.dk result may be the lowest of the measures, but I also think it's the most accurate. It might not put LessWrong in the 130s, but it would still mean the community is on the same level of intellect as, say, surgeons and Harvard professors, which is pretty formidable for a community.
Was Mensa's test conducted on the internet? The internet has a systematic bias in personalities. For example, subscriptions to the subreddit for each personality type favor Introversion and Intuition:

- INTJ: 4,828
- INTP: 4,457
- INFP: 1,817
- INFJ: 1,531
IAWYC, but "the internet" is way too broad for what you actually mean -- ISTM that a supermajority of teenagers and young adults in developed countries uses it daily, though plenty of them mostly use it for Facebook, YouTube and similar and probably have never heard of Reddit. (Even I never use Reddit unless I'm following a link to a particular thread from somewhere else -- but the first letter of my MBTI is E so this kind of confirms your point.)
Yeah...by "internet" what I meant was sites that most people do not know about - sites that you would only stumble upon in the course of extensive net usage. I once described it to a friend as "deep" vs "shallow" internet, with depth corresponding to the extent to which a typical visitor to the website uses the internet. Even within a website (say reddit) a smaller sub-reddit would be "deeper" than a main one.
I myself am actually a counterexample to my own "extroverts don't use the internet as much" notion...but I'm only a moderate extrovert. (ENTP or ENFP depending on the test...the ENTP description fits better. I listed ENTP in the survey.)
By that definition, there are many nearly disconnected "deep internets".
Yes...I'm confused. Is this supposed to be a flaw in the definition? The idea here is to use relative obscurity to describe the degree to which a site is visited only by Internet users who do heavy exploring. There are only a few "shallow" regions... Facebook, Wikipedia, Twitter... the shallowest being Google. These are all high-traffic, and even people who never use computers have heard some of these names. There are many deep regions, on the other hand, and most are disconnected.
It is if you then proceed to claim to have statistics over users of the "deep internet".
Yeah, different websites have different personality skews, which complicates things. Fortunately there's evidence against Mensa having used an online sample: Epiphany said the results were published in December 1993. It's fairly easy to give a survey to an Internet forum nowadays, but where would Mensa have found an online sample back in '93? IRC? Usenet? (There is a rec.org.mensa where people posted about personality and the Myers-Briggs back in 1993, but the only relevant post that year was someone asking about Mensans' personalities to no avail.)
I don't have any more data than that, sorry.
To suggest that people on the internet may have certain personality types is a good suggestion, but it raises two questions:
Might your example of Reddit be similar to LW because LW gets lots of users from Reddit? (Or put another way, if the average LessWronger is gifted, maybe "the apple doesn't fall far from the tree" and Reddit has lots of gifted people, too.)
Might gifted people gather in large numbers on the internet because it's easier to find people with similar interests? (Just because people on the internet tend to have those personality types, it doesn't mean they're not gifted.)
As for "the internet" having a systematic bias in personalities, I would like to see the evidence of this that's not based on a biased sample. It's likely that the places you go to find people like you will, well, have people like you, so even if you (or somebody else on one of those sites) observed a pattern in personality types across sites they hang out on, the sample is likely to be biased.
That's a good point - I hadn't considered sample bias. Extending that point, though, LessWrong and Mensa are biased samples in more than the simple fact that the people are gifted: it is only a subset of gifted people that choose to participate in Mensa. It should be mentioned that I'm using "internet" as shorthand for the "deep" internet... not Facebook. I'm talking about websites that most people do not use, that you'd have to spend a lot of time on the internet to find. As such, the "internet" hypothesis would predict a greater bias towards smaller subreddits.
Anyway, I was mostly posing an alternate hypothesis. When I first noticed the trend on the personality forums, this is what I thought was happening -
Slacking off / internet addiction selects for Perceiving and low Conscientiousness.
Non-social-networking internet use selects for Introversion.
Any forum discussing an idea without immediate practical benefits selects for iNtuition.
And then, factor in lesswrong/giftedness...
If it's a math/science/logic topic, it selects for Thinking and iNtuition.
High scores on Raven's matrices select for Thinking and iNtuition. High scores on working-memory components select for Judging. The ACT/SAT additionally select for Conscientiousness.
Strong mathematical affinity shifts those on the border of *NTP and *NTJ into *NTJ (people prefer dealing with intellectually ordered systems, even if they have messy rooms and chaotic lifestyles)
A scientific/engineering ideology, with its shift towards the concrete (empirical evidence, practical gains in technology, etc.), shifts those on the border of *NTJ and *STJ into *STJ.
In summary, I think the LW and Mensa surveys are attracting a special subset of people: idea-driven and logical (iNtuitives and Thinkers), and likely to use the internet often and spot the survey (Introverts).
That's much nicer and much more detailed. Questions this raises:
Might the "deep" internet you refer to be selecting for gifted people? (I think this is likely!)
Do we have figures on personality types and IQs for internet forums in general, not from a biased sample set? These figures would test your theory.
I agree with (1), but would claim that it also selectively attracts introverts (and I'm unsure whether or not it will bias J-P to the P side)
(2) For each of these, I tried not to look at the data after finding the poll. I made predictions first. Just for fun / to correct for hindsight bias, anyone reading might want to do the same. To play, don't click on the link or read my prediction until you make yours. Also, here is some data which claims to represent the general population - http://mbtitruths.blogspot.com/2011/02/real-statistics.html for comparison. I've already seen similar data on another site, so I won't state my predictions on this one.
A website posts stats for people who have taken the test. Unlike the above simple random sample, this selects for internet users.
http://www.personalitypage.com/html/demographics.html
Prediction: I'd consider this "shallow internet", so very weak biases to (I). The general population is (S), I'd expect a weak bias to (N) but not enough to overcome the general population's S centering.
Result: apparently I suck at predictions. In hindsight, all of the top three would be predicted to score high on "Fi" on a Jungian cognitive function test, and Fi in theory would be more interested in taking personality tests. But that's hindsight, and I'm not sure the connection between the MBTI and Jungian functions has been verified empirically.
Here is a "deep internet" forum that I wouldn't ever visit... a Christian singles chat forum! This should not suffer from the sample bias you mentioned earlier (that websites I visit are likely to have users with personalities similar to mine [ENTP]).
http://christianchat.com/christian-singles-forum/34516-meyers-briggs-type-indicator-mbti-poll.html
Prediction: I tried my best not to look at the data, despite its high visual salience as soon as you open that link. I'd predict a strong bias towards Introversion (because internet), a slight bias towards iNtuition (because religion is idea-based), a moderate bias to Feeling (I think religious people are illogical), and... let's say a slight tilt towards Judging. Call it a hunch; life experience says that Si (Sensing + Judging) is particularly predisposed to religion.
Result: OK, looks like my trends were right but my magnitudes were way off. My "hunch" was correct, but I didn't listen to it closely enough and vastly underestimated the Judging bias, while my personal prejudice overestimated the Feeling bias. My predictions about iNtuition and Introversion were essentially correct, though.
http://personalitycafe.com/myers-briggs-forum/28171-mbti-demographics.html
Click the ppt, it has data by education.
Prediction: NT's pursue higher education, SF's do not. Other two dichotomies don't matter as much, but J helps slightly.
Result: seems about right. Eyeballing, J seems not to matter much until college, at which point it prevents dropping out.
For IQ - http://asm.sagepub.com/content/3/3/225.short
Prediction- Strong N, slight T bias. I don't think T actually means "intelligent" as I define it, but I do think it would help on some portions of the IQ test.
Result: N bias only. Interesting.
Finally, Scientific aptitude: http://www.amsciepub.com/doi/pdf/10.2466/pr0.1970.26.3.711
Prediction - strong N, moderate T. I'm not sure about J-P. I think people who choose science tracks and go into academia will be P (creative types), whereas kids who get good grades but ultimately do not choose science will be J. I'm not sure which group they are looking at (I didn't permit myself to read it yet, so I'm a bit vague on what exactly they did). I don't think E - I will matter at all.
Result -
NTs take high-level science a lot more; Introverts take it slightly more. J-P is irrelevant. iNtuition really helps in school at all levels. Feeling relates to high GPA in the easy courses but not the hard course (that's pretty unexpected). Introversion relates to high GPA in the hard course but not in the easy courses. Perceivers start out with a pretty big edge in both IQ and GPA in the lower-level courses; Judging takes a slight lead in both metrics in the advanced course. Not sure if this is noise.
Side finding - they also did IQ measurements. Again, only N related to IQ (in fact, F won out over T)...but it did not relate as much in the advanced courses. I think the advanced course chopped off the lower end of the IQ bell curve, leaving only smart Sensors. By the way, Extroverts have an IQ edge, despite getting lower grades and not taking advanced courses as often.
Thoughts? I think my ideas were generally borne out: introversion doesn't matter for intelligence, but matters a lot for internet use. Apparently Thinking doesn't really matter either... which I sort of felt was true, but I didn't actually expect the IQ test scores to agree with me on that. It might have to do with self-reported vs. actual use of logic.
Of course, we are looking at the center of the bell curve, whereas on LW we are (presumably) looking at the far right edge.
EDIT: here is another IQ one with bigger sample size. http://www.psytech.com/Research/Intelligence-2009-08-11.pdf
They say that they found IQ correlates with I, N, T, and P. However, they claim they were surprised about the "I" correlation, because a large number of other studies have found that E is positively correlated. They go on to talk about how different testing conditions might favor E vs. I. Some interesting further reading in there... it seems that N only correlates on the verbal reasoning section.
I'd say "LW has about as many gifted people as Reddit (proportionally)" should be a sort of null hypothesis: if this is true, then people on LessWrong are not actually surprisingly smart.
I wouldn't say that's a reasonable null. Reddit has like 8 million users; 2% of the 310m American population is just 6.2m, so it would be difficult for Reddit to be 100% gifted while LW could easily be. The size disparity is so large that such a null seems more than a little weird.
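The back-of-the-envelope bound here can be made explicit; a quick sketch using the rough figures quoted above (~8M Reddit users, ~310M Americans):

```python
# Generous upper bound on the fraction of Reddit that could be "gifted"
# (top ~2% of IQ): assume every Reddit user is American AND every gifted
# American uses Reddit -- both deliberately charitable assumptions.
US_POPULATION = 310_000_000
GIFTED_FRACTION = 0.02      # IQ ~130+, top 2%
REDDIT_USERS = 8_000_000

gifted_americans = US_POPULATION * GIFTED_FRACTION   # 6,200,000
max_gifted_share = gifted_americans / REDDIT_USERS

print(f"At most {max_gifted_share:.1%} of Reddit could be gifted")  # 77.5%
```

Even under these charitable assumptions, proportions above ~77% are impossible for Reddit, while a ~1200-person LW sample faces no such ceiling.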
I don't think I understand your objection. If LW were 100% gifted (while Reddit, presumably, is not?) wouldn't that be evidence that there's some sort of IQ selection at work? (or, conceivably, that just being on LW makes people smarter, although I think that's not supposed to be a thing).
I'm saying that we could, just from knowing how big Reddit is, reject out of hand all sorts of proportions of gifted because it would be nigh impossible; a set of nulls (the proportions 0-100%), many of which (all >75%) we can reject before collecting any data is a pretty strange choice to make!
Well, really what I want to ask is: is LW any different, IQ-wise, from a random selection of Redditors of the same size? Possibly stating it in terms of a proportion of "gifted" people is misleading, but that's not as interesting anyway.
I don't see the difference. A random selection of Redditors is going to depend on what Reddit overall looks like...
Well, I don't see the difference either, but I'm still not entirely sure what about this hypothesis seems unreasonable to you, so I was hoping this reformulation would help.
The reasoning behind it is as follows: I figure a generic discussion board on the Internet has roughly the same IQ distribution as Reddit. If LW has a high average IQ, but so does Reddit, then presumably these are both due to the selection effect of "someone who posts on an online discussion board". So to see if LW is genuinely smarter, we should be comparing it to Reddit, not to the Normal(100,15) distribution.
Maybe it just means Reddit-folk are surprisingly smart? I mean, IQ 130 corresponds to 98th percentile. The usual standard for surprise is 95th percentile.
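For reference, the percentile claims check out against the conventional Normal(100, 15) IQ norming; a minimal sketch using only the standard library:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # conventional IQ norming

print(f"IQ 130 is the {iq.cdf(130):.1%} percentile")  # ~97.7%
print(f"IQ 126 is the {iq.cdf(126):.1%} percentile")  # ~95.8% (survey average)
```

So IQ 130 sits at roughly the 98th percentile, and even the iqtest.dk average of ~126 clears the 95th-percentile "surprise" threshold.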
Scores on standardized tests like the SAT and ACT can be improved via hard work and lots of practice -- there are abundant practice books for such tests. It is entirely conceivable that those self-reported IQs were generated by comparing scores on these standardized tests against IQ-conversion charts. I.e., with very hard work, the apparent IQs are in the 130+ range according to these standardized tests; but when it comes to tests that measure your native intelligence (e.g., iqtest.dk), the scores are significantly lower. In future years, it would be advisable for the questionnaire to ask participants how much time they spent in total preparing for tests such as the SAT and ACT -- and even then you might not get honest answers. That brings me to the point of lying...
Not necessarily true. If the survey results show that LWers generally have IQs in the gifted range, then it allows LWers to signal their intelligence to others just by identifying themselves as LWers. People would assume that you probably have an IQ in the gifted range if you tell them that you read LW. In this case, everyone has an incentive to fudge the numbers.
erratio has also pointed out that participants might have answered those personality tests untruthfully in order to signal intelligence, so I shan't belabour the point here.
Ok, now here is a motive! I still find it difficult to believe that:
Most of 1000 people care so much about status that they're willing to prioritize it over truth, especially since this is LessWrong where we gather around the theme of rationality. If there's anyplace you'd think it would be unlikely to find a lot of people lying about things on a survey, it's here.
The people who take the survey know that their IQ contribution is going to be watered down by the 1000 other people taking the survey. Unless they have collaborated by PM and have made a pact to fudge their IQ test figures, these frequently math oriented people must know that fudging their IQ figure is going to have very, very little impact on the average that Yvain calculates. I do not know why they'd see the extra work as worthwhile considering the expected amount of impact. Thinking that fudging only one of the IQs is going to be worthwhile is essentially falling for a Pascal's mugging.
Registration at LessWrong is free and it's not exclusive. At all. How likely is it, do you think, that this group of rationality-loving people has reasoned that claiming to have joined a group that anybody can join is a good way to brag about their awesomeness?
I suppose you can argue that people who have karma on their accounts can point to that and say "I got karma in a gifted group" but lurkers don't have that incentive. All lurkers can say is "I read LessWrong." but that is harder to prove and even less meaningful than "I joined LessWrong".
Putting the numbers where our mouths are:
If the average IQ for lurkers / people with low karma on LessWrong is pretty close to the average IQ for posters and/or people with karma on LessWrong, would you say that the likelihood of post-making/karma-bearing LessWrongers lying on the survey in order to increase other's status perceptions of them is pretty low?
Do you want to get these numbers? I'll probably get them later if you don't, but I have a pile of LW messages and a bunch of projects going on right now so there will be a delay and a chance that I completely forget.
I have thought of that. But a person who wants to lie about his IQ would think this way: If I lie and other LWers do not, it is true that my impact on the average calculated IQ will be negligible, but at least it will not be negative; but if I lie and most other LWers also lie, then the collective upward bias will lead to a very positive result which would portray me in a good light when I associate myself with other LWers. So there is really no incentive to not lie.
(I'm not saying that they definitely lied; I'm merely pointing out that this is something to think about.)
Fair point; but very often the kind of clubs you join does indicate something about your personality and interests, regardless of whether you are actually an active/contributing member or not. Saying "I read LessWrong" or "I joined LessWrong" certainly signals to me that you are more intelligent than someone who joined, say, Justin Bieber's fan club, or the Twilight fan-fiction club. And if there are numbers showing that LW readers tend to have IQs in the gifted range, naturally I would think that X is probably quite intelligent just by virtue of the fact that X reads LW.
One last point is that LWers might not be deliberately lying: perhaps they were merely victims of the Dunning-Kruger effect when self-reporting IQs. I am not sure if there are any studies showing that intelligent people are generally less likely to fall prey to it.
Last but not least, I would again like to suggest that future surveys include questions asking people how much time they spent on average preparing for exams such as the SAT and the ACT -- as I pointed out previously, scores on such exams can be very significantly improved just by studying hard, whereas tests like iqtest.dk actually measure your native intelligence.
Not necessarily true. It would probably take at least 20 minutes to fudge everything that has to be fudged, and when you're already fatigued from filling out survey questions, that's even less appealing. At best, this would be falling for a Pascal's mugging. True, some people may. But would the majority of survey participants... at a site about rationality?
They were not asked to assess their own IQ; they were asked to report the results of a real assessment. To report something other than the results of a real assessment is a type of lie in this case.
That's a suggestion for Yvain. I don't assist with the surveys.
From the public dataset:
165 out of 549 responses without reported positive karma (30%) self-reported an IQ score; the average response was 138.44.
181 out of 518 responses with reported positive karma (34%) self-reported an IQ score; the average response was 138.25.
One of the curious features of the self-reports is how many of the IQs are divisible by 5. Among lurkers, we had 2 151s, 1 149, and 10 150s.
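That clustering on multiples of 5 is a standard digit-preference ("heaping") signal; here's a minimal sketch of the check, on made-up scores rather than the survey data:

```python
# If scores came from real test reports, last digits should be roughly
# uniform, so only about 1 in 5 scores would be divisible by 5; a much
# higher share suggests people are rounding remembered or estimated numbers.
scores = [150, 135, 140, 138, 145, 150, 132, 140, 150, 129, 135, 147]  # made up

share_mult5 = sum(s % 5 == 0 for s in scores) / len(scores)
print(f"{share_mult5:.0%} of scores are multiples of 5 (expect ~20%)")  # 67%
```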
I think the average self-response is basically worthless, since it's only a third of responders and they're likely to be wildly optimistic.
So, what about the Raven's test? In total, 188 responders with positive karma (36%) and 164 responders without positive karma (30%) took the Raven's test, with averages of 126.9 and 124.4. Noteworthy are the new max and min: the highest scorer on the Raven's test claimed 150, and the three sub-100 scores were 3, 18, and 66 (of which I suspect only the last isn't a typo or error of some sort).
Only 121 users both self-reported IQ and took the Raven's test. The correlation between their mean-adjusted self-reported IQ and mean-adjusted Raven's test was an abysmal .2. Among posters with positive karma, the correlation was .45; among posters without positive karma, the correlation was -.11.
Thank you for these numbers, Vaniver! I should have thanked you sooner. I had become quite busy (partly with preparing my new endless September post) so I did not show up to thank you promptly. Sorry about that.
You're welcome!
Alternate possibility: The distribution of personality types in Mensa/LW relative to everyone else is an artifact produced by self-identified smart people trying to signal their intelligence by answering 'yes' to traits that sound like the traits they ought to have.
eg. I know that a number of the T/F questions are along the lines of "I use logic to make decisions (Y/N)", which is a no-brainer if you're trying to signal intelligence.
A hypothetical way to get around this would be to have your partner/family member/best friend next to you as you take the test, ready to call you out when your self-assessment diverges from your actual behaviour ("hold on, what about that time you decided not to go to the concert of [band you love] because you were angry about an unrelated thing?")
Ok, it's possible that all of the following happened:
Most of the 1000 people decided to lie about their IQ on the LessWrong survey.
Most of the liars realized that their personality test results were going to be compared with Mensa's personality type results, and it dawned on them that this would bring their IQ lie into question.
Most of the liars decided that instead of simply skipping the personality test question, or taking it to experience the enjoyment of finding out their type, they were going to fudge the personality test results, too.
Most of the liars actually had the patience to do an additional 72 questions specifically for the purpose of continuing to support a lie when they had just slogged through 100 questions.
Most of the liars did all of that extra work (Researching the IQ correlation with the SAT and the ACT and fudging 72 personality type questions) when it would have been so much easier to put their real IQ in the box, or simply skip the IQ question completely because it is not required.
Most of the liars succeeded in fudging their personality types. This is, of course, possible, but it is likely to be more complicated than it at first seems. They'd have to be lucky that enough of the questions give away their intelligence correlation in the wording (we haven't verified that). They'd have to have enough of an understanding of what intelligent people are like to choose the right answers. Questions like these are likely to confuse a non-gifted person trying to guess which answers will make them look gifted:
"You are more interested in a general idea than in the details of its realization"
(Do intelligent people like ideas or details more?)
"Strict observance of the established rules is likely to prevent a good outcome"
(Either could be the smarter answer, depending who you ask.)
"You believe the best decision is one that can be easily changed"
(It's smart to leave your options open, but it's also more intellectually self-confident and potentially more rewarding to take a risk based on your decision-making abilities.)
"The process of searching for a solution is more important to you than the solution itself"
(Maybe intelligence makes playing with ideas so enjoyable, gifted people see having the solution as less important.)
"When considering a situation you pay more attention to the current situation and less to a possible sequence of events"
(There are those that would consider either one of these to be the smarter one.)
There were a lot of questions that you could guess are correlated with intelligence on the test, and some of them are no-brainers, but are there enough of those no-brainers with obvious intelligence correlation that a non-gifted person intent on looking as intelligent as possible would be able to successfully fudge their personality type?
The survey is anonymous and we don't even know which people gave which IQ responses, let alone are they likely to receive any sort of reward from fudging their IQ score. Can you explain to me:
What reward would most of LessWrong want to get out of lying about their IQs?
Why, in an anonymous context where they can't even take credit for claiming the IQ score they provided, most of LessWrong is expecting to receive any reward at all?
Can you explain to me why fudged personality type data would match my predictions? Even if they were trying to match them, how would they manage it?