
# 2012 Survey Results

07 December 2012 09:04PM

Thank you to everyone who took the 2012 Less Wrong Survey (the survey is now closed; please don't try to take it). Below the cut, this post contains the basic survey results, a few more complicated analyses, and the data available for download so you can explore it further on your own. You may want to compare these to the results of the 2011 Less Wrong Survey.

## Part 1: Population

How many of us are there?

The short answer is that I don't know.

The 2011 survey ran 33 days and collected 1090 responses. This year's survey ran 23 days and collected 1195 responses. The average number of new responses during the last week was about five per day, so even if I had kept this survey open as long as the last one I probably wouldn't have gotten more than about 1250 responses. That means at most a 15% year-on-year growth rate, which is pretty abysmal compared to the 650% growth over two years we saw last time.
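The growth arithmetic above is easy to check directly; a trivial sketch (the 1250 figure is the post's own generous projection, not observed data):

```python
last_year = 1090   # responses to the 2011 survey
this_year = 1195   # responses to the 2012 survey
projected = 1250   # generous projection had the survey run as long as 2011's

growth = (projected - last_year) / last_year
print(f"projected year-on-year growth: {growth:.1%}")  # prints 14.7%, i.e. "at most 15%"
```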

About half of these responses were from lurkers; over half of the non-lurker remainder had commented but never posted to Main or Discussion. That means there were only about 600 non-lurkers.

But I am skeptical of these numbers. I hang out with some people who are very closely associated with the greater Less Wrong community, and a lot of them didn't know about the survey until I mentioned it to them in person. I know some people who could plausibly be described as focusing their lives around the community who just never took the survey for one reason or another. One lesson of this survey may be that the community is no longer limited to people who check Less Wrong very often, if at all. One friend didn't see the survey because she hangs out on the #lesswrong channel more than the main site. Another mostly just goes to meetups. So I think this represents only a small sample of people who could justly be considered Less Wrongers.

The question of "how quickly is LW growing" is also complicated by the high turnover. Over half the people who took this survey said they hadn't participated in the survey last year. I tried to break this down by combining a few sources of information, and I think our 1200 respondents include 500 people who took last year's survey, 400 people who were around last year but didn't take the survey for some reason, and 300 new people.

As expected, there's lower turnover among regulars than among lurkers. Of people who have posted in Main, about 75% took the survey last year; of people who only lurked, about 75% hadn't.

This view of a very high-turnover community and lots of people not taking the survey is consistent with Vladimir Nesov's data (http://lesswrong.com/lw/e4j/number_of_members_on_lesswrong/77xz) showing 1390 people who have written at least ten comments. But the survey includes only about 600 people who have at least commented; 800-ish of Vladimir's accounts are either gone or didn't take the census.

## Part 2: Categorical Data

SEX:
Man: 1057, 89.2%
Woman: 120, 10.1%
Other: 2, 0.2%
No answer: 6, 0.5%

GENDER:
M (cis): 1021, 86.2%
F (cis): 105, 8.9%
M (trans f->m): 3, 0.3%
F (trans m->f): 16, 1.3%
Other: 29, 2.4%
No answer: 11, 0.9%

ORIENTATION:
Heterosexual: 964, 80.7%
Bisexual: 135, 11.4%
Homosexual: 28, 2.4%
Asexual: 24, 2%
Other: 28, 2.4%
No answer: 14, 1.2%

RELATIONSHIP STYLE:

Prefer monogamous: 639, 53.9%
Prefer polyamorous: 155, 13.1%
Uncertain/no preference: 358, 30.2%
Other: 21, 1.8%
No answer: 12, 1%

NUMBER OF CURRENT PARTNERS:
0: 591, 49.8%
1: 519, 43.8%
2: 34, 2.9%
3: 12, 1%
4: 5, 0.4%
6: 1, 0.1%
7: 1, 0.1% (and this person added "really, not trolling")
Confusing or no answer: 20, 1.8%

RELATIONSHIP STATUS:
Single: 628, 53%
Relationship: 323, 27.3%
Married: 220, 18.6%
No answer: 14, 1.2%

RELATIONSHIP GOALS:
Not looking for more partners: 707, 59.7%
Looking for more partners: 458, 38.6%
No answer: 20, 1.7%

COUNTRY:
USA: 651, 54.9%
UK: 103, 8.7%
Canada: 74, 6.2%
Australia: 59, 5%
Germany: 54, 4.6%
Israel: 15, 1.3%
Finland: 15, 1.3%
Russia: 13, 1.1%
Poland: 12, 1%

These are all the countries with greater than 1% of Less Wrongers, but other, more exotic locales included Kenya, Pakistan, and Iceland, with one user each. You can see the full table here.

This data also allows us to calculate Less Wrongers per capita:

Finland: 1/366,666
Australia: 1/389,830
Canada: 1/472,972
USA: 1/483,870
Israel: 1/533,333
UK: 1/603,883
Germany: 1/1,518,518
Poland: 1/3,166,666
Russia: 1/11,538,462
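The per-capita figures above are just national population divided by respondent count. A minimal sketch (the population numbers here are rough 2012 estimates I've assumed for illustration; the post doesn't say which figures it used):

```python
# country -> (survey respondents, approximate 2012 population)
# population figures are assumptions for illustration
countries = {
    "Finland":   (15, 5_500_000),
    "Australia": (59, 23_000_000),
    "USA":       (651, 315_000_000),
}

for name, (n, pop) in countries.items():
    # one Less Wronger per (population / respondents) residents
    print(f"{name}: 1/{pop // n:,}")
```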

RACE:
White, non-Hispanic: 1003, 84.6%
East Asian: 50, 4.2%
Hispanic: 47, 4.0%
Indian Subcontinental: 28, 2.4%
Black: 8, 0.7%
Middle Eastern: 4, 0.3%
Other: 33, 2.8%
No answer: 12, 1%

WORK STATUS:
Student: 476, 40.7%
For-profit work: 364, 30.7%
Self-employed: 95, 8%
Unemployed: 81, 6.8%
Academics (teaching): 54, 4.6%
Government: 46, 3.9%
Non-profit: 44, 3.7%
Independently wealthy: 12, 1%
No answer: 13, 1.1%

PROFESSION:
Computers (practical): 344, 29%
Math: 109, 9.2%
Engineering: 98, 8.3%
Computers (academic): 72, 6.1%
Physics: 66, 5.6%
Finance/Econ: 65, 5.5%
Computers (AI): 39, 3.3%
Philosophy: 36, 3%
Psychology: 25, 2.1%
Business: 23, 1.9%
Art: 22, 1.9%
Law: 21, 1.8%
Neuroscience: 19, 1.6%
Medicine: 15, 1.3%
Other social science: 24, 2%
Other hard science: 20, 1.7%
Other: 123, 10.4%
No answer: 27, 2.3%

DEGREE:
Bachelor's: 438, 37%
High school: 333, 28.1%
Master's: 192, 16.2%
Ph.D: 71, 6%
2-year: 43, 3.6%
MD/JD/professional: 24, 2%
None: 55, 4.6%
Other: 15, 1.3%
No answer: 14, 1.2%

POLITICS:
Liberal: 427, 36%
Libertarian: 359, 30.3%
Socialist: 326, 27.5%
Conservative: 35, 3%
Communist: 8, 0.7%
No answer: 30, 2.5%

You can see the exact definitions given for each of these terms on the survey.

RELIGIOUS VIEWS:
Atheist, not spiritual: 880, 74.3%
Atheist, spiritual: 107, 9.0%
Agnostic: 94, 7.9%
Committed theist: 37, 3.1%
Lukewarm theist: 27, 2.3%
Deist/Pantheist/etc: 23, 1.9%
No answer: 17, 1.4%

FAMILY RELIGIOUS VIEWS:
Lukewarm theist: 392, 33.1%
Committed theist: 307, 25.9%
Atheist, not spiritual: 161, 13.6%
Agnostic: 149, 12.6%
Atheist, spiritual: 46, 3.9%
Deist/Pantheist/Etc: 32, 2.7%
Other: 84, 7.1%

RELIGIOUS BACKGROUND:
Other Christian: 517, 43.6%
Catholic: 295, 24.9%
Jewish: 100, 8.4%
Hindu: 21, 1.8%
Traditional Chinese: 17, 1.4%
Mormon: 15, 1.3%
Muslim: 12, 1%

Raw data is available here.

MORAL VIEWS:

Consequentialism: 735, 62%
Virtue Ethics: 166, 14%
Deontology: 50, 4.2%
Other: 214, 18.1%
No answer: 20, 1.7%

NUMBER OF CHILDREN
0: 1044, 88.1%
1: 51, 4.3%
2: 48, 4.1%
3: 19, 1.6%
4: 3, 0.3%
5: 2, 0.2%
6: 1, 0.1%
No answer: 17, 1.4%

WANT MORE CHILDREN?

No: 438, 37%
Maybe: 363, 30.7%
Yes: 366, 30.9%
No answer: 16, 1.4%

LESS WRONG USE:
Lurkers (no account): 407, 34.4%
Lurkers (with account): 138, 11.7%
Posters (comments only): 356, 30.1%
Posters (comments + Discussion only): 164, 13.9%
Posters (including Main): 102, 8.6%

SEQUENCES:
Never knew they existed until this moment: 99, 8.4%
Knew they existed; never looked at them: 23, 1.9%
Read < 25%: 227, 19.2%
Read ~ 25%: 145, 12.3%
Read ~ 50%: 164, 13.9%
Read ~ 75%: 203, 17.2%
Read ~ all: 306, 24.9%
No answer: 16, 1.4%

Dear 8.4% of people: there is this collection of old blog posts called the Sequences. It is by Eliezer, the same guy who wrote Harry Potter and the Methods of Rationality. It is really good! If you read it, you will understand what we're talking about much better!

REFERRALS:
Been here since Overcoming Bias: 265, 22.4%
Referred by a link on another blog: 23.5%
Referred by a friend: 147, 12.4%
Referred by HPMOR: 262, 22.1%
No answer: 35, 3%

BLOG REFERRALS:

Common Sense Atheism: 20 people
Hacker News: 20 people
Reddit: 15 people
Unequally Yoked: 7 people
TV Tropes: 7 people
Marginal Revolution: 6 people
gwern.net: 5 people
RationalWiki: 4 people
Shtetl-Optimized: 4 people
XKCD fora: 3 people
Accelerating Future: 3 people

These are all the sites that referred at least three people in a way that was obvious to disentangle from the raw data. You can see a more complete list, including the long tail, here.

MEETUPS:
Never been to one: 834, 70.5%
Have been to one: 320, 27%
No answer: 29, 2.5%

CATASTROPHE:
Pandemic (bioengineered): 272, 23%
Environmental collapse: 171, 14.5%
Unfriendly AI: 160, 13.5%
Nuclear war: 155, 13.1%
Economic/Political collapse: 137, 11.6%
Pandemic (natural): 99, 8.4%
Nanotech: 49, 4.1%
Asteroid: 43, 3.6%

The wording of this question was "which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?"

CRYONICS STATUS:
No, don't want to: 275, 23.2%
No, still thinking: 472, 39.9%
No, procrastinating: 178, 15%
No, unavailable: 120, 10.1%
Yes, signed up: 44, 3.7%
Never thought about it: 46, 3.9%
No answer: 48, 4.1%

VEGETARIAN:
No: 906, 76.6%
Yes: 147, 12.4%
No answer: 130, 11%

For comparison, 3.2% of US adults are vegetarian.

SPACED REPETITION SYSTEMS
Don't use them: 511, 43.2%
Do use them: 235, 19.9%
Never heard of them: 302, 25.5%

Dear 25.5% of people: spaced repetition systems are nifty, mostly free computer programs that allow you to study and memorize facts more efficiently. See for example http://ankisrs.net/

HPMOR:
Never read it: 219, 18.5%
Started, haven't finished: 190, 16.1%
Read all of it so far: 659, 55.7%

Dear 18.5% of people: Harry Potter and the Methods of Rationality is a Harry Potter fanfic about rational thinking written by Eliezer Yudkowsky (the guy who started this site). It's really good. You can find it at http://www.hpmor.com/.

ALTERNATIVE POLITICS QUESTION:

Progressive: 429, 36.3%
Libertarian: 278, 23.5%
Reactionary: 30, 2.5%
Conservative: 24, 2%
Communist: 22, 1.9%
Other: 156, 13.2%

ALTERNATIVE ALTERNATIVE POLITICS QUESTION:
Left-Libertarian: 102, 8.6%
Progressive: 98, 8.3%
Libertarian: 91, 7.7%
Pragmatist: 85, 7.2%
Social Democrat: 80, 6.8%
Socialist: 66, 5.6%
Anarchist: 50, 4.1%
Futarchist: 29, 2.5%
Moderate: 18, 1.5%
Moldbuggian: 19, 1.6%
Objectivist: 11, 0.9%

These are the only ones that had more than ten people. Other responses notable for their unusualness were Monarchist (5 people), fascist (3 people, plus one who was up for fascism but only if he could be the leader), conservative (9 people), and a bunch of people telling me politics was stupid and I should feel bad for asking the question. You can see the full table here.

CAFFEINE:
Never: 162, 13.7%
Rarely: 237, 20%
At least 1x/week: 207, 17.5%
Daily: 448, 37.9%
No answer: 129, 10.9%

SMOKING:
Never: 896, 75.7%
Used to: 105, 8.9%
Still do: 51, 4.3%
No answer: 131, 11.1%

For comparison, about 28.4% of the US adult population smokes

NICOTINE (OTHER THAN SMOKING):
Never used: 916, 77.4%
Rarely use: 82, 6.9%
>1x/month: 32, 2.7%
Every day: 14, 1.2%
No answer: 139, 11.7%

MODAFINIL:
Never: 76.5%
Rarely: 78, 6.6%
>1x/month: 48, 4.1%
Every day: 9, 0.8%
No answer: 143, 12.1%

TRUE PRISONERS' DILEMMA:
Defect: 341, 28.8%
Cooperate: 316, 26.7%
Not sure: 297, 25.1%
No answer: 229, 19.4%

FREE WILL:
Not confused: 655, 55.4%
Somewhat confused: 296, 25%
Confused: 81, 6.8%
No answer: 151, 12.8%

TORTURE VS. DUST SPECKS
Choose dust specks: 435, 36.8%
Choose torture: 261, 22.1%
Not sure: 225, 19%
Don't understand: 22, 1.9%
No answer: 240, 20.3%

SCHRODINGER EQUATION:
Can't calculate it: 855, 72.3%
Can calculate it: 175, 14.8%
No answer: 153, 12.9%

PRIMARY LANGUAGE:
English: 797, 67.3%
German: 54, 4.5%
French: 13, 1.1%
Finnish: 11, 0.9%
Dutch: 10, 0.9%
Russian: 15, 1.3%
Portuguese: 10, 0.9%

These are all the languages with ten or more speakers, but we also have everything from Marathi to Tibetan. You can see the full table here.

NEWCOMB'S PROBLEM
One-box: 726, 61.4%
Two-box: 78, 6.6%
Not sure: 53, 4.5%
Don't understand: 86, 7.3%
No answer: 240, 20.3%

ENTREPRENEUR:
Don't want to start business: 447, 37.8%
Considering starting business: 334, 28.2%
Planning to start business: 96, 8.1%
Already started business: 112, 9.5%
No answer: 194, 16.4%

ANONYMITY:
Post using real name: 213, 18%
Easy to find real name: 256, 21.6%
Hard to find name, but wouldn't bother me if someone did: 310, 26.2%
Anonymity is very important: 170, 14.4%
No answer: 234, 19.8%

HAVE YOU TAKEN A PREVIOUS LW SURVEY?
No: 559, 47.3%
Yes: 458, 38.7%
No answer: 116, 14%

TROLL TOLL POLICY:
Disapprove: 194, 16.4%
Approve: 178, 15%
Haven't heard of this: 375, 31.7%
No opinion: 249, 21%
No answer: 187, 15.8%

MYERS-BRIGGS
INTJ: 163, 13.8%
INTP: 143, 12.1%
ENTJ: 35, 3%
ENTP: 30, 2.5%
INFP: 26, 2.2%
INFJ: 25, 2.1%
ISTJ: 14, 1.2%
No answer: 715, 60%

This includes all types with greater than 10 people. You can see the full table here.

## Part 3: Numerical Data

Except where indicated otherwise, all the numbers below are given in the format:

mean+standard_deviation (25% level, 50% level/median, 75% level) [n = number of data points]
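For reference, here is a minimal sketch of how a summary line in this format could be computed (not the script actually used for the analysis):

```python
import statistics

def summary_line(xs):
    """Format a sample as: mean + standard deviation
    (25th percentile, median, 75th percentile) [n = count]."""
    q1, median, q3 = statistics.quantiles(xs, n=4)  # the three quartile cut points
    return (f"{statistics.mean(xs):.1f} + {statistics.stdev(xs):.1f} "
            f"({q1:g}, {median:g}, {q3:g}) [n = {len(xs)}]")

# hypothetical ages, just to exercise the function
print(summary_line([22, 26, 31, 27, 35]))  # prints: 28.2 + 5.0 (24, 27, 33) [n = 5]
```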

INTELLIGENCE:

IQ (self-reported): 138.7 + 12.7 (130, 138, 145) [n = 382]
SAT (out of 1600): 1485.8 + 105.9 (1439, 1510, 1570) [n = 321]
SAT (out of 2400): 2319.5 + 1433.7 (2155, 2240, 2320)
ACT: 32.7 + 2.3 (31, 33, 34) [n = 207]
IQ (on iqtest.dk): 125.63 + 13.4 (118, 130, 133)   [n = 378]

I am going to harp on these numbers because in the past some people have been pretty quick to ridicule this survey's intelligence numbers as completely useless and impossible and so on.

According to IQ Comparison Site, an SAT score of 1485/1600 corresponds to an IQ of about 144. According to Ivy West, an ACT of 33 corresponds to an SAT of 1470 (and thence to IQ of 143).

So if we consider self-report, SAT, ACT, and iqtest.dk as four measures of IQ, these come out to 139, 144, 143, and 126, respectively.

All of these are pretty close except iqtest.dk. I ran a correlation between all of them and found that self-reported IQ is correlated with SAT scores at the 1% level and iqtest.dk at the 5% level, but SAT scores and IQTest.dk are not correlated with each other.
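For anyone re-running this on the released data: these are ordinary Pearson correlations between pairwise-complete columns. A bare-bones sketch (the numbers below are hypothetical, not survey rows):

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# hypothetical self-reported IQs vs. SAT scores, just to exercise the function
print(pearson_r([130, 138, 145], [1440, 1510, 1570]))  # close to 1 for this nearly-linear toy data
```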

Of all these, I am least likely to trust iqtest.dk. First, it's a random Internet IQ test. Second, it correlates poorly with the other measures. Third, a lot of people have complained in the comments to the survey post that it exhibits some weird behavior.

But iqtest.dk gave us the lowest number! And even it said the average was 125 to 130! So I suggest that we now have pretty good, pretty believable evidence that the average IQ for this site really is somewhere in the 130s, and that self-reported IQ isn't as terrible a measure as one might think.

AGE:
27.8 + 9.2 (22, 26, 31) [n = 1185]

LESS WRONG USE:
Karma: 1078 + 2939.5 (0, 4.5, 136) [n = 1078]
Months on LW: 26.7 + 20.1 (12, 24, 40) [n = 1070]
Minutes/day on LW: 19.05 + 24.1 (5, 10, 20) [n = 1105]
Wiki views/month: 3.6 + 6.3 (0, 1, 5) [n = 984]
Wiki edits/month: 0.1 + 0.8 (0, 0, 0) [n = 984]

PROBABILITIES:
Many Worlds: 51.6 + 31.2 (25, 55, 80) [n = 1005]
Aliens (universe): 74.2 + 32.6 (50, 90, 99) [n = 1090]
Aliens (galaxy): 42.1 + 38 (5, 33, 80) [n = 1081]
Supernatural: 5.9 + 18.6 (0, 0, 1) [n = 1095]
God: 6 + 18.7 (0, 0, 1) [n = 1098]
Religion: 3.8 + 15.5 (0, 0, 0.8) [n = 1113]
Cryonics: 18.5 + 24.8 (2, 8, 25) [n = 1100]
Antiagathics: 25.1 + 28.6 (1, 10, 35) [n = 1094]
Simulation: 25.1 + 29.7 (1, 10, 50) [n = 1039]
Global warming: 79.1 + 25 (75, 90, 97) [n = 1112]
No catastrophic risk: 71.1 + 25.5 (55, 80, 90) [n = 1095]
Space: 20.1 + 27.5 (1, 5, 30) [n = 953]

CALIBRATION:
Year of Bayes' birth: 1767.5 + 109.1 (1710, 1780, 1830) [n = 1105]
Confidence: 33.6 + 23.6 (20, 30, 50) [n= 1082]

MONEY:
Income/year: 50,913 + 60644.6 (12000, 35000, 74750) [n = 644]
Charity/year: 444.1 + 1152.4 (0, 30, 250) [n = 950]
SIAI/CFAR charity/year: 309.3 + 3921 (0, 0, 0) [n = 961]
Aging charity/year: 13 + 184.9 (0, 0, 0) [n = 953]

TIME USE:
Hours online/week: 42.4 + 30 (21, 40, 59) [n = 944]
Hours reading/week: 30.8 + 19.6 (18, 28, 40) [n = 957]
Hours writing/week: 7.9 + 9.8 (2, 5, 10) [n = 951]

POLITICAL COMPASS:
Left/Right: -2.4 + 4 (-5.5, -3.4, -0.3) [n = 476]
Libertarian/Authoritarian: -5 + 2 (-6.2, -5.2, -4)

BIG 5 PERSONALITY TEST:
Big 5 (O): 60.6 + 25.7 (41, 65, 84) [n = 453]
Big 5 (C): 35.2 + 27.5 (10, 30, 58) [n = 453]
Big 5 (E): 30.3 + 26.7 (7, 22, 48) [n = 454]
Big 5 (A): 41 + 28.3 (17, 38, 63) [n = 453]
Big 5 (N): 36.6 + 29 (11, 27, 60) [n = 449]

These scores are in percentiles, so LWers are more Open, but less Conscientious, Agreeable, Extraverted, and Neurotic than average test-takers. Note that people who take online psychometric tests are probably a pretty skewed category already so this tells us nothing. Also, several people got confusing results on this test or found it different than other tests that they took, and I am pretty unsatisfied with it and don't trust the results.

AUTISM QUOTIENT
AQ: 24.1 + 12.2 (17, 24, 30) [n = 367]

This test says the average control subject got 16.4 and 80% of those diagnosed with autism spectrum disorders get 32+ (which of course doesn't tell us what percent of people above 32 have autism...). If we trust them, most LWers are more autistic than average.

CALIBRATION:

Reverend Thomas Bayes was born in 1701. Survey takers were asked to guess this date within 20 years, so anyone who guessed between 1681 and 1721 was recorded as getting a correct answer. The percent of people who answered correctly is recorded below, stratified by the confidence they gave of having guessed correctly and with the number of people at that confidence level.

0-5: 10% [n = 30]
5-15: 14.8% [n = 183]
15-25: 10.3% [n = 242]
25-35: 10.7% [n = 225]
35-45: 11.2% [n = 98]
45-55: 17% [n = 118]
55-65: 20.1% [n = 62]
65-75: 26.4% [n = 34]
75-85: 36.4% [n = 33]
85-95: 60.2% [n = 20]
95-100: 85.7% [n = 23]
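The scoring rule described above (a guess counts as correct iff it falls within 20 years of 1701, with answers grouped by stated confidence) is easy to reproduce on the released data. A minimal sketch with hypothetical guesses:

```python
def calibration(rows, truth=1701, tolerance=20):
    """rows: (guess, stated confidence in percent) pairs.
    Returns {(lo, hi): (fraction correct, n)} per confidence bucket."""
    edges = [0, 5, 15, 25, 35, 45, 55, 65, 75, 85, 95, 100]
    counts = {}
    for guess, conf in rows:
        for lo, hi in zip(edges, edges[1:]):
            if lo <= conf < hi or (hi == 100 and conf == 100):
                n, k = counts.get((lo, hi), (0, 0))
                counts[(lo, hi)] = (n + 1, k + (abs(guess - truth) <= tolerance))
                break
    return {b: (k / n, n) for b, (n, k) in counts.items()}

# two hypothetical respondents at 50% confidence: one right, one wrong
print(calibration([(1700, 50), (1800, 50)]))  # prints: {(45, 55): (0.5, 2)}
```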

Here's a classic calibration chart. The blue line is perfect calibration. The orange line is you guys. And the yellow line is average calibration from an experiment I did with untrained subjects a few years ago (which of course was based on different questions and so not directly comparable).

The results are atrocious; when Less Wrongers are 50% certain, they have only about a 17% chance of being correct. On this question, at least, they are as bad as, or worse than, the general population at avoiding overconfidence bias.

My hope was that this was the result of a lot of lurkers who don't know what they're doing stumbling upon the survey and making everyone else look bad, so I ran a second analysis. This one used only people who had been in the community at least two years and had accumulated at least 100 karma; this limited my sample size to about 210 people.

I'm not going to post exact results, because I made some minor mistakes which means they're off by a percentage point or two, but the general trend was that they looked exactly like the results above: atrocious. If there is some core of elites who are less biased than the general population, they are well past the 100 karma point and probably too rare to feel confident even detecting at this kind of a sample size.

I really have no idea what went so wrong.  Last year's results were pretty good - encouraging, even. I wonder if it's just an especially bad question. Bayesian statistics is pretty new; one would expect Bayes to have been born in rather more modern times. It's also possible that I've handled the statistics wrong on this one; I wouldn't mind someone double-checking my work.

Or we could just be really horrible. If we haven't even learned to avoid the one bias that we can measure super well and which is most susceptible to training, what are we even doing here? Some remedial time at PredictionBook might be in order.

HYPOTHESIS TESTING:

I tested a few of the hypotheses that were proposed in the survey design threads.

Are people who understand quantum mechanics more likely to believe in Many Worlds? We perform a t-test, checking whether one's probability of the MWI being true depends on whether or not one can solve the Schrodinger Equation. People who could solve the equation assigned MWI an average probability of 54.3%, compared to 51.3% among those who could not. The p-value is 0.26, meaning a difference this large would arise by chance about 26% of the time. Therefore, we fail to establish that people's probability of MWI varies with understanding of quantum mechanics.
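For readers who want to reproduce this on the released data: the comparison is a standard two-sample t-test. Here is a dependency-free sketch of Welch's version (the p-value itself needs a t-distribution CDF, e.g. from scipy.stats, so only the statistic and degrees of freedom are computed; the data below is made up):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# made-up MWI probabilities for solvers vs. non-solvers of the Schrodinger Equation
solvers     = [60, 55, 48, 52, 57]
non_solvers = [50, 53, 49, 51, 54]
t, df = welch_t(solvers, non_solvers)
print(t, df)
```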

Are there any interesting biological correlates of IQ? We run a correlation between self-reported IQ, height, maternal age, and paternal age. The correlations are in the expected direction but not significant.

Are there differences in the ways men and women interact with the community? I had sort of vaguely gotten the impression that women were proportionally younger, newer to the community, and more likely to be referred via HPMOR. The average age of women on LW is 27.6 compared to 27.7 for men; obviously this difference is not significant. 14% of the people referred via HPMOR were women compared to about 10% of the community at large, but this difference is pretty minor. Women were on average newer to the community - 21 months vs. 39 for men - but to my surprise a t-test was unable to declare this significant. Maybe I'm doing it wrong?

Does the amount of time spent in the community affect one's beliefs in the same way as in previous surveys? I ran some correlations and found that it does. People who have been around longer continue to be more likely to believe in MWI, less likely to believe in aliens in the universe (though not in our galaxy), and less likely to believe in God (though not religion). There was no effect on cryonics this time.

In addition, the classic correlations between different beliefs continue to hold true. There is an obvious cluster of God, religion, and the supernatural. There's also a scifi cluster of cryonics, antiagathics, MWI, aliens, and the Simulation Hypothesis, and catastrophic risk (this also seems to include global warming, for some reason).

Are there any differences between men and women in regards to their belief in these clusters? We run a t-test between men and women. Men and women assign about the same probability to God (men: 5.9, women: 6.2, p = .86), with similar results for the rest of the religion cluster, but men assign much higher probabilities to, for example, antiagathics (men: 24.3, women: 10.5, p < .001) and the rest of the scifi cluster.

DESCRIPTIONS OF LESS WRONG

Survey users were asked to submit a description of Less Wrong in 140 characters or less. I'm not going to post all of them, but here is a representative sample:

- "Probably the most sensible philosophical resource avaialble."
- "Contains the great Sequences, some of Luke's posts, and very little else."
- "The currently most interesting site I found ont the net."
- "EY cult"
- "How to think correctly, precisely, and efficiently."
- "HN for even bigger nerds."
- "Social skills philosophy and AI theorists on the same site, not noticing each other."
- "Cool place. Any others like it?"
- "How to avoid predictable pitfalls in human psychology, and understand hard things well: The Website."
- "A bunch of people trying to make sense of the wold through their own lens, which happens to be one of calculation and rigor"
- "Nice."
- "A font of brilliant and unconventional wisdom."
- "One of the few sane places on Earth."
- "Robot god apocalypse cult spinoff from Harry Potter."
- "A place to converse with intelligent, reasonably open-minded people."
- "Callahan's Crosstime Saloon"
- "Amazing rational transhumanist calming addicting Super Reddit"
- "Still wrong"
- "A forum for helping to train people to be more rational"
- "A very bright community interested in amateur ethical philosophy, mathematics, and decision theory."
- "Dying. Social games and bullshit now >50% of LW content."
- "The good kind of strange, addictive, so much to read!"
- "Part genuinely useful, part mental masturbation."
- "Mostly very bright and starry-eyed adults who never quite grew out of their science-fiction addiction as adolescents."
- "Less Wrong: Saving the world with MIND POWERS!"
- "Perfectly patternmatches the 'young-people-with-all-the-answers' cliche"
- "Rationalist community dedicated to self-improvement."
- "Sperglord hipsters pretending that being a sperglord hipster is cool." (this person's Autism Quotient was two points higher than LW average, by the way)
- "An interesting perspective and valuable database of mental techniques."
- "A website with kernels of information hidden among aspy nonsense."
- "Exclusive, elitist, interesting, potentially useful, personal depression trigger."
- "A group blog about rationality and related topics. Tends to be overzealous about cryogenics and other pet ideas of Eliezer Yudkowsky."
- "Things to read to make you think better."
- "Excellent rationality. New-age self-help. Worrying groupthink."
- "Not a cult at all."
- "A cult."
- "The new thing for people who would have been Randian Objectivists 30 years ago."
- "Fascinating, well-started, risking bloat and failure modes, best as archive."
- "A fun, insightful discussion of probability theory and cognition."
- "More interesting than useful."
- "The most productive and accessible mind-fuckery on the Internet."
- "A blog for rationality, cognitive bias, futurism, and the Singularity."
- "Robo-Protestants attempting natural theology."
- "Orderly quagmire of tantalizing ideas drawn from disagreeable priors."
- "Analyze everything. And I do mean everything. Including analysis. Especially analysis. And analysis of analysis."
- "Very interesting and sometimes useful."
- "Where people discuss and try to implement ways that humans can make their values, actions, and beliefs more internally consistent."
- "Eliezer Yudkowsky personality cult."
- "It's like the Mormons would be if everyone were an atheist and good at math and didn't abstain from substances."
- "Seems wacky at first, but gradually begins to seem normal."
- "A varied group of people interested in philosophy with high Openness and a methodical yet amateur approach."
- "Less Wrong is where human algorithms go to debug themselves."
- "They're kind of like a cult, but that doesn't make them wrong."
- "A community blog devoted to nerds who think they're smarter than everyone else."
- "90% sane! A new record!"
- "The Sequences are great. LW now slowly degenerating to just another science forum."
- "The meetup groups are where it's at, it seems to me. I reserve judgment till I attend one."
- "All I really know about it is this long survey I took."
- "The royal road of rationality."
- "Technically correct: The best kind of correct!"
- "Full of angry privilege."
- "A sinister instrument of billionaire Peter Thiel."
- "Dangerous apocalypse cult bent on the systematic erasure of traditional values and culture by any means necessary."
- "Often interesting, but I never feel at home."
- "One of the few places I truly feel at home, knowing that there are more people like me."
- "Currently the best internet source of information-dense material regarding cog sci, debiasing, and existential risk."
- "Prolific and erudite writing on practical techniques to enhance the effectiveness of our reason."
- "An embarrassing Internet community formed around some genuinely great blog writings."
- "I bookmarked it a while ago and completely forgot what it is about. I am taking the survey to while away my insomnia."
- "A somewhat intimidating but really interesting website that helps refine rational thinking."
- "A great collection of ways to avoid systematic bias and come to true and useful conclusions."
- "Obnoxious self-serving, foolish trolling dehumanizing pseudointellectualism, aesthetically bankrupt."
- "The cutting edge of human rationality."
- "A purveyor of exceedingly long surveys."

PUBLIC RELEASE

That last commenter was right. This survey had vastly more data than any previous incarnation; although there are many more analyses I would like to run I am pretty exhausted and I know people are anxious for the results. I'm going to let CFAR analyze and report on their questions, but the rest should be a community effort. So I'm releasing the survey to everyone in the hopes of getting more information out of it. If you find something interesting you can either post it in the comments or start a new thread somewhere.

The data I'm providing is the raw data EXCEPT:

- I deleted a few categories that I removed halfway through the survey for various reasons
- I deleted 9 entries that were duplicates of other entries, i.e. someone pressed 'submit' twice.
- I deleted the timestamp, which would have made people extra-identifiable, and sorted people by their CFAR random number to remove time order information.
- I removed one person whose information all came out as weird symbols.
- I numeralized some of the non-numeric data, especially on the number of months in community question. This is not the version I cleaned up fully, so you will get to experience some of the same pleasure I did working with the rest.
- I deleted 117 people who either didn't answer the privacy question or who asked me to keep them anonymous, leaving 1067 people.

Here it is: Data in .csv format, Data in Excel format

## Comments (640)

Comment author: 29 November 2012 07:18:46PM, 34 points

Hi Yvain,

please state a definite end date next year. Filling out the survey didn't have a really high priority for me, but knowing that I had "about a month" made me put it off. Had I known that the last possible day was the 26th of November, I probably would have fit it in sometime in between other stuff.

Comment author: 30 November 2012 09:08:56AM, 5 points

Hm, could it be that the longer survey format this time around cut down on the number of responses as well?

Comment author: 30 November 2012 03:27:25AM, 26 points

I previously mentioned that item non-response might be a good measure of Conscientiousness. Before doing anything fancy with non-response, I first checked that there was a correlation with the questionnaire reports. The correlation is zero:

```r
lwc <- subset(lw, !is.na(as.integer(as.character(BigFiveC))))
missing_answers <- apply(lwc, 1,
                         function(x) sum(sapply(x, function(y) is.na(y) || as.character(y) == " ")))
cor.test(as.integer(as.character(lwc$BigFiveC)), missing_answers)
#         Pearson's product-moment correlation
# data:  as.integer(as.character(lwc$BigFiveC)) and missing_answers
# t = -0.0061, df = 421, p-value = 0.9952
# alternative hypothesis: true correlation is not equal to 0
# 95 percent confidence interval:
#  -0.09564  0.09505
# sample estimates:
#        cor
# -0.0002954

# visualize to see if we made some mistake somewhere
plot(as.integer(as.character(lwc$BigFiveC)), missing_answers)
```

I am completely surprised. The results in the economics paper looked great and the rationale is very plausible. Yet... the two sets of data here have the right ranges, there's plenty of variation in both dimensions, I'm sure I'm catching most of the item non-responses or NAs given that there are non-responses as high as 34, there's a lot of datapoints, and it's not that the correlation is in the opposite direction (which might indicate a coding error) but that there's none at all. Yvain questions the Big Five results, but otherwise they look exactly as I would've predicted before seeing them: low C, E, and A, high O, medium N. There may be something very odd about LWers and Conscientiousness; when I try C vs. income, there's an almost-zero correlation again:

```r
cor.test(as.integer(as.character(lwc$BigFiveC)), log1p(as.integer(lwc$Income)))
#         Pearson's product-moment correlation
# data:  as.integer(as.character(lwc$BigFiveC)) and log1p(as.integer(lwc$Income))
# t = 0.2178, df = 421, p-value = 0.8277
# alternative hypothesis: true correlation is not equal to 0
# 95 percent confidence interval:
#  -0.08482  0.10585
# sample estimates:
#     cor
# 0.01061
```

I guess the next step is a linear model of income versus age, Conscientiousness, and IQ:

```r
lwc <- subset(lw,  !is.na(as.integer(as.character(BigFiveC))))
lwc <- subset(lwc, !is.na(as.integer(as.character(Age))))
lwc <- subset(lwc, !is.na(as.integer(as.character(IQ))))
lwc <- subset(lwc, !is.na(as.integer(as.character(Income))))
c      <- as.integer(as.character(lwc$BigFiveC))
age    <- as.integer(as.character(lwc$Age))
iq     <- as.integer(as.character(lwc$IQ))
income <- log1p(as.integer(as.character(lwc$Income)))
summary(lm(income ~ (age + iq + c)))
# Call: lm(formula = income ~ (age + iq + c))
# Residuals:
#    Min     1Q Median     3Q    Max
# -8.762 -0.849  1.191  2.319  3.644
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept)  -0.5531     3.5479   -0.16     0.88
# age           0.1311     0.0323    4.06  9.5e-05
# iq            0.0339     0.0267    1.27     0.21
# c             0.0174     0.0121    1.44     0.15
# Residual standard error: 3.35 on 106 degrees of freedom
#   (489 observations deleted due to missingness)
# Multiple R-squared: 0.196, Adjusted R-squared: 0.173
# F-statistic: 8.59 on 3 and 106 DF, p-value: 3.73e-05
```

So all of them combined don't explain much, and most of the work is being done by the age variable... There are many high-income LWers, supposedly (in this subset of respondents reporting age, income, IQ, and Conscientiousness, the max is 700,000), so I'd expect a cumulative r^2 of more than 0.173 for all 3 variables; if those aren't governing income, what is? Maybe everyone working with computers is rich and the others poor? Let's look at everyone who submitted salary and profession and see whether the practical computer people are making bank:

```r
lwi <- subset(lw,  !is.na(as.integer(as.character(Income))))
lwi <- subset(lwi, !is.na(as.character(Profession)))
cs     <- as.integer(as.character(lwi[as.character(lwi$Profession) == "Computers (practical: IT, programming, etc.)", ]$Income))
others <- as.integer(as.character(lwi[as.character(lwi$Profession) != "Computers (practical: IT, programming, etc.)", ]$Income))
# ordinary t-test, but we'll exclude anyone with zero income (unemployed?)
t.test(cs[cs != 0], others[others != 0])
#         Welch Two Sample t-test
# data:  cs[cs != 0] and others[others != 0]
# t = 5.905, df = 309.3, p-value = 9.255e-09
# alternative hypothesis: true difference in means is not equal to 0
# 95 percent confidence interval:
#  22344 44673
# sample estimates:
# mean of x mean of y
#     76458     42950
```

Wow. Just wow. 76k vs. 43k. I mean, maybe this would go away with enough fiddling (e.g. cost of living), but it's still dramatic.
This suggests a new theory to me: maybe Conscientiousness does correlate with income at its usual high rate for everyone but computer people who are simply in so high demand that lack of Conscientiousness doesn't matter: R> lwi <- subset(lw, !is.na(as.integer(as.character(Income)))) R> lwi <- subset(lwi, !is.na(as.character(Profession))) R> lwi <- subset(lwi, !is.naBigFiveC))))) R> cs <- lwi[as.character(lwi$Profession)=="Computers (practical: IT, programming, etc.)",]
R> others <- lwi[as.character(lwi$Profession)!="Computers (practical: IT, programming, etc.)",] R> cor.test(as.integer(as.character(cs$BigFiveC)), as.integer(as.character(cs$Income))) Pearson's product-moment correlation data: as.integer(as.character(cs$BigFiveC)) and as.integer(as.character(cs$Income)) t = 0.5361, df = 87, p-value = 0.5933 alternative hypothesis: true correlation is not equal to 0 95 percent confidence interval: -0.1527 0.2625 sample estimates: cor 0.05738 R> cor.test(as.integer(as.character(others$BigFiveC)), as.integer(as.character(others$Income))) Pearson's product-moment correlation data: as.integer(as.character(others$BigFiveC)) and as.integer(as.character(others$Income)) t = 1.997, df = 200, p-value = 0.04721 alternative hypothesis: true correlation is not equal to 0 95 percent confidence interval: 0.001785 0.272592 sample estimates: cor 0.1398  So for the CS people the correlation is small and non-statistically-significant, for non-CS people the correlation is almost 3x larger and statistically-significant. Comment author: 30 November 2012 04:07:27AM 14 points [-] There is a correlation of 0.13 between non-responses and N. Of course, there's also a correlation of -0.13 between C and the random number generator. Comment author: [deleted] 30 November 2012 10:48:20AM 10 points [-] People who had seen the RNG give a large number were primed to feel unusually reckless when taking the Big 5 test. Duh. (Just kidding.) Comment author: 30 November 2012 05:47:35AM 5 points [-] I am also surprised by this. I wonder about the effect of "I'm taking this survey so I don't have to go to bed / do work / etc.," but I wouldn't have expected that to be as large as the diligence effect. Also, perhaps look at nonresponse by section? I seem to recall the C part being after the personality test, which might be having some selection effects. 
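The non-response tally and correlation above are easy to reproduce outside R; here is a stdlib-Python sketch of the same two steps (the rows and C scores below are invented toy data, not the survey's):

```python
# Sketch: count item non-responses per respondent, then correlate the
# counts with Conscientiousness scores. Toy data throughout.
from math import sqrt

def missing_count(row):
    """Count item non-responses: None or blank-string answers."""
    return sum(1 for v in row if v is None or str(v).strip() == "")

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rows = [
    ["yes", "", 3, None],      # 2 missing
    ["no", "42", 1, "x"],      # 0 missing
    [None, None, None, "y"],   # 3 missing
]
c_scores = [60, 45, 30]        # hypothetical Big Five C scores
missing = [missing_count(r) for r in rows]
print(missing)
print(round(pearson_r(c_scores, missing), 3))
```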
Comment author: 30 November 2012 05:13:55AM * 4 points [-]

Were you expecting that people with high C would or wouldn't skip questions? I can see arguments either way. Conscientious people might skip questions they don't have answers to or aren't willing to put in the time to answer well, or they might put in the work to have answers they consider good to as many questions as possible.

Is it feasible to compare wrong sort of answer with C?

Is it possible that the test for C wasn't very good?

Comment author: 30 November 2012 05:19:50AM 7 points [-]

Were you expecting that people with high C would or wouldn't skip questions?

Wouldn't; that was the claim of the linked paper.

Is it feasible to compare wrong sort of answer with C?

Not really, if it wasn't caught by the no-answer check or the NA check.

Is it possible that the test for C wasn't very good?

As I said, it came out as expected for LW as a whole, and it did correlate with income once the CS salaries were removed... Hard to know what ground truth there could be to check the scores against.

Comment author: 29 November 2012 03:48:00PM 26 points [-]

The calibration question is an n=1 sample on one of the two important axes (those axes being who's answering, and what question they're answering). Give a question that's harder than it looks, and people will come out overconfident on average; give a question that's easier than it looks, and they'll come out underconfident on average. Getting rid of this effect requires a pool of questions, so that it'll average out.

Comment author: 29 November 2012 06:26:32PM 8 points [-]

Yep. (Or, as Yvain suggests, give a question which is likely to be answered with a bias in a particular direction.)

It's not clear what you can conclude from the fact that 17% of all people who answered a single question at 50% confidence got it right, but you can't conclude from it that if you asked one of these people a hundred binary questions and they answered "yes" at 50% confidence, that person would only get 17% right. The latter is what would deserve to be called "atrocious"; I don't believe the adjective applies to the results observed in the survey.

I'm not even sure that you can draw the conclusion "not everyone in the sample is perfectly calibrated" from these results. Well, the people who were 100% sure they were wrong, and happened to be correct, are definitely not perfectly calibrated; but I'm not sure what we can say of the rest.

Comment author: 01 December 2012 09:18:50PM * 5 points [-]

I have often pondered this problem with respect to some of the traditional heuristics and biases studies, e.g. the "above-average driver" effect. If people consult their experiences of subjective difficulty at doing a task, and then guess they are above average for the ones that feel easy, and below average for the ones that feel hard, this will to some degree track their actual particular strengths and weaknesses. Plausibly a heuristic along these lines gives overall better predictions than guessing "I am average" about everything. However, if we focus in on activities that happen to be unusually easy-feeling or hard-feeling in general, then we can make the heuristics look bad by only showing their successes and not their failures. The name "heuristics and biases" does reflect this notion: we have heuristics because they usually work, but they produce biases in some cases as an acceptable loss.
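The pooled-questions fix is straightforward to operationalize: collect (stated confidence, correct?) pairs across many questions, bucket them by stated confidence, and compare each bucket's stated level to its empirical hit rate. A stdlib-Python toy sketch (the answer data is invented):

```python
# Bucket answers by stated confidence and report empirical accuracy
# per bucket -- the pooled version of the single-question calibration check.
from collections import defaultdict

answers = [  # (stated confidence, answered correctly?) -- invented data
    (0.5, True), (0.5, False), (0.5, False), (0.5, True),
    (0.7, True), (0.7, True), (0.7, False),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
]

buckets = defaultdict(list)
for conf, correct in answers:
    buckets[conf].append(correct)

for conf in sorted(buckets):
    hits = buckets[conf]
    rate = sum(hits) / len(hits)
    print(f"stated {conf:.0%}: {rate:.0%} correct over {len(hits)} answers")
```

A well-calibrated pool would show each bucket's hit rate close to its stated level; a single question cannot show that.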
Comment author: [deleted] 29 November 2012 01:22:13PM 23 points [-]

I really have no idea what went so wrong [with the question about Bayes' birth year]

Note also that in the last two surveys the mean and median answers were approximately correct, whereas this time even the first quartile answer was too late by almost a decade. So it's not just a matter of overconfidence -- there was also a systematic error.

Note that Essay Towards Solving a Problem in the Doctrine of Chances was published posthumously when Bayes would have been 62; if people estimated the year it was published and assumed that he had been approximately in his thirties (as I did), that would explain half of the systematic bias.

Comment author: 02 December 2012 04:01:53AM 4 points [-]

To expand on this: confidence intervals that are accurate for multiple judgements by the same person may not be accurate for the same judgement made by multiple people. Normally, we can group everyone's responses and measure how many people were actually right when they said they were 70% sure. This should average out to 70% because the error is caused by independent variations in each person's estimate. If there's a systematic error, then even if we all accounted for it in our confidence levels, we would all still fail at the same time.

Comment author: 29 November 2012 10:31:24PM 2 points [-]

I had a vaguely right idea for the year of publication, and didn't know it was posthumous, but assumed that it was published in his middle-to-old age and so got the question right.

Comment author: 29 November 2012 12:50:36PM 17 points [-]

But I am skeptical of these numbers. I hang out with some people who are very closely associated with the greater Less Wrong community, and a lot of them didn't know about the survey until I mentioned it to them in person.
I know some people who could plausibly be described as focusing their lives around the community who just never took the survey for one reason or another. One lesson of this survey may be that the community is no longer limited to people who check Less Wrong very often, if at all. One friend didn't see the survey because she hangs out on the #lesswrong channel more than the main site. Another mostly just goes to meetups. So I think this represents only a small sample of people who could justly be considered Less Wrongers.

Yeah, this also fits my observations--I suspect that reading LW and hanging out with LW types in real life are substitute goods.

Comment author: 29 November 2012 09:07:52AM 15 points [-]

Some of the 'descriptions of LessWrong' can make for a great quote on the back of Yudkowsky's book.

Comment author: 29 November 2012 03:57:54PM 16 points [-]

Obnoxious self-serving, foolish trolling dehumanizing pseudointellectualism, aesthetically bankrupt. ;-)

Comment author: 30 November 2012 12:25:42AM 9 points [-]

Pratchett always includes a quote that calls him a "complete amateur," so there is some precedent for ostentatiously including negative reviews.

Comment author: 30 November 2012 05:24:48AM 12 points [-]

According to IQ Comparison Site, an SAT score of 1485/1600 corresponds to an IQ of about 144. According to Ivy West, an ACT of 33 corresponds to an SAT of 1470 (and thence to an IQ of 143).

Only if you took the SAT before 1994. Here are the percentiles for SATs taken in 2012; someone who was 97th percentile would get ~760 on math and ~730 on critical reading, adding up to 1490 (leaving aside the writing section to keep it within 1600), and 97th percentile corresponds to an IQ of 128.

Here's a classic calibration chart. An important part of the calibration chart (for people) is the frequency with which they provide various calibrations. Looking at your table, I would focus on the large frequency between 10% and 30%.
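The percentile-to-IQ step in the comment above can be made explicit: on the usual IQ scale (normal, mean 100, SD 15), a percentile converts to an IQ via the inverse normal CDF. A stdlib-Python sketch:

```python
# Convert a test percentile to an IQ score, assuming IQ ~ Normal(100, 15).
from statistics import NormalDist

def percentile_to_iq(p, mean=100, sd=15):
    return mean + sd * NormalDist().inv_cdf(p)

print(round(percentile_to_iq(0.97)))  # 97th percentile -> about 128
```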
I'll also point out that fixed windows are a pretty bad way to do elicitation. I tend to come at the calibration question from the practical side: how do we get useful probabilities out of subject-matter experts without those people being experts at calibration? Adopting those strategies seems more useful than making people experts at calibration.

Comment author: 29 November 2012 12:05:17PM * 12 points [-]

Or we could just be really horrible. If we haven't even learned to avoid the one bias that we can measure super well and which is most susceptible to training, what are we even doing here?

You're fun to read. Posts explaining things and introducing terms that connect subjects and form patterns trigger reward mechanisms in the brain. This is uncorrelated with actually applying any lessons in daily life. Two questions you might want to ask next year are "Do you think it is practical and advantageous to reduce people's biases via standardized exercises?" and "Has reading LW inspired you to try to reduce your own biases?"

Comment author: 29 November 2012 05:53:17PM 10 points [-]

The 2011 survey ran 33 days and collected 1090 responses. This year's survey ran 23 days and collected 1195 responses.

Why did you close it early? That seems entirely unnecessary.

One friend didn't see the survey because she hangs out on the #lesswrong channel more than the main site.

I put a link and exhortation prominently in the #lesswrong topic from the day the survey opened to the day it closed.

M (trans f->m): 3, 0.3% / F (trans m->f): 16, 1.3%

3 vs 16 seems like quite a difference, even allowing for the small sample size. Is this consistent with the larger population?

Prefer polyamorous: 155, 13.1%...NUMBER OF CURRENT PARTNERS:... [>1 partners = 4.5%]

So ~3x more people prefer polyamory than are actually engaged in it...

Referred by HPMOR: 262, 22.1%

Impressive.

gwern.net: 5 people

Woot! And I'm not even trying or linking LW especially often.
(I am also pleased by the nicotine and modafinil results, although you dropped a number in 'Never: 76.5%')

TROLL TOLL POLICY: Disapprove: 194, 16.4% Approve: 178, 15%

So more people are against than for. Not exactly a mandate for its use.

Are people who understand quantum mechanics more likely to believe in Many Worlds? We perform a t-test, checking whether one's probability of the MWI being true depends on whether or not one can solve the Schrodinger Equation. People who could solve the equation had on average a 54.3% probability of MWI, compared to 51.3% in those who could not. The p-value is 0.26; there is a 26% probability this occurs by chance. Therefore, we fail to establish that people's probability of MWI varies with understanding of quantum mechanics.

Sounds like you did a two-tailed test. shminux's hypothesis, which he has stated several times IIRC, is that people who can solve it will not be taken in by Eliezer's MWI flim-flam, as it were, and would be less likely to accept MWI. So you should've been running a one-tailed t-test to reject the hypothesis that the can-solvers are less MWI'd. The p-value would then be something like 0.13 by symmetry.

Comment author: 01 December 2012 04:57:09PM * 7 points [-]

3 vs 16 seems like quite a difference, even allowing for the small sample size. Is this consistent with the larger population?

What struck me was not the difference in numbers of FtM and MtF, but the fact that more than ten percent of the survey population identifying as female is MtF.

Comment author: 29 November 2012 10:23:55PM * 4 points [-]

TROLL TOLL POLICY: Disapprove: 194, 16.4% Approve: 178, 15%

So more people are against than for. Not exactly a mandate for its use.

Hypothesis: those directly affected by the troll policy (trolls) are more likely to have strong disapproval than those unaffected by the troll policy are to have strong approval.
In my opinion, a strong moderation policy should require a plurality vote in the negative (over approval and abstention) to fail a motion to increase security, rather than a direct comparison to the approval. (withdrawn as it applies to LW, whose trolls are apparently less trolly than other sites I'm used to)

Comment author: 29 November 2012 11:15:29PM * 16 points [-]

Hypothesis: those directly affected by the troll policy (trolls) are more likely to have strong disapproval than those unaffected by the troll policy are to have strong approval.

Hypothesis rejected when we operationalize 'trolls' as 'low karma':

R> lwtroll <- lw[!is.na(lw$KarmaScore),]
R> lwtroll <- lwtroll[lwtroll$TrollToll=="Agree with toll" | lwtroll$TrollToll=="Disagree with toll",]
R> # disagree=3, agree=2; so:
R> # if positive correlation, higher karma associates with disagreement
R> # if negative correlation, higher karma associates with agreement
R> # we are testing hypothesis higher karma = lower score/higher agreement
R> cor.test(as.integer(lwtroll$TrollToll), lwtroll$KarmaScore, alternative="less")
Pearson's product-moment correlation
data: as.integer(lwtroll$TrollToll) and lwtroll$KarmaScore t = 1.362, df = 315, p-value = 0.9129
alternative hypothesis: true correlation is less than 0 95 percent confidence interval:
-1.0000 0.1679 sample estimates:
cor 0.07653
R> # a log-transform of the karma scores doesn't help:
R> cor.test(as.integer(lwtroll$TrollToll), log1p(lwtroll$KarmaScore), alternative="less")
Pearson's product-moment correlation
data: as.integer(lwtroll$TrollToll) and log1p(lwtroll$KarmaScore) t = 2.559, df = 315, p-value = 0.9945
alternative hypothesis: true correlation is less than 0 95 percent confidence interval:
-1.0000 0.2322 sample estimates:
cor 0.1427


Plots of the scores, regular and log-transformed:

Comment author: 29 November 2012 11:25:52PM 15 points [-]

If this were anywhere but a site dedicated to rationality, I would expect trolls to self-report their karma scores much higher on a survey than they actually are, but that data is pretty staggering. I accept the rejection of the hypothesis, and withdraw my opinion insofar as it applies to this site.
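The directional test gwern ran (`cor.test(..., alternative="less")`) can be mimicked without R via a one-tailed permutation test of the correlation. A stdlib-Python sketch on invented karma/agreement numbers, not the survey data:

```python
# One-tailed permutation test: is the observed correlation significantly
# below zero? Stand-in for R's cor.test(alternative="less"). Toy data.
import random
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sqrt(sum((x - mx) ** 2 for x in xs)) *
                  sqrt(sum((y - my) ** 2 for y in ys)))

def perm_test_less(xs, ys, trials=2000, seed=0):
    """Approximate one-tailed p-value for H1: true correlation < 0."""
    rng = random.Random(seed)
    r_obs = pearson_r(xs, ys)
    ys = list(ys)
    at_least_as_low = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if pearson_r(xs, ys) <= r_obs:
            at_least_as_low += 1
    return (at_least_as_low + 1) / (trials + 1)  # add-one smoothing

karma = list(range(20))                        # invented karma scores
agreement = [-2 * k + (k % 3) for k in karma]  # invented agreement scores
print(perm_test_less(karma, agreement))        # small p: strongly negative r
```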

Comment author: 29 November 2012 09:10:32PM 4 points [-]

So ~3x more people prefer polyamory than are actually engaged in it...

I wonder, if you split out poly/mono preference and number of partners, whether the number who prefer poly but have <2 partners would be significantly different from the number who prefer mono but have <1 partner.

Now that I've wondered this out loud, I feel like I should have just asked a computer.

Comment author: 29 November 2012 09:17:22PM 5 points [-]

I was about to reply the same thing. The quoted statement doesn't sound particularly more surprising than "Most people prefer to be in a relationship, but only a fraction of those are actually engaged in one".

Comment author: 29 November 2012 09:30:07PM 4 points [-]

Would it be more surprising to find people that prefer poly relationships, but only have one partner and aren't looking for more, than to find people that prefer mono relationships, but have no partners and aren't looking for any?

Among those with firm mono/poly preferences, there are 15% of the former (24% if we also include people that prefer poly, have no partners, and aren't looking for more) and 14% of the latter.

Comment author: 29 November 2012 09:33:30PM 3 points [-]

Also, roughly 2/7 of people that prefer poly are single, while roughly 3/7 of people that prefer mono are.

Comment author: 29 November 2012 09:37:00PM 2 points [-]

Thanks, computer!

Comment author: 29 November 2012 07:54:25PM 10 points [-]

So ~3x more people prefer polyamory than are actually engaged in it...

I would not describe this as an accurate conclusion. For one thing, I currently have one partner who has other partners, so I think I am unambiguously "currently engaged in polyamory" even though I would have put 1 on the survey.

For another, I think it is reasonable to say that someone who is in a relationship with exactly one other person, but is not monogamous with that person (i.e. is available to enter further relationships) is engaged in polyamory.

Comment author: 29 November 2012 08:54:14PM 6 points [-]

Do you think your situation explains 2/3s of those who prefer polyamory?

Comment author: 29 November 2012 09:07:21PM 2 points [-]

3 vs 16 seems like quite a difference, even allowing for the small sample size. Is this consistent with the larger population?

As I understand it, there isn't good data. Stereotypically, there are more MtF than FtM. But according to Wikipedia, a Swedish study found a ratio of 1.4:1 in favor of MtF for those requesting sexual reassignment surgery, and 1:1 for those going through with it. Of course, this is the sort of Internet community where I'd expect some folks to identify as trans without wanting to go through surgery at all.

Comment author: 29 November 2012 09:23:18PM *  10 points [-]

After I posted my comment, I realized that 3 vs 16 might just reflect the overall gender ratio of LW: if there's no connection between that stuff and finding LW interesting (a claim which may or may not be surprising depending on your background theories and beliefs), then 3 vs 16 might be a smaller version of the larger gender sample of 120 vs 1057. The respective decimals are 0.1875 and 0.1135, which is not dramatic-looking. The statistics for whether membership differs between the two pairs:

R> M <- as.table(rbind(c(120, 1057), c(3,16)))
R> dimnames(M) <- list(status=c("c","t"), gender=c("M","F"))
R> M
gender
status M F
c 120 1057
t 3 16
R> chisq.test(M, simulate.p.value = TRUE, B = 20000000)
Pearson's Chi-squared test with simulated p-value (based on 2e+07 replicates)
data: M
X-squared = 0.6342, df = NA, p-value = 0.4346


(So it's not even close to the usual significance level. As intuitively makes sense: remove or add one person in the right category, and the ratio changes a fair bit.)
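What `chisq.test(..., simulate.p.value=TRUE)` does for a 2x2 table can be sketched in stdlib Python: hold the margins fixed, resample tables, and count how often the simulated chi-squared statistic is at least as large as the observed one. The counts are the ones from the table above; the resampling scheme here is an approximation of, not identical to, R's algorithm:

```python
# Monte Carlo p-value for a 2x2 contingency table with fixed margins,
# approximating R's chisq.test(simulate.p.value=TRUE).
import random

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction)."""
    n = a + b + c + d
    stat = 0.0
    for obs, row, col in [(a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)]:
        exp = row * col / n
        stat += (obs - exp) ** 2 / exp
    return stat

def sim_p_value(a, b, c, d, trials=2000, seed=0):
    rng = random.Random(seed)
    observed = chi2_2x2(a, b, c, d)
    n, col1, row2 = a + b + c + d, a + c, c + d
    pool = [1] * col1 + [0] * (n - col1)  # 1 = member of the first column
    extreme = 0
    for _ in range(trials):
        c_sim = sum(rng.sample(pool, row2))  # first-column count in small row
        d_sim = row2 - c_sim
        a_sim = col1 - c_sim
        b_sim = (n - row2) - a_sim
        if chi2_2x2(a_sim, b_sim, c_sim, d_sim) >= observed:
            extreme += 1
    return (extreme + 1) / (trials + 1)

# cis vs trans by gender, as in the table above
print(sim_p_value(120, 1057, 3, 16))  # far from any significance threshold
```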

Comment author: 29 November 2012 09:35:39PM 8 points [-]

After I posted my comment, I realized that 3 vs 16 might just reflect the overall gender ratio of LW

Now I feel dumb for not even noticing that. "In a group where most people were born males, why is it the case that most trans people were born males?" doesn't even seem like a question.

Comment author: 29 November 2012 10:14:27PM *  11 points [-]

Under this theory, it seems (with low statistical confidence of course) that LW-interest is perhaps correlated with biological sex rather than gender identity, or perhaps with assigned-gender-during-childhood. Which is kind of interesting.

Comment author: 30 November 2012 12:59:14PM 6 points [-]

Does anybody know if this holds for other preferences that tend to vary heavily by gender? Are MtF transsexuals heavily into, say, programming or science fiction? (I know of several transsexual game developers/designers, all MtF.)

Comment author: 30 November 2012 08:43:38PM *  3 points [-]

I don't know of any such data. I'd imagine that there's less of a psychological barrier to engaging in traditionally "gendered" interests for most transgendered people (that is, if you think a lot about gender being a social construct, you're probably going to care less about a cultural distinction between "tv shows for boys" and "tv shows for girls"). Beyond that I can't really speculate.

Edit: here's me continuing to speculate anyway. A transgendered person is more likely than a cisgendered person to have significant periods of their life in which they are perceived as having different genders, and therefore is likely to be more fully exposed to cultural expectations for each.

Comment author: 30 November 2012 09:22:26PM 4 points [-]

FWIW, I have the opposite intuition. Transgendered people (practically by definition) care about gender a lot, so presumably would care more about those cultural distinctions.

Contrast the gender skeptic: "What do you mean, you were assigned male but are really female? There's no 'really' about it - gender is just a social construct, so do whatever you want."

Comment author: [deleted] 01 December 2012 12:21:57AM 8 points [-]

It's more complicated than that. Gender nonconformity in childhood is frequently punished, so a great many trans people have some very powerful incentives to suppress or constrain our interests early in life, or restrict our participation in activities for which an expressed interest earns censure or worse.

Pragmatically, gender is also performed, and there are a lot of subtle little things about it that cisgender people don't necessarily have innately either, but which are learned and transmitted culturally, many of which are the practical aspects of larger stuff (putting on makeup and making it look good is a skill, and it consists of lots of tiny subskills). Due to the aforementioned process, trans people very frequently don't get a chance to acquire those skills during the phase when their cis counterparts are learning them, or face more risks for doing so.

Finally, at least in the West: Trans medical and social access were originally predicated on jumping through an awful lot of very heteronormative hoops, and that framework still heavily influences many trans communities, particularly for older folks. This aspect is changing much faster thanks to the internet, but you still only need to go to the right forum or support group to see this dynamic in action. There's a lot of gender policing, and some subsets of the community who basically insist on an extreme version of this framing as a prerequisite for "authentic" trans identity.

So...when a trans person transitions, very often they are coping with some or all of this, often for the first time, simultaneously, and within a short time frame. We're also under a great deal of pressure about all of it.

"What do you mean, you were assigned male but are really female? There's no 'really' about it - gender is just a social construct, so do whatever you want."

Relevant: http://xkcd.com/592/

Comment author: [deleted] 01 December 2012 12:10:01AM 2 points [-]

It's a common inside joke amongst SF-loving, programmer trans women that there are a lot of SF-loving, programmer trans women, or that trans women are especially and unusually common in those fields. But they usually don't socialize with large swathes of other trans women who come unsorted by any other criterion save "trans and women"; I think this is an availability bias coupled with a bit of "I've found my tribe!" thinking.

Comment author: 29 November 2012 10:03:35PM 2 points [-]

Hmm. Thanks for the link to that wikipedia page. Interesting...

...the definitions given on that wikipedia page seem to imply that I'm strongly queer and/or andro*, at least in terms of my experiences and gender-identity. Had never noticed nor cared (which, apparently, is a component of some variants of andro-somethings). I'm (very visibly) biologically male and "identify" (socially) as male for obvious reasons (AKA don't care if miscategorized, as long as the stereotyping isn't too harmful), and I'm attracted mostly to females because of instinct (I guess?) and practical issues (e.g. disdain of anal sex).

Oh well, one more thing to consider when trying to figure out why people get confused by my behaviors. I've always (in recent years anyway) thought of myself as "human with penis".

Comment author: [deleted] 01 December 2012 12:06:35AM 11 points [-]

I'm attracted mostly to females because of instinct (I guess?) and practical issues (e.g. disdain of anal sex).

If you can't think of practical ways for two people with penises to have sex that don't involve anal, you might just need better porn.

Comment author: [deleted] 26 January 2013 01:29:50PM *  2 points [-]

Same here. (But one of the reasons why I identify as male in spite of being somewhat psychologically androgynous is that I take exception with the notion that if someone doesn't have sufficiently masculine (feminine) traits, he (she) is not a ‘real’ man (woman). And I'm almost exclusively attracted to females, almost exclusively because of ‘instinct’ (a.k.a. males just don't give me a boner; is there a better word than “instinct”?) but also because I'd like to have biological children some day.)

Maybe the next survey should include the Bem Sex Role Inventory. (According to this, I'm slightly above median for both masculinity and femininity, and slightly more feminine than masculine.)

Comment author: 29 November 2012 09:16:03AM 9 points [-]

Thank you for this public service. It seems definitely helpful for the community, and possibly helpful for historians :-)

Comment author: 29 November 2012 12:11:43PM 13 points [-]

and possibly helpful for historians :-)

I now have this mental image of future sociology grad students working on their theses by reading through every article and comment ever posted on Less Wrong, and then analyzing us.

Comment author: 29 November 2012 11:12:54PM 5 points [-]

I'm imagining them being vast posthumans with specialized modalities for it that can't really be called "reading".

Comment author: 29 November 2012 05:28:05PM 10 points [-]

I now have an image of those sociologists giving up on reading everything and writing scripts to do some sort of ngram or inverse-markov analysis, then mis-applying statistics to draw wrong conclusions from it. Am I cynical yet?

Comment author: 05 December 2012 02:42:21PM 3 points [-]

I was actually thinking of the kind of sociology thesis that doesn't use any statistics, and is rather a purely qualitative analysis.

Comment author: 01 December 2012 04:17:06AM 3 points [-]

I now have an image of farther future sociologists writing scathing commentaries on the irony of poorly-used statistical measures of this community.

Comment author: 29 November 2012 08:24:25AM *  9 points [-]

When you discuss the calibration results, could you mention that the surveyors were told what constituted a correct answer? I didn't take the survey and it isn't obvious from reading this post. Also, could you include a plug for PredictionBook around there? You've included lots of other helpful plugs.

Comment author: 29 November 2012 08:30:04AM 5 points [-]

Done.

Comment author: 19 December 2012 05:12:51PM 2 points [-]

Maybe a plug for the Credence Game too? ;) It's less in touch with real life than prediction book, but a lot faster.

Comment author: 02 December 2012 04:28:09AM 8 points [-]

Are people who understand quantum mechanics more likely to believe in Many Worlds? We perform a t-test, checking whether one's probability of the MWI being true depends on whether or not one can solve the Schrodinger Equation. People who could solve the equation had on average a 54.3% probability of MWI, compared to 51.3% in those who could not. The p-value is 0.26; there is a 26% probability this occurs by chance. Therefore, we fail to establish that people's probability of MWI varies with understanding of quantum mechanics.

Some Bayesian analysis using the BEST MCMC library for normal two-group comparisons:

R> lw <- read.csv("lw-2012.csv")
R> lwm <- subset(lw, !(" " == as.character(SchrodingerEquation)))
R> lwm <- subset(lwm, !is.na(as.integer(as.character(PManyWorlds))))
R> mwiyes <- as.integer(as.character(subset(lwm, SchrodingerEquation == "Yes")$PManyWorlds))
R> mwino <- as.integer(as.character(subset(lwm, SchrodingerEquation == "No")$PManyWorlds))
R> source("BEST.R")
R> mcmcChain = BESTmcmc(mwino, mwiyes)
R> show(postInfo)
SUMMARY.INFO
PARAMETER mean median mode HDIlow HDIhigh pcgtZero
mu1 51.1693 51.1675 51.1964 48.8152 53.48281 NA
mu2 55.5708 55.5647 55.4871 50.2376 61.03888 NA
muDiff -4.4016 -4.4010 -4.1635 -10.1932 1.52931 7.154
sigma1 30.5558 30.5355 30.4243 28.9332 32.24333 NA
sigma2 32.8187 32.7136 32.5672 29.0621 36.73604 NA
sigmaDiff -2.2629 -2.1800 -2.1232 -6.4056 1.97323 14.226
nu 106.4690 98.6244 84.7466 36.2142 194.12061 NA
nuLog10 1.9929 1.9940 1.9853 1.6566 2.33430 NA
effSz -0.1389 -0.1388 -0.1323 -0.3208 0.04864 7.154


The results are interesting and not quite the same as a t-test:

1. we get estimates of standard deviations, among other things, for free - they look pretty different, and there's an 85.8% chance the deviations of the Schrodinger-knowers and not-knowers on MWI are different, suggesting to me a polarizing effect where the more you know, the more extreme your view either for or against; which seems reasonable, since the more information you have, the less your uncertainty should be.
2. The difference-in-means estimate is sharper than the t-test's: Yvain's t-test gave a p-value of 0.26 if the null hypothesis were true (he makes the classic error when he says "there is a 26% probability this occurs by chance" - no, there's a 26% chance this happened by chance if one assumes the null hypothesis is true, which says absolutely nothing about whether this happened by chance).

Using Bayesian techniques, however, we can say that, given the observed difference in mean beliefs, there is a 7.2% chance that the null hypothesis (equal belief) or the opposite hypothesis (lower belief) is true in this sample.
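A toy illustration of where a number like 7.2% comes from: it is simply the fraction of posterior MCMC samples that land on the wrong side of zero. The "samples" below are drawn from a normal distribution mimicking the muDiff row of the table above (mean -4.4, spread roughly 3); they stand in for the real BEST chain, which we don't have here.

```python
import random

random.seed(0)
# Fake posterior samples for muDiff, standing in for the real MCMC chain:
# roughly normal around -4.4 with spread ~3, like the table above suggests.
mu_diff_samples = [random.gauss(-4.4, 3.0) for _ in range(100_000)]

# Posterior probability that the difference is >= 0, i.e. that the
# non-solvers believe in MWI at least as strongly as the solvers.
p_ge_zero = sum(s >= 0 for s in mu_diff_samples) / len(mu_diff_samples)
```

For these made-up samples the fraction comes out near 7%, in the same ballpark as the pcgtZero column.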

We also get an effect size for free from the difference in means. -0.132 (mode) isn't too impressive, but it's there.

However, both BEST and the t-test assume normality. The histograms suggest the data may be bimodal: a hump of skeptics at 10%, a hump of believers in the 70s, and a weirdly low trough in the 40s in both groups. I don't know how much of an issue this is.

Comment author: 06 December 2012 11:23:37PM 3 points [-]

he makes the classic error

For what it's worth, I interpreted his "there is a 26% probability this occurs by chance" exactly as "if there's no real difference, there's a 26% probability of getting this sort of result by chance alone" or equivalently "conditional on the null hypothesis Pr(something at least this good) = 26%". I'd expect that someone who was making the classic error would have said "there is a 26% probability this occurred by chance".

Comment author: [deleted] 02 December 2012 11:36:58AM 7 points [-]

Other Christian: 517, 43.6%
Catholic: 295, 24.9%

Now that I think about that, lumping Protestants and Orthodoxes together and keeping Catholics separate is about as bizarre as it gets.

Comment author: 01 December 2012 04:52:52PM 7 points [-]

Pandemic (bioengineered): 272, 23%
Environmental collapse: 171, 14.5%
Unfriendly AI: 160, 13.5%
Nuclear war: 155, 13.1%
Economic/Political collapse: 137, 11.6%
Pandemic (natural): 99, 8.4%
Nanotech: 49, 4.1%
Asteroid: 43, 3.6%

This is one question where the results really surprised me. Combining natural and engineered pandemics, almost a third of respondents picked it as the top near-term X risk, which was almost twice as many as the next highest risk. I wonder if the x-risk discussions we tend to have may be somewhat misallocated.

Comment author: 01 December 2012 05:55:19PM 8 points [-]

Note that the question on the survey was not about existential risks:

Type of Global Catastrophic Risk

Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?

I answered bio-engineered pandemics, but would have answered differently for x-risks.

Comment author: [deleted] 02 December 2012 11:29:34AM *  3 points [-]

Note that x-risks as defined by that question are not the same as x-risks as defined by Bostrom. In principle, a catastrophe might kill 95% of the population but humanity could later recover and colonize the galaxy, or a different type of catastrophe might only kill 5% of the population but permanently prevent humans from creating extraterrestrial settlements, thereby setting a ceiling to economic growth forever.

Comment author: 03 December 2012 08:54:59PM 2 points [-]

So, if extraterrestrial settlements are unlikely ever to be created regardless of any catastrophe, the point is moot.

Comment author: [deleted] 29 November 2012 03:26:04PM *  7 points [-]

Yvain, I rechecked the calibration survey results, and encourage someone to recheck my recheck further:

First, these strata overlap... is 5 in 0-5 or 5-15? The N doesn't actually match either interpretation when I recheck.

Secondly, I am not sure what program you used to calculate the statistics, but when I checked in Excel, some people's percentages got pulled in as numbers less than one. I tried to clean that up for these figures. (I also removed someone who answered 150.)

Thirdly, there are 20 people in this N. You can be either 60% correct (12 correct) or 65% correct (13 correct), but the 60.2% correct in this line seems weird: 85-95: 60.2% [n = 20]

Here was my attempt at recalculating those figures: N after data cleaning was 998.

0-<5: 9.1% [n = 2/22]

5-<15: 13.7% [n = 25/183]

15-<25: 9.3% [n = 21/226]

25-<35: 10% [n = 20/200]

35-<45: 11.1% [n = 10/90]

45-<55: 17.3% [n = 19/110]

55-<65: 20.8% [n = 11/53]

65-<75: 22.6% [n = 7/31]

75-<85: 36.7% [n = 11/30]

85-<95: 63.2% [n = 12/19]

95-100: 88.2% [n = 30/34]
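A minimal sketch of the recalculation described above: normalize the raw calibration answers, bucket them into half-open confidence strata, and report per-bucket accuracy. The function names and the sample data are made up for illustration; this is not the survey dataset, and the cleaning rules (percent signs, fractions below one, impossible values, English comments) are my guesses at what the commenter did.

```python
def clean(conf):
    """Normalize a raw confidence answer to a 0-100 scale, or None to drop it."""
    if isinstance(conf, str):
        conf = conf.rstrip('%')
        try:
            conf = float(conf)
        except ValueError:
            return None          # English comments etc. get dropped
    if 0 < conf < 1:
        conf *= 100              # answers given as fractions, e.g. 0.25
    if not 0 <= conf <= 100:
        return None              # impossible answers like 150
    return conf

def calibration(responses):
    """responses: list of (raw_confidence, was_correct). Returns
    {bucket_label: (n_correct, n_total)} using half-open strata 0-<5, 5-<15, ..."""
    edges = [0, 5, 15, 25, 35, 45, 55, 65, 75, 85, 95, 100.01]
    buckets = {}
    for raw, correct in responses:
        c = clean(raw)
        if c is None:
            continue
        for lo, hi in zip(edges, edges[1:]):
            if lo <= c < hi:
                label = f"{lo}-<{hi}" if hi <= 100 else "95-100"
                right, total = buckets.get(label, (0, 0))
                buckets[label] = (right + correct, total + 1)
                break
    return buckets

# Invented sample: mixed formats, one impossible answer, one non-numeric answer
stats = calibration([("10%", 1), (0.25, 1), (90, 1), (150, 1), ("dunno", 0), (5, 0)])
```

Half-open strata remove the ambiguity of whether a 5 lands in 0-5 or 5-15: here it unambiguously goes to 5-<15.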

I express low confidence in these remarks because I haven't rechecked this or gone into detail about data cleaning, but my brief take is:

1: Yes, there were some errors that made it look a bit worse than it was.

2: It still shows overconfidence. (Edit: see possible caveat below)

Question: Do we have enough data to determine whether that hump near 10% confidence is significant?

Edit: I'm not a statistician, but I do notice there appear to be substantially more respondents in the lower confidence ranges. I mean, yes, on average, the people who answered in those high 55-<85 ranges were quite far off, but there were more people who answered in the 15-<25 range than in all three of those groups put together.

Comment author: 30 November 2012 01:36:57AM 4 points [-]

I think the calibration data needs additional cleaning. Eyeballing, I see % signs, decimals, and English comments.

Comment author: 29 November 2012 08:59:14AM 19 points [-]

So I suggest that we now have pretty good, pretty believable evidence that the average IQ for this site really is somewhere in the 130s, and that self-reported IQ isn't as terrible a measure as one might think.

This still suffers from selection bias - I'd imagine that people with lower IQ are more likely to leave the field blank than people with higher IQ.

Comment author: 30 November 2012 02:13:41AM *  9 points [-]

This still suffers from selection bias - I'd imagine that people with lower IQ are more likely to leave the field blank than people with higher IQ.

I think this is only true if we're going to also assume that the selection bias is operating on ACT and SAT scores. But we know they correlate with IQ, and quite a few respondents included ACT/SAT1600/SAT2400 data while they didn't include the IQ; so all we have to do is take for each standardized test the subset of people with IQ scores and people without, and see if the latter have lower scores indicating lower IQs. The results seem to indicate that while there may be a small difference in means between the groups on the 3 scores, it's neither of large effect size nor statistical significance.

ACT:

R> lwa <- subset(lw, !is.na(as.integer(ACTscoreoutof36)))
R> lwiq <- subset(lwa, !is.na(as.integer(IQ)))
R> lwiqnot <- subset(lwa, is.na(as.integer(IQ)))
R> t.test(lwiq$ACTscoreoutof36, lwiqnot$ACTscoreoutof36, alternative="less")
Welch Two Sample t-test
data:  lwiq$ACTscoreoutof36 and lwiqnot$ACTscoreoutof36
t = 0.5088, df = 141.9, p-value = 0.6942
alternative hypothesis: true difference in means is less than 0
95 percent confidence interval:
 -Inf 0.7507
sample estimates:
mean of x mean of y
    32.68     32.50


Original SAT:

R> lwa <- subset(lw, !is.na(as.integer(SATscoresoutof1600)))
R> lwiq <- subset(lwa, !is.na(as.integer(IQ)))
R> lwiqnot <- subset(lwa, is.na(as.integer(IQ)))
R> t.test(lwiq$SATscoresoutof1600, lwiqnot$SATscoresoutof1600, alternative="less")
Welch Two Sample t-test
data:  lwiq$SATscoresoutof1600 and lwiqnot$SATscoresoutof1600
t = -1.137, df = 237.4, p-value = 0.1284
alternative hypothesis: true difference in means is less than 0
95 percent confidence interval:
 -Inf 6.607
sample estimates:
mean of x mean of y
     1476      1490


New SAT:

R> lwa <- subset(lw, !is.na(as.integer(SATscoresoutof2400)))
R> lwiq <- subset(lwa, !is.na(as.integer(IQ)))
R> lwiqnot <- subset(lwa, is.na(as.integer(IQ)))
R> t.test(lwiq$SATscoresoutof2400, lwiqnot$SATscoresoutof2400, alternative="less")
Welch Two Sample t-test
data:  lwiq$SATscoresoutof2400 and lwiqnot$SATscoresoutof2400
t = -0.9645, df = 129.9, p-value = 0.1683
alternative hypothesis: true difference in means is less than 0
95 percent confidence interval:
 -Inf 109.3
sample estimates:
mean of x mean of y
     2221      2374


The lack of variation is unsurprising since the (original) SAT and ACT are correlated, after all:

R> lwa <- subset(lw, !is.na(as.integer(ACTscoreoutof36)))
R> lwsat <- subset(lwa, !is.na(as.integer(SATscoresoutof1600)))
R> cor.test(lwsat$SATscoresoutof1600, lwsat$ACTscoreoutof36)
Pearson's product-moment correlation
data:  lwsat$SATscoresoutof1600 and lwsat$ACTscoreoutof36
t = 8.839, df = 66, p-value = 8.415e-13
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.6038 0.8291
sample estimates:
   cor
0.7362
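For readers without R, Pearson's r can be computed directly from its definition. The SAT/ACT pairs below are invented for illustration (the survey's actual correlation was 0.7362).

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired scores: SAT out of 1600 and ACT out of 36
sat = [1400, 1250, 1560, 1100, 1480]
act = [31, 27, 35, 24, 33]
r = pearson(sat, act)   # strongly positive for these made-up pairs
```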

Comment author: 01 December 2012 04:15:47AM 2 points [-]

I'm interested in this analysis but I don't think the results are presented nicely, and I am not THAT interested. If someone else wants to summarize the parent I promise to upvote you.

Comment author: 01 December 2012 04:42:33AM 5 points [-]

I... thought I did summarize it nicely:

But we know they correlate with IQ, and quite a few respondents included ACT/SAT1600/SAT2400 data while they didn't include the IQ; so all we have to do is take for each standardized test the subset of people with IQ scores and people without, and see if the latter have lower scores indicating lower IQs. The results seem to indicate that while there may be a small difference in means between the groups on the 3 scores, it's neither of large effect size nor statistical significance.

Comment author: 01 December 2012 05:03:46AM 4 points [-]

That is actually better than I remembered immediately after reading it; with the data coming after the discussion my brain pattern-completed to expect a conclusion after the data. Also the paragraph is a little bit dense; a paragraph break before the last sentence might make it a little more readable in my mind.

I had already upvoted your post, regardless :)

Comment author: 29 November 2012 08:49:30AM 19 points [-]
• "Robot god apocalypse cult spinoff from Harry Potter."

That should be on a T-shirt.

Comment author: 29 November 2012 08:56:42AM 3 points [-]

I think that's my favorite description on that list.

Comment author: 29 November 2012 09:09:35AM 2 points [-]

I'd buy that shirt. This is an instant classic.

Comment author: 05 December 2012 08:43:37PM 6 points [-]
Comment author: 06 December 2012 04:40:46AM 2 points [-]

I wonder whether consequentialism endorsement and possibly some of the probability questions correlate with the two family background questions.

Comment author: 01 December 2012 03:14:55PM *  6 points [-]

ALTERNATIVE POLITICS QUESTION:
Progressive: 429, 36.3%
Libertarian: 278, 23.5%
Reactionary: 30, 2.5%
Conservative: 24, 2%
Communist: 22, 1.9%
Other: 156, 13.2%

I'd like to note that my suggestion as I offered it didn't include an "Other" option -- you added that one by yourself, and it ended up being selected by more people than "Reactionary" "Conservative" and "Communist" combined. My suggested question would have forced the current "Others" to choose between the five options provided or not answer at all.

Comment author: [deleted] 29 November 2012 03:31:24PM 6 points [-]

In the fair coin questions, there were two people answering 49.9, one 49.9999, one 49.999999, and one 51. :-/

Comment author: [deleted] 29 November 2012 03:25:33PM 6 points [-]

These are all the countries with greater than 1% of Less Wrongers,

And they are exactly the non-write-in ones in the survey, except that New Zealand was there and Poland wasn't.

Comment author: 29 November 2012 08:21:20AM *  16 points [-]

Before even reading the full details, I want to congratulate you on the impressive amount of work. The survey period is possibly my favorite time of the year on lesswrong!

EDIT: The links for the raw csv/xls data at the bottom don't seem to work for me.

Comment author: 29 November 2012 09:00:26AM 5 points [-]

Thank you. That should be fixed now.

Comment author: 29 November 2012 10:01:16AM 2 points [-]

It's indeed working, thank you!

Comment author: 30 November 2012 02:19:06AM *  22 points [-]

## On IQ Accuracy:

As Yvain says, "people have been pretty quick to ridicule this survey's intelligence numbers as completely useless and impossible and so on" because if they're true, it means that the average LessWronger is gifted. Yvain added a few questions to the 2012 survey, including the ACT and SAT questions and the Myers-Briggs personality type question that I requested (I'll explain why this is interesting). These give us a few other things to check against, which has made the figures more believable. The ridicule may be an example of the "virtuous doubt" that Luke warns about in Overconfident Pessimism, so it makes sense to "consider the opposite":

The distribution of Myers-Briggs personality types on LessWrong replicates the Mensa pattern. This is remarkable since the patterns of personality types here are, in many significant ways, the exact opposite of what you'd find in the regular population. For instance, the introverted rationalists and idealists are each about 1% of the population. Here, they are the majority and it's the artisans and guardians who are relegated to 1% or less of our population.

Mensa's personality test results were published in the December 1993 Mensa Bulletin. Their numbers.

So, if you believe that most of the people who took the survey lied about their IQ, you also need to believe all of the following:

• That most of these people also realized they needed to do IQ correlation research and fudge their SAT and ACT scores in order for their IQ lie to be believable.

• Some explanation as to why the average of lurkers' IQ scores would come out so close to the average of posters' IQ scores. The lurkers don't have karma to show off, and there's no known incentive good enough to get so many lurkers to lie about their IQ score. Vaniver's figures.

• Some explanation for why the personality type pattern at LessWrong is radically different from the norm and yet very similar to the personality type pattern Mensa published and also matched my predictions. Even if they had knowledge of the Mensa personality test results and decided to fudge their personality type responses, too, they somehow managed to fudge them in such a way that their personality types accidentally matched my predictions.

• That they decided not to cheat when answering the Bayes birthday question even though they were dishonest enough to lie on the IQ question, motivated to look intelligent, and it takes a lot less effort to fudge the Bayes question than the intelligence and personality questions. (This was suggested by ArisKatsaris).

• That both posters and lurkers had some motive strong enough to justify spending 20+ minutes doing the IQ correlation research and fudging personality test questions while probably bored of ticking options after filling out most of a very long survey.

It's easier just to put the real number in the IQ box than do all that work to make it believable, and it's not like the liars are likely to get anything out of boasting anonymously, so the cost-benefit ratio is just not working in favor of the liar explanation.

If you think about it in terms of Occam's razor, what is the better explanation? That most people lied about their IQ, and fudged their SAT, ACT and personality type data to match, or that they're telling the truth?

## Summary of criticism:

Possible Motive to Lie: The desire to be associated with a "gifted" group:

In reply to this post, NonComposMentis argued that a potential motive to lie is that if the outside world perceives LessWrong as gifted, then anyone having an account on LessWrong will look high-status. In rebuttal:

• I figure that lurkers would not be motivated to fudge their results because they don't have a bunch of karma on their account to show off, and anybody can claim to read LessWrong, so fudging your IQ just to claim that the site you read is full of gifted people isn't likely to be motivating. I suggested that we compare the average IQs of lurkers and others. Vaniver did the math and they are very, very close.

• I argued, among other things, that it would be falling for a Pascal's mugging to believe that investing the extra time (probably at least $5 worth of time for most of us) into fudging the various survey questions is likely to contribute to a secret conspiracy to inflate LessWrong's average IQ.

Did the majority avoid filling out intelligence related questions, letting the gifted skew the results? Short answer: 74% of people answered at least one intelligence related question, and since most people filled out only one or two, the fact that the self-report, ACT and SAT score averages are so similar is remarkable.

I realized, while reading Vaniver's post, that if only 1/3 of the survey participants filled out the IQ score, this may have been due to something which could have skewed the results toward the gifted range: for instance, more gifted people may have been given IQ tests for school placement (and the others didn't post an IQ score because they did not know it), or the amount of pride one has in one's IQ score may have influenced whether one reported it. So I went through the data and realized that most of the people who filled out the IQ question did not fill out all the others. That means that 804 people (74%, not 33%) answered at least one intelligence related question. As we have seen, the IQ figures from the IQ, SAT and ACT questions were very close to each other (unsurprisingly, it looks like something's up with the internet test; removing those, it's 63% of survey participants that answered an intelligence related question). It's remarkable in and of itself that each category of test scores generated an average IQ so similar to the others, considering that different people filled them out.
I mean if 1/3 of the population filled out all of the questions, and the other 2/3 filled out none, we could say "maybe the 1/3 did IQ correlation research and fudged these" but if most of the population fills out one or two, and the averages for each category come out close to the averages for the other categories, why is that? How would that happen if they were fudging? It does look to me like people gave whatever test scores they had and that not all the people had test scores to give, but it does not look to me like a greater proportion of the gifted people provided an intelligence related survey answer. Instead it looks like most people provided an intelligence related survey answer and the average LessWronger is gifted.

Exploration of personality test fudging:

• There are a lot of questions on the personality test that have an obvious intelligence component, so it's possible that people chose the answer they thought was most intelligent.

• There are also intelligence related questions where it's not clear which answer is most intelligent. I listed those.

• The intelligence questions would mostly influence the sensing/intuition dichotomy and the thinking/feeling dichotomy. This does not explain why the extraversion/introversion and perceiving/judging results were similar to Mensa's.

Comment author: 30 November 2012 02:22:27AM 7 points [-]

(I believe Mensa's personality test results were published in the December 2006 Mensa newsletter which is, unfortunately, behind a login on the Mensa website, so I can't link to it here.)

Make a copy and post it. Most browsers have the ability to print/save pages as PDFs or various forms of HTML.

Comment author: 30 November 2012 02:24:37AM *  21 points [-]

Ok I managed to dig it up!

 E/I   S/N   T/F   J/P   (Category)
----------------------------------------------
75/25 75/25 55/45 50/50  (Overall population)
27/73 10/90 75/25 65/35  (Mensans)
15/85 03/97 88/12 54/46  (LessWrongers)

*  From the December 1993 Mensa Bulletin.
* The LessWrongers were added by me, using the same calculation method as in the comment where I test my personality type predictions, and are based on the 2012 survey results.

Comment author: 01 December 2012 10:41:19PM 10 points [-]

Alternate possibility: The distribution of personality types in Mensa/LW relative to everyone else is an artifact produced by self-identified smart people trying to signal their intelligence by answering 'yes' to traits that sound like the traits they ought to have. E.g., I know that a number of the T/F questions are along the lines of "I use logic to make decisions (Y/N)", which is a no-brainer if you're trying to signal intelligence.

A hypothetical way to get around this would be to have your partner/family member/best friend next to you as you take the test, ready to call you out when your self-assessment diverges from your actual behaviour ("hold on, what about that time you decided not to go to the concert of [band you love] because you were angry about an unrelated thing?")

Comment author: 01 December 2012 11:17:13PM *  4 points [-]

Ok, it's possible that all of the following happened:

• Most of the 1000 people decided to lie about their IQ on the LessWrong survey.

• Most of the liars realized that their personality test results were going to be compared with Mensa's personality type results, and it dawned on them that this would bring their IQ lie into question.

• Most of the liars decided that instead of simply skipping the personality test question, or taking it to experience the enjoyment of finding out their type, they were going to fudge the personality test results, too.

• Most of the liars actually had the patience to do an additional 72 questions specifically for the purpose of continuing to support a lie when they had just slogged through 100 questions.
• Most of the liars did all of that extra work (researching the IQ correlation with the SAT and the ACT and fudging 72 personality type questions) when it would have been so much easier to put their real IQ in the box, or simply skip the IQ question completely because it is not required.

• Most of the liars succeeded in fudging their personality types. This is, of course, possible, but it is likely to be more complicated than it at first seems. They'd have to be lucky that enough of the questions give away their intelligence correlation in the wording (we haven't verified that). They'd have to have enough of an understanding of what intelligent people are like that they'd choose the right ones. Questions like these are likely to confuse a non-gifted person trying to guess which answers will make them look gifted:

"You are more interested in a general idea than in the details of its realization" (Do intelligent people like ideas or details more?)

"Strict observance of the established rules is likely to prevent a good outcome" (Either could be the smarter answer, depending who you ask.)

"You believe the best decision is one that can be easily changed" (It's smart to leave your options open, but it's also more intellectually self-confident and potentially more rewarding to take a risk based on your decision-making abilities.)

"The process of searching for a solution is more important to you than the solution itself" (Maybe intelligence makes playing with ideas so enjoyable, gifted people see having the solution as less important.)

"When considering a situation you pay more attention to the current situation and less to a possible sequence of events" (There are those that would consider either one of these to be the smarter one.)
There were a lot of questions on the test that you could guess are correlated with intelligence, and some of them are no-brainers, but are there enough no-brainers with obvious intelligence correlation that a non-gifted person intent on looking as intelligent as possible would be able to successfully fudge their personality type?

• The massive fudging didn't create some totally unexpected personality type pattern. For instance, most people are extraverted. Would they realize the intelligence implications and fudge enough extravert questions to replicate Mensa's introverted pattern? Would they know that choosing the judging answers over the perceiving answers would make them look like Mensans? It makes sense that the thinking vs. feeling and intuiting vs. sensing metrics would use questions of the type you'd obviously need to fudge, but why would they also choose introvert and judging answers?

The survey is anonymous and we don't even know which people gave which IQ responses, let alone are they likely to receive any sort of reward from fudging their IQ score. Can you explain to me:

• What reward would most of LessWrong want to get out of lying about their IQs?

• Why, in an anonymous context where they can't even take credit for claiming the IQ score they provided, is most of LessWrong expecting to receive any reward at all?

• Can you explain to me why fudged personality type data would match my predictions? Even if they were trying to match them, how would they manage it?

Comment author: 02 December 2012 08:44:05AM *  7 points [-]

That most people lied about their IQ, and fudged their SAT, ACT and personality type data to match, or that they're telling the truth?

Scores on standardized tests like SAT and ACT can be improved via hard work and lots of practice; there are abundant practice books out there for such tests.
It is entirely conceivable that those self-reported IQs were generated by comparing scores on these standardized tests against IQ-conversion charts. I.e., with very hard work, the apparent IQs are in the 130+ range according to these standardised tests; but when it comes to tests that measure your native intelligence (e.g., iqtest.dk), the scores are significantly lower. In future years, it would be advisable for the questionnaire to ask participants how much time they spent in total preparing for tests such as the SAT and ACT, and even then you might not get honest answers.

That brings me to the point of lying...

it's not like the liars are likely to get anything out of boasting anonymously

Not necessarily true. If the survey results show that LWers generally have IQs in the gifted range, then it allows LWers to signal their intelligence to others just by identifying themselves as LWers. People would assume that you probably have an IQ in the gifted range if you tell them that you read LW. In this case, everyone has an incentive to fudge the numbers. erratio has also pointed out that participants might have answered those personality tests untruthfully in order to signal intelligence, so I shan't belabour the point here.

Comment author: 02 December 2012 07:59:37PM *  2 points [-]

People would assume that you probably have an IQ in the gifted range if you tell them that you read LW. In this case, everyone has an incentive to fudge the numbers.

Ok, now here is a motive! I still find it difficult to believe that:

1. Most of 1000 people care so much about status that they're willing to prioritize it over truth, especially since this is LessWrong, where we gather around the theme of rationality. If there's any place you'd think it would be unlikely to find a lot of people lying on a survey, it's here.

2. The people who take the survey know that their IQ contribution is going to be watered down by the 1000 other people taking the survey.
Unless they have collaborated by PM and made a pact to fudge their IQ figures, these frequently math-oriented people must know that fudging their IQ figure is going to have very, very little impact on the average that Yvain calculates. I do not know why they'd see the extra work as worthwhile considering the expected amount of impact. Thinking that fudging only one of the IQs is going to be worthwhile is essentially falling for a Pascal's mugging.

3. Registration at LessWrong is free and it's not exclusive. At all. How likely is it, do you think, that this group of rationality-loving people has reasoned that claiming to have joined a group that anybody can join is a good way to brag about their awesomeness? I suppose you can argue that people who have karma on their accounts can point to that and say "I got karma in a gifted group", but lurkers don't have that incentive. All lurkers can say is "I read LessWrong", but that is harder to prove and even less meaningful than "I joined LessWrong".

Putting the numbers where our mouths are: If the average IQ for lurkers / people with low karma on LessWrong is pretty close to the average IQ for posters and/or people with karma on LessWrong, would you say that the likelihood of post-making/karma-bearing LessWrongers lying on the survey in order to increase others' status perceptions of them is pretty low? Do you want to get these numbers? I'll probably get them later if you don't, but I have a pile of LW messages and a bunch of projects going on right now, so there will be a delay and a chance that I completely forget.

Comment author: 02 December 2012 09:12:04PM *  10 points [-]

From the public dataset: 165 out of 549 responses without reported positive karma (30%) self-reported an IQ score; the average response was 138.44. 181 out of 518 responses with reported positive karma (34%) self-reported an IQ score; the average response was 138.25.
One of the curious features of the self-reports is how many of the IQs are divisible by 5. Among lurkers, we had 2 151s, 1 149, and 10 150s. I think the average self-response is basically worthless, since it's only a third of responders and they're likely to be wildly optimistic.

So, what about the Raven's test? In total, 188 responders with positive karma (36%) and 164 responders without positive karma (30%) took the Raven's test, with averages of 126.9 and 124.4. Noteworthy are the new max and min: the highest scorer on the Raven's test claimed to get 150, and the three sub-100 scores were 3, 18, and 66 (of which I suspect only the last isn't a typo or error of some sort).

Only 121 users both self-reported IQ and took the Raven's test. The correlation between their mean-adjusted self-reported IQ and mean-adjusted Raven's test was an abysmal .2. Among posters with positive karma, the correlation was .45; among posters without positive karma, the correlation was -.11.

Comment author: 09 December 2012 11:49:43PM 2 points [-]

Thank you for these numbers, Vaniver! I should have thanked you sooner. I had become quite busy (partly with preparing my new endless September post), so I did not show up to thank you promptly. Sorry about that.

Comment author: 09 December 2012 01:03:48AM 3 points [-]

Was Mensa's test conducted on the internet? The internet has a systematic bias in personalities. For example, reddit subscriptions to each personality type reddit favor Introversion and Intuition:

4,828 INTJ
4,457 INTP
1,817 INFP
1,531 INFJ

Comment author: [deleted] 09 December 2012 03:50:40AM *  4 points [-]

IAWYC, but "the internet" is way too broad for what you actually mean. ISTM that a supermajority of teenagers and young adults in developed countries uses it daily, though plenty of them mostly use it for Facebook, YouTube and similar sites and have probably never heard of Reddit.
(Even I never use Reddit unless I'm following a link to a particular thread from somewhere else -- but the first letter of my MBTI is E, so this kind of confirms your point.)

Comment author: 09 December 2012 01:18:44AM *  2 points [-]

I don't have any more data than that, sorry. To suggest that people on the internet may have certain personality types is a good suggestion, but it raises two questions:

• Might your example of Reddit be similar to LW because LW gets lots of users from Reddit? (Or put another way, if the average LessWronger is gifted, maybe "the apple doesn't fall far from the tree" and Reddit has lots of gifted people, too.)

• Might gifted people gather in large numbers on the internet because it's easier to find people with similar interests? (Just because people on the internet tend to have those personality types, it doesn't mean they're not gifted.)

As for "the internet" having a systematic bias in personalities, I would like to see evidence of this that's not based on a biased sample. It's likely that the places you go to find people like you will, well, have people like you, so even if you (or somebody else on one of those sites) observed a pattern in personality types across the sites you hang out on, the sample is likely to be biased.

Comment author: 30 November 2012 03:00:03PM *  3 points [-]

Thanks for the analysis. I agree with your conclusion. On a less relevant note, it does feel good to see more evidence that the community we hang out with is smart and awesome.

Comment author: 30 November 2012 09:01:29PM *  16 points [-]

This also explains a lot of things. People regard IQ as if it is meaningless, just a number, and they often get defensive when intellectual differences are acknowledged.
I spent a lot of time doing research on adult giftedness (though I'm most interested in highly gifted+ adults) and, assuming the studies were done in a way that is useful (I've heard there are problems with this), and my personal experiences talking to gifted adults are halfway decent as representations of the gifted adult population, there are a plethora of differences that gifted adults have. For instance, in "You're Calling Who A Cult Leader?" Eliezer is annoyed with the fact that people assume that high praise is automatic evidence that a person has joined a cult. What he doesn't touch on is that there are very significant neurological differences between people in just about every way you could think of, including emotional excitability. People assume that others are like themselves, and this causes all manner of confusion. Eliezer is clearly gifted and intense and he probably experiences admiration with a higher level of emotional intensity than most. If the readers of LessWrong and Hacker News are gifted, same goes for many of them. To those who feel so strongly, excited praise may seem fairly normal. To all those who do not, it probably looks crazy. I explained more about excitability in the comments. I also want to say (without getting into the insane amount of detail it would take to justify this to the LW crowd - maybe I will do that later, but one bit at a time) that in my opinion, as a person who has done lots of reading about giftedness and has a lot of experience interacting with gifted people and detecting giftedness, the idea that most survey respondents are giving real answers on the IQ portion of the survey seems very likely to me. I feel 99% sure that LessWrong's average IQ really is in the gifted range, and I'd even say I'm 90%+ sure that the ballpark hit on by the surveys is right. 
(In other words, they don't seem like a group of predominantly exceptionally or profoundly gifted Einsteins or Stephen Hawkings, or just talented people at the upper ends of the normal range with IQs near 115, but an average IQ in the 130's / 140's range does seem appropriate.)

This says nothing about the future though... The average IQ has been decreasing on each survey by an average of about two points per year. If the trend continues, then in as many years as LessWrong has been around, LessWrong may trend so far toward the mean that LessWrong will not be gifted anymore (not by every IQ standard, that is; it would still be gifted by some definitions and IQ standards but not others). I will be writing a post about the future of LessWrong very soon.

Comment author: 02 December 2012 03:58:58AM 7 points [-]

Eliezer is clearly gifted and intense and he probably experiences admiration with a higher level of emotional intensity than most. If the readers of LessWrong and Hacker News are gifted, same goes for many of them. To those who feel so strongly, excited praise may seem fairly normal. To all those who do not, it probably looks crazy.

Would you predict then that people who're not gifted are in general markedly less inclined to praise things with a high level of intensity? This seems to me to be falsified by everyday experience. See fan reactions to Twilight, for a ready-to-hand example.

Comment author: 02 December 2012 12:45:22PM * 15 points [-]

My hypothesis would simply be that different people experience emotional intensity as a reaction to different things. Thus, some think we are crazy and cultish, while also totally weird for getting excited about boring and dry things like math and rationality... while some of us think that certain people who are really interested in the lives of celebrities are crazy and shallow, while also totally weird for getting excited about boring and bad things like Twilight.
This also leads each group to think that the other doesn't get similar levels of emotional intensity, because only the group's own type of "emotional intensity" is classified as valid intensity and the other group's intensity is classified as madness, if it's recognized at all. I've certainly made the mistake of assuming that other people must live boring and uninteresting lives, simply because I didn't realize that they genuinely felt very strongly about the things that I considered boring. (Obligatory link.)

(Of course, I'm not denying there being variation in the "emotional intensity" trait in general, but I haven't seen anything to suggest that the median of this trait would be considerably different in gifted and non-gifted populations.)

Comment author: 01 December 2012 05:14:21PM 6 points [-]

If the trend continues we will all be brain-dead in 70 years.

Comment author: 30 November 2012 09:37:08PM * 3 points [-]

Looks like Aumann at work. My own readings, though more specifically on teenage giftedness in the 145+ range, along with stuff on ASD and Asperger's, heavily corroborate this. When I was 17, my (direct) family and I had strong suspicions that I was in this range of giftedness - suspicions which were never reliably tested, and thus neither confirmed nor disconfirmed. It's still up in the air and I still don't know whether I fit into some category of gifted or special individuals, but at some point I realized that it wasn't all that important and that I just didn't care. I might have to explore the question a bit more in depth if I decide to return to the official educational system at some point (I mean, having a paper certifying that you're a genius would presumably kind of help when making a pitch at university to let you in without the prerequisite college credit because you already know the material). Just mentioning all of the above to explain a bit where my data comes from.
My parents and I were all reading tons of books, references, papers and other information, along with several interviews with various psychology professionals, for around three months. Also, and this may be another relevant point, the only recognized, official IQ test I ever took was during that time, and I had a score of "above 130"² (verbal statement) and reportedly placed in the 98th and 99th percentiles on the two sections of a modified WAIS test. The actual normalized score was not included in the report (that psychologist(?¹) sucked, and also probably couldn't do the statistics involved correctly in the first place). However, I was warned that the test lost statistical significance / representativeness / whatever above 125, so even if I had an IQ of 170+ that test wouldn't have been able to tell - it had been calibrated for mentally deficient teenagers and very low IQ scores (and was only a one-hour test, and only ten of the questions were written, the rest dynamic or verbal with the psychologist). Later looking-up-stats-online also revealed that the test result distributions were slightly skewed, and that a resulting converted "IQ" of "130" on this particular test was probably more rare in the general population than an IQ of 130 normally represents, because of some statistical effects I didn't understand at the time and thus don't remember at all.

Where I'm going with this is that this doesn't seem like an isolated effect at all. In fact, it seems like most of North America in general pays way more attention to mentally deficient people and low IQs than to high IQs and gifted individuals. Based on this, I have a pretty high current prior that many on LW will have received scores suffering from similar effects if they didn't specifically seek the sorts of tests recommended by Mensa or the likes, and perhaps even then.
Based on this, I would expect such effects to compensate or even overcompensate for any upward nudging in the self-reporting.

=====

1. I don't know if it was actually a consulting psychologist. I don't remember the title she had (and it was all done in French). She was "officially" recognized to be in legal capacity to administer IQ tests in Canada, though, so whatever title is normally in charge of that is probably the right one.

2. Based on this, the other hints I mention in the text, and internet-based IQ tests consistently giving me 150-ish numbers when at peak performance and 135-ish when tired (I took those a bit later on, perhaps six months after the official one), 135 is the IQ I generally report (including in the LW survey) when answering forms that ask for it, and it seems like a fairly accurate guess in terms of how I usually interact with people of various IQ levels.

Comment author: [deleted] 29 November 2012 03:00:50PM 5 points [-]

Note that people who take online psychometric tests are probably a pretty skewed category already so this tells us nothing.

What? They calibrated the test using the people who took it online?

Comment author: 29 November 2012 11:39:57PM 2 points [-]

I'm fairly sure the Big Five wasn't calibrated on an online sample, but I have no idea about iqtest.dk.

Comment author: [deleted] 29 November 2012 10:54:40PM * 12 points [-]

Top 100 Users' Data, aka Karma 1000+

I was thinking about the fact that there is probably a difference between active LWers versus lurkers or newbies. So I looked at the data for the Top 100 users (actually Top 107, because there was a tie). This happily coincided with the nice Schelling point of 1000 karma. (Makes sense, because people are likely to round to that number.) To me, this reads as "has been actively posting for at least a year".
So, some data on 1000+ karma people (first percentage is for 1000+ users, second is for all survey respondents):

Slightly more likely to be male: 92.5% male, 7.4% female

Much more likely to be polyamorous:
• Prefer mono: 36% v. 54%
• Prefer poly: 24% v. 13%
• Uncertain: 33% v. 30%
• Other: 4% v. 2%

About the same age: average 28.6 v. 27.8
About as likely to be single: 51% v. 53%
Equally likely to be vegetarian: 12%
Much more likely to use modafinil at least once per month: 15% v. 4%

About equal on intelligence tests:
• SAT out of 1600: 1509 v. 1486
• SAT out of 2400: 2260 v. 2319
• Self-reported IQ: 138.5 v. 138.7
• Online IQ test: 127 v. 126
• ACT score: 33.3 v. 32.7

Similar income: 50k
Slightly lower autism quotient: average 22 v. 24

More likely to choose torture:
• Torture: 42% v. 22%
• Dust specks: 29% v. 37%

More likely to cooperate in a Prisoner's Dilemma:
• Cooperate: 36% v. 27%
• Defect: 20% v. 29%

Some notes: Yes, I realize my data analysis methods are not the best. Namely, instead of comparing the people with >1000 karma to the people with <100 karma, which would have been more accurate, I just compared them to the overall results (which include their answers). I did this because it takes much less time.

Also, a hint for other people playing with the data in Excel format: A lot of the numbers are in text format, and a pain to convert to numeric format in a way that allows you to manipulate them. The easiest work around (so long as you don't want to do anything complicated) is to just paste the needed columns either into google spreadsheet, or into another Excel sheet that's been formatted numerically. If you want to do something complicated you probably need to find the "right" way to fix it.

Comment author: 30 November 2012 12:57:23PM 2 points [-]

Also, a hint for other people playing with the data in Excel format: A lot of the numbers are in text format, and a pain to convert
The easiest work around (so long as you don't want to do anything complicated) is to just paste the needed columns either into google spreadsheet, or into another Excel sheet that's been formatted numerically. If you want to do something complicated you probably need to find the "right" way to fix it.

Multiplying the text by 1 or adding zero can often force auto-conversion in Excel. You can do this with Paste Special, pasting as values with the Multiply operation: copy a cell containing 1, highlight the data, then press Alt+E, S, then V, M, Enter.

Comment author: 29 November 2012 07:14:05PM 11 points [-]

If we haven't even learned to avoid the one bias that we can measure super well and which is most susceptible to training, what are we even doing here?

This sounds like a job for cognitive psychology! "Well-calibrated" should probably be improved to "well-calibrated about X" -- it's plausible that people have better and worse calibration about different subjects, and the samples in the survey only explored a tiny part of calibration space.

Comment author: 14 December 2012 08:18:46PM 4 points [-]

I didn't do this myself because I didn't trust my statistical ability enough, and I forgot to mention it on the original post, but... Can someone check for birth order effects? Whether Less Wrongers are more likely to be first-borns than average? Preferably someone who's read Judith Rich Harris' critique of why most birth order effect analyses are hopelessly wrong? Or Gwern? I would trust Gwern on this.

Comment author: 17 December 2012 06:25:04AM * 7 points [-]

I don't know Harris's critique, but here are some numbers. Out of survey respondents who reported that they have 1 sibling (n=453), 76% said that they were the oldest (i.e., 0 older siblings). By chance, you'd expect 50% to be oldest. Of those with 2 siblings, 50% are the oldest (vs. 33% expected by chance), n=240. Of those with 3 siblings, 45% are the oldest (vs. 25% expected by chance), n=120. Of those with 4 or more siblings, 50% are the oldest (vs.
under 20% expected by chance), n=58. Of those with 0 siblings, 100% are the oldest (vs. 100% expected by chance), n=163. Overall, 69% of those who answered the "number of older siblings" question are the oldest. Those look like big effects, unlikely to be explained by whatever artifacts Harris has found. There are a handful of people who left the number of older siblings blank but did report a total number of siblings, or who reported a non-integer number of siblings (half-siblings), but they are too few to make much difference in the numbers.

This doesn't seem to vary by degree of involvement in LW; overall 71% of those in the top third of LW exposure (based on sequence-reading, karma, etc.) are the oldest. Here is a little table with the breakdown for them; it shows the percent of people who are the oldest, by number of siblings, for all respondents vs. the highest third in LW exposure.

n    all  high-LW
0    100  100
1    76   80
2    50   45
3    45   51
4+   50   62

That 62% is 8/13, so not very meaningful.

Comment author: 14 November 2014 11:11:42AM 9 points [-]

There seems to be a pretty big potential confounder: age. Many respondents' younger siblings are too young to be contributing to this site, while no one's older siblings are too old (unless they're dead, but since ~98% of the community is under age 60 that's not a significant concern).

Comment author: 14 November 2014 10:42:16PM 9 points [-]

You're saying that if we randomly picked 22-31 year-olds, a disproportionate number would be eldest children? For that to work, there'd have to be more eldest children in that age-range than youngest. Given the increase in population, that is certainly plausible. You would expect more younger families than older families, which means that within an age range there would be a disproportionate number of older siblings (unless it's so young that not all of the younger siblings have been born yet), but it doesn't seem like it would be nearly that significant.
Many respondents' younger siblings are too young to be contributing to this site, while no one's older siblings are too old

The fact that most of the respondents are eldest children is a confounder for this.

(unless they're dead, but since ~98% of the community is under age 60 that's not a significant concern).

In that case, wouldn't people over 60 also be too old?

Comment author: 11 December 2012 10:43:58PM 4 points [-]

Not a survey response but too good to omit:

The website you mentioned, namely lesswrong.com is typical of the rationalists. If you read the articles posted on the site, you can see how they push their rationalist methods even beyond the limits of reason. From the Islamic point of view, one may say that their overdependence on rationality amounts to a sort of self-sufficiency and arrogance which smacks of nothing less than the Satanic, as Satan or Iblis is the epitome of arrogance.

http://www.onislam.net/english/ask-about-islam/islam-and-the-world/worldview/460333-fiction-depiction-allegory-.html

Comment author: 09 December 2012 10:58:44AM * 4 points [-]

These are the results of the CFAR questions; I have also posted this as its own Discussion section post.

SUMMARY: The CFAR questions were all adapted from the heuristics and biases literature, based on five different cognitive biases or reasoning errors. LWers, on the whole, showed less bias than is typical in the published research (on all 4 questions where this was testable), but did show clear evidence of bias on 2-3 of those 4 questions. Further, those with closer ties to the LW community (e.g., those who had read more of the sequences) showed significantly less bias than those with weaker ties (on 3 out of 4-5 questions where that was testable). These results all held when controlling for measures of intelligence.

METHOD & RESULTS

Being less susceptible to cognitive biases or reasoning errors is one sign of rationality (see the work of Keith Stanovich & his colleagues, for example).
You'd hope that a community dedicated to rationality would be less prone to these biases, so I selected 5 cognitive biases and reasoning errors from the heuristics & biases literature to include on the LW survey. There are two possible patterns of results which would point in this direction:

• high scores: LWers show less bias than other populations that have answered these questions (like students at top universities)
• correlation with strength of LW exposure: those who have read the sequences (have been around LW a long time, have high karma, attend meetups, make posts) score better than those who have not.

The 5 biases were selected in part because they can be tested with everyone answering the same questions; I also preferred biases that haven't been discussed in detail on LW. On some questions there is a definitive wrong answer and on others there is reason to believe that a bias will tend to lead people towards one answer (so that, even though there might be good reasons for a person to choose that answer, in the aggregate it is evidence of bias if more people choose that answer).

This is only one quick, rough survey. If the results are as predicted, that could be because LW makes people more rational, or because LW makes people more familiar with the heuristics & biases literature (including how to avoid falling for the standard tricks used to test for biases), or because the people who are attracted to LW are already unusually rational (or just unusually good at avoiding standard biases). Susceptibility to standard biases is just one angle on rationality. Etc.

Here are the question-by-question results, in brief. The comment below contains the exact text of the questions, and more detailed explanations.

Question 1 was a disjunctive reasoning task, which had a definitive correct answer. Only 13% of undergraduates got the answer right in the published paper that I took it from. 46% of LWers got it right, which is much better but still a very high error rate.
Accuracy was 58% for those high in LW exposure vs. 31% for those low in LW exposure. So for this question, that's:

1. LWers biased: yes
2. LWers less biased than others: yes
3. Less bias with more LW exposure: yes

Question 2 was a temporal discounting question; in the original paper about half the subjects chose money-now (which reflects a very high discount rate). Only 8% of LWers did; that did not leave much room for differences among LWers (and there was only a weak & nonsignificant trend in the predicted direction). So for this question:

1. LWers biased: not really
2. LWers less biased than others: yes
3. Less bias with more LW exposure: n/a (or no)

Question 3 was about the law of large numbers. Only 22% got it right in Tversky & Kahneman's original paper. 84% of LWers did: 93% of those high in LW exposure, 75% of those low in LW exposure. So:

1. LWers biased: a bit
2. LWers less biased than others: yes
3. Less bias with more LW exposure: yes

Question 4 was based on the decoy effect aka asymmetric dominance aka attraction effect (but missing a control condition). I don't have numbers from the original study (and there is no correct answer) so I can't really answer 1 or 2 for this question, but there was a difference based on LW exposure: 57% vs. 44% selecting the less bias-related answer.

1. LWers biased: n/a
2. LWers less biased than others: n/a
3. Less bias with more LW exposure: yes

Question 5 was an anchoring question. The original study found an effect (measured by slope) of 0.55 (though it was less transparent about the randomness of the anchor; transparent studies with other questions have found effects around 0.3 on average). For LWers there was a significant anchoring effect but it was only 0.14 in magnitude, and it did not vary based on LW exposure (there was a weak & nonsignificant trend in the wrong direction).

1. LWers biased: yes
2. LWers less biased than others: yes
3. Less bias with more LW exposure: no

One thing you might wonder: how much of this is just intelligence? There were several questions on the survey about performance on IQ tests or SATs. Controlling for scores on those tests, all of the results about the effects of LW exposure held up nearly as strongly. Intelligence test scores were also predictive of lower bias, independent of LW exposure, and those two relationships were almost the same in magnitude. If we extrapolate the relationship between IQ scores and the 5 biases to someone with an IQ of 100 (on either of the 2 IQ measures), they are still less biased than the participants in the original study, which suggests that the "LWers less biased than others" effect is not based solely on IQ.

Comment author: 09 December 2012 11:05:06AM * 8 points [-]

MORE DETAILED RESULTS

There were 5 questions related to strength of membership in the LW community which I standardized and combined into a single composite measure of LW exposure (LW use, sequence reading, time in community, karma, meetup attendance); this was the main predictor variable I used (time per day on LW also seems related, but I found out while analyzing last year's survey that it doesn't hang together with the others or associate the same way with other variables). I analyzed the results using a continuous measure of LW exposure, but to simplify reporting, I'll give the results below by comparing those in the top third on this measure of LW exposure with those in the bottom third.

There were 5 intelligence-related measures which I combined into a single composite measure of Intelligence (SAT out of 2400, SAT out of 1600, ACT, previously-tested IQ, extra credit IQ test); I used this to control for intelligence and to compare the effects of LW exposure with the effects of Intelligence (for the latter, I did a similar split into thirds).
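The standardize-and-combine step described here is just averaging z-scores. A minimal sketch of the idea, with hypothetical toy data (the variable names and values are illustrative, not the actual survey analysis code):

```python
import statistics

def zscores(values):
    """Standardize raw scores to mean 0, SD 1."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def composite(*measures):
    """Average the z-scores of several measures, per respondent."""
    standardized = [zscores(m) for m in measures]
    return [statistics.mean(vals) for vals in zip(*standardized)]

# Hypothetical toy data for three respondents
karma = [10, 1000, 100]
sequences_read = [1, 5, 3]
exposure = composite(karma, sequences_read)
```

Standardizing first keeps a measure with a large raw scale (karma) from swamping one with a small scale (sequence reading) in the composite.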
Sample sizes: 1101 people answered at least one of the CFAR questions; 1099 of those answered at least one LW exposure question and 835 of those answered at least one of the Intelligence questions. Further details about method available on request. Here are the results, question by question.

Question 1: Jack is looking at Anne, but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person?

• Yes
• No
• Cannot be determined

This is a "disjunctive reasoning" question, which means that getting the correct answer requires using "or". That is, it requires considering multiple scenarios. In this case, either Anne is married or Anne is unmarried. If Anne is married then married Anne is looking at unmarried George; if Anne is unmarried then married Jack is looking at unmarried Anne. So the correct answer is "yes".

A study by Toplak & Stanovich (2002) of students at a large Canadian university (probably U. Toronto) found that only 13% correctly answered "yes" while 86% answered "cannot be determined" (2% answered "no"). On this LW survey, 46% of participants correctly answered "yes"; 54% chose "cannot be determined" (and 0.4% said "no"). Further, correct answers were much more common among those high in LW exposure: 58% of those in the top third of LW exposure answered "yes", vs. only 31% of those in the bottom third. The effect remains nearly as big after controlling for Intelligence (the gap between the top third and the bottom third shrinks from 27% to 24% when Intelligence is included as a covariate). The effect of LW exposure is very close in magnitude to the effect of Intelligence; 60% of those in the top third in Intelligence answered correctly vs. 37% of those in the bottom third.

original study: 13%
weakly-tied LWers: 31%
strongly-tied LWers: 58%

Question 2: Would you prefer to receive $55 today or $75 in 60 days?

This is a temporal discounting question.
Preferring $55 today implies an extremely (and, for most people, implausibly) high discount rate, is often indicative of a pattern of discounting that involves preference reversals, and is correlated with other biases. The question was used in a study by Kirby (2009) of undergraduates at Williams College (with a delay of 61 days instead of 60; I took it from a secondary source that said "60" without checking the original), and based on the graph of parameter values in that paper it looks like just under half of participants chose the larger later option of $75 in 61 days. LW survey participants almost uniformly showed a low discount rate: 92% chose $75 in 61 days. This is near ceiling, which didn't leave much room for differences among LWers, and in fact there were not statistically significant differences. For LW exposure, top third vs. bottom third was 93% vs. 90%, and for Intelligence it was 96% vs. 91%.

original study: ~47%
weakly-tied LWers: 90%
strongly-tied LWers: 93%
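For intuition about why $55-now reflects an extreme discount rate, the annualized rate it implies can be computed directly (a back-of-the-envelope sketch, not part of the original analysis):

```python
import math

now, later, days = 55.0, 75.0, 61

# Continuously compounded daily rate at which $75 in 61 days is worth
# exactly $55 today; anyone preferring $55 now is discounting at least
# this steeply.
daily_rate = math.log(later / now) / days
annual_rate = math.exp(daily_rate * 365) - 1

print(f"implied annual discount rate > {annual_rate:.0%}")  # → over 500% per year
```

A discount rate upward of 500% per year is far beyond any interest rate available in ordinary life, which is why this choice is treated as evidence of bias rather than reasonable time preference.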

Question 3: A certain town is served by two hospitals. In the larger hospital, about 45 babies are born each day. In the smaller one, about 15 babies are born each day. Although the overall proportion of girls is about 50%, the actual proportion at either hospital may be greater or less on any day. At the end of a year, which hospital will have the greater number of days on which more than 60% of the babies born were girls?

• The larger hospital
• The smaller hospital
• Neither - the number of these days will be about the same

This is a statistical reasoning question, which requires applying the law of large numbers. In Tversky & Kahneman's (1974) original paper, only 22% of participants correctly chose the smaller hospital; 57% said "about the same" and 22% chose the larger hospital.

On the LW survey, 84% of people correctly chose the smaller hospital; 15% said "about the same" and only 1% chose the larger hospital. Further, this was strongly correlated with strength of LW exposure: 93% of those in the top third answered correctly vs. 75% of those in the bottom third. As with #1, controlling for Intelligence barely changed this gap (shrinking it from 18% to 16%), and the measure of Intelligence produced a similarly sized gap: 90% for the top third vs. 79% for the bottom third.

original study: 22%
weakly-tied LWers: 75%
strongly-tied LWers: 93%
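The law-of-large-numbers claim here can be checked exactly with the binomial distribution (a sketch using the question's own parameters; the function name is mine):

```python
from math import comb

def p_day_over_60pct(n):
    """Probability that more than 60% of n births are girls, with P(girl) = 0.5.

    "More than 60%" means girls > 3n/5; 3n is divisible by 5 for both
    hospital sizes here, so integer arithmetic gives the exact cutoff.
    """
    cutoff = 3 * n // 5
    return sum(comb(n, k) for k in range(cutoff + 1, n + 1)) / 2 ** n

p_large = p_day_over_60pct(45)  # larger hospital, 45 births/day
p_small = p_day_over_60pct(15)  # smaller hospital, 15 births/day

# Expected number of such days over a 365-day year: roughly 25 vs. 55
print(365 * p_large, 365 * p_small)
```

The smaller sample fluctuates more, so the smaller hospital sees a >60%-girls day roughly twice as often, which is exactly what the correct answer requires.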

(continued below, due to restrictions on comment length)

Comment author: 09 December 2012 11:05:45AM *  7 points [-]

(more detailed results, continued)

Question 4: Imagine that you are a doctor, and one of your patients suffers from migraine headaches that last about 3 hours and involve intense pain, nausea, dizziness, and hyper-sensitivity to bright lights and loud noises. The patient usually needs to lie quietly in a dark room until the headache passes. This patient has a migraine headache about 100 times each year. You are considering three medications that you could prescribe for this patient. The medications have similar side effects, but differ in effectiveness and cost. The patient has a low income and must pay the cost because her insurance plan does not cover any of these medications. Which medication would you be most likely to recommend?

• Drug A: reduces the number of headaches per year from 100 to 30. It costs $350 per year.
• Drug B: reduces the number of headaches per year from 100 to 50. It costs $100 per year.
• Drug C: reduces the number of headaches per year from 100 to 60. It costs $100 per year. This question is based on research on the decoy effect (aka "asymmetric dominance" or the "attraction effect"). Drug C is obviously worse than Drug B (it is strictly dominated by it) but it is not obviously worse than Drug A, which tends to make B look more attractive by comparison. This is normally tested by comparing responses to the three-option question with a control group that gets a two-option question (removing option C), but I cut a corner and only included the three-option question. The assumption is that more-biased people would make similar choices to unbiased people in the two-option question, and would be more likely to choose Drug B on the three-option question. The model behind that assumption is that there are various reasons for choosing Drug A and Drug B; the three-option question gives biased people one more reason to choose Drug B but other than that the reasons are the same (on average) for more-biased people and unbiased people (and for the three-option question and the two-option question). Based on the discussion on the original survey thread, this assumption might not be correct. Cost-benefit reasoning seems to favor Drug A (and those with more LW exposure or higher intelligence might be more likely to run the numbers). Part of the problem is that I didn't update the costs for inflation - the original problem appears to be from 1995 which means that the real price difference was over 1.5 times as big then. I don't know the results from the original study; I found this particular example online (and edited it heavily for length) with a reference to Chapman & Malik (1995), but after looking for that paper I see that it's listed on Chapman's CV as only a "published abstract". 49% of LWers chose Drug A (the one that is more likely for unbiased reasoners), vs. 50% for Drug B (which benefits from the decoy effect) and 1% for Drug C (the decoy). 
There was a strong effect of LW exposure: 57% of those in the top third chose Drug A vs. only 44% of those in the bottom third. Again, this gap remained nearly the same when controlling for Intelligence (shrinking from 14% to 13%), and differences in Intelligence were associated with a similarly sized effect: 59% for the top third vs. 44% for the bottom third.

original study: ??
weakly-tied LWers: 44%
strongly-tied LWers: 57%

Question 5: Get a random three digit number (000-999) from http://goo.gl/x45un and enter the number here. Treat the three digit number that you just wrote down as a length, in feet. Is the height of the tallest redwood tree in the world more or less than the number that you wrote down? What is your best guess about the height of the tallest redwood tree in the world (in feet)?

This is an anchoring question; if there are anchoring effects then people's responses will be positively correlated with the random number they were given (and a regression analysis can estimate the size of the effect to compare with published results, which used two groups instead of a random number). Asking a question with the answer in feet was a mistake which generated a great deal of controversy and discussion. Dealing with unfamiliar units could interfere with answers in various ways so the safest approach is to look at only the US respondents; I'll also see if there are interaction effects based on country.

The question is from a paper by Jacowitz & Kahneman (1995), who provided anchors of 180 ft. and 1200 ft. to two groups and found mean estimates of 282 ft. and 844 ft., respectively. One natural way of expressing the strength of an anchoring effect is as a slope (change in estimates divided by change in anchor values), which in this case is 562/1020 = 0.55. However, that study did not explicitly lead participants through the randomization process like the LW survey did.
The classic Tversky & Kahneman (1974) anchoring question did use an explicit randomization procedure (spinning a wheel of fortune; though it was actually rigged to create two groups) and found a slope of 0.36. Similarly, several studies by Ariely & colleagues (2003), which used the participant's Social Security number to explicitly randomize the anchor value, found slopes averaging about 0.28.

There was a significant anchoring effect among US LWers (n=578), but it was much weaker, with a slope of only 0.14 (p=.0025). That means that getting a random number that is 100 higher led to estimates that were 14 ft. higher, on average. LW exposure did not moderate this effect (p=.88); looking at the pattern of results, if anything the anchoring effect was slightly higher among the top third (slope of 0.17) than among the bottom third (slope of 0.09). Intelligence did not moderate the results either (slope of 0.12 for both the top third and bottom third). It's not relevant to this analysis, but in case you're curious, the median estimate was 350 ft. and the actual answer is 379.3 ft. (115.6 meters). Among non-US LWers (n=397), the anchoring effect was slightly smaller in magnitude compared with US LWers (slope of 0.08), and not significantly different from the US LWers or from zero.

original study: slope of 0.55 (0.36 and 0.28 in similar studies)
weakly-tied LWers: slope of 0.09
strongly-tied LWers: slope of 0.17

If we break the LW exposure variable down into its 5 components, every one of the five is strongly predictive of lower susceptibility to bias. We can combine the first four CFAR questions into a composite measure of unbiasedness, by taking the percentage of questions on which a person gave the "correct" answer (the answer suggestive of lower bias).
Each component of LW exposure is correlated with lower bias on that measure, with r ranging from 0.18 (meetup attendance) to 0.23 (LW use), all p < .0001 (time per day on LW is uncorrelated with unbiasedness, r=0.03, p=.39). For the composite LW exposure variable the correlation is 0.28; another way to express this relationship is that people one standard deviation above average on LW exposure got 75% of CFAR questions "correct" while those one standard deviation below average got 61% "correct". Alternatively, focusing on sequence-reading, the accuracy rates were:

75% Nearly all of the Sequences (n = 302)
70% About 75% of the Sequences (n = 186)
67% About 50% of the Sequences (n = 156)
64% About 25% of the Sequences (n = 137)
64% Some, but less than 25% (n = 210)
62% Know they existed, but never looked at them (n = 19)
57% Never even knew they existed until this moment (n = 89)

Another way to summarize is that, on 4 of the 5 questions (all but question 4 on the decoy effect), we can make comparisons to the results of previous research, and in all 4 cases LWers were much less susceptible to the bias or reasoning error. On 1 of the 5 questions (question 2 on temporal discounting) there was a ceiling effect which made it extremely difficult to find differences within LWers; on 3 of the other 4, LWers with a strong connection to the LW community were much less susceptible to the bias or reasoning error than those with weaker ties.
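The composite "unbiasedness" measure described above is just the per-person fraction of "correct" answers, which can then be correlated with an exposure score. A minimal Python sketch on made-up data (the variable names and the logistic model are illustrative assumptions, not the survey's actual columns or data-generating process):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Made-up standardized LW-exposure composite.
exposure = rng.normal(size=n)

# Four binary CFAR answers; the probability of a "correct"
# (less-biased) answer rises with exposure, which builds a
# positive correlation into the toy data.
p_correct = 1 / (1 + np.exp(-(0.6 + 0.5 * exposure)))
answers = rng.random((n, 4)) < p_correct[:, None]

# Composite unbiasedness: percent of the four questions "correct".
unbiasedness = answers.mean(axis=1) * 100

r = np.corrcoef(exposure, unbiasedness)[0, 1]
print(round(r, 2))  # positive by construction
```

The survey analysis reports r = 0.28 for the real composite; the point of the sketch is only the mechanics of building the score and correlating it.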
REFERENCES

Ariely, Loewenstein, & Prelec (2003), "Coherent Arbitrariness: Stable demand curves without stable preferences"
Chapman & Malik (1995), "The attraction effect in prescribing decisions and consumer choice"
Jacowitz & Kahneman (1995), "Measures of Anchoring in Estimation Tasks"
Kirby (2009), "One-year temporal stability of delay-discount rates"
Toplak & Stanovich (2002), "The Domain Specificity and Generality of Disjunctive Reasoning: Searching for a Generalizable Critical Thinking Skill"
Tversky & Kahneman (1974), "Judgment under Uncertainty: Heuristics and Biases"

Comment author: [deleted] 09 December 2012 01:02:16PM * 2 points [-]

There was a strong effect of LW exposure: 57% of those in the top third chose Drug A vs. only 44% of those in the bottom third.

I think this might just be due to the fact that the meme that “time is money” has been repeatedly expounded on LW, rather than long-time LWers being less prone to the decoy effect. All the rot13ed discussions about that question immediately identified Drug C as a decoy and focused on whether a low-income person should be willing to pay $12.50 to be spared a three-hour headache, with a sizeable minority arguing that they shouldn't. I'd look at the income and country of people who chose each drug -- I guess the main effect is what each respondent took “low income” to mean.

Comment author: 09 December 2012 11:19:35AM 2 points [-]

continued below, due to restrictions on comment length

A hint that this analysis is worth a top-level post, perhaps?

Comment author: 09 December 2012 12:20:08PM 2 points [-]

I think you're right; I've posted it to the discussion section (I guess I'll leave it here too).

Comment author: 08 December 2012 12:46:12AM 4 points [-]

I had no knowledge of such a survey. These might be more efficient if they were posted in a blatantly obvious manner, like on the banner.

Comment author: 29 November 2012 08:00:18PM 15 points [-]

"Eliezer Yudkowsky personality cult."
"The new thing for people who would have been Randian Objectivists 30 years ago."
"A sinister instrument of billionaire Peter Thiel."

Nope, no one guessed whose sinister instrument this site is. Muaha.

Comment author: 29 November 2012 10:14:33PM 3 points [-]

Fishing for correlations is a statistically dubious practice, but also fun. Some interesting ones (none were very high, except e.g. Father Age and Mother Age):

• IQ and Hours Writing have correlation 0.26 (75 degrees), which is the only interesting IQ correlation.
• Siblings and Older siblings have correlation 0.48 (61 degrees), which isn't too surprising, but makes me wonder: do we expect this correlation to be 0.5 in general?
• Most of the Big Five answers are slightly correlated (around +/-0.25, or 90+/-15 degrees) with each other, but not with anything else except the Autism Score. Shouldn't well-designed personality traits be orthogonal, ideally?
• CFAR question 7 (guess of height of redwood) was negatively correlated with Height (-0.23, or 103 degrees). No notable correlation with the random number, though.
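The parenthetical angles appear to be the correlations re-expressed as angles between the variables viewed as centered vectors, via the arccosine. A quick Python check, assuming that convention:

```python
import math

def corr_to_angle(r):
    """Convert a Pearson correlation to the angle (in degrees)
    between the two variables viewed as centered vectors."""
    return math.degrees(math.acos(r))

print(round(corr_to_angle(0.26)))   # IQ vs. Hours Writing -> 75
print(round(corr_to_angle(0.48)))   # Siblings vs. Older siblings -> 61
print(round(corr_to_angle(-0.23)))  # redwood guess vs. Height -> 103
```

All three reproduce the angles quoted in the comment, so the convention seems right: r = 1 is 0 degrees (parallel), r = 0 is 90 degrees (orthogonal), r = -1 is 180 degrees.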
Comment author: 09 December 2012 10:26:06PM 7 points [-]

CFAR question 7 (guess of height of redwood) was negatively correlated with Height (-0.23, or 103 degrees). No notable correlation with the random number, though.

I looked at this with the data set that I used for my CFAR analyses, and this correlation did not show up; r=-.02 (p=.58). On closer inspection, the correlation is present in the complete un-cleaned-up data set (r=-.21), but it is driven entirely by a single outlier who listed their own height as 13 cm and the height of the tallest redwood as 10,000 ft.

(In my analyses of the anchoring question I had excluded the data of 2 outliers who listed redwood heights of 10,000 ft. and 1 ft. Before running this correlation with Height, I checked the distribution and excluded everyone who listed a height under 100 cm, since those probably represent units confusions.)
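The effect described here — a single extreme point manufacturing a sizable correlation in otherwise uncorrelated data — is easy to reproduce on synthetic data (this is simulated, not the survey data; the specific distributions are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent variables: no true correlation.
heights = rng.normal(175, 10, 500)   # cm
guesses = rng.normal(800, 200, 500)  # ft

r_clean = np.corrcoef(heights, guesses)[0, 1]

# Add one bad row like the one in the survey data:
# height listed as 13 cm, redwood guess of 10,000 ft.
heights_dirty = np.append(heights, 13)
guesses_dirty = np.append(guesses, 10000)
r_dirty = np.corrcoef(heights_dirty, guesses_dirty)[0, 1]

print(round(r_clean, 2))  # near zero
print(round(r_dirty, 2))  # strongly negative, driven by one point
```

This is why checking the distribution and excluding implausible values (as described in the parent comment) matters before interpreting a correlation.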

Comment author: 29 November 2012 10:18:54PM 5 points [-]

Most of the Big Five answers are slightly correlated (around +/-0.25, or 90+/-15 degrees) with each other, but not with anything else except the Autism Score. Shouldn't well-designed personality traits be orthogonal, ideally?

It might just pick out the cluster of "Less Wrong personality type".

CFAR question 7 (guess of height of redwood) was negatively correlated with Height

Obviously it's a matter of perspective. Tall people just tower over those redwoods.

Comment author: 29 November 2012 10:26:43PM 3 points [-]

It might just pick out the cluster of "Less Wrong personality type".

In that case, it says something about the cluster as well. For example, Openness and Extraversion wouldn't be positively correlated just because most LWers are both open and extraverted (or because most LWers are closed and introverted). We'd have to have something that specifically makes "open and extraverted" more likely to happen together than individually.

Comment author: [deleted] 30 November 2012 12:26:04AM 3 points [-]

Something like Berkson's paradox (people who are neither open nor introverted are unlikely to read LW)?
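Berkson's paradox is easy to demonstrate by simulation: two traits that are independent in the general population become correlated once you condition on a selection rule that depends on both. This is a toy model with an assumed threshold rule, not an analysis of the survey data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Independent standardized traits in the general population.
openness = rng.normal(size=100_000)
extraversion = rng.normal(size=100_000)

r_population = np.corrcoef(openness, extraversion)[0, 1]

# Hypothetical selection rule: people end up in the sample only if
# the two traits together exceed some threshold.
selected = openness + extraversion > 1.0
r_selected = np.corrcoef(openness[selected], extraversion[selected])[0, 1]

print(round(r_population, 2))  # ~0: independent in the population
print(round(r_selected, 2))    # clearly negative among the selected
```

The sign and size of the induced correlation depend on the selection rule, but the qualitative point stands: correlations within a self-selected sample like LW survey-takers need not reflect the population.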

Comment author: 30 November 2012 01:21:41AM 2 points [-]

Good point. Objection retracted (in the conversational sense).

Comment author: 29 November 2012 10:09:12PM *  3 points [-]

I think you missed some duplicates in for_public.csv: Rows 26, 30, 761 and 847 are each identical to the preceding row.

Comment author: 29 November 2012 07:33:39PM 3 points [-]

How well calibrated were the prediction book users?

Comment author: 29 November 2012 08:00:40PM 2 points [-]

Unfortunately we lacked a question to track prediction book users.

Comment author: 29 November 2012 02:41:35PM 6 points [-]

This survey looks like it was a massive amount of work to analyse. Three cheers for Yvain!

Comment author: 29 November 2012 09:08:06PM 5 points [-]

I was surprised to see that LW has almost as many socialists as libertarians. I had thought due to anecdotal evidence that the site was libertarian-dominated.

I was also surprised that a plurality of people preferred dust specks to torture, given that it appears to be just a classic problem of scope insensitivity, which this site talks about repeatedly.

I was happy to see that we have more vegetarians and fewer smokers than the general population.

Comment author: 30 November 2012 02:51:21PM 16 points [-]

Generally, half the time we get visiting leftwingers accusing us of being rightwing reactionaries, and the other half of the time we get visiting rightwingers accusing us of being leftwing sheep.

So if you thought that the site was libertarian-dominated, I'm hereby making a prediction with 75% certainty that you consider yourself a left winger. Am I right?

Comment author: 30 November 2012 09:01:36PM *  15 points [-]

There are a number of old posts from the Overcoming Bias days in which EY comments that the audience is primarily libertarian- which makes sense for the blog of a GMU economist. A partial explanation might be people reading that and assuming he's talking about the modern population distribution of LW.

Comment author: 02 December 2012 09:37:36PM *  7 points [-]

Related analysis on the public dataset:

1045 responders supplied a political orientation; they're 30% Libertarian, 3.1% Conservative, 37% Liberal, 29% Socialist, and 0.5% Communist.

226 responders supplied a political orientation and have been around since OB; they're 42% Libertarian, 3.5% Conservative, 31% Liberal, 23.5% Socialist, and 0% Communist.

242 responders supplied a political orientation and were referred from HPMoR; they're 30% Libertarian, 2.5% Conservative, 37% Liberal, 30% Socialist, and 0.4% Communist.

Note that analysis of current LW users who have been here since OB is not the same as OB users several years ago, but they are still significantly more libertarian than the current mix.

Comment author: 02 December 2012 10:51:52PM 4 points [-]

Also interesting that the HPMoR distribution almost exactly equals the current mix.

Comment author: 03 December 2012 01:00:19AM *  6 points [-]

Oh yes, that reminds me - I've always wondered if MoR was a waste of time or not in terms of community-building. So let's divide the dataset into people who were referred to LW by MoR and people who weren't...

Summary: they are younger, lower karma, lower karma per month participating (karma log-transformed or not), more likely to be students; but they have the same IQ (self-report & test) as the rest.

So, Eliezer is successfully corrupting the youth, but it's not clear they are contributing very much yet.

R> lw <- read.csv("lw-survey/2012.csv")
R> hpmor <- lw[as.character(lw$Referrals) == "Referred by Harry Potter and the Methods of Rationality",]
R> other <- lw[as.character(lw$Referrals) != "Referred by Harry Potter and the Methods of Rationality",]
R> t.test(hpmor$IQ, other$IQ)
Welch Two Sample t-test
data: hpmor$IQ and other$IQ
t = 0.5444, df = 99.28, p-value = 0.5874
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-2.614 4.591
sample estimates:
mean of x mean of y
139.1 138.1
R> t.test(as.integer(as.character(hpmor$IQTest)), as.integer(as.character(other$IQTest)))
Welch Two Sample t-test
data: as.integer(as.character(hpmor$IQTest)) and as.integer(as.character(other$IQTest))
t = -0.0925, df = 264.8, p-value = 0.9264
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-2.802 2.551
sample estimates:
mean of x mean of y
125.6 125.8
R> t.test(as.numeric(as.character(hpmor$Income)), as.numeric(as.character(other$Income)))
Welch Two Sample t-test
data: as.numeric(as.character(hpmor$Income)) and as.numeric(as.character(other$Income))
t = -4.341, df = 314.3, p-value = 1.917e-05
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-29762 -11197
sample estimates:
mean of x mean of y
33948 54427
R> t.test(hpmor$Age, other$Age)
Welch Two Sample t-test
data: hpmor$Age and other$Age
t = -7.033, df = 484.4, p-value = 6.93e-12
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-5.318 -2.995
sample estimates:
mean of x mean of y
24.51 28.67
R> t.test(as.character(hpmor$WorkStatus) == "Student", as.character(other$WorkStatus) == "Student")
Welch Two Sample t-test
data: as.character(hpmor$WorkStatus) == "Student" and as.character(other$WorkStatus) == "Student"
t = 4.154, df = 389.8, p-value = 4.018e-05
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
0.0791 0.2213
sample estimates:
mean of x mean of y
0.5224 0.3723
R> hpmortime <- hpmor$KarmaScore / as.numeric(as.character(hpmor$TimeinCommunity))
R> hpmortime <- hpmortime[!is.na(hpmortime) & !is.nan(hpmortime) & !is.infinite(hpmortime) ]
R> othertime <- other$KarmaScore / as.numeric(as.character(other$TimeinCommunity))
R> othertime <- othertime[!is.na(othertime) & !is.nan(othertime) & !is.infinite(othertime) ]
R> t.test(hpmortime, othertime)
Welch Two Sample t-test
data: hpmortime and othertime
t = 1.05, df = 642.7, p-value = 0.2942
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-4.257 14.036
sample estimates:
mean of x mean of y
17.69 12.80
R> hpmortime <- log1p(hpmor$KarmaScore / as.numeric(as.character(hpmor$TimeinCommunity)))
R> hpmortime <- hpmortime[!is.na(hpmortime) & !is.nan(hpmortime) & !is.infinite(hpmortime) ]
R> othertime <- log1p(other$KarmaScore / as.numeric(as.character(other$TimeinCommunity)))
R> othertime <- othertime[!is.na(othertime) & !is.nan(othertime) & !is.infinite(othertime) ]
R> t.test(hpmortime, othertime)
Welch Two Sample t-test
data: hpmortime and othertime
t = 2.263, df = 396.9, p-value = 0.02416
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
0.03366 0.47878
sample estimates:
mean of x mean of y
1.1978 0.9415

Comment author: 03 December 2012 01:59:08AM 4 points [-]

The interesting question might be whether people whose primary interest is HPMOR are understanding and using ideas about rationality from it.

Comment author: 03 December 2012 02:24:28AM 2 points [-]

Not sure how one would test that, aside from the CFAR questions which I don't know how to use.

Comment author: 09 December 2012 11:49:56AM 3 points [-]

Looking at the four CFAR questions (described here), accuracy rates were:

74% OB folks ("Been here since it was started in the Overcoming Bias days", n=253)
64% MoR folks ("Referred by Harry Potter and the Methods of Rationality", n=253)
66% everyone else

So the original OB folks did better, but Methods influx is as good as the other sources of new readers. Breaking it down by question:

Question 1: disjunctive reasoning
OB: 52%
MoR: 42%
Other: 44%

Question 2: temporal discounting
OB: 94%
MoR: 89%
Other: 91%

Question 3: law of large numbers
OB: 92%
MoR: 85%
Other: 81%

Question 4: decoy effect
OB: 57%
MoR: 41%
Other: 49%

Comment author: 03 December 2012 03:26:35AM 2 points [-]

One possibility would be for Eliezer to ask people about it in his author's notes when he updates HPMOR.

On the second reading, I realize that I'm asking about HPMOR and spreading rationality rather than HPMOR and community building.

Comment author: 03 December 2012 01:37:27AM 4 points [-]

Mean karma doesn't seem like the relevant metric; that reflects something like the contributions of the typical MoR user, which seems less important to me than the contributions of the top MoR users. The top users in a community generally contribute disproportionately, so a more relevant metric might be the proportion of top users who were referred here from MoR.

Comment author: 03 December 2012 01:48:56AM 4 points [-]

The average user matters a lot, I think... But since you insist, here's the top 10% of each category:

R> sort(hpmor$KarmaScore, decreasing=T)[1:25]
[1] 9122 6815 4887 4500 2782 2600 2545 2117 2000 1800 1300 1017 1000 1000 858 771 694 575 560
[20] 443 425 422 350 285 274
R> sort(other$KarmaScore, decreasing=T)[1:83]
[1] 47384 32394 27418 15000 12200 11094 11000 10000 9000 8799 8000 8000 8000 6164 5000 5000
[17] 5000 5000 4658 4000 4000 4000 3960 3800 3693 3600 3500 3500 3500 3353 3300 3000
[33] 3000 3000 3000 3000 3000 3000 3000 2700 2500 2486 2400 2300 2204 2200 2100 2000
[49] 2000 2000 2000 2000 1977 1975 1900 1800 1800 1800 1750 1700 1653 1650 1648 1600
[65] 1590 1540 1520 1500 1500 1500 1500 1500 1500 1400 1253 1250 1200 1200 1115 1095
[81] 1044 1000 1000


The top MoR referral user is somewhere around 10th place in the other group (which is 3.3x larger).

Comment author: 03 December 2012 01:54:37AM *  2 points [-]

The average user that sticks around might matter a lot, but people with low karma are probably less likely to stick around so they'll have less of an impact (positive or negative) on the community. So maybe look at the distribution of karma, but among veteran users resp. veteran MoR users?

Comment author: 03 December 2012 02:53:19AM 2 points [-]

What's 'veteran'? (And how many ways do you want to slice the data anyway...)

Comment author: 03 December 2012 03:49:33PM 2 points [-]

I imagine that when you divide karma by months in the community (while still restricting yourself to the top ten percent of absolute karma) the MoR contributors will look better. I'll do it tonight if you don't.

Comment author: 03 December 2012 06:25:30PM *  2 points [-]

They do a bit better at the top; the sample size at "top 10%" is getting small enough that tests are losing power, though:

R> lw <- read.csv("lw-survey/2012.csv")
R>
R> hpmor <- lw[as.character(lw$Referrals) == "Referred by Harry Potter and the Methods of Rationality",]
R> other <- lw[as.character(lw$Referrals) != "Referred by Harry Potter and the Methods of Rationality",]
R>
R> hpmor <- hpmor[order(hpmor$KarmaScore, decreasing=TRUE),][1:25,]
R> other <- other[order(other$KarmaScore, decreasing=TRUE),][1:83,]
R>
R> hpmortime <- hpmor$KarmaScore / as.numeric(as.character(hpmor$TimeinCommunity))
R> hpmortime <- hpmortime[!is.na(hpmortime) & !is.nan(hpmortime) & !is.infinite(hpmortime) ]
R> othertime <- other$KarmaScore / as.numeric(as.character(other$TimeinCommunity))
R> othertime <- othertime[!is.na(othertime) & !is.nan(othertime) & !is.infinite(othertime) ]
R>
R> sort(hpmortime, decreasing=TRUE)
[1] 506.78 300.00 283.96 203.62 138.46 133.95 117.61 115.92 72.22 66.67 59.09 50.00 36.92
[14] 35.05 35.00 33.90 28.60 26.67 24.82 23.96 20.36 19.28 17.71 11.91
R> sort(othertime, decreasing=TRUE)
[1] 1895.36 647.88 456.97 338.89 263.16 250.00 250.00 235.71 184.90 183.33 173.91 166.67
[13] 165.00 146.93 146.65 145.83 142.86 133.33 133.33 133.33 125.00 125.00 125.00 116.45
[25] 102.73 100.00 97.22 84.38 83.33 83.33 83.33 83.33 75.00 75.00 74.51 72.00
[37] 69.60 68.88 66.67 66.67 63.33 61.11 60.71 60.34 58.33 57.14 55.95 53.43
[49] 52.17 50.00 50.00 50.00 50.00 48.48 46.46 44.12 43.75 41.67 41.43 40.00
[61] 39.66 36.36 35.91 33.33 33.33 31.67 31.32 30.00 30.00 30.00 27.50 27.47
[73] 26.95 25.33 25.00 25.00 24.06 23.33 22.73 18.25 16.67 16.67
R>
R> t.test(hpmortime,othertime)
Welch Two Sample t-test
data: hpmortime and othertime
t = -0.544, df = 72.4, p-value = 0.5881
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-87.52 49.99
sample estimates:
mean of x mean of y
98.44 117.20

Comment author: 01 December 2012 10:11:25AM *  8 points [-]

Generally, half the time we get visiting leftwingers accusing us of being rightwing reactionaries, and the other half of the time we get visiting rightwingers accusing us of being leftwing sheep.

I think the site is clearly left wing slanted if you look at the demographics. Two thirds are liberal, communist or socialist with the remainder being libertarian. Conservative users especially are incredibly under-represented compared to the general population or even the university educated population.

It may however be noticeably less left wing on economic questions than similar high brow sites.

Comment author: 30 November 2012 09:55:16PM 4 points [-]

Generally, half the time we get visiting leftwingers accusing us of being rightwing reactionaries, and the other half of the time we get visiting rightwingers accusing us of being leftwing sheep.

The first one surprises me because hardly anyone on LW seems conservative (and the polls confirm this).

I'm hereby making a prediction with 75% certainty that you consider yourself a left winger. Am I right?

I'm definitely a non-libertarian, so that may be it.

Comment author: 30 November 2012 10:00:47PM *  6 points [-]

The first one surprises me because hardly anyone on LW seems conservative (and the polls confirm this).

Libertarians count as right-wing by most left-wing standards, even far right. And then we've got a small but vocal faction of neoreactionary/Moldbugger types, who don't fit cleanly into any modern political typologies but who tend to look extra-super right-wing++ through leftist eyes.

Comment author: 02 December 2012 05:43:29PM *  9 points [-]

The first one surprises me because hardly anyone on LW seems conservative (and the polls confirm this).

Just in the last two or three months I remember there was one guy that accused us of being the right-wing conspiracy of multibillionaire Peter Thiel (because he has donated money to the Singularity Institute), a few who accused the whole site of transphobia (for not banning a regular user for a bigoted joke he made in a different location, not actually in this forum), one who called us the equivalent of fascist sheep (for having more people here read Mencius Moldbug on average than I suppose is the internet forum median)...

Fanatics view tolerance of the enemy as enemy action. So, yeah, I think leftwing fanatics will view anything even tolerant of either reaction or libertarianism as their enemy -- even as they don't notice that similar tolerance is extended to positions to the left of them.

Comment author: 30 November 2012 10:05:00PM 9 points [-]

The first one surprises me because hardly anyone on LW seems conservative (and the polls confirm this).

However, there are a few fairly common (or at least it seems so to me) opinions on LW which are distinctively un-Left: democracy is bad, there are racial differences in important traits, and women complain way too much about how men treat them. We'll see how that last one plays out.

Comment author: [deleted] 03 December 2012 04:59:22PM *  7 points [-]

there are a few fairly common (or at least it seems so to me)

I think that they appear to be more common than they actually are because their proponents are much louder than everyone else.

democracy is bad, there are racial differences in important traits, and women complain way too much about how men treat them

One of those is a factual question, not a policy question. (Also, there are plenty of left-wingers who wouldn't throw a fit at “it appears that black people and Native Americans have lower average IQ than white people, whereas East Asians and Ashkenazi Jews have higher average IQ; the differences between the group averages are comparable with the standard deviations within each group; it's not yet fully clear how much the differences between the group averages are due to genetics and how much they are due to socio-economic conditions”, at least outside liberal arts departments.)

Comment author: 01 December 2012 10:30:41AM *  5 points [-]

democracy is bad

For the last year or so, I've been thinking that a "real" (read pre-WW2) democracy is not just bad but very much right-wing (see Corey Robin's writings on libertarianism, "democratic feudalism", etc).

Like some other nebulous concepts, e.g. multiculturalism, I see it as grafted onto the "real" corpus of Left ideas - liberty, equality, fraternity, hard-boiled egg, etc - as a consequence of political maneuvering and long-time coalitions, without due reflection. Think of it: today the more popular anti-Left/anti-progressive positions are not monarchism/neo-reaction/etc but right-wing libertarianism and fascist populism, which often invoke democratic slogans.

there are racial differences in important traits

Like Konkvistador already said a few times, eugenics started out as a left-wing/progressive movement, and many old-time progressives - including even American abolitionists - were outright racist.

(Metacontrarianism, hell yeah!)

Comment author: 01 December 2012 10:13:25AM *  4 points [-]

They are also practically non-existent in right wing parties in the West. While being contrarian is a bad sign, getting people from all mainstream political positions to go into sputtering apoplexy with the same input can be a good sign.

Comment author: 01 December 2012 10:52:10AM *  2 points [-]

We'll see how that last one plays out.

Can you please elaborate on what you meant by this? The way you said it made me feel rather uncomfortable.

Comment author: 03 December 2012 05:21:12PM *  3 points [-]

I wasn't intending to make you feel uncomfortable. On the other hand, I don't think dark arts require a lot of intent.

Anyway, I believe that anti-racism/some parts of current feminism are an emotionally abusive attempt to address real issues.

Most of the anti-racists here have not been abusive, but imagine a social environment where this is the dominant tone.

The emotional abuse leads to a lot of resistance and avoidance, but the issues being real has its own pull.

I've seen people (arguably including me) who were very unfond of the emotional abuse still come to believe that at least some of the issues are valid and worthy of being addressed. What's more, I'm reasonably certain that at least some of those people don't realize they've changed their minds.

I don't know where you personally will end up on these issues (it wouldn't surprise me if the discussion of gender prejudice brings in substantial amounts about racism and possibly ablism), but I expect that LW will be taken pretty far towards believing that (many) men mistreat women in ways that ought to be corrected. It wouldn't surprise me if (this being LW) there will also be more clarity about ways that women could and should treat men better.

Lessening Inferential Distance is only the first post in a series. I'm expecting that harder issues will be brought up in later posts.

Comment author: 01 December 2012 11:22:33AM *  3 points [-]

I believe that, with your linked comment getting 32 points, you are making Nancy rather uncomfortable in turn.

I'm fairly certain that we're all suffering from the hostile media effect; e.g. you keep saying how there's creeping censorship of right-wing ideas on LW, while I'm disturbed by such complaints getting karma and support :)

Comment author: 02 December 2012 03:02:13AM *  3 points [-]

you keep saying how there's creeping censorship of right-wing ideas on LW,

Consider the way this post was down-voted, along with some of the discussion, particularly here, as exhibit A.

Comment author: 01 December 2012 05:18:29PM *  2 points [-]

I believe that, with your linked comment getting 32 points, you are making Nancy rather uncomfortable in turn.

The comment you are referencing was written in disappointment over a discussion with hundreds of posts and a Main level article at 50+ karma.

you keep saying how there's creeping censorship of right-wing ideas on LW, while I'm disturbed by such complaints getting karma and support :)

I think this may be true to an extent, but this isn't my perception alone, several LWers have complained about this in the past year or so.

What disturbs you about this specifically?

Comment author: 01 December 2012 07:03:27PM *  6 points [-]

What disturbs you about this specifically?

Like I already said a few times, nearly all the highly upvoted posts and comments that explicitly bring up ideology - like yours - appear to come from the right. Duh, you'll say, if most of the LW stuff is implicitly liberal/progressive, then of course what's going to stand out is (intelligently argued) contrarianism. But the disturbing thing to me is that the mainstream doesn't seem to react to the challenge.

What I have in mind is not some isolated insightful comments e.g. criticizing moldbuggery, defending egalitarianism or feminism or something like that - they do appear - but an acknowledgement of LW's underlying ideological non-neutrality. E.g. this post by Eliezer, or this one by Luke would've hardly been received well without the author and the audience sharing Enlightenment/Universalist values; both the tone and the message rely on an ideological foundation (one that I desire to analyze and add to - not deconstruct).

Yet there's not enough acknowledgement and conscious defense of those values, so when such content is challenged from an alt-right perspective, the attacking side ends up with the last word in the discussion. So to me it feels, subjectively, as if an alien force is ripping whole chunks out of the comfortable "default" memeplex, and no-one on the "inside" is willing or able to counterattack!

Comment author: 02 December 2012 10:44:14AM *  8 points [-]

The thing is, right wing thinkers who end up on LessWrong and stay in the community should be comforting to you: these are the people who believe engaging in dialogue and pursuing common goals is possible. And I would argue they empower all members of the community by contributing to the explicit goal of refining human rationality or FAI design (though they might undermine some other implicit goals).

Compare this to the idea of right wing thinkers who take what they can from rationality and the alt right and then, seeing they are not accepted in the nominally rationalist community, leave for the world. Even as individuals that should concern you, but imagine a right wing community forming, powered by the best tools from here. Somehow it seems its left-wing-only counterpart would be weaker.

Comment author: 29 November 2012 09:18:05PM 7 points [-]

It's not exactly libertarian-dominated. More that there are far more libertarians here than in real life (and more socialists, too, likely as not. It's the "normal" political positions that are underrepresented)

Comment author: 30 November 2012 08:49:58PM 7 points [-]

and more socialists, too, likely as not.

If you break down political orientation by country, you get around 50% socialists among europeans (which may be a bit higher than the population), and around 20% socialists among americans.

Comment author: 30 November 2012 03:26:27PM 11 points [-]

I was also surprised that a plurality of people preferred dust specks to torture, given that it appears to be just a classic problem of scope insensitivity, which this site talks about repeatedly.

I was surprised as well, but I disagree that it is necessarily scope insensitivity - believing utility is continuously additive requires choosing torture. But some people take that as evidence that utility is not additive - more technically, evidence that utility is not the appropriate analysis of morality (aka picking deontology or virtue ethics or somesuch).

More specific analysis here and more generally here.

Comment author: 30 November 2012 04:09:53PM 6 points [-]

In support of this, 435 people chose specks, and 430 chose virtue ethics, deontology, or other.

Comment author: 01 December 2012 02:34:46PM *  3 points [-]

That's only weak evidence about the correlation between non-consequentialism and dust specking. If we had 670 consequentialists, 50 deontologists, 180 virtue ethicists, and 200 others, and 40% of each chose dust specks, we'd get numbers like yours even though there wouldn't be a correlation.

I did a crosstab, which should be more informative:

                | Torture vs. Dust Specks
----------------+------------------+------+---------+--------+------
MoralViews      | don't understand | dust | torture | unsure | <NA>
----------------+------------------+------+---------+--------+------
consequential   |               10 |  228 |     188 |    134 |  109
deontology      |                1 |   24 |       3 |      8 |    6
other/none      |                6 |   68 |      38 |     33 |   47
virtue          |                1 |   75 |      12 |     28 |   36
<NA>            |                0 |    2 |       2 |      0 |    8


I get different totals for the number of speckers (397) and non-consequentialists (386), though. Maybe my copy of the data's messed up? (Gnumeric complains the XLS might be corrupt.)

Anyway, I do see a correlation between specks & moral paradigm. My dust speck percentages:

• 41% for consequentialism (N = 560)
• 67% for deontology (N = 36)
• 47% for other/none (N = 145)
• 65% for virtue ethics (N = 116)

leaving out people who didn't answer. Consequentialists chose dust specks at a lower rate than each other group (which chi-squared tests confirm is statistically significant). But 41% of our consequentialists did still choose dust specks.
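As a sanity check, the percentages above can be recomputed directly from the crosstab. A sketch in Python (the rest of the thread uses R, but the arithmetic is the same; counts transcribed from the table, `<NA>` responses dropped):

```python
# Dust-speck rates by moral view, using the counts from the crosstab above.
# Tuples are: (don't understand, dust, torture, unsure).
crosstab = {
    "consequential": (10, 228, 188, 134),
    "deontology": (1, 24, 3, 8),
    "other/none": (6, 68, 38, 33),
    "virtue": (1, 75, 12, 28),
}

rates = {}
for view, counts in crosstab.items():
    n = sum(counts)                # respondents who answered the question
    rates[view] = counts[1] / n    # fraction choosing dust specks
    print(f"{view}: {100 * rates[view]:.0f}% specks (N = {n})")
```

This reproduces the 41% / 67% / 47% / 65% figures and the group sizes quoted above.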

[Edit: "indentation is preserved", my arse. I am not a Markdown fan.]

Comment author: 30 November 2012 04:12:07PM *  2 points [-]

I̶ ̶t̶h̶i̶n̶k̶ ̶w̶e̶'̶v̶e̶ ̶f̶o̶u̶n̶d̶ ̶o̶u̶r̶ ̶a̶n̶s̶w̶e̶r̶,̶ ̶t̶h̶e̶n̶.̶

ETA: Really nice work from satt to prove I was jumping to conclusions here.

Comment author: 01 December 2012 05:16:50AM 3 points [-]

I was surprised to see that LW has almost as many socialists as libertarians. I had thought due to anecdotal evidence that the site was libertarian-dominated.

I suspect you'd see a higher percentage of libertarians if you restricted to non-lurkers, and even higher if you restricted by karma, or how often they post.

Comment author: 01 December 2012 09:57:18PM *  4 points [-]

## IQ Trend Analysis:

The self-reported IQ results on these surveys have been, to use Yvain's wording, "ridiculed", because they'd mean that the average LessWronger is gifted. Various other questions were added to the survey this time, which give us things to check against, and the results of those other questions have made the IQ figures more believable.

Summary:

LessWrong has lost IQ points on the self-reported scores every year, for a total of 7.18 IQ points in 3.7 years, or about 2 points per year. If LessWrong began with 145.88 IQ points in March 2009, then LessWrong has lost over half of its giftedness (using IQ 132 as the definition, explained below).

The self-reported figures for each year:

IQ on 03/12/2009: 145.88

IQ on 00/00/2010: Unknown*

IQ on 12/05/2011: 140

IQ on 11/29/2012: 138.7

IQ points lost each year:

2.94 IQ point drop for 2010 (Estimated*)

2.94 IQ point drop for 2011 (Estimated*)

1.30 IQ point drop for 2012

Analysis:

Average IQ points lost per year: 1.94

Total IQ points lost: 7.18 in 3.7 years

Total IQ points LessWrong had above the gifted line: 13.88 (145.88 - 132*)

Percent less giftedness on the last survey result: 52% (7.18 / 13.88)

Footnotes:

* Unknown 2010 figures: There was no 2010 survey; the first line of the post proposing the 2011 survey mentions that.

* Estimated IQ point drops for 2010 and 2011: I divided the total IQ drop from 2009 to 2011 by two and distributed it evenly across 2010 and 2011.

* IQ 132 significance: IQ 132 is the top 2% (This may vary a little bit from one IQ test to another) which would qualify one as gifted by every IQ-based definition I know of. It is also (roughly) Mensa's entrance requirement (depending on the test) though Mensa does not dictate the legal or psychologist's definitions of giftedness. They are a club, not a developmental psychology authority.
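For readers who want to double-check the arithmetic in the summary above, a quick sketch (in Python rather than the thread's usual R):

```python
# Self-reported average IQ on the first and latest surveys, per the figures above.
iq_2009, iq_2012 = 145.88, 138.7
gifted_cutoff = 132

total_drop = iq_2009 - iq_2012                 # total IQ points lost
points_above_cutoff = iq_2009 - gifted_cutoff  # headroom above the gifted line
percent_lost = 100 * total_drop / points_above_cutoff

print(round(total_drop, 2))           # 7.18
print(round(points_above_cutoff, 2))  # 13.88
print(round(percent_lost))            # 52
```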

Comment author: 02 December 2012 05:26:14PM *  8 points [-]

As I mentioned previously, and judging from the graphs, the standard deviations of the IQs are obviously mixed up, because they were not determined in the questionnaire, and probably the people who answered are not educated about them either. Including IQs in s.d. 24 with those in s.d. 16 and 15 is bound to inflate the average IQ. The top scores in that graph, or at the very least some of them, are in s.d. 24, which means that they would be a lot lower in s.d. 15. IQ 132 is the cutoff for s.d. 16, while s.d. 15 is the one most adopted in recent scientific literature. For s.d. 24, the cutoff is 148. Mensa, and often people in the press, like to use s.d. 24 to sound more impressive to amateurs.

This probably makes tests like the SAT more reliable as an estimation, because they have the same standard for all who submitted their scores, although in this case the ceiling effect would become apparent, because perfect or nearly-perfect scores wouldn't go upwards of a certain IQ.

Comment author: 02 December 2012 08:23:31PM 2 points [-]

Ooh, you bring up good points. These are a source of noise, for sure.

Now I'm wondering if there are any clever ways to compensate for any of these and remove that noise from the survey...

Comment author: [deleted] 01 December 2012 10:16:37PM 8 points [-]

Error bars, please!

Comment author: 01 December 2012 11:24:31PM *  4 points [-]

The summary data:

1. 2009: n=67; 145.88(14.02)
2. 2011: n=331; 140.10(13.07)
3. 2012: n=346; 138.30(12.58)

The basic formula for a confidence interval of a population mean is: mean ± (z-score for the confidence level × (standard deviation / √n)). So for a 95% confidence level, z = 1.96:

1. $145.88 \pm 1.96 \times \frac{14.02}{\sqrt{67}}$ = the range 142.5-149.2
2. $140.10 \pm 1.96 \times \frac{13.07}{\sqrt{331}}$ = the range 138.7-141.5
3. $138.30 \pm 1.96 \times \frac{12.58}{\sqrt{346}}$ = the range 137-139.6
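The same intervals can be reproduced in a few lines of code — a sketch in Python (the rest of the thread uses R), plugging in the summary data above:

```python
import math

def ci95(mean, sd, n):
    """95% normal-approximation confidence interval for a sample mean."""
    half_width = 1.96 * sd / math.sqrt(n)
    return (mean - half_width, mean + half_width)

# (mean, sd, n) of self-reported IQ for each survey year.
surveys = {2009: (145.88, 14.02, 67),
           2011: (140.10, 13.07, 331),
           2012: (138.30, 12.58, 346)}

for year, (mean, sd, n) in surveys.items():
    lo, hi = ci95(mean, sd, n)
    print(f"{year}: {lo:.1f}-{hi:.1f}")
```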

Or to run the usual t-tests and look at the confidence interval they calculate for the difference; for 2009 & 2012, the 95% CI for the difference in mean IQ is 3.563-10.578:

R> lw2009 <- read.csv("lw-2009.csv")
R> lw2011 <- read.csv("lw-2011.csv")
R> lw2012 <- read.csv("lw-2012.csv")
R> # lwi2009 <- lw2009$IQ[!is.na(lw2009$IQ)]
R> # hand-cleaned:
R> lwi2009 <- c(120,125,128,129,130,130,130,130,130,130,130,130,130,131,132,132,133,134,136,138,138,139,139,140,
140,140,140,140,140,140,140,140,140,141,142,144,145,145,145,148,148,150,150,150,150,152,154,154,
155,155,155,155,156,158,158,160,160,160,160,162,163,164,165,166,170,171,173,180)
R> lwi2011 <- lw2011$IQ[!is.na(lw2011$IQ)]
R> lwi2012 <- lw2012$IQ[!is.na(lw2012$IQ)]
R>
R> t.test(lwi2009, lwi2012)
Welch Two Sample t-test
data: lwi2009 and lwi2012
t = 4.004, df = 91.49, p-value = 0.0001264
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
3.563 10.578
sample estimates:
mean of x mean of y
145.4 138.3
R> t.test(lwi2009, lwi2011)
Welch Two Sample t-test
data: lwi2009 and lwi2011
t = 2.968, df = 94.8, p-value = 0.003791
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
1.752 8.830
sample estimates:
mean of x mean of y
145.4 140.1
R> t.test(lwi2011, lwi2012)
Welch Two Sample t-test
data: lwi2011 and lwi2012
t = 1.804, df = 670.4, p-value = 0.07174
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.1578 3.7174
sample estimates:
mean of x mean of y
140.1 138.3

Comment author: 02 December 2012 12:56:26AM *  2 points [-]

To add a linear model (for those unfamiliar, see my HPMoR examples) which will really just recapitulate the simple averages calculation:

R> lw2009 <- read.csv("lw-2009.csv")
R> lw2011 <- read.csv("lw-2011.csv")
R> lw2012 <- read.csv("lw-2012.csv")
R>
R> # lwi2009 <- lw2009$IQ[!is.na(lw2009$IQ)]
R> # hand-cleaned:
R> lwi2009 <- c(120,125,128,129,130,130,130,130,130,130,130,130,130,131,132,132,133,134,136,138,138,139,139,140,
140,140,140,140,140,140,140,140,141,142,144,145,145,145,148,148,150,150,150,150,152,154,154,
155,155,155,156,158,158,160,160,160,160,162,163,164,165,166,170,171,173,180)
R> lwi2011 <- lw2011$IQ[!is.na(lw2011$IQ)]
R> lwi2012 <- lw2012$IQ[!is.na(lw2012$IQ)]
R>
R> xs <- c(rep(as.Date("2009-03-01"), length(lwi2009)), rep(as.Date("2011-11-01"), length(lwi2011)), rep(as.Date("2012-11-01"), length(lwi2012)))
R> ys <- c(lwi2009, lwi2011, lwi2012)
R> model <- lm(ys ~ xs)
R> summary(model)
Call:
lm(formula = ys ~ xs)
Residuals:
Min 1Q Median 3Q Max
-38.29 -8.29 -0.29 6.73 63.81
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 219.49064 19.42751 11.30 < 2e-16
xs -0.00519 0.00126 -4.11 4.5e-05
Residual standard error: 12.9 on 741 degrees of freedom
Multiple R-squared: 0.0222, Adjusted R-squared: 0.0209
F-statistic: 16.9 on 1 and 741 DF, p-value: 4.48e-05


Comment author: 02 December 2012 10:21:25AM 4 points [-]

Note that Epiphany dates the 2009 survey to around March, while the other two surveys happened around November, so inputting the survey dates just as years lowballs the time gap between the first & second surveys. Your linear trend'll be a bit exaggerated.

Comment author: 02 December 2012 06:55:26PM 4 points [-]

I've fixed it as appropriate.

Your linear trend'll be a bit exaggerated.

Before, the slope per year was -2.24 (minus 2.24 points a year); now the slope comes out as -0.00519, but if I'm understanding my changes right, the unit has switched from per year to per day, and 365.25 × -0.00519 IQ points per day is about -1.896 per year.

2.24 vs. 1.9 is fairly different.
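The per-day to per-year conversion, spelled out (a Python sketch; the slope is the `xs` coefficient from the lm() output above):

```python
DAYS_PER_YEAR = 365.25

slope_per_day = -0.00519   # coefficient on xs from the regression fit
slope_per_year = slope_per_day * DAYS_PER_YEAR
print(round(slope_per_year, 3))  # -1.896
```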

Comment author: 02 December 2012 09:21:46PM 3 points [-]

This comment is relevant; we have a dataset of users who both took the Raven's test and self-reported IQ. The means of the group that did both were rather close to the means of the groups that did each separately, but the correlation between the tests was low, at .2. If you looked just at responders with positive karma, the correlation increased to a more respectable .45; if you looked just at responders without positive karma, the correlation was -.11. This was a small fraction of responders as a whole, and the average IQ is already tremendously inflated by nonresponse. (If we assumed that, on average, people who didn't self-report an IQ were IQ 100, then the LW average would be only 112!)

Comment author: 29 November 2012 09:47:07AM 4 points [-]

It could be that many people self-reported IQ based off of their SAT or ACT scores, which would explain away the correlation. How many people reported both SAT and ACT scores?

Comment author: 29 November 2012 11:37:40PM 3 points [-]

You mean either of the SATs?

R> length(lw[(!is.na(lw$SATscoresoutof2400) | !is.na(lw$SATscoresoutof1600)) & !is.na(lw$ACTscoreoutof36),])
[1] 106

Comment author: 02 December 2012 04:40:42PM *  2 points [-]

If many people used the same formula to convert their SAT score to an IQ score, I expected the line would jump out, but I don't see anything like that on the scatterplot.

IQs are often multiples of 5. I think that is the result of IQ tests that do not aim for precision, whereas conversion charts would end in any digit. 56% of survey IQs are multiples of 5. For those reporting both IQ and SAT, the number is 59%, so it is not depressed (or inflated) by those doing conversions. If we remove the multiples of 5, the correlation drops to .2 and stops being statistically significant. But both scatterplots look pretty similar.

Comment author: 30 November 2012 07:09:06AM *  2 points [-]

Alternate explanations for LW's calibration atrociousness:

Maybe a lot of the untrained people simply looked up the answer to the question. If you did not rule that out with your study methods, then consider seeing whether a suspiciously large number of them entered the exact right year?

Maybe LWers were suffering from something slightly different from the overconfidence bias you're hoping to detect: difficulty admitting that they have no idea when Thomas Bayes was born, because they feel they should really know that.

Comment author: 30 November 2012 10:35:25PM *  6 points [-]

The mean was 1768, the median 1780, and the mode 1800. Only 169 of 1006 people who answered the question got an answer within 20 years of 1701. Moreover, the three people who admitted to looking it up (and therefore didn't give a calibration) all gave incorrect answers: 1750, 1759, and 1850. So it seems like your first explanation can't be right.

After trying a bunch of modifications to the data, it seems like the best explanation is that the poor calibration happened because people didn't think about the error margin carefully enough. If we change the error margin to 80 years instead of 20, then the responses look roughly like the untrained example from the graph in Yvain's analysis.

Another observation is that after we drop the 45 people who gave confidence levels >85% (and in fact, 89% of them were right), the remaining data is absolutely abysmal: the remaining answers are essentially uncorrelated with the confidence levels. This suggests that there were a few pretty knowledgeable people who got the answer right, and that was that. Everyone else just guessed and didn't know how to calibrate; this may correspond to your second explanation.

Comment author: [deleted] 01 December 2012 03:15:49PM 5 points [-]

Another thing I have noticed is that I tend to pigeonhole stuff into centuries. For example, once in a TV quiz there was a question "which of these pairs of people could have met" (i.e. their lives overlapped); I immediately thought "It can't be Picasso and van Gogh: Picasso lived in the 20th century, whereas van Gogh lived in the 19th century." I was wrong: Picasso was born in 1881 and van Gogh died in 1890. If other people also have this bias, it can help explain why so many more people answered 17xx than 16xx, thereby causing the median answer to be much later than the correct answer.

Comment author: 30 November 2012 12:18:49AM *  2 points [-]

Women were on average newer to the community - 21 months vs. 39 for men - but to my surprise a t-test was unable to declare this significant. Maybe I'm doing it wrong?

Well, possibly. The t-distribution is used for "estimating the mean of a normally distributed population" (yay Wikipedia), and you're trying to estimate the mean of a slanted-uniformly-distributed-with-a-spike-at-the-beginning population.

But there is another important consideration, which is that applying more scrutiny to unexpected results gives you systematic error (confirmation bias), and that's bad. To avoid this big problem, any increase in test quality should probably be part of a wholesale reanalysis, i.e. prolly not gonna happen. But there is another route, which is just accepting that your results are imperfect and widening your mental error bars. After all, where does this systematic error come from when you re-analyze unexpected results? It comes from you making mistakes on other things too, but not re-analyzing them! So once you know about the systematic error, you also know about all these other mistakes you have on average made :P

Comment author: 30 November 2012 01:30:52AM *  3 points [-]

Well, possibly. The t-distribution is used for "estimating the mean of a normally distributed population," (yay wikipedia) and you're trying to estimate the mean of a slanted-uniformly-distributed-with-a-spike-at-the-beginning population.

Yeah, it'd have to be some combination of a uniform Poisson (since we don't seem to be growing a lot, per Yvain) and an exponential distribution (constant mortality of users). If we graph histograms, either blunt or fine-grained, it looks like that, but also with weird huge spikes besides the original OB->LW spike:

R> hist(as.numeric(as.character(lw$TimeinCommunity)))
R> hist(as.numeric(as.character(lw$TimeinCommunity)), breaks=50)

But on the plus side, if we look at the genders as a box plot, we discover why the mean is lower for women but there's no significance:

R> lwm <- subset(lw, as.character(Gender)=="M (cisgender)")
R> lwf <- subset(lw, as.character(Gender)=="F (cisgender)")
R> boxplot(as.numeric(lwm$TimeinCommunity), as.numeric(lwf$TimeinCommunity))

There are, after all, many fewer women.

Comment author: 29 November 2012 09:57:56PM 2 points [-]

Any results for the calibration IQ?

Comment author: 30 November 2012 12:15:47AM *  2 points [-]

The original question:

What do you think is the probability that the IQ you gave earlier in the survey is greater than the IQ of over 50% of survey respondents?

Well, the predictions spread the usual range and look OK to me:

R> lwci <- as.numeric(as.character(lw$CalibrationIQ))
R> lwci <- lwci[!is.na(lwci)]
R> # convert tiny decimals to percentages & put a ceiling of 100 (thanks to Mr. 1700...)
R> lwci <- sapply(lwci, function(x) if (x<=1.00) { x*100 } else { if(x>100) { 100 } else { x }})
R> summary(lwci)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
    0.0    20.0    50.0    44.8    70.0   100.0

Comment author: 29 November 2012 08:54:52PM *  3 points [-]

Well-educated atheist American white men in their mid 20s with no children who work with computers.

"The new thing for people who would have been Randian Objectivists 30 years ago."

The demographics are essentially the same except LW is probably more than 2:1 politically left vs. right. Objectivists are probably more than 2:1 in the other direction.

Since when did people like us decide it is OK to be liberal/socialist?

Comment author: 29 November 2012 10:56:54PM 2 points [-]

I think there is a significant correlation between Objectivism/hardcore libertarianism and the described demographics, but that does not mean that all or even most people of that demographic hold that ideology; it just means that this demographic is much more likely to hold it than a random person is.

Also, while it is true that there are more LWers that are atheist than theist, male than female, white than other races, etc, it is at the same time very unlikely that most LWers have all those characteristics. (Being typical in all respects is very atypical). And just having one of those characteristics different might make the correlation with Objectivism/hardcore libertarianism reduce a lot.

Comment author: 01 December 2012 04:41:39AM 4 points [-]

Also, while it is true that there are more LWers that are atheist than theist, male than female, white than other races, etc, it is at the same time very unlikely that most LWers have all those characteristics.

Given that we are 86.2% male cisgender, 84.3% Caucasian (non-Hispanic), and 83.3% atheist (spiritual or not), that means a minimum of 53% of LWers are all three; probably the actual number is over 60%.

In answer to the parent, atheism in America may have started becoming a more liberal pursuit somewhere around 30 years ago when the Republican party started being substantially more religious and dismissive of atheism and science.
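The "minimum of 53%" figure above follows from inclusion-exclusion: for three groups covering fractions a, b, and c of a population, the triple overlap is at least a + b + c − 2. A quick sketch:

```python
# Fractions of survey respondents in each group, from the figures above.
male, caucasian, atheist = 0.862, 0.843, 0.833

# Inclusion-exclusion lower bound on the fraction belonging to all three groups.
lower_bound = male + caucasian + atheist - 2
print(f"{100 * lower_bound:.1f}%")  # 53.8%
```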

Comment author: 01 December 2012 04:43:26PM 4 points [-]

Out of the 1067 people who made their responses public, 694 are all three, which is 65%.

Comment author: 30 November 2012 02:07:17AM *  2 points [-]

Problem:

The line: "This includes all types with greater than 10 people. You can see the full table here." links to a gif that is inaccurate, has no key to explain oddities, and is of such poor graphical quality that parts of it are actually unreadable.

It may be that the invalid personality types like "INNJ" are listed due to typos on the part of the survey participants. If so, then great! But it may also be that the person who constructed this graphic introduced typos (I consider this fairly likely given that the graphical quality is so low that some of it is not readable. For instance, the number of INTPs is so unclear I can't even tell what it says - it looks like 113, but your results in the post claim 143). It isn't obvious why the invalid types are there, so a key or note would be nice.

Also, some of the participants had a good idea: if one of your personality dimension letters changes when taking the test multiple times, you can fill it out with an X. Can we add an instruction for them to do this on the next survey?

Comment author: 30 November 2012 04:21:12AM 5 points [-]

The graphic was automatically generated by a computer program, so there's no chance that typos were introduced. There's no key to explain oddities because I have no way of knowing the explanation any better than you. When in doubt, blame survey takers being trolls.

But I do apologize for the poor graphic quality.

Comment author: 30 November 2012 03:11:17PM *  2 points [-]

Also, some of the participants had a good idea: if one of your personality dimension letters changes when taking the test multiple times, you can fill it out with an X. Can we add an instruction for them to do this on the next survey?

I don't take this test all too often (in fact, didn't take the one in the survey IIRC), but if we can do this, here's my personality type: IXXX. Oh wait.

(Yes, seriously, if I take an online MBTI test several times at evenly spaced time intervals within the same month, the first varies between .6 and .95 towards I, and the others just jump around in a manner I can't predict (yet, anyway, probably could eventually if I did more timewasting internet-test-taking))

I predict similar (perhaps less pronounced?) variation would be present in around 30% of LWers (not too confident in this number), and that we could reduce the variation dramatically by eliminating confused questions and tabooing ambiguous or vague words / phrases, replacing them with multiple questions containing various common meanings, and an even greater (bitwise) reduction by giving more contextual information from which the respondent can infer or judge values and weight variables on "It depends, but I suppose most of the time I would..." -type answers. (much more confident in these last two predictions than the first)

Comment author: 30 November 2012 12:58:55AM 2 points [-]

As for the IQ question, and especially the self-reported IQ, it did not take into account that an IQ score should come with at least a standard deviation. Otherwise it's like asking for a height number without saying whether it is in centimeters, meters, or feet. It's understandable that people who didn't study psychometrics in some depth don't know this, though.

IQ can be a ratio IQ or a deviation IQ. In the first case it is mental age divided by chronological age (times 100), with 100 as normal. This is still used mostly for children, but it's still possible to see such scores. Deviation IQ is more common, and it's supposed to measure one's intelligence according to its rarity in a population.

Sometimes these tests are standardized for certain countries, in which case an IQ score only has relevance in relation to that country's population, but generally the standard is the population of England or the USA, with its average being 100. Other countries have averages ranging from about 67 to 107 (s.d. 15), compared to it. The average IQ score of the world is estimated at about 90, but there are also differences in standard deviation among different populations, some have bigger variation than others, and also between the sexes (men have a slightly higher standard deviation).

Standard deviations used are 15, 16, and 24. For instance, an IQ score one standard deviation above 100 could be 115, 116, or 124. An IQ of 163 in s.d. 15 corresponds to an IQ of 167 in s.d. 16, or an IQ of 200 in s.d. 24, which, in average, correspond to a ratio IQ of 185. When estimating the true world rarity of IQ scores, though, very lengthy and complex estimations would need to be made, otherwise the scores only reflect the rarity in England or in the USA, and not in the world. When it comes to scores higher than two or three standard deviations above the average, most IQ tests are inadequate and insufficiently standardized to measure them and their rarity well.

This information is for your curiosity. The relevant point is that the self-reported IQ scores quite possibly were stated in differing standard deviations.
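The conversions described above amount to rescaling the z-score: rarity is held fixed while the scale changes. A sketch (in Python, though the thread's code is R), using the example figures from this comment:

```python
def convert_iq(iq, sd_from, sd_to, mean=100.0):
    """Convert a deviation IQ between scales with different standard deviations."""
    z = (iq - mean) / sd_from  # rarity (z-score) is preserved
    return mean + z * sd_to

print(convert_iq(163, 15, 16))  # ≈ 167, the s.d. 16 equivalent cited above
print(convert_iq(163, 15, 24))  # ≈ 201, roughly the s.d. 24 figure cited above
print(convert_iq(132, 16, 24))  # ≈ 148, the s.d. 24 "gifted" cutoff
```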

Comment author: 29 November 2012 07:50:13PM *  2 points [-]

Are people who understand quantum mechanics more likely to believe in Many Worlds? We perform a t-test, checking whether one's probability of the MWI being true depends on whether or not one can solve the Schrodinger Equation. People who could solve the equation had on average a 54.3% probability of MWI, compared to 51.3% in those who could not. The p-value is 0.26; there is a 26% probability this occurs by chance. Therefore, we fail to establish that people's probability of MWI varies with understanding of quantum mechanics.

Just wanted to point out a few fallacies in the above:

• "can solve the Schrodinger Equation" means nothing or less without specifying the problem you are solving. The two simplest problems taught in a modern physics course, the free particle and a one-dimensional infinite square well are hardly comparable with, say, calculating the MRI parameters.

• self-reporting "can solve the Schrodinger Equation" does not mean one actually can.

• even then, "can solve the Schrodinger Equation" does not mean "understand quantum mechanics", as it does not require one to understand measurement and decoherence, which is what motivates MWI in the first place.

• there are many versions of MWI, from literal ("the Universe split into two or more every time something happens") to Platonic ("Mathematical Universe").

Basically, I hope that you realize that this is a prime example of "garbage in, garbage out". I suppose it's a good thing that there was no correlation, otherwise one might draw some unwarranted conclusions from this.

Comment author: 30 November 2012 12:07:19AM 7 points [-]

I'm assuming that the question was meant as a simple and polite proxy for "Does your knowledge of quantum mechanics include some actual mathematical content, or is it just taken from popular science books and articles?"

Comment author: 30 November 2012 12:14:18AM 3 points [-]

Probably. The reason he mentioned the Schrodinger equation was likely an attempt to quantify it. I am arguing that the threshold is set too low to be useful.

Comment author: [deleted] 30 November 2012 12:37:57AM 4 points [-]

The question was specifically about the SE for a hydrogen atom. But I agree that having good PDE-fu isn't necessarily a good proxy for anything else.

Comment author: 30 November 2012 04:25:54AM 3 points [-]

The actual survey specified "can solve the Schrodinger equation for a hydrogen atom". Although it is not exactly synonymous with "understands quantum mechanics", you would expect them to be highly correlated.

Comment author: 30 November 2012 04:49:31PM 2 points [-]

The actual survey specified "can solve the Schrodinger equation for a hydrogen atom".

Right, sorry, I forgot that qualifier since the time I took the survey. It does imply more familiarity with the underlying math than the simplest possible cases. Still, I recall that when I was at that level, I was untroubled by the foundational issues, just being happy to have mastered the math.

Although it is not exactly synonymous with "understands quantum mechanics", you would expect them to be highly correlated

I wonder if there is a way to test this assertion. One would presumably start by defining what "understands quantum mechanics" means.

Comment author: 01 December 2012 04:50:27PM *  3 points [-]

When I was learning to solve the hydrogen atom, they didn't even talk about the foundational issues, just waved them off with some wave-particle duality nonsense. But still, it seems like as good a criterion as you're going to get, unless you want to ask if people have a Master's in Physics (Quantum).

Comment author: 01 December 2012 07:18:13PM 2 points [-]

I suppose that a better question would be related to the EPR paradox, but I'm not sure what academic course would cover it.

Comment author: 29 November 2012 11:55:06PM 6 points [-]

If the correlation had come out the other way, you'd be jumping on it as proof of your thesis that LWers favor MWI because they are sheepishly following Eliezer. In what universe where they are indeed sheepishly and ignorantly following him does a question like that show nothing whatsoever?

Comment author: 30 November 2012 12:05:44AM 2 points [-]

If the correlation had come out the other way, you'd be jumping on it as proof of your thesis that LWers favor MWI because they are sheepishly following Eliezer.

Probably (though not a proof, just one piece of evidence). I suspect that "garbage in" is the reason why we don't see it, but I do not have a convincing argument either way, short of asking Eliezer to post an insincere message "I no longer believe in MWI", take the survey soon after, then have him retract the retraction. This would, however, be rather damaging to his credibility in general.

Comment author: 01 December 2012 05:10:18AM *  2 points [-]

I suspect asking about density matrices might be a better test.