
2014 Survey Results

Post author: Yvain, 05 January 2015 07:36PM (87 points)

Thanks to everyone who took the 2014 Less Wrong Census/Survey. Extra thanks to Ozy, who did a lot of the number crunching work.

This year's results are below. Some of them may make more sense in the context of the original survey questions, which can be seen here. Please do not try to take the survey, as it is over and your results will not be counted.

I. Population

There were 1503 respondents over 27 days. The last survey got 1636 people over 40 days. The last three full days of the survey saw nineteen, six, and four responses, for an average of about ten per day. If we assume the next thirteen days would also have averaged about ten responses - which is generous, since responses tend to trail off with time - then we would have gotten about as many people as the last survey. There is no good evidence here of a decline in population, although the numbers are perhaps compatible with a very small decline.
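The extrapolation above is quick arithmetic; a sketch, using only the per-day figures quoted in the paragraph itself:

```python
observed = 1503                      # respondents this year, over 27 days
last_survey = 1636                   # respondents last year, over 40 days
tail_rate = (19 + 6 + 4) / 3         # ~9.7 responses/day over the last full days

# Generously assume the tail rate would have held for 13 more days
projected = observed + round(13 * tail_rate)
# projected lands within a few responses of last_survey, i.e. no real decline
```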

II. Demographics

Sex
Female: 179, 11.9%
Male: 1311, 87.2%

Gender
F (cisgender): 150, 10.0%
F (transgender MtF): 24, 1.6%
M (cisgender): 1245, 82.8%
M (transgender FtM): 5, 0.3%
Other: 64, 4.3%

Sexual Orientation
Asexual: 59, 3.9%
Bisexual: 216, 14.4%
Heterosexual: 1133, 75.4%
Homosexual: 47, 3.1%
Other: 35, 2.3%

[This question was poorly worded and should have acknowledged that people can both be asexual and have a specific orientation; as a result it probably vastly undercounted our asexual readers]

Relationship Style
Prefer monogamous: 778, 51.8%
Prefer polyamorous: 227, 15.1%
Uncertain/no preference: 464, 30.9%
Other: 23, 1.5%

Number of Partners
0: 738, 49.1%
1: 674, 44.8%
2: 51, 3.4%
3: 17, 1.1%
4: 7, 0.5%
5: 1, 0.1%
Lots and lots: 3, 0.2%

Relationship Goals
Currently not looking for new partners: 648, 43.1%
Open to new partners: 467, 31.1%
Seeking more partners: 370, 24.6%

[22.2% of people who don’t have a partner aren’t looking for one.]

Relationship Status
Married: 274, 18.2%
Relationship: 424, 28.2%
Single: 788, 52.4%

[6.9% of single people have at least one partner; 1.8% have more than one.]

Living With
Alone: 345, 23.0%
With parents and/or guardians: 303, 20.2%
With partner and/or children: 411, 27.3%
With roommates: 428, 28.5%

Number of Children
0: 1317, 81.6%
1: 66, 4.4%
2: 78, 5.2%
3: 17, 1.1%
4: 6, 0.4%
5: 3, 0.2%
6: 1, 0.1%
Lots and lots: 1, 0.1%

Want More Children?
Yes: 549, 36.1%
Uncertain: 426, 28.3%
No: 516, 34.3%

[418 of the people who don’t have children don’t want any, suggesting that the LW community is 27.8% childfree.]

Country
United States: 822, 54.7%
United Kingdom: 116, 7.7%
Canada: 88, 5.9%
Australia: 83, 5.5%
Germany: 62, 4.1%
Russia: 26, 1.7%
Finland: 20, 1.3%
New Zealand: 20, 1.3%
India: 17, 1.1%
Brazil: 15, 1.0%
France: 15, 1.0%
Israel: 15, 1.0%

Lesswrongers Per Capita
New Zealand: 1/223,550
Finland: 1/271,950
Australia: 1/278,674
United States: 1/358,390
Canada: 1/399,545
Israel: 1/537,266
United Kingdom: 1/552,586
Germany: 1/1,290,323
France: 1/4,402,000
Russia: 1/5,519,231
Brazil: 1/13,360,000
India: 1/73,647,058
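The per-capita figures are just national population divided by respondent count. A sketch with two of the rows above; the populations are assumptions (approximate 2014 figures implied by the table):

```python
# Populations are assumptions: approximate 2014 figures implied by the table
populations = {"Finland": 5_439_000, "New Zealand": 4_471_000}
respondents = {"Finland": 20, "New Zealand": 20}

per_capita = {country: populations[country] // respondents[country]
              for country in respondents}
# e.g. per_capita["Finland"] is 271_950: one Less Wronger per 271,950 Finns
```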

Race
Asian (East Asian): 59, 3.9%
Asian (Indian subcontinent): 33, 2.2%
Black: 12, 0.8%
Hispanic: 32, 2.1%
Middle Eastern: 9, 0.6%
Other: 50, 3.3%
White (non-Hispanic): 1294, 86.1%

Work Status
Academic (teaching): 86, 5.7%
For-profit work: 492, 32.7%
Government work: 59, 3.9%
Homemaker: 8, 0.5%
Independently wealthy: 9, 0.6%
Nonprofit work: 58, 3.9%
Self-employed: 122, 8.1%
Student: 553, 36.8%
Unemployed: 103, 6.9%

Profession
Art: 22, 1.5%
Biology: 29, 1.9%
Business: 35, 2.3%
Computers (AI): 42, 2.8%
Computers (other academic): 106, 7.1%
Computers (practical): 477, 31.7%
Engineering: 104, 6.9%
Finance/Economics: 71, 4.7%
Law: 38, 2.5%
Mathematics: 121, 8.1%
Medicine: 32, 2.1%
Neuroscience: 18, 1.2%
Philosophy: 36, 2.4%
Physics: 65, 4.3%
Psychology: 31, 2.1%
Other: 157, 10.4%
Other “hard science”: 25, 1.7%
Other “social science”: 34, 2.3%

Degree
None: 74, 4.9%
High school: 347, 23.1%
2 year degree: 64, 4.3%
Bachelors: 555, 36.9%
Masters: 278, 18.5%
JD/MD/other professional degree: 44, 2.9%
PhD: 105, 7.0%
Other: 24, 1.4%

III. Mental Illness

535 answered “no” to all the mental illness questions, giving an upper bound: 64.4% of the LW population is mentally ill.
393 answered “yes” to at least one mental illness question, giving a lower bound: 26.1% of the LW population is mentally ill. Gosh, we have a lot of self-diagnosers.
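The two bounds come from straightforward counting; a sketch of the arithmetic:

```python
total = 1503     # all respondents
all_no = 535     # answered "no" to every mental illness question
any_yes = 393    # answered "yes" (formal or self-diagnosis) to at least one

upper_bound = (total - all_no) / total   # everyone who didn't deny everything
lower_bound = any_yes / total            # only affirmative answers
# upper_bound ~ 0.644 and lower_bound ~ 0.261, the 64.4% / 26.1% figures
```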

Depression
Yes, I was formally diagnosed: 273, 18.2%
Yes, I self-diagnosed: 383, 25.5%
No: 759, 50.5%

Obsessive-compulsive disorder
Yes, I was formally diagnosed: 30, 2.0%
Yes, I self-diagnosed: 76, 5.1%
No: 1306, 86.9%

Autism spectrum

Yes, I was formally diagnosed: 98, 6.5%
Yes, I self-diagnosed: 168, 11.2%
No: 1143, 76.0%


Bipolar disorder
Yes, I was formally diagnosed: 33, 2.2%
Yes, I self-diagnosed: 49, 3.3%
No: 1327, 88.3%

Anxiety disorder
Yes, I was formally diagnosed: 139, 9.2%
Yes, I self-diagnosed: 237, 15.8%
No: 1033, 68.7%

Borderline personality disorder
Yes, I was formally diagnosed: 5, 0.3%
Yes, I self-diagnosed: 19, 1.3%
No: 1389, 92.4%


Schizophrenia
Yes, I was formally diagnosed: 7, 0.5%
Yes, I self-diagnosed: 7, 0.5%
No: 1397, 92.9%

IV. Politics, Religion, Ethics

Politics
Communist: 9, 0.6%
Conservative: 67, 4.5%
Liberal: 416, 27.7%
Libertarian: 379, 25.2%
Social Democratic: 585, 38.9%

[The big change this year was that we changed "Socialist" to "Social Democratic". Even though the description stayed the same, about eight points worth of Liberals switched to Social Democrats, apparently more willing to accept that label than "Socialist". The overall supergroups Libertarian vs. (Liberal, Social Democratic) vs. Conservative remain mostly unchanged.]

Politics (longform)
Anarchist: 40, 2.7%
Communist: 9, 0.6%
Conservative: 23, 1.9%
Futarchist: 41, 2.7%
Left-Libertarian: 192, 12.8%
Libertarian: 164, 10.9%
Moderate: 56, 3.7%
Neoreactionary: 29, 1.9%
Social Democrat: 162, 10.8%
Socialist: 89, 5.9%

[Amusing politics answers include anti-incumbentist, having-well-founded-opinions-is-hard-but-I’ve-come-to-recognize-the-pragmatism-of-socialism-I-don’t-know-ask-me-again-next-year, pirate, progressive social democratic environmental liberal isolationist freedom-fries loving pinko commie piece of shit, republic-ist aka read the federalist papers, romantic reconstructionist, social liberal fiscal agnostic, technoutopian anarchosocialist (with moderate snark), whatever it is that Scott is, and WHY ISN’T THERE AN OPTION FOR NONE SO I CAN SIGNAL MY OBVIOUS OBJECTIVITY WITH MINIMAL EFFORT. Ozy would like to point out to the authors of manifestos that no one will actually read their manifestos except zir, and they might want to consider posting them to their own blogs.]

American Parties
Democratic Party: 221, 14.7%
Republican Party: 55, 3.7%
Libertarian Party: 26, 1.7%
Other party: 16, 1.1%
No party: 415, 27.6%
Non-Americans who really like clicking buttons: 415, 27.6%


Voting
Yes: 881, 58.6%
No: 444, 29.5%
My country doesn’t hold elections: 5, 0.3%


Religious Views
Atheist and not spiritual: 1054, 70.1%
Atheist and spiritual: 150, 10.0%
Agnostic: 156, 10.4%
Lukewarm theist: 44, 2.9%
Deist/pantheist/etc.: 22, 1.5%
Committed theist: 60, 4.0%

Religious Denomination
Christian (Protestant): 53, 3.5%
Mixed/Other: 32, 2.1%
Jewish: 31, 2.0%
Buddhist: 30, 2.0%
Christian (Catholic): 24, 1.6%
Unitarian Universalist or similar: 23, 1.5%

[Amusing denominations include anti-Molochist, CelestAI, cosmic engineers, Laziness, Thelema, Resimulation Theology, and Pythagorean. The Cultus Deorum Romanorum practitioner still needs to contact Ozy so they can be friends.]

Family Religion
Atheist and not spiritual: 213, 14.2%
Atheist and spiritual: 74, 4.9%
Agnostic: 154, 10.2%
Lukewarm theist: 541, 36.0%
Deist/Pantheist/etc.: 28, 1.9%
Committed theist: 388, 25.8%

Religious Background
Christian (Protestant): 580, 38.6%
Christian (Catholic): 378, 25.1%
Jewish: 141, 9.4%
Christian (other non-protestant): 88, 5.9%
Mixed/Other: 68, 4.5%
Unitarian Universalism or similar: 29, 1.9%
Christian (Mormon): 28, 1.9%
Hindu: 23, 1.5%

Moral Views
Accept/lean towards consequentialism: 901, 60.0%
Accept/lean towards deontology: 50, 3.3%
Accept/lean towards natural law: 48, 3.2%
Accept/lean towards virtue ethics: 150, 10.0%
Accept/lean towards contractualism: 79, 5.3%
Other/no answer: 239, 15.9%

Metaethics
Constructivism: 474, 31.5%
Error theory: 60, 4.0%
Non-cognitivism: 129, 8.6%
Subjectivism: 324, 21.6%
Substantive realism: 209, 13.9%

V. Community Participation

Less Wrong Use
Lurker: 528, 35.1%
I’ve registered an account: 221, 14.7%
I’ve posted a comment: 419, 27.9%
I’ve posted in Discussion: 207, 13.8%
I’ve posted in Main: 102, 6.8%

Sequences
Never knew they existed until this moment: 106, 7.1%
Knew they existed, but never looked at them: 42, 2.8%
Some, but less than 25%: 270, 18.0%
About 25%: 181, 12.0%
About 50%: 209, 13.9%
About 75%: 242, 16.1%
All or almost all: 427, 28.4%

Meetups
Yes, regularly: 154, 10.2%
Yes, once or a few times: 325, 21.6%
No: 989, 65.8%


Physical Interaction With the Community
Yes, all the time: 112, 7.5%
Yes, sometimes: 191, 12.7%
No: 1163, 77.4%

Partner Met Through Community
Yes: 82, 5.5%
I didn’t meet them through the community but they’re part of the community now: 79, 5.3%
No: 1310, 87.2%

CFAR Events
Yes, in 2014: 45, 3.0%
Yes, in 2013: 60, 4.0%
Both: 42, 2.8%
No: 1321, 87.9%

CFAR Workshop
Yes: 109, 7.3%
No: 1311, 87.2%

[A couple percent more people answered 'yes' to each of meetups, physical interactions, CFAR attendance, and romance this time around, suggesting the community is very very gradually becoming more IRL. In particular, the number of people meeting romantic partners through the community increased by almost 50% over last year.]

Read HPMOR
Yes: 897, 59.7%
Started but not finished: 224, 14.9%
No: 254, 16.9%

Referrals
Referred by a link: 464, 30.9%
HPMOR: 385, 25.6%
Been here since the Overcoming Bias days: 210, 14.0%
Referred by a friend: 199, 13.2%
Referred by a search engine: 114, 7.6%
Referred by other fiction: 17, 1.1%

[Amusing responses include “a rationalist that I follow on Tumblr”, “I’m a student of tribal cultishness”, and “It is difficult to recall details from the Before Time. Things were brighter, simpler, as in childhood or a dream. There has been much growth, change since then. But also loss. I can't remember where I found the link, is what I'm saying.”]

Blog Referrals
Slate Star Codex: 40, 2.6%
Reddit: 25, 1.6%
Common Sense Atheism: 21, 1.3%
Hacker News: 20, 1.3%
Gwern: 13, 0.9%

VI. Other Categorical Data

Cryonics Status
Don’t understand/never thought about it: 62, 4.1%
Don’t want to: 361, 24.0%
Considering it: 551, 36.7%
Haven’t gotten around to it: 272, 18.1%
Unavailable in my area: 126, 8.4%
Yes: 64, 4.3%

Type of Global Catastrophic Risk
Asteroid strike: 64, 4.3%
Economic/political collapse: 151, 10.0%
Environmental collapse: 218, 14.5%
Nanotech/grey goo: 47, 3.1%
Nuclear war: 239, 15.8%
Pandemic (bioengineered): 310, 20.6%
Pandemic (natural): 113, 7.5%
Unfriendly AI: 244, 16.2%

[Amusing answers include ennui/eaten by Internet, Friendly AI, “Greens so weaken the rich countries that barbarians conquer us”, and Tumblr.]

Effective Altruism (do you self-identify)
Yes: 422, 28.1%
No: 758, 50.4%

[Despite some impressive outreach by the EA community, numbers are largely the same as last year]

Effective Altruism (do you participate in community)
Yes: 191, 12.7%
No: 987, 65.7%

Diet
Vegan: 31, 2.1%
Vegetarian: 114, 7.6%
Other meat restriction: 252, 16.8%
Omnivore: 848, 56.4%

Paleo Diet

Yes: 33, 2.2%
Sometimes: 209, 13.9%
No: 1111, 73.9%

Food Substitutes
Most of my calories: 8, 0.5%
Sometimes: 101, 6.7%
Tried: 196, 13.0%
No: 1052, 70.0%

Gender Default
I only identify with my birth gender by default: 681, 45.3%
I strongly identify with my birth gender: 586, 39.0%

<5: 198, 13.2%
5 - 10: 384, 25.5%
10 - 20: 328, 21.8%
20 - 50: 264, 17.6%
50 - 100: 105, 7.0%
> 100: 49, 3.3%

Birth Month
Jan: 109, 7.3%
Feb: 90, 6.0%
Mar: 123, 8.2%
Apr: 126, 8.4%
Jun: 107, 7.1%
Jul: 109, 7.3%
Aug: 120, 8.0%
Sep: 94, 6.3%
Oct: 111, 7.4%
Nov: 102, 6.8%
Dec: 106, 7.1%

[Despite my hope of something turning up here, these results don't deviate from chance]

Handedness
Right: 1170, 77.8%
Left: 143, 9.5%
Ambidextrous: 37, 2.5%
Unsure: 12, 0.8%

Previous Surveys
Yes: 757, 50.7%
No: 598, 39.8%

Favorite Less Wrong Posts (all > 5 listed)
An Alien God: 11
Joy In The Merely Real: 7
Dissolving Questions About Disease: 7
Politics Is The Mind Killer: 6
That Alien Message: 6
A Fable Of Science And Politics: 6
Belief In Belief: 5
Generalizing From One Example: 5
Schelling Fences On Slippery Slopes: 5
Tsuyoku Naritai: 5

VII. Numeric Data

[Numbers are given as mean, standard deviation, (25th, 50th, and 75th percentiles), and [number of respondents]]

Age: 27.67 ± 8.679 (22, 26, 31) [1490]
IQ: 138.25 ± 15.936 (130.25, 139, 146) [472]
SAT out of 1600: 1470.74 ± 113.114 (1410, 1490, 1560) [395]
SAT out of 2400: 2210.75 ± 188.94 (2140, 2250, 2320) [310]
ACT out of 36: 32.56 ± 2.483 (31, 33, 35) [244]
Time in Community: 2010.97 ± 2.174 (2010, 2011, 2013) [1317]
Time on LW: 15.73 ± 95.75 (2, 5, 15) [1366]
Karma Score: 555.73 ± 2181.791 (0, 0, 155) [1335]

P Many Worlds: 47.64 ± 30.132 (20, 50, 75) [1261]
P Aliens: 71.52 ± 34.364 (50, 90, 99) [1393]
P Aliens (Galaxy): 41.2 ± 38.405 (2, 30, 80) [1379]
P Supernatural: 6.68 ± 20.271 (0, 0, 1) [1386]
P God: 8.26 ± 21.088 (0, 0.01, 3) [1376]
P Religion: 4.99 ± 18.068 (0, 0, 0.5) [1384]
P Cryonics: 22.34 ± 27.274 (2, 10, 30) [1399]
P Anti-Agathics: 24.63 ± 29.569 (1, 10, 40) [1390]
P Simulation: 24.31 ± 28.2 (1, 10, 50) [1320]
P Warming: 81.73 ± 24.224 (80, 90, 98) [1394]
P Global Catastrophic Risk: 72.14 ± 25.620 (55, 80, 90) [1394]
Singularity: 2143.44 ± 356.643 (2060, 2090, 2150) [1177]

[The mean for this question is almost entirely dependent on which stupid responses we choose to delete as outliers; the median practically never changes]

Abortion: 4.38 ± 1.032 (4, 5, 5) [1341]
Immigration: 4 ± 1.078 (3, 4, 5) [1310]
Taxes: 3.14 ± 1.212 (2, 3, 4) [1410] (from 1 - should be lower to 5 - should be higher)
Minimum Wage: 3.21 ± 1.359 (2, 3, 4) [1298] (from 1 - should be lower to 5 - should be higher)
Feminism: 3.67 ± 1.221 (3, 4, 5) [1332]
Social Justice: 3.15 ± 1.385 (2, 3, 4) [1309]
Human Biodiversity: 2.93 ± 1.201 (2, 3, 4) [1321]
Basic Income: 3.94 ± 1.087 (3, 4, 5) [1314]
Great Stagnation: 2.33 ± .959 (2, 2, 3) [1302]
MIRI Mission: 3.90 ± 1.062 (3, 4, 5) [1412]
MIRI Effectiveness: 3.23 ± .897 (3, 3, 4) [1336]

[Remember, all of these are asking you to rate your belief in/agreement with the concept on a scale of 1 (bad) to 5 (great)]

Income: 54129.37 ± 66818.904 (10,000, 30,800, 80,000) [923]
Charity: 1996.76 ± 9492.71 (0, 100, 800) [1009]
MIRI/CFAR: 511.61 ± 5516.608 (0, 0, 0) [1011]
XRisk: 62.50 ± 575.260 (0, 0, 0) [980]
Older siblings: 0.51 ± .914 (0, 0, 1) [1332]
Younger siblings: 1.08 ± 1.127 (0, 1, 1) [1349]
Height: 178.06 ± 11.767 (173, 179, 184) [1236]
Hours Online: 43.44 ± 25.452 (25, 40, 60) [1221]
Bem Sex Role Masculinity: 42.54 ± 9.670 (36, 42, 49) [1032]
Bem Sex Role Femininity: 42.68 ± 9.754 (36, 43, 50) [1031]
Right Hand: .97 ± .047 (.94, .97, 1.00)
Left Hand: .97 ± .048 (.94, .97, 1.00)

VIII. Fishing Expeditions

[correlations, in descending order]

SAT Scores out of 1600/SAT Scores out of 2400 .844 (59)
P Supernatural/P God .697 (1365)
Feminism/Social Justice .671 (1299)
P God/P Religion .669 (1367)
P Supernatural/P Religion .631 (1372)
Charity Donations/MIRI and CFAR Donations .619 (985)
P Aliens/P Aliens 2 .607 (1376)
Taxes/Minimum Wage .587 (1287)
SAT Score out of 2400/ACT Score .575 (89)
Age/Number of Children .506 (1480)
P Cryonics/P Anti-Agathics .484 (1385)
SAT Score out of 1600/ACT Score .480 (81)
Minimum Wage/Social Justice .456 (1267)
Taxes/Social Justice .427 (1281)
Taxes/Feminism .414 (1299)
MIRI Mission/MIRI Effectiveness .395 (1331)
P Warming/Taxes .385 (1261)
Taxes/Basic Income .383 (1285)
Minimum Wage/Feminism .378 (1286)
P God/Abortion -.378 (1266)
Immigration/Feminism .365 (1296)
P Supernatural/Abortion -.362 (1276)
Feminism/Human Biodiversity -.360 (1306)
MIRI and CFAR Donations/Other XRisk Charity Donations .345 (973)
Social Justice/Human Biodiversity -.341 (1288)
P Religion/Abortion -.326 (1275)
P Warming/Minimum Wage .324 (1248)
Minimum Wage/Basic Income .312 (1276)
P Warming/Basic Income .306 (1260)
Immigration/Social Justice .294 (1278)
P Anti-Agathics/MIRI Mission .293 (1351)
P Warming/Feminism .285 (1281)
P Many Worlds/P Anti-Agathics .276 (1245)
Social Justice/Femininity .267 (990)
Minimum Wage/Human Biodiversity -.264 (1274)
Immigration/Human Biodiversity -.263 (1286)
P Many Worlds/MIRI Mission .263 (1233)
P Aliens/P Warming .262 (1365)
P Warming/Social Justice .257 (1262)
Taxes/Human Biodiversity -.252 (1291)
Social Justice/Basic Income .251 (1281)
Feminism/Femininity .250 (1003)
Older Siblings/Younger Siblings -.243 (1321)
Charity Donations/Other XRisk Charity Donations .240 (957)
P Anti-Agathics/P Simulation .238 (1312)
Abortion/Minimum Wage .229 (1293)
Feminism/Basic Income .227 (1297)
Abortion/Feminism .226 (1321)
P Cryonics/MIRI Mission .223 (1360)
Immigration/Basic Income .208 (1279)
P Many Worlds/P Cryonics .202 (1251)
Number of Current Partners/Femininity .202 (1029)
P Warming/Immigration .202 (1260)
P Warming/Abortion .201 (1289)
Abortion/Taxes .198 (1304)
Age/P Simulation .197 (1313)
Political Interest/Masculinity .194 (1011)
P Cryonics/MIRI Effectiveness .191 (1285)
Abortion/Social Justice .191 (1301)
P Simulation/MIRI Mission .188 (1290)
P Many Worlds/P Warming .188 (1240)
Age/Number of Current Partners .184 (1480)
P Anti-Agathics/MIRI Effectiveness .183 (1277)
P Many Worlds/P Simulation .181 (1211)
Abortion/Immigration .181 (1304)
Number of Current Partners/Number of Children .180 (1484)
P Cryonics/P Simulation .174 (1315)
P Global Catastrophic Risk/MIRI Mission -.174 (1359)
Minimum Wage/Femininity .171 (981)
Abortion/Basic Income .170 (1302)
Age/P Cryonics -.165 (1391)
Immigration/Taxes .165 (1293)
P Warming/Human Biodiversity -.163 (1271)
P Aliens 2/P Warming .160 (1353)
Abortion/Younger Siblings -.155 (1292)
P Religion/Meditate .155 (1189)
Feminism/Masculinity -.155 (1004)
Immigration/Femininity .155 (988)
P Supernatural/Basic Income -.153 (1246)
P Supernatural/P Warming -.152 (1361)
Number of Current Partners/Karma Score .152 (1332)
P Many Worlds/MIRI Effectiveness .152 (1181)
Age/MIRI Mission -.150 (1404)
P Religion/P Warming -.150 (1358)
P Religion/Basic Income -.146 (1245)
P God/Basic Income -.146 (1237)
Human Biodiversity/Femininity -.145 (999)
P God/P Warming -.144 (1351)
Taxes/Femininity .142 (987)
Number of Children/Younger Siblings .138 (1343)
Number of Current Partners/Masculinity .137 (1030)
P Many Worlds/P God -.137 (1232)
Age/Charity Donations .133 (1002)
P Anti-Agathics/P Global Catastrophic Risk -.132 (1373)
P Warming/Masculinity -.132 (992)
P Global Catastrophic Risk/MIRI and CFAR Donations -.132 (982)
P Supernatural/Singularity .131 (1148)
P God/Taxes -.130 (1240)
Age/P Anti-Agathics -.128 (1382)
P Aliens/Taxes .127 (1258)
Feminism/Great Stagnation -.127 (1287)
P Many Worlds/P Supernatural -.127 (1241)
P Aliens/Abortion .126 (1284)
P Anti-Agathics/Great Stagnation -.126 (1248)
P Anti-Agathics/P Warming .125 (1370)
Age/P Aliens .124 (1386)
P Aliens/Minimum Wage .124 (1245)
P Aliens/P Global Catastrophic Risk .122 (1363)
Age/MIRI Effectiveness -.122 (1328)
Age/P Supernatural .120 (1370)
P Supernatural/MIRI Mission -.119 (1345)
P Many Worlds/P Religion -.119 (1238)
P Religion/MIRI Mission -.118 (1344)
Political Interest/Social Justice .118 (1304)
P Anti-Agathics/MIRI and CFAR Donations .118 (976)
Human Biodiversity/Basic Income -.115 (1262)
P Many Worlds/Abortion .115 (1166)
Age/Karma Score .114 (1327)
P Aliens/Feminism .114 (1277)
P Many Worlds/P Global Catastrophic Risk -.114 (1243)
Political Interest/Femininity .113 (1010)
Number of Children/P Simulation -.112 (1317)
P Religion/Younger Siblings .112 (1275)
P Supernatural/Taxes -.112 (1248)
Age/Masculinity .112 (1027)
Political Interest/Taxes .111 (1305)
P God/P Simulation .110 (1296)
P Many Worlds/Basic Income .110 (1139)
P Supernatural/Younger Siblings .109 (1274)
P Simulation/Basic Income .109 (1195)
Age/P Aliens 2 .107 (1371)
MIRI Mission/Basic Income .107 (1279)
Age/Great Stagnation .107 (1295)
P Many Worlds/P Aliens .107 (1253)
Number of Current Partners/Social Justice .106 (1304)
Human Biodiversity/Great Stagnation .105 (1285)
Number of Children/Abortion -.104 (1337)
Number of Current Partners/P Cryonics -.102 (1396)
MIRI Mission/Abortion .102 (1305)
Immigration/Great Stagnation -.101 (1269)
Age/Political Interest .100 (1339)
P Global Catastrophic Risk/Political Interest .099 (1295)
P Aliens/P Religion -.099 (1357)
P God/MIRI Mission -.098 (1335)
P Aliens/P Simulation .098 (1308)
Number of Current Partners/Immigration .098 (1305)
P God/Political Interest .098 (1274)
P Warming/P Global Catastrophic Risk .096 (1377)

In addition to the Left/Right factor we had last year, this data seems to me to have an Agrees with the Sequences factor: the same people tend to believe in many-worlds, cryonics, atheism, simulationism, MIRI’s mission and effectiveness, anti-agathics, etc. Weirdly, belief in global catastrophic risk is negatively correlated with most of the Agrees with the Sequences things. Someone who actually knows how to do statistics should run a factor analysis on this data.
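For anyone who wants to try it, a crude one-factor extraction is only a few lines of NumPy. This is a sketch on simulated data: the variable names mirror survey columns, but every number below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulate a latent "Agrees with the Sequences" factor driving several answers
latent = rng.normal(size=n)
p_many_worlds = latent + rng.normal(size=n)
p_cryonics    = latent + rng.normal(size=n)
miri_mission  = latent + rng.normal(size=n)
p_gcr         = -0.4 * latent + rng.normal(size=n)   # negatively loaded, as observed

X = np.column_stack([p_many_worlds, p_cryonics, miri_mission, p_gcr])
corr = np.corrcoef(X, rowvar=False)

# Crude one-factor extraction: the leading eigenvector of the correlation
# matrix gives each variable's loading on the dominant factor
eigvals, eigvecs = np.linalg.eigh(corr)          # eigenvalues in ascending order
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
loadings *= np.sign(loadings[0])                 # fix the arbitrary sign
```

A real factor analysis (with rotation, run by someone who knows what they're doing) would use a stats package, but even this version would show whether a second, orthogonal Left/Right factor falls out of the survey columns.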

IX. Digit Ratios

After sanitizing the digit ratio numbers, the following correlations came up:

Digit ratio R hand was correlated with masculinity at a level of -0.180 p < 0.01
Digit ratio L hand was correlated with masculinity at a level of -0.181 p < 0.01
Digit ratio R hand was slightly correlated with femininity at a level of +0.116 p < 0.05

Holy #@!$ the feminism thing ACTUALLY HELD UP. There is a 0.144 correlation between right-handed digit ratio and feminism, p < 0.01. And an 0.112 correlation between left-handed digit ratio and feminism, p < 0.05.

The only other political position that correlates with digit ratio is immigration. There is a 0.138 correlation between left-handed digit ratio and belief in open borders, p < 0.01, and an 0.111 correlation between right-handed digit ratio and belief in open borders, p < 0.05.

No digit correlation with abortion, taxes, minimum wage, social justice, human biodiversity, basic income, or great stagnation.

Okay, need to rule out that this is all confounded by gender. I ran a few analyses on men and women separately.

On men alone, the connection to masculinity holds up. Restricting the sample to men, right-handed digit ratio correlates with masculinity at -0.157, p < 0.01, and left-handed at -0.134, p < 0.05. Right-handed correlates with femininity at 0.120, p < 0.05. The feminism correlation holds up: restricting the sample to men, right-handed digit ratio correlates with feminism at a level of 0.149, p < 0.01. Left-handed just barely fails to correlate. Both right and left correlate with immigration at 0.135, p < 0.05.

On women alone, the Bem masculinity correlation is the highest correlation we're going to get in this entire study. Right hand is -0.433, p < 0.01. Left hand is -0.299, p < 0.05. Femininity trends toward significance but doesn't get there. The feminism correlation trends toward significance but doesn't get there. In general there was too small a sample size of women to pick up anything but the most whopping effects.

Since digit ratio is related to testosterone and testosterone sometimes affects risk-taking, I wondered if it would correlate with any of the calibration answers. I selected people who had answered Calibration Question 5 incorrectly and ran an analysis to see if digit ratio was correlated with tendency to be more confident in the incorrect answer. No effect was found.

Other things that didn't correlate with digit ratio: IQ, SAT, number of current partners, tendency to work in mathematical professions.

...I still can't believe this actually worked. The finger-length/feminism connection ACTUALLY WORKED. What a world. What a world. Someone may want to double-check these results before I get too excited.

X. Calibration

There were ten calibration questions on this year's survey. Along with answers, they were:

1. What is the largest bone in the body? Femur
2. What state was President Obama born in? Hawaii
3. Off the coast of what country was the battle of Trafalgar fought? Spain
4. What Norse God was called the All-Father? Odin
5. Who won the 1932 Nobel Prize for his work in quantum physics? Heisenberg
6. Which planet has the highest density? Earth
7. Which Bible character was married to Rachel and Leah? Jacob
8. What organelle is called "the powerhouse of the cell"? Mitochondria
9. What country has the fourth-highest population? Indonesia
10. What is the best-selling computer game? Minecraft

I ran calibration scores for everybody based on how well they did on the ten calibration questions. These failed to correlate with IQ, SAT, LW karma, or any of the things you might expect to be measures of either intelligence or previous training in calibration; they didn't differ by gender, correlates of community membership, or any mental illness [deleted section about correlating with MWI and MIRI, this was an artifact].

Your answers looked like this ([calibration graph not reproduced]):

The red line represents perfect calibration. Where answers dip below the line, it means you were overconfident; when they go above, it means you were underconfident.

It looks to me like everyone was horrendously underconfident on all the easy questions, and horrendously overconfident on all the hard questions. To give an example of how horrendous, people who were 50% sure of their answers to question 10 got it right only 13% of the time; people who were 100% sure only got it right 44% of the time. Obviously those numbers should be 50% and 100% respectively.
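The curve behind that graph is simple to compute: bucket answers by stated confidence and compare each bucket's confidence with its actual hit rate. A sketch (the question-10 figures quoted above are the survey's; the pairs below are illustrative):

```python
from collections import defaultdict

def calibration_curve(answers):
    """answers: iterable of (stated_confidence_percent, was_correct) pairs.
    Returns {confidence_bucket: observed_fraction_correct}."""
    buckets = defaultdict(list)
    for confidence, correct in answers:
        buckets[round(confidence, -1)].append(correct)   # bucket to nearest 10%
    return {b: sum(hits) / len(hits) for b, hits in sorted(buckets.items())}

# Perfect calibration: each bucket's value equals bucket/100; lower means
# overconfident, higher means underconfident
curve = calibration_curve([(50, True), (50, False), (90, True), (90, True)])
```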

This builds upon results from previous surveys in which your calibration was also horrible. This is not a human universal - people who put even a small amount of training into calibration can become very well calibrated very quickly. This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.

XI. Wrapping Up

To show my appreciation for everyone completing this survey, including the arduous digit ratio measurements, I have randomly chosen a person to receive a $30 monetary prize. That person is...the person using the public key "The World Is Quiet Here". If that person tells me their private key, I will give them $30.

I have removed 73 people who wished to remain private, deleted the Private Keys, and sanitized a very small amount of data. Aside from that, here are the raw survey results for your viewing and analyzing pleasure:

(as Excel)

(as SPSS)

(as CSV)

Comments (279)

Comment author: epursimuove 05 January 2015 08:36:33PM 22 points

The number of Asians (both East and South) among American readers is pretty surprisingly low - 43/855 ~= 5%. This despite Asians being, e.g., ~15% of the Ivy League student body (it'd be much higher without affirmative action), and close to 50% of Silicon Valley workers.

Comment author: someonewrongonthenet 06 January 2015 11:35:06PM 23 points

Being South Asian myself, I suspect that the high-achieving immigrant-and-immigrant-descended populations gravitate towards technical fields and the Ivy League for different reasons than American whites do. Coming from hardship and generally being less WEIRD, they psychologically share more in common with the middle class and even blue-collar workers than with the Ivy League upper class - they see it as a path to success rather than some sort of grand purposeful undertaking. (One Asian professional community I participated in articulated this and other differences in attitude as a reason that Asians often find themselves getting passed over for higher-level management positions, as something to be overcome.)

Lesswrong tends to appeal to abstract, starry-eyed types. I hate to use the word "privilege", but there are some hard-to-quantify things, like time spent talking about lesswrong-y key words like "free will" or "utilitarianism", which are going to influence the numbers here. (Not that Asians don't like chatting about philosophy, but they certainly have less time for it, and they tend to focus on somewhat different topics during philosophical discussions and use different words. They've got a somewhat separate religious-philosophical tradition.)

Another possibility that might make an even bigger difference is that, lacking an organized religion to revolt against, Asians may less often be militant atheists and skeptics. Lesswrong and Overcoming Bias owe part of their heritage to the skeptic blogosphere.


Comment author: laofmoonster 11 January 2015 07:49:07AM 3 points

East Asian - mostly agreed. I think WEIRDness is the biggest factor. WEIRD thought emphasizes precision and context-independent formalization. I am pretty deracinated myself, but my thinking style is low-precision, tolerant of apparent paradoxes and context-sensitive. The difference is much like the analytic-continental divide in Western philosophy. I recommend Richard Nisbett's book The Geography of Thought, which contrasts WEIRD thought with East Asian thought.

37 Ways Words Can Be Wrong (and LW as a whole) is important because of how brittle WEIRD concepts can be. (I have some crackpot ideas about maps and territories inspired by Jean Baudrillard. He's French, of course...)

Comment author: skeptical_lurker 05 January 2015 11:32:56PM 3 points

Is affirmative action being used against Asians even though they are a minority?

Comment author: epursimuove 05 January 2015 11:59:20PM 12 points

There's pretty unambiguous statistical evidence that it happens. The Asian Ivy League percentage has remained basically fixed for 20 years despite the college-age Asian population doubling (and Asian SAT scores increasing slightly).

Comment author: Nornagest 05 January 2015 11:43:46PM 11 points

"Used against", to me, implies active planning that may or may not exist here; but the pragmatic effects of the policy as implemented in American universities do seem to negatively affect Asians.

Comment author: skeptical_lurker 06 January 2015 12:06:47AM 4 points

"Used against", to me, implies active planning that may or may not exist here;

Ahh, the old 'malicious or incompetent' dichotomy.

Comment author: Nornagest 06 January 2015 12:13:41AM 2 points

I'm a big believer in Hanlon's razor, especially as it applies to policy.

Comment author: Vaniver 06 January 2015 12:43:07AM 1 point

I've noticed this for a while. Might be interesting to look at this by referral source?

Comment author: Vaniver 04 January 2015 11:50:14PM 17 points

Calibration Score

Using a log scoring rule, I calculated a total accuracy+calibration score for the ten questions together. There's an issue that this assumes the questions are binary when they're not: someone who is 0% sure that Thor is the right answer to the mythology question gets the same score (0) as the person who is 100% sure that Odin is the right answer to the mythology question. I ignored infinitely low scores for the correlation part.

I replicated the MWI correlation, but I noticed something weird: all of the really low scorers gave really low probabilities to MWI. The worst scorer had a score of -18, which corresponds to giving about 1.6% probability to the right answer. What appears to have happened is they misunderstood the survey and answered in fractions instead of percents- they got 9 out of 10 questions right, but lost 2 points every time they assigned 1% or slightly less than 1% to the right answer (i.e. they meant to express almost certainty by saying 1 or 0.99) and only lost 0.0013 points when they assigned 0.3% probability to a wrong answer.
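Those numbers pin down the scoring rule: losing 2 points for a correct answer given 1% probability, and 0.0013 points for a wrong answer given 0.3%, is a base-10 log score. A reconstruction (my sketch, not Vaniver's actual code):

```python
import math

def log_score(stated_percent, correct, eps=0.01):
    """Base-10 log score for a binary question: 0 is perfect, more negative
    is worse. Per the survey rules, 100 and 0 are clamped to 100-eps and eps."""
    p = min(max(stated_percent, eps), 100 - eps) / 100
    return math.log10(p if correct else 1 - p)

# The fraction-vs-percent confusion described above: answering "1" (meaning
# near-certainty) is read as 1%, costing 2 points per correct answer, while
# "0.3" on a wrong answer costs only ~0.0013 points
```

Nine correct answers entered as "1" then cost 9 × 2 = 18 points, matching the worst score of -18 reported above.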

When I drop the 30 lowest scorers, the direction of the relationship flips- now, people with better log scores (i.e. closer to 0) give lower probabilities for MWI (with a text answer counting as a probability of 0, as most were complaints that asking for a number didn't make sense).

What about Tragic Mistakes? These are people that assign 0% probability to a correct answer, or 100% probability to a wrong one, and under a log scoring rule lose infinite points. Checking those showed both errors, as well as highlighting that several of the 'wrong' answers were spelling mistakes- I probably would have accepted "Oden" and "Mitocondria."

(Amusingly, the person with the most tragic mistakes- 9 of them- supplied a probability for their answers instead of an answer, so they were 100% sure that the battle of Trafalgar was fought off the coast of 100, which was the state where Obama was born.)

There's a tiny decline in tragic mistakes as P(MWI) increases, but I don't think I'd be confident in drawing conclusions from this data.

Comment author: Luke_A_Somers 05 January 2015 11:31:56PM 1 point [-]

I've always wanted to visit 100.

Can you show the distribution of overall calibration scores? You only talked about the extreme cases and the differences across P(MWI), but you clearly have it.

Comment author: Vaniver 06 January 2015 12:13:48AM *  2 points [-]

Can you show the distribution of overall calibration scores? You only talked about the extreme cases and the differences across P(MWI), but you clearly have it.

Picture included, tragic mistakes excluded*. The percentage at the bottom is a mapping from the score to probabilities using the inverse of "if you had answered every question right with probability p, what score would you have?", and so is not anything like the mean probability given. Don't take either of the two perfect scores seriously; as mentioned in the grandparent, this scoring rule isn't quite right because it counts answering incorrectly with 0% probability as the same as answering correctly with 100% probability. (One answered 'asdf' to everything with 0% probability, the other left 9 blank with 0% probability and answered Odin with 100% probability.) Bins have equal width in log-space.

* I could have had a spike at 0, but that seems not quite fair since it was specified that '100' and '0' would be treated as '100-epsilon' and 'epsilon' respectively, and it's only a Tragic Mistake if you actually answer 0 instead of epsilon.

Comment author: MTGandP 10 January 2015 06:30:17AM 0 points [-]

Sort-of related question: How do you compute calibration scores?

Comment author: Vaniver 10 January 2015 07:03:15AM 0 points [-]

I was using a logarithmic scoring rule, with a base of 10. (What base you use doesn't really matter.) The Excel formula for the first question (I'm pretty sure I didn't delete any columns, so it should line up) was:
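A Python equivalent of a base-10 log score per question might look like the following (a sketch, not the original spreadsheet formula; the epsilon clamping follows the stated treatment of 0 and 100 as epsilon and 100-minus-epsilon):

```python
import math

def question_score(prob_percent, is_correct, eps=1e-6):
    """Base-10 log score for one question.

    prob_percent: stated confidence on the survey's 0-100 scale.
    Entries of 0 and 100 are clamped eps away from the endpoints
    so the score stays finite.
    """
    p = min(max(prob_percent / 100.0, eps), 1 - eps)
    if not is_correct:
        p = 1 - p  # credit for doubting a wrong answer
    return math.log10(p)

# The total score is the sum over the ten questions, e.g.:
total = sum(question_score(p, c)
            for p, c in [(90, True), (60, True), (30, False), (99, True)])
```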

Comment author: redlizard 04 January 2015 07:01:17AM 16 points [-]

MIRI Mission/MIRI Effectiveness .395 (1331)

This result sets off my halo effect alarm.

Comment author: William_Quixote 05 January 2015 03:10:28PM *  15 points [-]

Once again pandemic is the leading cat risk. It was the leading cat risk last year. http://lesswrong.com/lw/jj0/2013_survey_results/aekk It was the leading cat risk the year before that. http://lesswrong.com/lw/fp5/2012_survey_results/7xz0

Pandemics are the risk LWers are most afraid of and to my knowledge we as a community have expended almost no effort on preventing them.

So this year I resolve that my effort towards pandemic prevention will be greater than simply posting a remark about how it's the leading risk.

Comment author: SolveIt 06 January 2015 08:57:32AM 8 points [-]

Pandemics may be the largest risk, but the marginal contribution a typical LWer can make is probably very low, and not their comparative advantage. Let the WHO do its work, and turn your attention to underconsidered risks.

Comment author: 27chaos 06 January 2015 07:05:13PM 2 points [-]

Money can be donated.

Comment author: Fluttershy 06 January 2015 12:50:40AM 7 points [-]

GiveWell has looked into global catastrophic risks in general, plus pandemic preparedness in particular. My impression is that quite a bit more is spent per year on biosecurity (around 6 billion in the US) than on other catastrophic risks such as AI.

Comment author: blacktrance 05 January 2015 08:54:39PM 19 points [-]

Clearly, we haven't been doing enough to increase other risks. We can't let pandemic stay in the lead.

Comment author: Ander 07 January 2015 12:32:06AM 1 point [-]

Get to work on making more AIs everyone!

Comment author: someonewrongonthenet 07 January 2015 11:42:14PM *  2 points [-]

we as a community have expended almost no effort on preventing them.

I'm not so sure about that. Isn't the effective altruist focus on global poverty/disease reducing the risk of pandemic? I know very little about epidemiology, but it seems as if a lot of scary diseases (AIDS, Ebola...) would never have spread to the human population if certain regions of the third world had better medical infrastructure.

Comment author: William_Quixote 09 January 2015 06:37:23PM 1 point [-]

That's fair. It's certainly true that poverty reduction also reduces pandemic risk. But it does so indirectly and slowly. There are probably faster ways to reduce pandemic risk than working on poverty.

Comment author: iarwain1 04 January 2015 03:14:34PM 14 points [-]


This is why I didn't vote on the politics question.

This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.

Theory: People use this site as a geek / intellectual social outlet and/or insight porn and/or self-help site more than they seriously try to get progressively better at rationality. At least, I know that applies to me :).

Comment author: FeepingCreature 04 January 2015 04:34:28PM *  24 points [-]

This definitely belongs on the next survey!

Why do you read LessWrong? [ ] Rationality improvement [ ] Insight Porn [ ] Geek Social Fuzzies [ ] Self-Help Fuzzies [ ] Self-Help Utilons [ ] I enjoy reading the posts

Comment author: MTGandP 10 January 2015 05:46:39AM 3 points [-]

And then check if the "rationality improvement" people do better on calibration. (I'm guessing they don't.)

Comment author: homunq 23 February 2015 01:53:00AM 1 point [-]

[ ] Wow, these people are smart. [ ] Wow, these people are dumb. [ ] Wow, these people are freaky. [ ] That's a good way of putting it, I'll remember that.

(For me, it's all of the above. "Insight porn" is probably the biggest, but it doesn't dominate.)

Comment author: [deleted] 07 January 2015 07:37:26PM 4 points [-]

Theory: People use this site as a geek / intellectual social outlet and/or insight porn and/or self-help site more than they seriously try to get progressively better at rationality.

Is that actually nonobvious? It's sure as hell what I'm here for. I mean, I do actually generally want to be more rational about stuff, but I can't get that by reading a website. Inculcating better habits and reflexes requires long hours spent on practicing better habits and reflexes so I can move the stuff System 2 damn well knows already down into System 1.

Comment author: Unnamed 06 January 2015 02:04:29AM 12 points [-]

I decided to take a look at overconfidence (rather than calibration) on the 10 calibration questions.

For each person, I added up the probabilities that they assigned to getting each of the 10 questions correct, and then subtracted the number of correct answers. Positive numbers indicate overconfidence (fewer correct answers than they predicted they'd get), negative numbers indicate underconfidence (more correct answers than they predicted). Note that this is somewhat different from calibration: you could get a good score on this if you put 40% on each question and get 40% of them right (showing no ability to distinguish between what you know and what you don't), or if you put 99% on the ones you get wrong and 1% on the ones you get right. But this overconfidence score is easy to calculate, has a nice distribution, and is informative about the general tendency to be overconfident.
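The overconfidence score described above is straightforward to compute; a minimal sketch (with made-up data, probabilities on the survey's 0-100 scale):

```python
def overconfidence(probs_percent, correct_flags):
    """Sum of stated probabilities minus number of correct answers.

    Positive = overconfident (fewer right than predicted),
    negative = underconfident.
    """
    expected = sum(p / 100.0 for p in probs_percent)
    return expected - sum(correct_flags)

# Saying 50% on all ten questions and getting five right scores 0,
# even though it shows no ability to tell the questions apart:
score = overconfidence([50] * 10, [True] * 5 + [False] * 5)  # → 0.0
```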

After cleaning up the data set in a few ways (which I'll describe in a reply to this comment), the average overconfidence score was 0.39. On average, people expected to get 4.79 of the 10 questions correct, but only got 4.40 correct. My impression is that this gap (4 percentage points) is smallish compared to what overconfidence research tends to find, but I don't have any numbers at hand to make direct comparisons with the numbers in the published literature.

People were most overconfident on question 6 (densest planet: 18% correct, 35% average estimate) and question 10 (bestselling video game: 7% correct, 22% average estimate) and most underconfident on questions 4 (Norse God: 87% correct, 75% average estimate) and 2 (Obama's state: 82% correct, 71% average estimate).

Overconfidence was correlated with a few other variables at p<.01:

SATscoresoutof2400 -.185 (242)
SATscoresoutof1600 -.160 (329)
IQ -.157 (368)
PCryonics .116 (1112)
MinimumWage .086 (1055)

That is, people who were more overconfident had lower test scores, assigned a higher probability to cryonics working, and were more in favor of raising the minimum wage. On PCryonics, I think my comments about the cryonics questions on the 2011 Survey are related to what's going on.

Overconfidence had no significant relationships with any of the other numerical variables, including the various other probability estimates and political views, age, finger ratio, or charitable donations. It was also uncorrelated with scales measuring growth mindset and self-efficacy.

When I turned them into numbers, various measures of ties to the LW community were correlated with overconfidence in the expected direction (closer ties to LW --> less overconfident), but not at p < .01 (perhaps in part because they weren't really intended to be continuous variables). So I combined several questions about ties to LW into a simple composite variable where a person gets one point each for: having read the sequences, having joined the community before 2010, having at least 1000 karma, having read all of HPMOR, having attended a full CFAR workshop, having posted in main, regularly attending meetups, regularly interacting with LWers in person, and having a romantic partner that they met through LW. This composite variable (which ranged from 0 to 8) correlated with overconfidence at r = -.085, p < .01. In other words: people with closer ties to LW were less overconfident.

But it's probably more informative to compare means on these variables, instead of turning them into an ad hoc continuous variable. Here is the average overconfidence score among various subgroups (where the full sample was overconfident by 0.39 questions out of ten, etc.):

Everyone 0.39 (4.79 pred - 4.40 actual) (n=1141)
Read HPMOR 0.35 (4.70 pred - 4.35 actual) (n=753)
Active in-person 0.26 (4.55 pred - 4.29 actual) (n=171)
Read the sequences 0.23 (4.69 pred - 4.46 actual) (n=357)
Attended CFAR 0.15 (4.42 pred - 4.27 actual) (n=91)
High test scores 0.15 (5.14 pred - 4.99 actual) (n=260)
1000 karma 0.14 (4.98 pred - 4.83 actual) (n = 127)

(The active in-person group includes everyone who answered yes/regularly/all the time to any of the 3 questions: in-person interaction, attending meetups, or LW romantic partner. The high test scores group includes anyone who was in the top 25% of reported scores on any one of the 4 test score questions: IQ (146+), SAT out of 1600 (1560+), SAT out of 2400 (2330+), or ACT (35+).)

Compared to the full sample (which was overconfident by 0.39 questions), there was less than half as much overconfidence among people who attended CFAR, have 1000 karma, or have high test scores. Other indicators of LW involvement were also associated with less overconfidence, though with smaller effect sizes.

Note that being overconfident by 0.14 questions is a small enough gap to be accounted for entirely by a single one of the 10 questions. If we remove the video game question, for example, then the people with 1000+ karma are within 0.01 question of being neither overconfident nor underconfident. So these results are consistent with the 1000+ karma group being perfectly calibrated (although they still count as some evidence in favor of that group being a bit overconfident).

In summary: LWers show some overconfidence, probably less overconfidence than in the published literature, and there's less overconfidence among those with close ties to LW (e.g., high karma or CFAR alumni) or with high test scores. Pretty similar to what I found for other biases on the 2012 LW Survey.

Comment author: Unnamed 06 January 2015 02:04:54AM 10 points [-]

Details on data cleanup:

In the publicly available data set, I restricted my analysis to people who:
* entered a number on each of the 10 calibration probability estimates
* did not enter any estimates larger than 100
* entered at least one estimate larger than 1
* entered something on each of the 10 calibration guesses
* did not enter a number for any of the 10 calibration guesses

Failure to meet any of these criteria generally indicated either a failure to understand the format of the calibration questions, or a decision to skip one or more of the questions. Each of these criteria eliminated at least 1 person, leaving a sample of 1141 people.

I counted as "correct":
* any answer which Scott/Ozy counted as correct
* any answer to question 1 (largest bone) which began with "fem" (e.g., "femer")
* any answer to question 2 (Obama's state) which began with "haw" (e.g., "Hawii")
* any answer to question 4 (Norse god) which began with "od" or "wo" (e.g., "Wotan")
* any answer to question 8 (cell) which began with "mito" (e.g., "Mitochondira")

These seem to cover the most common misspellings (or alternate names, e.g. "Wotan" is the German name for Odin), while counting very few obviously wrong answers as correct, and without having to go through every answer one by one. Counting these answers gave the average participant another 0.15 correct answers, and I suspect we could add another 0.05 or so by going through answer by answer with lenient standards. The mitochondria leniency made the largest difference, adding 97 correct answers.
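The prefix leniency could be implemented as a simple lookup (an illustrative sketch; in the actual analysis these checks supplemented Scott/Ozy's original grading rather than replacing it):

```python
def lenient_match(question, answer):
    """Prefix rules from the list above, applied case-insensitively."""
    prefixes = {
        1: ("fem",),       # femur
        2: ("haw",),       # Hawaii
        4: ("od", "wo"),   # Odin / Wotan
        8: ("mito",),      # mitochondria
    }
    a = answer.strip().lower()
    return any(a.startswith(p) for p in prefixes.get(question, ()))

# Common misspellings now count as correct:
assert lenient_match(8, "Mitochondira")
assert lenient_match(4, "Wotan")
assert not lenient_match(1, "tibia")
```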

Without counting these additional correct answers, the average overconfidence score would have been 0.54 among the full sample, 0.40 among sequence readers, 0.32 among CFAR alumni, 0.40 among active-in-person LWers, 0.30 among those with 1000 karma, and 0.23 among those with high test scores. Counting these additional correct answers helped non-US LWers more than US LWers (by 0.21 questions vs. 0.11); I suspect that part of that is due to spelling difficulties for non-native speakers and part is due to the Odin vs. Wotan thing.

Comment author: Unnamed 06 January 2015 07:07:09AM 9 points [-]

And here's an analysis of calibration.

If a person was perfectly calibrated, then each 10% increase in their probability estimate would translate into a 10% higher likelihood of getting the answer correct. If you plot probability estimates on the x axis and whether or not the event happened on the y axis, then you should get a slope of 1 (the line y=x). But people tend to be miscalibrated - out of the questions where they say "90%", they might only get 70% correct. This results in a shallower slope (in this example, the line would go through the point (90,70) instead of (90,90)) - a slope less than 1.

I took the 1141 people's answers to the 10 calibration questions as 11410 data points, plotted them on an x-y graph (with the probability estimate as the x value and a y value of 100 if it's correct and 0 if it's incorrect), and ran an ordinary linear regression to find the slope of the line fit to all 11410 data points.

That line had a slope of 0.91. In other words, if a LWer gave a probability estimate that was 10 percentage points higher, then on average the claim was 9.1 percentage points more likely to be true. Not perfect calibration, but not bad.
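The slope can be recovered with ordinary least squares; a minimal stdlib sketch (toy data, not the survey's):

```python
def ols_slope(xs, ys):
    """Slope of the ordinary least-squares line through the points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Each answer is one data point: x = stated probability,
# y = 100 if the answer was correct, else 0.
# Perfect calibration gives a slope of 1.
xs = [10, 30, 50, 70, 90]
ys = [0, 100, 0, 100, 100]
slope = ols_slope(xs, ys)  # → 1.0
```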

If we look at various subsets of LWers on the survey, here are the slopes that we get:

0.91 Everyone
0.92 Read HPMOR
0.92 1000 karma
0.93 High test scores
0.93 Read the sequences
0.96 Active in-person
0.96 Attended CFAR

I haven't done any tests of statistical significance, but all of these more LWy subgroups do have slopes that are higher (and closer to the well-calibrated slope of 1) than the slope for the full sample (as do the people with high scores on SAT/ACT/IQ tests).

Comment author: ilzolende 04 January 2015 06:05:03AM 11 points [-]

Thanks for showing us that there are autistic cryonics patients in the world. I am more likely to sign up when I am old enough to legally do so without parental permission, because now I know I wouldn't be the only autistic person in the future, no matter what happens when people develop a prenatal autism test.

Comment author: NancyLebovitz 04 January 2015 02:25:08PM 6 points [-]

I believe that if cryonics works, people will tend to associate with those from their home era.

Comment author: Jacobian 06 January 2015 04:22:11AM *  10 points [-]

Myth: Americans think they know a lot about other countries but really are clueless.

Verdict: Self-cancelling prophecy.

Method: Semi-humorous generalization from a single data series, hopefully inspiring replication instead of harsh judgment :)

I decided to do some analysis about what makes people overconfident about certain subjects, and decided to start with an old stereotype. I compared how people did on the population calibration question (#9) based on their country.

Full disclosure: I'm Israeli (currently living in the US) and would've guessed Japan with 50% confidence, but I joined LW (unlurked) two days after the end of the survey.

I normalized every probability by rounding extreme confidence values to 1% and 99% and scored each answer that seemed close enough to a misspelling of Indonesia according to the log rule.

Results: Americans didn't have a strong showing with an average score of -0.0071, but the rest of the world really sucked with an average of -0.0296. The reason? While the correct answer rate was almost identical (28.3% v 28.8%) Americans were much less confident in their answers: 42.4% confidence v 46.3% (p<0.01).

Dear Americans, you don't know (significantly) less about the world than everyone else, but at least you internalized the fact that you don't know much*!

Next up: how people who grew up in a religious household do on the Biblical calibration question.

*Unlike cocky Israelis like me.

Comment author: Vaniver 04 January 2015 10:55:19PM *  10 points [-]

Thanks for doing this!

[This question was poorly worded and should have acknowledged that people can both be asexual and have a specific orientation; as a result it probably vastly undercounted our asexual readers]

I find the "vastly" part dubious, given that 3% asexual already seems disproportionately large (general population seems to be about 1%). I would expect for asexuals to be overrepresented, and I do think the question wording means the survey's estimate underestimates the true proportion, but I don't think that it's, say, actually 10% instead of actually 4%.

Comment author: randallsquared 04 January 2015 02:43:41PM 10 points [-]

May is missing from Birth Month.

Comment author: [deleted] 05 January 2015 08:11:09PM 8 points [-]

I think it's pretty astounding that nobody at Less Wrong was born in May. I'm not sure why Scott doesn't think that's a deviation from randomness.

Comment author: imuli 06 January 2015 06:44:53PM 2 points [-]

May is in the data; a copy-paste error is much less astounding than nobody being born in May.

119 respondents, nothing surprising here.

Comment author: gjm 07 January 2015 10:14:36AM 3 points [-]

You might consider the hypothesis that FrameBenignly appreciates this and was making a joke. This seems much more likely to me than that s/he actually thinks no one said they were born in May.

(Of course, maybe I'm missing a meta-joke where you pretend to take FrameBenignly at face value just as s/he pretended to take the alleged survey data at face value. But then maybe you're now missing a meta-meta-joke where I pretend to take you at face value...)

Comment author: [deleted] 08 January 2015 10:33:07PM 4 points [-]

I'd make a triple-meta joke, but there's a two-meta limit on all month of birth jokes.

Comment author: imuli 07 January 2015 01:26:50PM 1 point [-]

Oh no! I forgot to leave my evidence.

Comment author: TheOtherDave 07 January 2015 03:36:06PM 0 points [-]

I see what you did there.

Comment author: whateverfor 04 January 2015 09:48:56PM 7 points [-]

Do you have some links to calibration training? I'm curious how they handle model error (the error when your model is totally wrong).

For question 10, for example, I'm guessing that many more people would have gotten the correct answer if the question had been something like "Name the best-selling PC game, where best-selling solely counts units not gross, counts box purchases but not subscriptions, and does not count games packaged with other software" instead of "What is the best-selling computer game of all time?". I'm guessing most people answered WoW or Solitaire/Minesweeper or Tetris, each of which would be the correct answer if you remove one of those restraints.

But it seems hard to guess beforehand that the question you thought you were answering wasn't the question you were being asked! So you'd end up distributing that model error relatively evenly over all the questions, and you'd end up underconfident on the questions where the model was straightforward and correct and overconfident when the question wasn't as simple as it appeared.

Comment author: Vaniver 04 January 2015 10:12:09PM 6 points [-]

I'm curious how they handle model error (the error when your model is totally wrong).

They punish it. That is, your stated credence should include both your 'inside view' error of "How confident is my mythology module in this answer?" and your 'outside view' error of "How confident am I in my mythology module?"

One of the primary benefits of playing a Credence Game like this one is it gives you a sense of those outside view confidences. I am, for example, able to tell which of two American postmasters general came first at the 60% level, simply by using the heuristic of "which of these names sounds more old-timey?", but am at the 50% level (i.e. pure chance) in determining which sports team won a game by comparing their names.

But it seems hard to guess beforehand that the question you thought you were answering wasn't the question that you were being asked!

This is the sort of thing you learn by answering a bunch of questions from the same person, or by having a lawyer-sense of "how many qualifications would I need to add or remove to this sentence to be sure?".

Comment author: orthonormal 09 January 2015 02:00:09AM 6 points [-]

It looks to me like everyone was horrendously underconfident on all the easy questions, and horrendously overconfident on all the hard questions.

I think that this is what correct calibration overall looks like, since you don't know in advance which questions are easy and which ones are tricky. I would be quite impressed if a group of super-calibrators had correct calibration curves on every question, rather than on average over a set of questions.

Comment author: Douglas_Knight 12 February 2015 02:04:12AM 0 points [-]

Right, Dunning-Kruger is just regression to the mean.

Comment author: orthonormal 17 February 2015 01:42:13AM 0 points [-]

No, that's false. It's possible (and common) for a person to be wildly overconfident on a pretty broad domain of questions.

Comment author: Unnamed 05 January 2015 10:10:13PM 6 points [-]

I think that there are better analyses of calibration which could be done than the ones that are posted here.

For example, I think it's better to combine all 10 questions into a single graph rather than looking at each one separately.

The pattern of overconfidence on hard questions and underconfidence on easy questions is actually what you'd expect to find, even if people are well-calibrated. One thing that makes a question easy is if the obvious guess is the correct answer (like a question about Confederate Civil War generals where the correct answer is Robert E. Lee). On those sorts of easy questions, a bunch of people who made the obvious guess with moderate confidence will turn out to be correct, and they'll look underconfident. Whereas on questions where the correct answer is more obscure or counterintuitive, those guesses will be wrong and they'll look overconfident.

I'll see what other analyses I can do with the data. First I'll need to make some not-entirely-straightforward decisions about what to do with misspellings and other interesting responses. This may be fairly important, since those account for a decent chunk of all responses. For example, here are the most common responses to question 8 (in descending order of frequency):


Comment author: Nate_Gabriel 04 January 2015 06:24:40AM *  6 points [-]

P Supernatural: 6.68 + 20.271 (0, 0, 1) [1386]

P God: 8.26 + 21.088 (0, 0.01, 3) [1376]

The question for P(Supernatural) explicitly said "including God." So either LW assigns a median probability of at least one in 10,000 that God created the universe and then did nothing, or there's a bad case of conjunction fallacy.

Comment author: epursimuove 04 January 2015 06:42:53AM 6 points [-]

So either LW assigns a median probability of at least one in 10,000 that God created the universe and then did nothing

Religion Deist/pantheist/etc.: 22, 1.5%

Comment author: Coscott 04 January 2015 07:40:47AM *  2 points [-]

Conjunctions do not work with medians that way. From what you quoted, it is entirely possible that the median probability for that claim is 0. You can figure it out from the raw data.

Comment author: TheMajor 04 January 2015 10:20:31AM 2 points [-]

I don't understand. Since existence of God is explicitly included in the question about the existence of supernatural things, everybody should have put P(God) < P(Supernatural), and therefore the median also is lower (since for every entry P(God) there is a higher entry P(Supernatural) by that same person). So the result above should be weak evidence that a significant proportion of the LW'ers fell prey to the conjunction fallacy here, right?

Comment author: Coscott 04 January 2015 07:49:22PM *  1 point [-]

No, I think that a god that does not interfere with the physical universe at all counts as not supernatural by the wording of the question.

My point was that the median of the difference of two data sets is not the difference of the median. (although it is still evidence of a problem)
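The point that the median of the per-person differences need not equal the difference of the medians can be checked with toy numbers (illustrative only, not the survey's data):

```python
from statistics import median

# Every person has P(God) <= P(Supernatural)...
p_god = [0.00, 0.00, 0.50]
p_sup = [0.00, 0.60, 0.50]

# ...so the column medians respect the ordering
# (median P(God) = 0.0, median P(Supernatural) = 0.5),
# but the median of the per-person differences is 0.0,
# not the difference of the medians (0.5):
diffs = [s - g for g, s in zip(p_god, p_sup)]
assert median(p_sup) - median(p_god) == 0.5
assert median(diffs) == 0.0
```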

Comment author: John_Maxwell_IV 05 January 2015 08:55:05AM 5 points [-]

This is not a human universal - people who put even a small amount of training into calibration can become very well calibrated very quickly. This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.

Can someone who's done calibration training comment on whether it really seems to represent the ability to "judge how much evidence you have on a given issue", as opposed to accurately translate brain-based probability estimates into numerical probability estimates?

Comment author: Vaniver 05 January 2015 10:28:49AM 5 points [-]

Can someone who's done calibration training comment on whether it really seems to represent the ability to "judge how much evidence you have on a given issue", as opposed to accurately translate brain-based probability estimates into numerical probability estimates?

As I interpret it, the two are distinct but calibration training does both. That is, there's both a "subjective feeling of certainty"->"probability number" model that's being trained, and that model probably ought to be trained for every field independently (that is, determining how much subjective feeling of certainty you should have in different cases). There appears to be some transfer but I don't think it's as much as Yvain seems to be postulating.

Comment author: Unknowns 05 January 2015 09:26:23AM 1 point [-]

Are these two things significantly different?

Comment author: lmm 05 January 2015 07:36:57PM 2 points [-]

Imagine someone who acted appropriately towards particular risks (maybe not very artificial ones like betting) - someone who did things like saving an appropriate proportion of their income and spending an appropriate amount of their free time on fun-but-dangerous things like skydiving - but who couldn't translate their risk attitudes into numbers.

Comment author: rule_and_line 07 January 2015 03:44:47AM 0 points [-]

I'm having difficulty replacing your quotation with its referent. Could you describe an activity I could do that would demonstrate that I was judging how much evidence I have on a given issue?

Comment author: Username 07 January 2015 10:13:38AM *  0 points [-]

Make money playing poker, maybe?

Comment author: Kaj_Sotala 04 January 2015 12:42:02PM 5 points [-]

This was the first time that I failed to take the survey, because I kept going "I don't have a scanner at home for the finger ratio thing, so I'll do it when I'm at the university" and then each time when I was at the university, I forgot. (I wasn't there often.)

Comment author: Vulture 04 January 2015 06:18:32AM *  5 points [-]

Yayy! I was having a shitty day, and seeing these results posted lifted my spirits. Thank you for that! Below are my assorted thoughts:

I'm a little disappointed that the correlation between height and P(supernatural)-and-similar didn't hold up this year, because it was really fun trying to come up with explanations for that that weren't prima facie moronic. Maybe that should have been a sign it wasn't a real thing.

The digit ratio thing is indeed delicious. I love that stuff. I'm surprised there wasn't a correlation to sexual orientation, though, since I seem to recall reading that that was relatively well-supported. Oh well.

WTF was going on with the computer games question? Could there have been some kind of widespread misunderstanding of the question? In any case, it's pretty clearly poorly-calibrated Georg, but the results from the other questions are horrendous enough on their own.

On that subject, I have to say that even more amusing than the people who gave 100% and got it wrong are the people who put down 0% and then got it right -- aka, really lucky guessers :P

Congrats to the Snicket fan!

This was a good survey and a good year. Cheers!

Comment author: Grothor 04 January 2015 11:15:01PM 9 points [-]

I remember answering the computer games question and at first feeling like I knew the answer. Then I realized the feeling I was having was that I had a better shot at the question than the average person that I knew, not that I knew the answer with high confidence. Once I mentally counted up all the games that I thought might be it, then considered all the games I probably hadn't even thought of (of which Minecraft was one), I realized I had no idea what the right answer was and put something like 5% confidence in The Sims 3 (which at least is a top ten game). But the point is that I think I almost didn't catch my mistake before it was too late, and this kind of error may be common.

Comment author: [deleted] 05 January 2015 10:22:29PM 8 points [-]

The correct answer is Tetris. The question should have been what is the best selling personal computer game of all time? Mobile phones are technically computers too. I'm not sure how much difference that would have made.

Comment author: habeuscuppus 07 January 2015 10:51:10PM 0 points [-]

I interpreted the question to include mobile devices and answered Tetris with high confidence.

It would be interesting to see the results of the question if we accepted either Tetris or Minecraft as the correct answer, since both are correct depending on whether "computer" was meant to mean "IBM PC compatible" or "any platform that plays video games".

Comment author: James_Miller 04 January 2015 05:40:58PM *  8 points [-]

I was confident in my incorrect computer game answer because I had recently read this Wikipedia page, List of best-selling video games, remembered the answer, and unthinkingly assumed that "video games" meant the same thing as "computer games".

Comment author: emr 09 January 2015 10:52:11PM 1 point [-]

On the computer game question: Isn't there an implicit "X is true and X will be marked correct by the rater"? You'd hope these two are clearly aligned, but if you've taken many real-world quizzes, you'll recognize the difference.

Comment author: simon 04 January 2015 06:10:08AM 4 points [-]

Any correlation between digit ratios and gender default?

For the warming question, much of the difference between responses is likely due to differing judgments about what counts as "significant," rather than about what has actually happened or will happen. It would be better to split these components. A first question could ask, for example, how much you think human influence (e.g. CO2 emitted by human activity) has raised the temperature so far (give a value you think has a 50% chance of being above or below the correct one), and how much it will have raised the temperature by 2050 and by 2100. This could be preceded by a graph showing temperature change from all causes over the last 80 years or so, to provide a common anchor point. One or more additional questions could then ask how big a problem people think it is and how much they think should be done about it.

Comment author: buybuydandavis 11 February 2015 09:23:34PM 0 points [-]

What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions?

I remember being annoyed with this question, as I am annoyed with most "news" articles on "the debate", with gibberish such as "Climate denier". That's not even English.

"Significant" and "soon" are both a problem here.

I think your question overcomplicates things, since "due to human activity" is only one component of warming. Just CO2? Greenhouse emissions generally? All the forests and herd animals ever killed? Transformation of species populations? Pollution?

How about asking what the average global temperature will be in the 2050s, according to some existing instrumental measurement regime?

Comment author: gothgirl420666 04 January 2015 07:37:44AM *  8 points [-]

I would be really interested in hearing from one of the fourteen schizophrenic rationalists. Given that one of the most prominent symptoms of schizophrenia is delusional thinking, a.k.a. irrationality... I wonder how this plays out in someone who has read the Sequences. Do these people have less severe symptoms as a result? When your brain decides to turn against you, is there a way to win?

I also find it fascinating that bisexuality is vastly overrepresented here (14.4% in LW vs. 1-2% in US), while homosexuality is not. My natural immediate interpretation of this is that bisexuality is a choice. I think Eliezer said once that he would rather be bisexual than straight, because it would allow for more opportunities to have fun. This seems like an attitude many LW members might share, given that polyamory a.k.a. pursuing a weird dating strategy because it's more fun is very popular in this community. (I personally also share Eliezer's attitude, but unfortunately I'm pretty sure I'm straight.) So to me it seems logical that the large number of bisexuals may come from a large number of people-who-want-to-be-bisexual actually becoming so. This seems more likely to me than some aspect or correlate of bisexuality (and not homosexuality) causing people to find LW.

Alternatively, and now that I think about it probably more realistically, perhaps the vast majority of people in America who are attracted to two genders decide to keep their same-sex attraction to themselves, concluding (arguably rationally) that the added sexual opportunities aren't worth the stigmatization. However, LW members are more likely to be unashamed of being weird, and also more likely to socialize e.g. with a bunch of nerds in the Bay Area, meaning that the risk of stigmatization is much lower.

Or perhaps the true answer is some sort of combination of the two I just postulated.

[Poor calibration] is not a human universal - people who put even a small amount of training into calibration can become very well calibrated very quickly.

Is there a source on this?

Comment author: Gunnar_Zarncke 04 January 2015 10:24:15AM 10 points [-]

I also find it fascinating that bisexuality is vastly overrepresented here.

I don't. Compare it with the OkCupid data analysis. Bisexuality could be more of a signal, at least in the (quite large) OkCupid data.

Comment author: gothgirl420666 04 January 2015 11:36:54AM *  3 points [-]

Oh, wow, that's incredibly strange/interesting, I had never seen that before. Thanks for sharing.

The fact that young bi men are almost always closeted gay men, while old bi men are almost always closeted straight men, is particularly baffling.

Comment author: Izeinwinter 05 January 2015 03:57:06PM 4 points [-]

The first part does not actually follow from the data with any rigor - "Go online to meet people of the same sex, find opposite-sex partners in real life" is a perfectly reasonable strategy, simply because online dating avoids the whole "I'm straight" shot down in flames thing, which must get really old really quickly.

The older guys listing bisexuality and only messaging women, though? Ehhr.. what?

Comment author: CBHacking 07 January 2015 02:27:58PM 3 points [-]

Best guesses at an explanation for that one: 1) A lot of older men had some homosexual experimentation in their past, decided that they therefore count as bi, but are now only interested in heterosexual relationships. 2) A lot of older men choose to signal what they believe to be the desirable characteristic of "sexual adventurousness" to their actual target sexual partner, which is younger women.

Comment author: Username 09 January 2015 05:19:28PM 3 points [-]

Nah, selection bias. You don't go on OK Cupid as a bi man to find men - that's Grindr or other similar sites. Much easier and quicker and more straightforward. But if you're a bi man looking for women, OK Cupid is a good place to go.

Comment author: Nornagest 05 January 2015 12:18:18AM *  3 points [-]

Hypothesis: a large fraction of young men in those results are coming to terms with their sexuality, while a large fraction of old men are trying to signal sexual adventurousness?

Comment author: gothgirl420666 05 January 2015 12:58:10AM 3 points [-]

Yeah, that's what I thought too. I'm just surprised that bisexuality would be something so many men imagine (perhaps correctly?) women are attracted to.

Comment author: Vaniver 05 January 2015 12:11:19AM 3 points [-]

The fact that young bi men are almost always closeted gay men, while old bi men are almost always closeted straight men, is particularly baffling.

I don't find the first part baffling; there's a trope that many gay men go through bisexuality on their way to accepting their homosexuality. (I had a brief period where I identified as bi because I wasn't fully ready to identify as gay.)

Comment author: Toggle 06 January 2015 02:37:58AM 2 points [-]

Same. It's easier to tell people that you have a left hand than it is to tell people you're left-handed, so to speak.

Comment author: Error 04 January 2015 06:21:25PM 1 point [-]

I also find it fascinating that bisexuality is vastly overrepresented here

I'd be interested to see the orientation numbers broken down by sex/gender. My personal experience is that geek/nerd women seem to be bisexual at surprisingly high rates. I'm wondering if having typically-male personal pursuits (e.g. LW) is correlated with typically-male sexual interests (i.e. liking women).

he would rather be bisexual than straight, because it would allow for more opportunities to have fun...(I personally also share Eliezer's attitude, but unfortunately I'm pretty sure I'm straight)

I'm in that boat. Feels like I'm missing out on half the potential fun. :-(

Comment author: Vaniver 05 January 2015 12:08:24AM *  6 points [-]

I'd be interested to see the orientation numbers broken down by sex/gender.

Using the "Sex" (not gender) and "Sexuality" columns, omitting blanks, asexuals, and others:

Male Heterosexual: 999
Male Bisexual: 142
Male Homosexual: 40
Female Heterosexual: 79
Female Bisexual: 62
Female Homosexual: 6

So the male/female ratio by sexuality is:

Heterosexual: 12.6
Bisexual: 2.3
Homosexual: 6.7

The sexuality percentage by sex is:

Male: 84.6% / 12.0% / 3.4%
Female: 53.7% / 42.2% / 4.1%

So while female bisexuality is almost as common as female heterosexuality here, the total bisexual ratio resembles the male bisexual ratio closely, as you would expect from the male/female ratio being so high overall (8 men per woman in this restricted sample).
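The ratios and percentages above follow directly from the six counts; a minimal Python sketch reproducing them (counts taken from this comment):

```python
# Counts from this comment (sex x sexuality; blanks, asexuals, and others omitted).
counts = {
    ("M", "het"): 999, ("M", "bi"): 142, ("M", "homo"): 40,
    ("F", "het"): 79,  ("F", "bi"): 62,  ("F", "homo"): 6,
}

# Male/female ratio within each sexuality.
ratios = {s: counts[("M", s)] / counts[("F", s)] for s in ("het", "bi", "homo")}

# Sexuality percentages within each sex.
totals = {g: sum(v for (gg, _), v in counts.items() if gg == g) for g in ("M", "F")}
pct = {(g, s): 100 * v / totals[g] for (g, s), v in counts.items()}

print({s: round(r, 1) for s, r in ratios.items()})  # {'het': 12.6, 'bi': 2.3, 'homo': 6.7}
print(round(totals["M"] / totals["F"], 1))          # 8.0 men per woman
```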

Comment author: Error 05 January 2015 01:59:25PM 6 points [-]

almost as common as female heterosexuality here, as you would expect

I initially misparsed this as "the female bisexuality rate is as expected." I see that isn't what you meant, but had to re-read two or three times. Just FYI.

I feel like a 42.2% bisexuality rate among LW women is surprising enough to say something, but I'm not sure what.

Comment author: JohannesDahlstrom 05 January 2015 11:19:36PM *  3 points [-]

It is interesting. IME in real life and in OkCupid, female self-identification as bisexual correlates quite strongly with the geek/liberal/poly/kinky meme complex (edit: mirroring your experiences, didn't read carefully enough). Out of my top matches in OkCupid, over 80% of women interested in men seem to self-report as bisexual.

However, also IME, bisexual identification usually doesn't imply being biromantic! Many of those women have had, or would like to have, sexual experiences with other women, but still may prefer men in romantic relationships almost exclusively.

FWIW, I support adding a question about romantic orientation in the next survey.

Comment author: buybuydandavis 11 February 2015 09:44:08PM 3 points [-]

Great line from OkCupid:

The primacy of America's most popular threesome, two dudes and an Xbox, is safe.

Comment author: CBHacking 07 January 2015 02:32:24PM 1 point [-]

Anecdotally, this matches my experience (both on OKC and the "bisexual but hereroromantic" thing with three of my four most recent sexual partners).

Comment author: Vaniver 05 January 2015 11:29:07PM 1 point [-]

I initially misparsed this as "the female bisexuality rate is as expected." I see that isn't what you meant, but had to re-read two or three times. Just FYI.

Grammar modified to be clearer, thanks for pointing that out.

Comment author: roystgnr 05 January 2015 09:56:53PM *  1 point [-]

All I've come up with is a half-formed joke about how human females really are intrinsically attractive after all.

Comment author: NancyLebovitz 04 January 2015 09:37:30PM 4 points [-]

Men who aren't bisexual are missing considerably less than half the potential fun, since the proportion of men who are gay or bisexual is fairly low.

Comment author: gothgirl420666 04 January 2015 10:44:11PM *  1 point [-]

Yeah, but gay men are also more promiscuous.

Comment author: Alsadius 12 January 2015 09:35:06PM 0 points [-]

Is your comparison "than straight men" or "than straight women" here?

Comment author: Baisius 04 January 2015 10:54:20AM *  7 points [-]

I'm losing a lot of confidence in the digit ratio/masculinity femininity stuff. I'm not seeing a number of things I'd expect to see.

First, my correlation numbers don't match yours. Filtering for female gender and for respondents who answered all of BemSexRoleF, BemSexRoleM, RightHand, and LeftHand, I get a correlation of only -0.34 between RightHand and BemSexRoleM, not the -0.433 you report. I get various other differences as well, all weaker correlations than you describe. Perhaps differences in filtering explain this, though the gap between -0.34 and -0.433 seems too large for that.

Second, Bem masculinity and femininity actually seem to have a positive correlation, albeit tiny. So more masculine people are... more feminine? This makes no sense and makes me more likely to throw out the entire data set.

Thirdly, I don't see any huge differences between Cisgender Men, Transgender Men, Cisgender Women, or Transgender Women on digit ratios, and I would expect to see some. I get confidence intervals (mean +/- 3*sigma/sqrt(n), which is actually roughly a 99.7% interval rather than 95%; formatted [Lower Right - Upper Right / Lower Left - Upper Left]) for the categories as follows:

  • F (Cis): [0.949 - 0.996 / 0.956 - 1.004]
  • M (Cis): [0.962 - 0.978 / 0.963 - 0.979]
  • M (Trans): [0.907 - 0.988 / 0.818 - 1.070]
  • F (Trans): [0.935 - 1.002 / 0.935 - 1.019]

There's pretty significant overlap between all 4 categories. I made a dotplot that I can't upload and it doesn't look to me like there's any difference in the distributions, but I don't think we have enough of a sample size to have a meaningful distribution on anything except cis males and maybe cis females.
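For anyone wanting to redo the overlap check, here is a minimal sketch of the interval arithmetic. The means, standard deviations, and sample sizes below are placeholders for illustration, not the survey values (those are in the public spreadsheet):

```python
import math

def interval(mean, sigma, n, z=3.0):
    """mean +/- z * sigma / sqrt(n); z = 3 gives roughly a 99.7% interval for a normal mean."""
    half = z * sigma / math.sqrt(n)
    return (mean - half, mean + half)

def overlap(a, b):
    """True if intervals a and b share any point."""
    return a[0] <= b[1] and b[0] <= a[1]

# Placeholder right-hand digit-ratio summaries (mean, sd, n) -- not the survey data.
cis_m = interval(0.970, 0.030, 1200)
cis_f = interval(0.972, 0.035, 150)
print(cis_m, cis_f, overlap(cis_m, cis_f))
```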

Lastly, I guess I just skipped over the favorite LessWrong post? Not sure what I would have answered, but it would have been either When None Dare Urge Restraint or Intellectual Hipsters and Metacontrarians. Surprised to see neither of those on the list.

Edit: As always, thanks for doing this!

Comment author: simon 04 January 2015 12:15:32PM 7 points [-]

There isn't necessarily any problem with a small positive correlation between masculinity and femininity. The abstract of what I think is the original paper (I couldn't find an ungated version) says that "The dimensions of masculinity and femininity are empirically and logically independent."

Comment author: gwern 04 January 2015 06:23:55PM 7 points [-]
Comment author: Baisius 08 January 2015 01:54:08AM 1 point [-]

It's not clear that this maps to colloquial use of the terms "feminine" and "masculine" then. I think most would consider them opposite ends of the same spectrum.

Comment author: Nornagest 08 January 2015 02:04:14AM *  3 points [-]

There are aspects of the Western gender roles that are opposed to each other at least to some extent: emotionality vs. stoicism, active vs. passive romantic performance. But there are also aspects that aren't. Blue is not anti-pink. Skill at sewing doesn't forbid skill at fixing cars. These might resolve in people's perceptions to positions on some kind of spectrum of male vs. female presentation, but they won't show up that way on surveys measuring conformity with stereotype.

Indeed, that suggests a possible mechanism for these results. Assume for a moment that people prefer to occupy some particular point on the perception spectrum. But people often like stuff for reasons other than gender coding, so it'll sometimes happen that people will be into stuff with gender coding inconsistent with how they'd prefer to be seen. That creates pressure to take up other stuff with countervailing coding. If people respond to that pressure, the net result is a weak positive correlation between stuff with masculine and feminine coding.

Comment author: RicardoFonseca 06 January 2015 03:15:25AM 3 points [-]

So I noticed a few people from Portugal in the public raw data. I am one of them, and from what I saw in the data, the rest are all lurkers.

Viva pessoal! Want to join the community, even if just to say hi? =)

Comment author: Ixiel 04 January 2015 09:05:56PM 3 points [-]

Thanks for all the hard work!

Comment author: Gondolinian 04 January 2015 07:23:46PM 3 points [-]

SAT Scores out of 1600/SAT Scores out of 2400 .844 (59)

I'm surprised that this correlation wasn't higher. They're both pretty much the same test, right?

Three explanations I thought of:

  1. I'm missing something/I have an inaccurate model of the difference between the two tests.

  2. There's a lot of random difference between SAT scores from different testings. If this is true, I would expect there to be a correlation of around .844 between one test score and a later test score under the same grading system.

  3. SAT scores are correlated with age (no idea whether this is true or not) and people take the two tests some time apart, and thus have better scores on the second.

Any ideas?

Comment author: gwern 04 January 2015 07:42:17PM *  9 points [-]

They're both pretty much the same test, right?

I thought they weren't quite the same test, because the 2400 version added the writing subtest.

If this is true, I would expect there to be a correlation of around .844 between one test score and a later test score under the same grading system.

The reliability of recent SAT tests seems to generally be ~0.9 according to one random PDF I found (and has long been high). If I am understanding the formulas in this page correctly, then in this application, reliability simplifies to the Pearson's r of the 2 scores*, and that reliability of 0.9 is pretty similar to the LW old/new correlation r of 0.84.

So this may be simply what one would expect from people taking the SAT twice, without having to invoke the lowered correlation caused by the additional sections and any other tweaks they've made.

* Specifically, I'm looking at Artifactual Influences, #3: reliability, where I think we can reuse the example: for test-retest, assume the LWer doesn't get dumber or smarter and the true correlation would be 1; the reliability of the old SAT should be 0.9, the reliability of the new one should be 0.9 too, so you get '1 * sqrt(0.9 * 0.9)' or 'sqrt(0.9 * 0.9)' or 'sqrt(0.9^2)' or '0.9'. So, the expected correlation of 2 SAT tests simplifies to the original reliability of 0.9.
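The footnote's attenuation arithmetic can be sanity-checked by simulation (my illustration, not gwern's code): draw a true score, add independent noise to two test administrations so each has reliability 0.9, and the observed test-retest correlation should land near sqrt(0.9 * 0.9) = 0.9.

```python
import math
import random

random.seed(0)

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# var(true) = 1; pick var(noise) so reliability = 1 / (1 + var(noise)) = 0.9.
noise_sd = math.sqrt(1 / 0.9 - 1)
true_scores = [random.gauss(0, 1) for _ in range(100_000)]
test1 = [t + random.gauss(0, noise_sd) for t in true_scores]
test2 = [t + random.gauss(0, noise_sd) for t in true_scores]

print(round(pearson(test1, test2), 2))  # ~0.9
```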

Comment author: Vaniver 04 January 2015 07:53:42PM *  8 points [-]

Any ideas?

The psychometric term for #2 is test-retest reliability, and the numbers I've seen for the SAT range between .8 and .95, so I think that is a complete explanation for this phenomenon.

If the 2400 scores (which came later) are higher than the 1600 scores, that's evidence for #3, but comparing them is difficult because they do test different things and are normed differently.

Comment author: shullak7 04 January 2015 05:05:01PM 3 points [-]

It looks like the median age is 27.67, but I'm curious to see the age-range breakdown as I've frequently assumed I'm "old" for the group (over 40). Oh....never mind.....just saw the link to the Excel spreadsheet and will sort it myself.

Comment author: Vaniver 04 January 2015 10:41:47PM *  10 points [-]

1421 respondents supplied their age: 1292 (90.9%) of them were less than 40. The modal age is 25.

Comment author: shullak7 05 January 2015 05:37:23AM 3 points [-]

Thanks Vaniver. I am going to take this to mean that I'm young at heart.

Comment author: Gunnar_Zarncke 04 January 2015 10:50:02AM 3 points [-]

I'd like to see the correlations between Bem Sex Roles and actual Gender answers.

Comment author: Gunnar_Zarncke 04 January 2015 10:25:46AM 3 points [-]

Among the correlations the first I found surprising are

  • Minimum Wage/Feminism .378 (1286)

  • Immigration/Feminism .365 (1296)

I wouldn't have guessed these from the corresponding memeplexes and can't see a plausible relationship. Anyone care to volunteer an explanation?

Comment author: blacktrance 04 January 2015 10:30:34AM 24 points [-]

Support for a higher minimum wage, increased immigration, and feminism are all typically left-wing positions, so it's not surprising that they're found together.

Comment author: Gunnar_Zarncke 04 January 2015 01:16:33PM 5 points [-]

Thanks. Maybe it's obvious to you, but it does surprise me. Maybe it's clearer over there in the U.S. I'm nonetheless surprised by the magnitude if they are only connected via such an unspecific bucket as 'left-wing'.

Comment author: blacktrance 04 January 2015 07:58:15PM 9 points [-]

As Arnold Kling suggests, progressives think of issues on an oppressor-oppressed axis. Women, poor people, and immigrants are all seen as oppressed, which is why feminism, raising the minimum wage, and support for more immigration are positions that are often found together.

Comment author: Prismattic 04 January 2015 09:59:40PM 2 points [-]

In my experience, libertarians tend to think highly of Arnold Kling's taxonomy, and liberals and conservatives do not. I regard it as a Turing test fail.

Comment author: roystgnr 05 January 2015 09:50:13PM 6 points [-]

Could you elaborate on your experience? The liberal philosophizing I've seen seems to go even further than Kling does. He suggests a possibly-subconscious implicit common thread, whereas they often talk explicitly about "punching up versus punching down", or redefine various subcategories of prejudice to only mean "prejudice plus power".

I can think of cases where there's a clear position among the U.S. left wing but that position isn't unambiguously objectively described as "support the oppressed against the oppressor", but even in those cases the activism for that position is usually given that framing.

Comment author: Prismattic 08 January 2015 02:46:48AM 4 points [-]

(Prefacing this by noting that I am not going to get into a normative discussion here of whether liberal values are better or worse than libertarian values. I'm only addressing the question of whether Arnold Kling is accurately framing liberal values.)

I'll leave speaking about what's wrong with the conservative frame for an actual conservative (from my also-outside perspective, it doesn't sound particularly accurate).

But as far as liberalism goes, I think what Kling describes might be an accurate depiction of, say, "social justice" blogs, but those are a subset of liberalism, not the essence of it, and it doesn't describe the way the blue tribe people I grew up around (New England, middle class, disproportionately Jewish) reasoned, nor do I think it captures the way the more wonkish liberal bloggers reason.

More specifically, libertarians think that only libertarians care about freedom, while liberals think that libertarians are privileging one particular, controversial, definition of freedom -- the negative liberty of freedom from government (and only government, and in some but not all cases, specifically Federal but not local government) coercion. The liberals I have always known also think that maximizing freedom is the goal, but we define it as something like the autonomy in practice to flourish. So for example, some (not all) libertarians think the Civil Rights Act reduced aggregate freedom, but pretty much all liberals think it increased it. There is a similar divergence in attitudes about net effect on freedom with regards to regulatory interference in freedom of contract between parties with unequal bargaining power. Etc.

Comment author: alienist 08 January 2015 05:23:08AM -2 points [-]

More specifically, libertarians think that only libertarians care about freedom, while liberals think that libertarians are privileging one particular, controversial, definition of freedom

In other words, liberals are perfectly willing to say they're for "freedom" as long as they're allowed to redefine "freedom" however they want.

Comment author: Luke_A_Somers 05 January 2015 11:17:04PM 3 points [-]

Turing test fail? Where was blacktrance trying to pass as having a particular political position?

Comment author: blacktrance 07 January 2015 03:45:36AM 1 point [-]

I think he meant that Kling, being a libertarian, failed the Turing Test when describing the framework behind the progressive and conservative viewpoints.

Comment author: Luke_A_Somers 07 January 2015 03:58:08PM 3 points [-]

I get that. But he wasn't even TAKING the Turing test. He described it fairly accurately, if in terms that people on the inside wouldn't have used. So?

Comment author: JoshuaZ 04 January 2015 10:22:18PM 1 point [-]

Can you expand on the last sentence?

Comment author: fubarobfusco 04 January 2015 10:46:18PM 3 points [-]

The Ideological Turing Test is a concept invented by American economist Bryan Caplan to test whether a political or ideological partisan correctly understands the arguments of his or her intellectual adversaries. The partisan is invited to answer questions or write an essay posing as his opposite number; if neutral judges cannot tell the difference between the partisan's answers and the answers of the opposite number, the candidate is judged to correctly understand the opposing side.


Comment author: falenas108 04 January 2015 04:20:12PM 9 points [-]

I'm pretty sure it's more than just that, a lot of feminist ideas are about helping typically underprivileged communities. I've seen a lot of stuff on feminist areas about helping the poor and the undocumented as an extension of that.

Comment author: alienist 05 January 2015 05:49:17AM 8 points [-]

That's simply the inside view of blacktrance's point.

Comment author: gjm 06 January 2015 02:36:31AM *  4 points [-]

How do you know?

I mean, we have (at least) two hypotheses. 1: lefties classify people on a scale from "oppressor" to "oppressed" and favour the oppressed. 2: lefties are interested in helping people in bad situations, and there are some categories of people who are systematically much more likely to be in bad situations. It looks as if lefties themselves tend to say #2, whereas various non-lefties say #1.

One explanation, indeed, is that #1 is correct and #2 is what the poor self-deluded lefties think they're doing. Another is that #2 is correct and that #1 is what the poor deluded non-lefties mistakenly see it as. Why should we favour the second of these over the first?

(I'm mostly a leftie. It looks to me as if there's some truth in both #1 and #2, but it seems to me that #1 is a consequence of #2, a special case, more than it's a fundamental motivation that gets rationalized as #2. I expect there are people for whom #1 is primary but don't see any reason to think that's the usual case. It seems like the obvious way for someone to feel #1 but need to deceive themselves that #2 is their real motivation would be if they are themselves in an "oppressed" group, so that #1 is a matter of self-interest. I think a majority of the lefties I know -- and for that matter of the non-lefties -- are fairly "privileged" and for those the "#1 primary, #2 is rationalization" account seems awfully implausible.)

[EDITED to add: 1. I would be interested to hear from either of the people who downvoted me why they thought this comment merited a downvote. It looks OK to me. 2. In the 8 hours since it was posted, I am down 31 karma points. I wonder what account name The Artist Formerly Known As Eugine_Nier is using now.]

Comment author: blacktrance 07 January 2015 03:51:07AM 2 points [-]

I'm not a progressive, but I don't see 1 and 2 as mutually exclusive. 1 is just a different way of stating 2 - leftists classify people on an oppressor-oppressed axis, where the oppressed are people perceived to be in bad situations.

Comment author: gjm 07 January 2015 09:13:36AM 1 point [-]

I think "oppressed" is more specific than "in a bad situation", and "oppressor" is much more specific than "in a comfortable situation".

Saying that lefties classify people on an oppressor/oppressed axis suggests that they're addicted to what's sometimes called "politics of envy" -- it's not enough to help the poor, the rich must be made to suffer because they are evil oppressors, etc. I'm sure there are people who think (and feel) that way, but I think it's a straw man if presented as an analysis of how lefties generally see the world.

I think most lefties would agree with me that when people are in bad situations it doesn't have to be because anyone's oppressing them. They might just have been unlucky, or they might in some sense have done it to themselves (one place where Left and Right commonly disagree: on the left, this is not usually taken to mean that they shouldn't be helped). And I think most lefties would agree with me that someone very comfortably off is not necessarily oppressing anyone.

(There are some who would say that our society systematically favours some groups and screws others over, to the benefit of the former at the expense of the latter, and that that means that being a rich straight white educated able-bodied man does make you in some sense an oppressor. I, and I think many others, largely agree with the first bit of that but think the conclusion that the rich (etc.) are oppressors is misguided: to benefit from an oppressive system is not necessarily to be an oppressor. And I, and I think many others, think "oppressed" is too strong and too specific a word to describe the ways in which things are bad for most statistically-disadvantaged groups.)

Comment author: Fluttershy 04 January 2015 08:17:12AM 6 points [-]

Good job on running the survey and analyzing the data! I do wish that one of the extra credit questions had asked whether or not readers were fans of My Little Pony: Friendship is Magic.

Comment author: Petter 04 January 2015 11:50:09PM 4 points [-]

This post had more statements of the type “p < 0.01” than I would expect at LW. I recently read “Frequentist Statistics are Frequently Subjective” here.

Comment author: Luke_A_Somers 05 January 2015 09:43:29PM 3 points [-]

It's easy to calculate, and as long as you keep in mind what it means, it's not bad to include.

Comment author: lirene 04 January 2015 02:24:17PM 4 points [-]

I only identify with my birth gender by default: 681, 45.3%

I'm surprised at this. Is there a special term for "only identifying with one's gender by default" or keywords I can use to look for statistics for among the general population? (a brief googling didn't uncover anything). I would've guessed this number to be much lower, and now I'm wondering whether this is signaling or whether my model of other people in this particular instance is completely wrong.

Comment author: Kaj_Sotala 04 January 2015 07:40:32PM 5 points [-]

It's not an established term in the sense that people would have conducted research on it, as far as I know: unless I'm mistaken, it comes from here.

Comment author: arundelo 04 January 2015 08:06:11PM *  6 points [-]
Comment author: lirene 06 January 2015 04:09:44PM 1 point [-]

Thank you both for providing the links. I will wait and see whether the percentage stays the same in the 2015 survey...

Comment author: FiftyTwo 18 January 2015 01:38:55AM 2 points [-]

Personally I was surprised so many cis people strongly identified with their gender.

[typical mind etc...]

Comment author: Stuart_Armstrong 15 January 2015 04:45:35PM 2 points [-]

Thanks Alex and Ozy!

Comment author: tog 08 January 2015 04:55:41PM 2 points [-]

To compare, are there any public stats on LessWrong readership, such as how many new and returning visitors the site gets?

Comment author: hxka 05 January 2015 09:32:28PM 2 points [-]

It looks to me like everyone was horrendously underconfident on all the easy questions, and horrendously overconfident on all the hard questions.

Had you ranked the questions by easiness before you looked at the results?

Comment author: Leonhart 04 January 2015 11:19:53PM 2 points [-]

I want to be friends with the write-in worshiper of CelestAI mentioned :) PM me if you like!

Comment author: Luke_A_Somers 05 January 2015 09:16:46PM 2 points [-]

I'd be on board with Celestia, but CelestAI? No.

Comment author: Grothor 04 January 2015 10:45:58PM 2 points [-]

Under the profession listings, it says 35 people and 4% for Business. 35 is 2.3% of 1500.

Comment author: dspeyer 04 January 2015 10:46:18AM 2 points [-]

Do I understand correctly that a more masculine finger ratio correlated strongly to support for feminism in both men and women?

I am also amused to note that, despite our extreme sex ratio, our Bem gender masculinity and femininity scores are almost exactly equal -- way below error.

Comment author: simon 04 January 2015 02:50:25PM *  2 points [-]

No, as I interpret what Yvain wrote, a more feminine digit ratio correlated to support for feminism in men and the correlation was not quite statistically significant in women.

Comment author: tog 08 January 2015 06:07:16PM 3 points [-]

It's interesting to compare these results to those of the 2014 Survey of Effective Altruists. These will be released soon, but here are some initial ways in which effective altruists who took this survey compare to LessWrong census takers:

  • Somewhat less male (75% vs 87%)
  • More in the UK
  • Equally atheist/agnostic
  • More consequentialist (74% vs 60%)
  • Much more vegan/vegetarian (34% vs 10%)
  • Witty, attractive, and great company at parties
Comment author: Username 08 January 2015 09:03:40PM 0 points [-]

For someone who isn't an EA, and therefore shouldn't take this survey, is there a place where we can see the results?

Comment author: RyanCarey 08 January 2015 09:29:03PM *  0 points [-]

AFAIK, the results aren't out yet, but they'll go on effective-altruism.com when they are.

Comment author: tog 09 January 2015 10:25:21AM 0 points [-]

Correct; Peter Hurford is working on them and will, I believe, finish them soon.

Comment author: Tenoke 04 January 2015 12:33:14PM 3 points [-]

What's up with a whole 10% being 'Atheist and spiritual'? It doesn't seem to be a family thing, as you get only 4.9% with that belief in the family section, and the numbers don't match up with the P(Supernatural) question.

I was worried about this last year when it was 8.1%, and the number seems to be increasing. Is this Will Newsome's post-rationality faction or what?

Comment author: Kaj_Sotala 04 January 2015 12:51:16PM 9 points [-]

"Spiritual" doesn't necessarily imply a belief in anything supernatural: as Wikipedia puts it,

Since the 19th century spirituality is often separated from religion, and has become more oriented on subjective experience and psychological growth. It may refer to almost any kind of meaningful activity or blissful experience, but without a single, widely-agreed definition.

I've sometimes marked myself in the "atheist and spiritual" category in the surveys, though not always since I've been a little unsure of what exactly is meant by it. When I have, I've taken "spiritual" to refer to practices like engaging in meditation (possibly with the intention of seeing through the illusion of the self), seeking to perceive a higher meaning in everything that one does, enjoyment of ritual, cultivation of empathy towards other people, looking to connect with nature, etc.

Comment author: fubarobfusco 04 January 2015 04:32:07PM 6 points [-]

It seems to me that religious belief and religious practice should be distinguished. The current questions are about religious belief and family background. Perhaps a question like this:

How many times in the past year have you done the following practices? Estimates are OK. The question here is whether you did the thing, not whether you believed in it, made it a habit, or did it voluntarily.

  • Attended a religious service?
  • Attended a "regular" religious service (e.g. a weekly church or synagogue service, Muslim daily prayers, etc.; not a wedding, funeral, or holiday service)?
  • Prayed to God, gods, saints, or other religious figures outside of a religious service?
  • Done yoga, qigong, or another movement practice derived from religious or spiritual beliefs?
  • Used or participated in psychic or fortune-telling practices (palmistry, ouija, I Ching, tarot)?
  • Meditated?
  • Meditated for thirty minutes or more in a single session?
Comment author: Tenoke 04 January 2015 01:36:49PM *  2 points [-]

I'm sceptical that this interpretation makes sense in a question about religious views, but I guess it may explain it.

Comment author: Leonhart 04 January 2015 11:09:40PM *  12 points [-]

Data point: I picked this option, because of a grab-bag of vaguely related positions in my head that make me feel dissatisfied with the flat "atheist" option, including:

  • I enjoy and endorse rituals such as the Solstice celebration, as opposed to the set here who are triggered by them (ETA: not in any way claiming they are wrong to be so triggered, or don't have reasons)
  • I find the Virtues, and other parts of the Sequences with similar styling, to be deeply moving and uplifting, and consider this element of our house style to be a strength rather than a liability
  • We worry too damn much about the c-word, in a pointless attempt to appease the humourless, and we've compromised too much of our aesthetic identity doing it
  • Scott's Moloch isn't actually the Devil, but maybe acting as if is a good strategy for recruiting all parts of our minds to the fight. Ditto for Elua
  • After some experimentation, I think I understand better what the mindstate associated with "worshiping" actually feels like (really damn good) and suspect that the emotional benefits are totally available even if you know the targeted god doesn't exist

(I actually wish it was reversed to "religious but not spiritual", because "spiritual" feels more like the "supernatural/irreducibly mental" word, whereas "religious" feels more of a piece with perfectly sensible things like not breaking my word even to save humanity. But that's just me.)

I have no idea whether this is remotely related to postrationalism; if anyone actually knows what postrationalism is, please write a FAQ. I do miss Newsome though; he wrote my favourite ever LW sentence.

Comment author: covaithe 06 January 2015 05:36:34PM 2 points [-]

I quite like this formulation, and if I had thought of it at survey time I might well have answered 'atheist(spiritual)' instead of 'atheist(nonspiritual)'.

Regarding emotional benefits: I sing in moderately serious classical choirs, where inevitably much of the music is set to religious texts. I get some but not all of the emotional benefits from this that I used to get from religious worship, back when I was a committed theist. I think I would get more benefits if the texts were not religious, and still more if the texts were humanist / rationalist / expressed beliefs that I actively profess.

Comment author: Kaj_Sotala 05 January 2015 10:41:54AM 2 points [-]

I have no idea whether this is remotely related to postrationalism; if anyone actually knows what postrationalism is, please write a FAQ.

There's Postrationality, Table of Contents, though the author hasn't written any follow-up posts yet.

Comment author: RichardKennaway 05 January 2015 05:34:02PM 3 points [-]

There's Postrationality, Table of Contents, though the author hasn't written any follow-up posts yet.

Postrationality appears to stand in the same relation to rationality as Romanticism did to the Enlightenment. That is, a falling away from the Way, not a progression past it; the easy, broad path and not the strait and narrow path that must be walked to hit the small target of truth.

Comment author: fubarobfusco 05 January 2015 05:55:51PM *  3 points [-]

In particular, rationality tends to give advice like “ignore your intuitions/feelings, and rely on conscious reasoning and explicit calculation”.

I can't tell if this is an Ideological Turing Test failure, or just a lie.

Comment author: Leonhart 05 January 2015 06:32:59PM 1 point [-]

Upvoted for informing me that "straight and narrow" was a malformation. Also, yes.

Comment author: Kaj_Sotala 04 January 2015 07:41:56PM 5 points [-]

A question about religious views seems like the perfect place to signal that you're not religious but still like some of the things that are commonly associated with religion.

Comment author: lmm 04 January 2015 10:20:01AM 3 points [-]

As always, thanks for doing this.

people can both be asexual and have a specific orientation

Huh? This is worded as a question about orientation rather than practice, so people who have an orientation have an orientation, no? Or is the issue something else?

Why are some results ordered and some listed alphabetically? That's a bit confusing.

What's this about easily becoming calibrated through training? Bonus for a mainstream-ish source rather than the LW-sphere.

Comment author: falenas108 04 January 2015 04:20:54PM 2 points [-]

Huh? This is worded as a question about orientation rather than practice, so people who have an orientation have an orientation, no? Or is the issue something else?

People can be asexual but, say, homoromantic.

Comment author: lmm 04 January 2015 08:30:05PM 2 points [-]

Sure. And such people would be asexual and answer the survey as asexual. What's the issue?

Comment author: philh 04 January 2015 10:33:35PM 3 points [-]

Romantic preferences were implicitly tied into that question, because there wasn't a separate question for them, so such a person could reasonably answer homosexual.

Alternatively, one could have weak sexual preferences in a particular orientation, like "having sex with guys is kind of fun if there's nothing better to do, but I wouldn't have sex with girls".

Comment author: lmm 05 January 2015 07:27:55PM 2 points [-]

Did anyone want to know about romantic orientation? I mean, if that's something people are interested in then sure, let's add a question for it, but I don't think that's a problem with a question about sexual orientation.

Alternatively, one could have weak sexual preferences in a particular orientation, like "having sex with guys is kind of fun if there's nothing better to do, but I wouldn't have sex with girls".

That's a problem with any kind of discrete categorization of sexual orientation, nothing to do with confusing it with romantic orientation. You could argue for e.g. Kinsey scale (adjusted to include asexuality at similar precision) rather than the four choices we have, but there will always be edge cases.

Comment author: Kaj_Sotala 05 January 2015 08:35:39PM 2 points [-]

I think most people don't realize that sexual and romantic orientation even can be distinct, so if there's a question about "sexual orientation", one never knows what's actually meant - sexual orientation, romantic orientation, or both - unless it's explicitly specified.

Comment author: therufs 04 January 2015 03:01:14PM 1 point [-]
Comment author: Gram_Stone 09 January 2015 02:00:21PM *  2 points [-]

The questions about sexual orientation and gender identity seem to be quite well thought-out, as I would expect, but I have some suggestions in that regard for next year's survey:

1) Include the option to report as a man who has sex with men or a woman who has sex with women.

This could either be incorporated into the question of sexual orientation, or, perhaps more appropriately, because it is not technically a sexual orientation, it could be included as a standalone question.

2) Include the option to report one's sexual orientation using the Klein Sexual Orientation Grid.

If the Grid is too complex, the Kinsey scale is a simpler alternative. However, this survey is already so complex that I fail to see how that case could be made.

Rationale for 1):

Warning: Long anecdote! TL;DR is below. Inexorably, it comes with the territory of men who self-report as MWHSWM. Even though I am male, I have a lot of fun feeling feminine. This was the reason that I began to have sex with other men, rather than any attraction to the men themselves (although I have actually grown to appreciate the male form over the years, and now I do become aroused when I see a penis or a fit, well-dressed man.) I have always had very nice hair, and am usually too lazy to cut it on a regular basis; I am spindly; and when I was a young teenager and my bones were still growing and my shoulder-to-hip ratio was closer to 1:1, in the right clothes I could pass for a moderately attractive girl. I certainly cannot be mistaken for a woman at this age, but sometimes people mention in passing that in subtle ways I am quite feminine in appearance for a male, and not in a disparaging way; I enjoy that! Yet, I have never wanted to have hormone treatments or sex reassignment surgery. There are days when I enjoy feeling masculine, and I have never felt trapped by my male form. Oral and anal sex are enough for me anyway; even if receiving vaginal sex is fun to imagine, I consider it hardly worth losing the ability to penetrate with an innervated sex organ! That's what lucid dreams are for. My gender identity has always been very fluid. Should certain future events come to pass, that fluidity will likely become (virtually) physical. Reporting as a MWHSWM comes into play for me when I consider how to relate this to someone else. LessWrong is one of few places where I could explain all of this. There are many places where people would become repulsed or bored (maybe some of you are) before I finish my paragraph-long explanation of my sexuality. There are other places where people would think I was weird. (And there are still more where people would plot to slaughter me.) For this reason, if I can get away with it safely, I truncate this to: "I'm bisexual." 
This is inarguably a lossy compression, and one for which there is no need here.

TL;DR: Men who identify as heterosexual may self-report as bisexual and skew the results if they can't self-report as a heterosexual man who has sex with men.

Rationale for 2):

The Grid considers how sexual preferences have changed over time, and even what sexual preferences one prefers oneself to have, among many other things. I think that we could get some really interesting results from that. Someone said in this comment that Eliezer has said that although he identifies as heterosexual, he actually has a preference to be bisexual because there are more opportunities for fun! That's just one person and one example.

Comment author: [deleted] 09 January 2015 03:19:05PM 2 points [-]

In my opinion, sexual orientation should always be relative to one's gender, not one's biological sex. The MSM or WSW identifier violates this to some extent (particularly as it is used by, e.g., the CDC) and causes, again imo, more problems than it's worth.

The KSOG also seems plagued by this conflation.

Comment author: Gram_Stone 09 January 2015 04:30:20PM *  2 points [-]

I can see how it would be wrong to claim that someone else is a MSM or a WSW, but I don't see how it could be wrong to give someone the option to self-report that way since they won't use the label if they disagree with it, and especially since this data won't be used for anything. Can you elaborate on what sorts of problems you think it would cause?

At any rate, I agree that sexual identity should be self-determined, and therefore relative to one's gender identity rather than one's biological sex. To that end, perhaps the question on sexual orientation should include the option to identify as androphilic, gynephilic, or ambiphilic, or perhaps there should be separate questions for identifications of orientation that incorporate gender identity and those that do not.

Comment author: [deleted] 09 January 2015 05:51:28PM 2 points [-]

Can you elaborate on what sorts of problems you think it would cause?

MSM is a confused category with multiple edge cases. Does "male" refer to sex or gender? It has somewhat troubling connotations with promiscuity that don't always make sense, as it is sometimes applied to otherwise asexual males in homosexual relationships. It's not at all clear that the demographic it was designed to apply to (otherwise straight men who have specifically anal sex with other men) feel more comfortable identifying as MSM than gay or bisexual.

I can understand that you feel this is the correct label for your situation, but it is somewhat fraught with historical baggage.

especially since this data won't be used for anything.

I don't think we actually know this.

Comment author: Gram_Stone 09 January 2015 06:14:33PM *  1 point [-]

I wasn't aware of these controversies, but I don't see how the historical baggage is relevant at LW. I think that most everyone here is going to realize that a 'man who has sex with men' is a person who self-identifies as male and has sex with others whom they identify as male. Furthermore, people will only be applying the label to themselves. You just implicitly gave me the option to use it, so I guess I should ask: Is there anything that makes me different from anyone else, or any reason that you wouldn't give everyone the same option simultaneously?

I don't think we actually know this.

I concede that point. It's also true that those who don't want their data to be used have the option not to take the survey or to have their data removed from the public results. On the other hand, I'm not sure how many people that would discourage or how much it would skew the results.

Also, I missed this:

The KSOG also seems plagued by this conflation.

I agree. I see that this is the case in its use of the terms heterosexual, homosexual, etc. We could fix this simply by modifying it to use the terms androphilic, gynephilic, etc., and including a term each for gender identity and biological sex. It would make no such conflation while preserving the information provided by the existing labels, and thereafter provide even more information. Furthermore, if people want to interpret the data using the previous labels of heterosexual, etc., one could derive that from the data produced from the scheme that I just proposed, or there could be a separate question with the original labels. (Separate questions or the use of both gender-loaded and gender-neutral terms in the same question would also implicitly provide data on the respective popularities of gender-loaded and gender-neutral terms for sexual orientation, though it would be skewed by people whose usage patterns are changed by the question itself. That could be rectified by a follow-up question asking what term one used to self-identify before reading the previous question, if you want to go nuts with it. LWers often do.)

Lastly, it looks like someone bombed this comment thread and then proceeded to bomb the shit out of almost all of my posts, so just so you know, that wasn't me. I've compensated by upvoting your comments even though I didn't in the first place and still don't agree with them in entirety. Nevertheless, I don't want people to see negative numbers and then dismiss this thread, or either of us, completely.

Comment author: Gunnar_Zarncke 04 January 2015 10:43:12AM *  2 points [-]

I think one logical correlation following from the Simulation Argument is underappreciated in the correlations.

I spotted this in the uncorrelated data already:

  • P Supernatural: 6.68 + 20.271 (0, 0, 1) [1386]

  • P God: 8.26 + 21.088 (0, 0.01, 3) [1376]

  • P Simulation 24.31 + 28.2 (1, 10, 50) [1320]

Shouldn't evidence for simulations - and apparently the median belief is 10% for simulation - be evidence for supernatural influences, for which there is a 0% median belief (not even 0.01)? After all, a simulation implies a simulator, and thus a more complex 'outer world' doing the simulating, disabling Occam's-razor-style arguments against gods.

Admittedly there is a small correlation:

  • P God/P Simulation .110 (1296)

Interestingly this is on the same order as

  • P Aliens/P Simulation .098 (1308)

but there is no correlation listed between P Aliens/P God. Thus my initial hypothesis - that the 0.11 correlation reflects aliens running the simulation being counted as gods - is invalid.

Note that I mentioned simulation as weak argument for theism earlier.

Comment author: gwern 05 January 2015 05:02:17AM 7 points [-]

Shouldn't evidence for simulations - and apparently the median belief is 10% for simulation - be evidence for Supernatural influences

A simulation is still a naturalistic non-supernatural thing, and it would just mean we see less of the universe than we thought we do. The question was, after all:

What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe?

Comment author: [deleted] 05 January 2015 08:38:12PM 1 point [-]

It depends on your definition of supernatural, and most people on LessWrong seem to have a very narrow definition of supernatural. I think Eliezer once wrote a post about it, but I don't believe he cited any references. Some definitions of supernatural would require many people on here to revise their estimate significantly upward. I took the lack of a definition to mean we should use any and all possible definitions of supernatural when considering the question, which is why I picked 100 percent. There's actually been a discussion on whether simulations imply God, and most answered no. I thought the reasoning some used for that was rather peculiar. That discussion of course didn't include any citations either.

Comment author: Leonhart 06 January 2015 12:02:41AM 3 points [-]

You're thinking of this one, and he cited Carrier, and we have this argument after every survey. At this point it's a Tradition, and putting "ARGH LOOK JUST USE CARRIER'S DEFINITION" on the survey itself would just spoil it :)

Comment author: goocy 06 January 2015 09:42:54AM *  2 points [-]

Charity: 1996.76 + 9492.71

For a statistician, this is insane. Taken at face value, it would mean that a sizable chunk of respondents actually receives money from charity.

You seem to assume that every dataset has an inherent mean and standard deviation. But means and standard deviations are the results of modeling a Gaussian distribution, and if the model fit is too bad, these metrics simply don't apply to the dataset.

The Lilliefors test was created for exactly this purpose: it tests whether a dataset deviates significantly from a normal distribution. Please use it, or leave out means and standard deviations altogether. The percentiles are (in my - very biased - opinion) much more helpful anyway.

Comment author: satt 06 January 2015 02:56:06PM *  9 points [-]

But means and standard deviations are the results of modeling a gaussian distribution, and if the model fit is too bad, these metrics simply don't apply for this dataset.


Means and standard deviations are general properties one can compute for any statistical distribution which doesn't have pathologically fat tails. (Granted, it would've been conceptually cleaner for Yvain to present the mean & SD of log donations, but there's nothing stopping us from using his mean & SD to estimate the parameters of e.g. a log-normal distribution instead of a normal distribution.)
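
For concreteness, here is a minimal stdlib-Python sketch (mine, not from the thread) of the method-of-moments step satt alludes to: recovering log-normal parameters from a reported raw mean and SD.

```python
import math

def lognormal_params_from_moments(mean, sd):
    """Method of moments: recover log-normal (mu, sigma) from the raw mean and SD.

    For X ~ LogNormal(mu, sigma):
      E[X]   = exp(mu + sigma^2 / 2)
      Var[X] = (exp(sigma^2) - 1) * exp(2*mu + sigma^2)
    Solving gives sigma^2 = ln(1 + (sd/mean)^2), mu = ln(mean) - sigma^2 / 2.
    """
    sigma2 = math.log(1 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2
    return mu, math.sqrt(sigma2)

# Yvain's reported charity figures: mean 1996.76, SD 9492.71.
mu, sigma = lognormal_params_from_moments(1996.76, 9492.71)
```

With those charity figures this gives roughly mu ≈ 6.0 and sigma ≈ 1.8 on the log scale.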

Comment author: Meni_Rosenfeld 16 February 2015 08:42:31PM 2 points [-]

Is the link to "Logical disjunction" intentional?

Comment author: satt 21 February 2015 12:16:14PM 0 points [-]

It isn't! Thanks for catching that, I've fixed the link.

Comment author: gjm 09 September 2015 12:09:49PM 1 point [-]

You can indeed compute means and standard deviations for any distribution with small enough tails, but if the distribution is far from normal then they may not be very useful statistics. E.g., an important reason why the mean of a bunch of samples is an interesting statistic is that if the underlying distribution is normal then the sample mean is the maximum-likelihood estimator of the distribution's mean. But, e.g., if the underlying distribution is a double exponential then the max-likelihood estimator for its position is the median rather than the mean. Or if the distribution is Cauchy then the sample mean is just as noisy as a single sample.
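
The Cauchy case is easy to demonstrate numerically. A stdlib-Python sketch (simulated data, not survey data):

```python
import math
import random
import statistics

random.seed(0)

# Draw standard Cauchy samples via the inverse-CDF method:
# if U ~ Uniform(0, 1), then tan(pi * (U - 0.5)) is standard Cauchy.
cauchy = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(10001)]

# The sample median is a stable estimator of the location parameter (0 here);
# the sample mean does not converge and can land essentially anywhere.
med = statistics.median(cauchy)
avg = statistics.mean(cauchy)
```

Re-running with different seeds, the median stays tightly near zero while the mean jumps around, which is the "just as noisy as a single sample" behaviour.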

Comment author: DTX 07 September 2015 10:43:45PM 0 points [-]

I'd expect a Pareto distribution for charitable donations, not log-normal, and that's exactly what the histogram looks like:

charitable donations

Looks like alpha < 2, so the variance is infinite.

Comment author: satt 09 September 2015 02:08:49AM *  3 points [-]

Thanks for prompting me to take a closer look at this.

The distribution is certainly very positively skewed, but for that reason that histogram is a blunt diagnostic. Almost all of the probability mass is lumped into the first bar, so it's impossible to see how the probability distribution looks for small donations. There could be a power law there, but it's not obvious that the distribution isn't just log-normal with enough dispersion to produce lots of small values.

Looking at the actual numbers from the survey data file, I see it's impossible for the distribution to be strictly log-normal or a power law, because neither distribution includes zero in its support, while zero is actually the most common donation reported.

I can of course still ask which distribution best fits the rest of the donation data. A quick & dirty way to eyeball this is to take logs of the non-zero donations and plot their distribution. If the non-zero donations are log-normal, I'll see a bell curve; if the non-zero donations are Pareto, I'll see a monotonically downward curve. I plot the kernel density estimate (instead of a histogram 'cause binning throws away information) and I see

Kernel density estimate of distribution of logged non-zero donations

which is definitely closer to a bell curve. So the donations seem closer to a lognormal distribution than a Pareto distribution. Still, the log-donation distribution probably isn't exactly normal (looks a bit too much like a cone to me). Let's slap a normal distribution on top and see how that looks. Looks like the mean is about 6 and the standard deviation about 2?

Kernel density distribution of logged non-zero donations, with normal distribution

Wow, that's a far closer match than it has any right to be! Admittedly, if I ask R to run a Lilliefors test, the test rejects the hypothesis of normality (p = 0.0007), and it remains the case that the donations are neither log-normal nor power-law distributed because some of the values are zero. But the non-zero donations look impressively close to a log-normal distribution, and I really doubt a Pareto distribution would fit them better. (And in general it's easy to see Pareto distributions where they don't really exist.)

Comment author: gwern 09 September 2015 06:54:44PM 0 points [-]

Admittedly, if I ask R to run a Lilliefors test, the test rejects the hypothesis of normality (p = 0.0007), and it remains the case that the donations are neither log-normal nor power-law distributed because some of the values are zero.

As I understand it, tests of normality are not all that useful because: they are underpowered & won't reject normality at the small samples where you need to know about non-normality because it'll badly affect your conclusions; and at larger samples like the LW survey, because real-world data is rarely exactly normal, they will always reject normality even when it makes not the slightest difference to your results (because the sample is now large enough to benefit from the asymptotics and various robustnesses).

When I was looking at donations vs EA status earlier this year, I just added +1 to remove the zero-inflation, and then logged donation amount. Seemed to work well. A zero-inflated log-normal might have worked even better.
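
The "+1 then log" transform is just `log1p`; a one-liner sketch with invented donation numbers:

```python
import math

# Hypothetical donations; zeros (non-donors) are common in the survey data.
donations = [0, 0, 10, 100, 5000]

# log1p(x) = log(1 + x) maps 0 -> 0, so zero-donors stay in the sample
# instead of being dropped before the log transform.
transformed = [math.log1p(d) for d in donations]
```

This keeps the zero-inflation visible as a spike at zero rather than discarding it, at the cost of slightly distorting the small non-zero values.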

Also, you don't have to look at only one year's data; you can look at 3 or 4 by making sure to filter out responses based on whether they report answering a previous survey.

Comment author: satt 09 September 2015 10:38:11PM *  0 points [-]

As I understand it, tests of normality are not all that useful because: they are underpowered & won't reject normality at the small samples where you need to know about non-normality because it'll badly affect your conclusions; and at larger samples [...], because real-world data is rarely exactly normal, they will always reject normality even when it makes not the slightest difference to your results

I agree that normality tests are too insensitive for most small samples, and too sensitive for pretty much any big sample, but I'd presumed there was a sweet spot (when the sample size is a few hundred) where normality tests have decent sensitivity without giving everything a negligible p-value, and that the LW survey is near that sweet spot. If I'd been lazy and used R's out-of-the-box normality test (Shapiro-Wilk) instead of following goocy's recommendation (Lilliefors, which R hides in its nortest library) I'd have got an insignificant p of 0.11, so the sample [edit: of non-zero donations] evidently isn't large enough to guarantee rejection by normality tests in general.

Also, you don't have to look at only one year's data; you can look at 3 or 4 by making sure to filter out responses based whether they report answering a previous survey.

Certainly. It might be interesting to investigate whether the log-normal-with-zeroes distribution holds up in earlier years, and if so, whether the distribution's parameters drift over time. Still, goocy's complaint was about 2014's data, so I stuck with that.

Comment author: skeptical_lurker 05 January 2015 11:12:10PM *  0 points [-]

Has the digit ratio/feminism/immigration thing been published elsewhere? If not, this is an interesting novel result.

It's also kinda worrying IMO, as it seems to indicate that (surprise, surprise) political reasoning is far more emotional than logical, logic being independent of the hormone balance of the person doing the reasoning. This also explains how open borders and basic income are both really popular, despite being mutually exclusive AFAICT (if one country has both these policies, then loads of poor people would move there, and the system would collapse; if you say that one has to have been resident for n years before being eligible for basic income, then immigrants are second-class citizens, which has to cause resentment). In fact, the possibility that mass automation might make basic income necessary is a powerful reason why I am sceptical of open borders.

If being dominant increases testosterone then this leads to a feedback effect where feminist policies make men less dominant, which lowers testosterone, which makes men more feminist.

Studies published in the Journal of the American College of Cardiology, the journal Diabetes Care, the journal Heart and other major medical journals show that low testosterone levels not only lead to obesity, loss of muscle, weak bones and depression, but also increase the odds of heart disease, diabetes, Alzheimers and other major health problems.

(via http://www.washingtonsblog.com/2012/04/man-up-boost-your-testosterone-level-for-health-power-and-confidence.html)

So... is this scientific proof that feminism and belief in open borders are correlated with weakness and poor health (in men)? I know this sounds like I am trying to annoy people, but I can't see a more charitable way to interpret the data. Of course, these correlations are rather weak, and 'feminism' is an umbrella term covering a huge range of opinions.

Comment author: Kaj_Sotala 06 January 2015 10:10:20AM 3 points [-]

It's also kinda worrying IMO, as it seems to indicate that (surprise, surprise) political reasoning is far more emotional than logical, logic being independent of the hormone balance of the person doing the reasoning.

I don't disagree with the conclusion of political reasoning being more emotional than logical, but "hormone balance affects one's political stance" doesn't necessarily imply that conclusion. Hormone balance could also affect your values. So even if everyone was a perfectly logical reasoner, hormone balance could still put them in different political camps since their values would be different. (The most stereotypical hormone -> values pathway that comes to mind would be that both women and left-wing groups are generally thought to put a higher value on the care/harm axis of the MFT than men/right-wing groups do.)

Comment author: skeptical_lurker 06 January 2015 03:56:46PM 3 points [-]

Yes, you are quite right. My mistake.

Comment author: gwern 14 May 2015 01:06:05AM 1 point [-]

Looking at 2013/2014 effective altruist donation amounts: http://lesswrong.com/r/discussion/lw/m6o/lw_survey_effective_altruists_and_donations/

Comment author: BoilingLeadBath 10 January 2015 03:44:32AM 1 point [-]

If anyone else is interested in them, I'm willing to score, count, and/or categorize the responses to the "Changes in Routine" and "What Different" questions.

However, I've started to try and develop a scheme for the former... and I've hit twenty different categories (counting subcategories) and will probably end up with 5-10 more if I don't prune them down.

What sort of things do you think might be interesting to look for?

(Though I haven't started to do work on paper, the latter seems like a much simpler problem. However, if you have thoughts on the selection of bins, please share them.)

(As a note: I would be able to modify the .xls or such, but someone else would have to do the stats; I haven't developed practical skills in that field yet, so the turnaround time would be awful.)
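For the categorization itself, a simple keyword-to-category mapping might be enough as a first pass; the categories and keywords below are hypothetical placeholders, not the actual scheme under development:

```python
# Minimal sketch of keyword-based bucketing for free-text survey answers.
# CATEGORIES is made-up sample data; the real scheme would need the ~20
# categories mentioned above.
CATEGORIES = {
    "exercise": ["exercise", "gym", "running"],
    "diet": ["diet", "eating", "vegetarian"],
    "sleep": ["sleep", "bed"],
}

def categorize(response):
    """Return the set of category labels whose keywords appear in the response."""
    text = response.lower()
    return {cat for cat, words in CATEGORIES.items()
            if any(w in text for w in words)}

print(categorize("Started running and fixed my sleep schedule"))
# -> {'exercise', 'sleep'} (in some order)
```

A response can land in several bins at once, which matches how people describe multiple routine changes in one answer; anything that matches no keyword would go into a manual-review pile.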

Comment author: EternalStargazer 09 January 2015 02:32:31AM 1 point [-]

Birth Month
Jan: 109, 7.3%
Feb: 90, 6.0%
Mar: 123, 8.2%
Apr: 126, 8.4%
Jun: 107, 7.1%
Jul: 109, 7.3%
Aug: 120, 8.0%
Sep: 94, 6.3%
Oct: 111, 7.4%
Nov: 102, 6.8%
Dec: 106, 7.1%

[Despite my hope of something turning up here, these results don't deviate from chance]

It would appear 0% of lesswrongers were born in May. Which is strange, because I seem to remember being born then, and also taking the survey.
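May joke aside, the "don't deviate from chance" note checks out under a plain chi-square goodness-of-fit test against a uniform distribution over the eleven listed months (a sketch; the counts are copied from the quoted data, with May omitted since its count is missing):

```python
# Chi-square goodness-of-fit statistic for the listed birth-month counts
# against a uniform distribution. May is omitted (no count was given).
counts = {"Jan": 109, "Feb": 90, "Mar": 123, "Apr": 126, "Jun": 107,
          "Jul": 109, "Aug": 120, "Sep": 94, "Oct": 111, "Nov": 102,
          "Dec": 106}
total = sum(counts.values())
expected = total / len(counts)
chi2 = sum((obs - expected) ** 2 / expected for obs in counts.values())
# With 10 degrees of freedom, the 5% critical value is about 18.3,
# and the statistic here comes out well below that.
print(round(chi2, 2))
```

So no significant deviation from uniform, consistent with the bracketed note.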

Comment author: Meni_Rosenfeld 16 February 2015 08:36:12PM 0 points [-]

I was born in May, and I approve this message.

Comment author: Capla 06 January 2015 06:27:13PM 1 point [-]

Hmmm... How strongly does the "agrees with the sequences" factor correlate with having read the sequences?

Comment author: taryneast 06 January 2015 09:06:04AM 1 point [-]

Is there any correlation between calibration and CFAR-attendance?

(you mention "previous training in calibration", but I thought I'd specifically ask just in case)

Comment author: RicardoFonseca 06 January 2015 02:24:05AM *  1 point [-]

Here is a complete list of Country counts (ordered by count) (ordered alphabetically).

Note that only countries from people who allowed their results to be public are listed. It is possible for someone from a non-listed country to have completed the survey but chose to remain private.

Inside square brackets are the full counts provided by Yvain (which include private results).

People who have not chosen a country are not listed. People with multiple countries are counted multiple times.

Comment author: cameroncowan 05 January 2015 04:53:17AM 1 point [-]

Thank you for this, very informative!

Comment author: AlexMennen 04 January 2015 11:14:26PM 1 point [-]

P Global Catastrophic Risk/MIRI Mission -.174

I am so confused.

Also, I'd like to see an aggregate calibration graph that includes people's answers to all of the calibration questions.

Comment author: seez 04 January 2015 11:10:11AM 1 point [-]

Could someone explain or link to an explanation of the significance of the feminism-digit ratio connection? Why is it exciting?

Comment author: Viliam_Bur 07 January 2015 10:07:59AM *  2 points [-]

I was imagining a ritual where a devoted feminist cuts off an end of their finger, as a symbolic castration of the internalized patriarchy.

But the official version is that digit ratios are influenced by sex hormones. So if something correlates with the digit ratio, it suggests that it could correlate with the sex hormones. Essentially, the question seems to be: "Are our political opinions about feminism influenced biologically by our sex hormones?" Of course, even if we see a correlation, it wouldn't immediately prove that feminism is determined biologically... but it is a piece of data that is interesting and easy to collect, so we collect it.

Comment author: blacktrance 04 January 2015 07:43:56AM *  1 point [-]

Thank you for doing this survey.

I would be interested to see the correlations between political identification and moral views, and between moral views and meta-ethics.

(Also, looking at my responses to the survey, I think I unintentionally marked "Please do not use my data for formal research".)

Comment author: Conscience 16 April 2016 06:53:20AM *  0 points [-]

Definition of Rational Atheist - considers probability of God at ~5%.

Row Labels                   Average of PGod   StdDev of PGod   Count of PGod

Agnostic                     17.20             22.13            130
Atheist and not spiritual    3.27              10.15            958
Atheist but spiritual        7.95              19.66            129
Committed theist             75.07             34.40            47
Deist/Pantheist/etc.         41.24             36.04            19
Lukewarm theist              48.09             34.61            39
(blank)                      3.67              5.51             3
Grand Total                  9.96              22.90            1325
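For reference, per-group averages like these can be reproduced with a short script along the following lines; the rows here are made-up sample data, and the real computation would parse the (religious views, P(God)) columns out of the survey CSV:

```python
# Sketch of grouped mean/stddev/count, as in the pivot table above.
# The rows are illustrative, not actual survey responses.
from statistics import mean, stdev

rows = [("Atheist and not spiritual", 1.0),
        ("Atheist and not spiritual", 5.0),
        ("Committed theist", 70.0),
        ("Committed theist", 90.0)]

groups = {}
for label, p_god in rows:
    groups.setdefault(label, []).append(p_god)

for label, values in sorted(groups.items()):
    print(label, round(mean(values), 2), round(stdev(values), 2), len(values))
```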

Comment author: Conscience 13 April 2016 05:08:37PM *  0 points [-]

I don't understand:

393 answer “yes” to at least one mental illness question. Lower bound: 26.1% of the LW population is mentally ill. Gosh, we have a lot of self-diagnosers.

Depression
Yes, I was formally diagnosed: 273, 18.2%
Yes, I self-diagnosed: 383, 25.5%
No: 759, 50.5%

Each of these questions is single-choice, so for depression alone, formal + self-diagnosed gives 273 + 383 = 656 "yes" answers. How can the lower bound across all the mental illness questions be only 393?
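One counting rule that would make the 393 figure consistent is if only formal diagnoses were counted as "yes" - that is a guess, not a claim about how the number was actually computed. Under that rule, a person with several diagnoses is still counted once, so the "at least one" count can be far smaller than the per-question sums:

```python
# Toy illustration of an "at least one yes" count across several
# mental-illness questions. Data and the formal-only rule are assumptions.
respondents = [
    {"depression": "formal", "anxiety": "formal"},  # counted once, not twice
    {"depression": "self",   "anxiety": "no"},
    {"depression": "no",     "anxiety": "no"},
]

def counts_as_yes(answer):
    return answer == "formal"  # assumption: self-diagnoses excluded

at_least_one = sum(any(counts_as_yes(a) for a in r.values())
                   for r in respondents)
print(at_least_one)  # -> 1
```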

Comment author: private_messaging 27 March 2015 11:05:00PM 0 points [-]

I think it's interesting to note the lack of significant correlation between either IQ or calibration (as a proxy for rationality and/or sanity) and various beliefs, such as many worlds. It's a common sentiment here that beliefs are a gauge of intelligence and rationality, but that doesn't seem to be true.

It would be interesting to include a small set of IQ-test-like questions, to confirm that there is a huge correlation between IQ and correct answers in general.
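A quick way to check such relationships, given numeric columns for IQ and belief probabilities extracted from the survey data, is plain Pearson correlation; this is a sketch with made-up numbers, not survey results:

```python
# Plain Pearson correlation coefficient; the iq and p_many_worlds values
# below are illustrative, not taken from the survey.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

iq = [120, 130, 140, 150]
p_many_worlds = [40.0, 55.0, 50.0, 70.0]
print(round(pearson_r(iq, p_many_worlds), 3))  # -> 0.878
```

A near-zero r on the real columns would back up the "no significant correlation" observation above.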

Comment author: taryneast 03 March 2015 05:00:11AM 0 points [-]

In retrospect, I'm really, really sad we didn't have a "city" question (along with country) this year. In talking to other Australian organisers, we occasionally wonder about which cities to target with "come to your local LessWrong meetup" notifications. The time we had city in the survey, it was useful for knowing that there was a body of local people who were interested but whom we hadn't reached.

So I'd like to request that next year we add that question back, please :)

Comment author: ChristianKl 08 February 2016 12:28:24PM 1 point [-]

The problem with a city question is that it allows you to look up the results of individual people quite easily.

Comment author: taryneast 09 February 2016 09:47:10PM 1 point [-]

This can be solved by only using it in aggregate (i.e. not releasing it in the final CSV).

Comment author: Good_Burning_Plastic 09 January 2015 10:05:22AM 0 points [-]

What's the correlation between the left- and the right-hand digit ratio?