Zombies Redacted
I looked at my old post Zombies! Zombies? and it seemed to have some extraneous content. This is a redacted and slightly rewritten version.
Rationality: From AI to Zombies
Eliezer Yudkowsky's original Sequences have been edited, reordered, and converted into an ebook!
Rationality: From AI to Zombies is now available in PDF, EPUB, and MOBI versions on intelligence.org (link). You can choose your own price to pay for it (minimum $0.00), or buy it for $4.99 from Amazon (link). The contents are:
- 333 essays from Eliezer's 2006-2009 writings on Overcoming Bias and Less Wrong, including 58 posts that were not originally included in a named sequence.
- 5 supplemental essays from yudkowsky.net, written between 2003 and 2008.
- 6 new introductions by me, spaced throughout the book, plus a short preface by Eliezer.
The ebook's release has been timed to coincide with the end of Eliezer's other well-known introduction to rationality, Harry Potter and the Methods of Rationality. The two share many similar themes, and although Rationality: From AI to Zombies is (mostly) nonfiction, it is decidedly unconventional nonfiction, freely drifting in style from cryptic allegory to personal vignette to impassioned manifesto.
The 333 posts have been reorganized into twenty-six sequences, lettered A through Z. In order, these are titled:
- A — Predictably Wrong
- B — Fake Beliefs
- C — Noticing Confusion
- D — Mysterious Answers
- E — Overly Convenient Excuses
- F — Politics and Rationality
- G — Against Rationalization
- H — Against Doublethink
- I — Seeing with Fresh Eyes
- J — Death Spirals
- K — Letting Go
- L — The Simple Math of Evolution
- M — Fragile Purposes
- N — A Human's Guide to Words
- O — Lawful Truth
- P — Reductionism 101
- Q — Joy in the Merely Real
- R — Physicalism 201
- S — Quantum Physics and Many Worlds
- T — Science and Rationality
- U — Fake Preferences
- V — Value Theory
- W — Quantified Humanism
- X — Yudkowsky's Coming of Age
- Y — Challenging the Difficult
- Z — The Craft and the Community
Several sequences and posts have been renamed, so you'll need to consult the ebook's table of contents to spot all the correspondences. Four of these sequences are almost completely new: they were written at the same time as Eliezer's other Overcoming Bias posts, but were never ordered or grouped together until now. Some of the others (A, C, L, S, V, Y, Z) have been substantially expanded, shrunk, or rearranged, but are still based largely on old content from the Sequences.
One of the most common complaints about the old Sequences was that there was no canonical default order, especially for people who didn't want to read the entire blog archive chronologically. Despite being called "sequences," their structure looked more like a complicated, looping web than like a line. With Rationality: From AI to Zombies, it will still be possible to hop back and forth between different parts of the book, but this will no longer be required for basic comprehension. The contents have been reviewed for consistency and in-context continuity, so that they can genuinely be read in sequence. You can simply read the book as a book.
I have also created a community-edited Glossary for Rationality: From AI to Zombies. You're invited to improve on the definitions and explanations there, and add new ones if you think of any while reading. When we release print versions of the ebook (as a six-volume set), a future version of the Glossary will probably be included.
2014 Survey Results
Thanks to everyone who took the 2014 Less Wrong Census/Survey. Extra thanks to Ozy, who did a lot of the number crunching work.
This year's results are below. Some of them may make more sense in the context of the original survey questions, which can be seen here. Please do not try to take the survey now: it is over, and your responses will not be counted.
I. Population
There were 1503 respondents over 27 days. The last survey got 1636 people over 40 days. The last three full days of the survey saw nineteen, six, and four responses, for an average of about ten per day. If we assume the next thirteen days had also averaged ten responses per day - which is generous, since responses tend to trail off with time - we would have ended with about 1630 people (1503 + 13 × 10), roughly as many as the last survey got. There is no good evidence here of a decline in population, although the numbers are perhaps compatible with a very small decline.
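For concreteness, here is that extrapolation as a few lines of Python; all the numbers are the ones quoted above.

```python
# Rough extrapolation sketch using the numbers quoted above.
respondents = 1503          # this survey, over 27 days
last_days = [19, 6, 4]      # responses on the final full days
daily_rate = sum(last_days) / len(last_days)   # about 10 per day
extra_days = 40 - 27        # hypothetical extension to last year's 40 days
print(respondents + daily_rate * extra_days)   # ~1629, vs. 1636 last year
```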
II. Demographics
Sex
Female: 179, 11.9%
Male: 1311, 87.2%
Gender
F (cisgender): 150, 10.0%
F (transgender MtF): 24, 1.6%
M (cisgender): 1245, 82.8%
M (transgender FtM): 5, 0.3%
Other: 64, 4.3%
Sexual Orientation
Asexual: 59, 3.9%
Bisexual: 216, 14.4%
Heterosexual: 1133, 75.4%
Homosexual: 47, 3.1%
Other: 35, 2.3%
[This question was poorly worded and should have acknowledged that people can both be asexual and have a specific orientation; as a result it probably vastly undercounted our asexual readers]
Relationship Style
Prefer monogamous: 778, 51.8%
Prefer polyamorous: 227, 15.1%
Uncertain/no preference: 464, 30.9%
Other: 23, 1.5%
Number of Partners
0: 738, 49.1%
1: 674, 44.8%
2: 51, 3.4%
3: 17, 1.1%
4: 7, 0.5%
5: 1, 0.1%
Lots and lots: 3, 0.2%
Relationship Goals
Currently not looking for new partners: 648, 43.1%
Open to new partners: 467, 31.1%
Seeking more partners: 370, 24.6%
[22.2% of people who don’t have a partner aren’t looking for one.]
Relationship Status
Married: 274, 18.2%
Relationship: 424, 28.2%
Single: 788, 52.4%
[6.9% of single people have at least one partner; 1.8% have more than one.]
Living With
Alone: 345, 23.0%
With parents and/or guardians: 303, 20.2%
With partner and/or children: 411, 27.3%
With roommates: 428, 28.5%
Children
0: 1317, 87.6%
1: 66, 4.4%
2: 78, 5.2%
3: 17, 1.1%
4: 6, 0.4%
5: 3, 0.2%
6: 1, 0.1%
Lots and lots: 1, 0.1%
Want More Children?
Yes: 549, 36.5%
Uncertain: 426, 28.3%
No: 516, 34.3%
[418 of the people who don’t have children don’t want any, suggesting that the LW community is 27.8% childfree.]
Country
United States: 822, 54.7%
United Kingdom: 116, 7.7%
Canada: 88, 5.9%
Australia: 83, 5.5%
Germany: 62, 4.1%
Russia: 26, 1.7%
Finland: 20, 1.3%
New Zealand: 20, 1.3%
India: 17, 1.1%
Brazil: 15, 1.0%
France: 15, 1.0%
Israel: 15, 1.0%
Lesswrongers Per Capita
New Zealand: 1/223,550
Finland: 1/271,950
Australia: 1/278,674
United States: 1/358,390
Canada: 1/399,545
Israel: 1/537,266
United Kingdom: 1/552,586
Germany: 1/1,290,323
France: 1/4,402,000
Russia: 1/5,519,231
Brazil: 1/13,360,000
India: 1/73,647,058
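These ratios are simply each country's population divided by its respondent count. A minimal sketch of the computation, sorted from most to fewest LessWrongers per capita; the population figures are rough 2014 estimates of mine, not data from the survey, so the output will differ slightly from the figures above depending on the population source used.

```python
# Respondents per capita: country population divided by respondent count.
# Population figures (2014, in millions) are approximate assumptions.
populations = {
    "New Zealand": 4.47, "Finland": 5.44, "Australia": 23.1,
    "United States": 318.9, "Canada": 35.2, "Israel": 8.06,
    "United Kingdom": 64.1, "Germany": 80.6, "France": 66.0,
    "Russia": 143.5, "Brazil": 200.4, "India": 1252.0,
}
respondents = {
    "New Zealand": 20, "Finland": 20, "Australia": 83,
    "United States": 822, "Canada": 88, "Israel": 15,
    "United Kingdom": 116, "Germany": 62, "France": 15,
    "Russia": 26, "Brazil": 15, "India": 17,
}
# Sort by inhabitants per respondent, densest community first.
for country in sorted(respondents, key=lambda c: populations[c] / respondents[c]):
    per_capita = populations[country] * 1e6 / respondents[country]
    print(f"{country}: 1/{per_capita:,.0f}")
```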
Race
Asian (East Asian): 59, 3.9%
Asian (Indian subcontinent): 33, 2.2%
Black: 12, 0.8%
Hispanic: 32, 2.1%
Middle Eastern: 9, 0.6%
Other: 50, 3.3%
White (non-Hispanic): 1294, 86.1%
Work Status
Academic (teaching): 86, 5.7%
For-profit work: 492, 32.7%
Government work: 59, 3.9%
Homemaker: 8, 0.5%
Independently wealthy: 9, 0.6%
Nonprofit work: 58, 3.9%
Self-employed: 122, 8.1%
Student: 553, 36.8%
Unemployed: 103, 6.9%
Profession
Art: 22, 1.5%
Biology: 29, 1.9%
Business: 35, 2.3%
Computers (AI): 42, 2.8%
Computers (other academic): 106, 7.1%
Computers (practical): 477, 31.7%
Engineering: 104, 6.9%
Finance/Economics: 71, 4.7%
Law: 38, 2.5%
Mathematics: 121, 8.1%
Medicine: 32, 2.1%
Neuroscience: 18, 1.2%
Philosophy: 36, 2.4%
Physics: 65, 4.3%
Psychology: 31, 2.1%
Other: 157, 10.2%
Other “hard science”: 25, 1.7%
Other “social science”: 34, 2.3%
Degree
None: 74, 4.9%
High school: 347, 23.1%
2 year degree: 64, 4.3%
Bachelors: 555, 36.9%
Masters: 278, 18.5%
JD/MD/other professional degree: 44, 2.9%
PhD: 105, 7.0%
Other: 24, 1.6%
III. Mental Illness
535 answered "no" to all the mental illness questions, giving an upper bound: at most 64.4% of the LW population is mentally ill (1 - 535/1503).
393 answered "yes, I was formally diagnosed" to at least one mental illness question, giving a lower bound: at least 26.1% of the LW population is mentally ill (393/1503). Gosh, we have a lot of self-diagnosers.
Depression
Yes, I was formally diagnosed: 273, 18.2%
Yes, I self-diagnosed: 383, 25.5%
No: 759, 50.5%
OCD
Yes, I was formally diagnosed: 30, 2.0%
Yes, I self-diagnosed: 76, 5.1%
No: 1306, 86.9%
Autism spectrum
Yes, I was formally diagnosed: 98, 6.5%
Yes, I self-diagnosed: 168, 11.2%
No: 1143, 76.0%
Bipolar
Yes, I was formally diagnosed: 33, 2.2%
Yes, I self-diagnosed: 49, 3.3%
No: 1327, 88.3%
Anxiety disorder
Yes, I was formally diagnosed: 139, 9.2%
Yes, I self-diagnosed: 237, 15.8%
No: 1033, 68.7%
BPD
Yes, I was formally diagnosed: 5, 0.3%
Yes, I self-diagnosed: 19, 1.3%
No: 1389, 92.4%
[Ozy says: RATIONALIST BPDERS COME BE MY FRIEND]
Schizophrenia
Yes, I was formally diagnosed: 7, 0.5%
Yes, I self-diagnosed: 7, 0.5%
No: 1397, 92.9%
IV. Politics, Religion, Ethics
Politics
Communist: 9, 0.6%
Conservative: 67, 4.5%
Liberal: 416, 27.7%
Libertarian: 379, 25.2%
Social Democratic: 585, 38.9%
[The big change this year was that we changed "Socialist" to "Social Democratic". Even though the description stayed the same, about eight points worth of Liberals switched to Social Democrats, apparently more willing to accept that label than "Socialist". The overall supergroups Libertarian vs. (Liberal, Social Democratic) vs. Conservative remain mostly unchanged.]
Politics (longform)
Anarchist: 40, 2.7%
Communist: 9, 0.6%
Conservative: 23, 1.5%
Futarchist: 41, 2.7%
Left-Libertarian: 192, 12.8%
Libertarian: 164, 10.9%
Moderate: 56, 3.7%
Neoreactionary: 29, 1.9%
Social Democrat: 162, 10.8%
Socialist: 89, 5.9%
[Amusing politics answers include anti-incumbentist, having-well-founded-opinions-is-hard-but-I’ve-come-to-recognize-the-pragmatism-of-socialism-I-don’t-know-ask-me-again-next-year, pirate, progressive social democratic environmental liberal isolationist freedom-fries loving pinko commie piece of shit, republic-ist aka read the federalist papers, romantic reconstructionist, social liberal fiscal agnostic, technoutopian anarchosocialist (with moderate snark), whatever it is that Scott is, and WHY ISN’T THERE AN OPTION FOR NONE SO I CAN SIGNAL MY OBVIOUS OBJECTIVITY WITH MINIMAL EFFORT. Ozy would like to point out to the authors of manifestos that no one will actually read their manifestos except zir, and they might want to consider posting them to their own blogs.]
American Parties
Democratic Party: 221, 14.7%
Republican Party: 55, 3.7%
Libertarian Party: 26, 1.7%
Other party: 16, 1.1%
No party: 415, 27.6%
Non-Americans who really like clicking buttons: 415, 27.6%
Voting
Yes: 881, 58.6%
No: 444, 29.5%
My country doesn’t hold elections: 5, 0.3%
Religion
Atheist and not spiritual: 1054, 70.1%
Atheist and spiritual: 150, 10.0%
Agnostic: 156, 10.4%
Lukewarm theist: 44, 2.9%
Deist/pantheist/etc.: 22, 1.5%
Committed theist: 60, 4.0%
Religious Denomination
Christian (Protestant): 53, 3.5%
Mixed/Other: 32, 2.1%
Jewish: 31, 2.0%
Buddhist: 30, 2.0%
Christian (Catholic): 24, 1.6%
Unitarian Universalist or similar: 23, 1.5%
[Amusing denominations include anti-Molochist, CelestAI, cosmic engineers, Laziness, Thelema, Resimulation Theology, and Pythagorean. The Cultus Deorum Romanorum practitioner still needs to contact Ozy so they can be friends.]
Family Religion
Atheist and not spiritual: 213, 14.2%
Atheist and spiritual: 74, 4.9%
Agnostic: 154, 10.2%
Lukewarm theist: 541, 36.0%
Deist/Pantheist/etc.: 28, 1.9%
Committed theist: 388, 25.8%
Religious Background
Christian (Protestant): 580, 38.6%
Christian (Catholic): 378, 25.1%
Jewish: 141, 9.4%
Christian (other non-protestant): 88, 5.9%
Mixed/Other: 68, 4.5%
Unitarian Universalism or similar: 29, 1.9%
Christian (Mormon): 28, 1.9%
Hindu: 23, 1.5%
Moral Views
Accept/lean towards consequentialism: 901, 60.0%
Accept/lean towards deontology: 50, 3.3%
Accept/lean towards natural law: 48, 3.2%
Accept/lean towards virtue ethics: 150, 10.0%
Accept/lean towards contractualism: 79, 5.3%
Other/no answer: 239, 15.9%
Meta-ethics
Constructivism: 474, 31.5%
Error theory: 60, 4.0%
Non-cognitivism: 129, 8.6%
Subjectivism: 324, 21.6%
Substantive realism: 209, 13.9%
V. Community Participation
Less Wrong Use
Lurker: 528, 35.1%
I’ve registered an account: 221, 14.7%
I’ve posted a comment: 419, 27.9%
I’ve posted in Discussion: 207, 13.8%
I’ve posted in Main: 102, 6.8%
Sequences
Never knew they existed until this moment: 106, 7.1%
Knew they existed, but never looked at them: 42, 2.8%
Some, but less than 25%: 270, 18.0%
About 25%: 181, 12.0%
About 50%: 209, 13.9%
About 75%: 242, 16.1%
All or almost all: 427, 28.4%
Meetups
Yes, regularly: 154, 10.2%
Yes, once or a few times: 325, 21.6%
No: 989, 65.8%
Community
Yes, all the time: 112, 7.5%
Yes, sometimes: 191, 12.7%
No: 1163, 77.4%
Romance
Yes: 82, 5.5%
I didn’t meet them through the community but they’re part of the community now: 79, 5.3%
No: 1310, 87.2%
CFAR Events
Yes, in 2014: 45, 3.0%
Yes, in 2013: 60, 4.0%
Both: 42, 2.8%
No: 1321, 87.9%
CFAR Workshop
Yes: 109, 7.3%
No: 1311, 87.2%
[A couple percent more people answered 'yes' to each of meetups, physical interactions, CFAR attendance, and romance this time around, suggesting the community is very very gradually becoming more IRL. In particular, the number of people meeting romantic partners through the community increased by almost 50% over last year.]
HPMOR
Yes: 897, 59.7%
Started but not finished: 224, 14.9%
No: 254, 16.9%
Referrals
Referred by a link: 464, 30.9%
HPMOR: 385, 25.6%
Been here since the Overcoming Bias days: 210, 14.0%
Referred by a friend: 199, 13.2%
Referred by a search engine: 114, 7.6%
Referred by other fiction: 17, 1.1%
[Amusing responses include “a rationalist that I follow on Tumblr”, “I’m a student of tribal cultishness”, and “It is difficult to recall details from the Before Time. Things were brighter, simpler, as in childhood or a dream. There has been much growth, change since then. But also loss. I can't remember where I found the link, is what I'm saying.”]
Blog Referrals
Slate Star Codex: 40, 2.6%
Reddit: 25, 1.6%
Common Sense Atheism: 21, 1.3%
Hacker News: 20, 1.3%
Gwern: 13, 1.0%
VI. Other Categorical Data
Cryonics Status
Don’t understand/never thought about it: 62, 4.1%
Don’t want to: 361, 24.0%
Considering it: 551, 36.7%
Haven’t gotten around to it: 272, 18.1%
Unavailable in my area: 126, 8.4%
Yes: 64, 4.3%
Type of Global Catastrophic Risk
Asteroid strike: 64, 4.3%
Economic/political collapse: 151, 10.0%
Environmental collapse: 218, 14.5%
Nanotech/grey goo: 47, 3.1%
Nuclear war: 239, 15.8%
Pandemic (bioengineered): 310, 20.6%
Pandemic (natural): 113, 7.5%
Unfriendly AI: 244, 16.2%
[Amusing answers include ennui/eaten by Internet, Friendly AI, “Greens so weaken the rich countries that barbarians conquer us”, and Tumblr.]
Effective Altruism (do you self-identify)
Yes: 422, 28.1%
No: 758, 50.4%
[Despite some impressive outreach by the EA community, numbers are largely the same as last year]
Effective Altruism (do you participate in community)
Yes: 191, 12.7%
No: 987, 65.7%
Vegetarian
Vegan: 31, 2.1%
Vegetarian: 114, 7.6%
Other meat restriction: 252, 16.8%
Omnivore: 848, 56.4%
Paleo Diet
Yes: 33, 2.2%
Sometimes: 209, 13.9%
No: 1111, 73.9%
Food Substitutes
Most of my calories: 8, 0.5%
Sometimes: 101, 6.7%
Tried: 196, 13.0%
No: 1052, 70.0%
Gender Default
I only identify with my birth gender by default: 681, 45.3%
I strongly identify with my birth gender: 586, 39.0%
Books
<5: 198, 13.2%
5 - 10: 384, 25.5%
10 - 20: 328, 21.8%
20 - 50: 264, 17.6%
50 - 100: 105, 7.0%
> 100: 49, 3.3%
Birth Month
Jan: 109, 7.3%
Feb: 90, 6.0%
Mar: 123, 8.2%
Apr: 126, 8.4%
Jun: 107, 7.1%
Jul: 109, 7.3%
Aug: 120, 8.0%
Sep: 94, 6.3%
Oct: 111, 7.4%
Nov: 102, 6.8%
Dec: 106, 7.1%
[Despite my hope of something turning up here, these results don't deviate from chance]
Handedness
Right: 1170, 77.8%
Left: 143, 9.5%
Ambidextrous: 37, 2.5%
Unsure: 12, 0.8%
Previous Surveys
Yes: 757, 50.7%
No: 598, 39.8%
Favorite Less Wrong Posts (all > 5 listed)
An Alien God: 11
Joy In The Merely Real: 7
Dissolving Questions About Disease: 7
Politics Is The Mind Killer: 6
That Alien Message: 6
A Fable Of Science And Politics: 6
Belief In Belief: 5
Generalizing From One Example: 5
Schelling Fences On Slippery Slopes: 5
Tsuyoku Naritai: 5
VII. Numeric Data
[Values are given as mean ± standard deviation (1st quartile, median, 3rd quartile) [number of respondents].]
Age: 27.67 ± 8.679 (22, 26, 31) [1490]
IQ: 138.25 ± 15.936 (130.25, 139, 146) [472]
SAT out of 1600: 1470.74 ± 113.114 (1410, 1490, 1560) [395]
SAT out of 2400: 2210.75 ± 188.94 (2140, 2250, 2320) [310]
ACT out of 36: 32.56 ± 2.483 (31, 33, 35) [244]
Time in Community: 2010.97 ± 2.174 (2010, 2011, 2013) [1317]
Time on LW: 15.73 ± 95.75 (2, 5, 15) [1366]
Karma Score: 555.73 ± 2181.791 (0, 0, 155) [1335]
P Many Worlds: 47.64 ± 30.132 (20, 50, 75) [1261]
P Aliens: 71.52 ± 34.364 (50, 90, 99) [1393]
P Aliens (Galaxy): 41.2 ± 38.405 (2, 30, 80) [1379]
P Supernatural: 6.68 ± 20.271 (0, 0, 1) [1386]
P God: 8.26 ± 21.088 (0, 0.01, 3) [1376]
P Religion: 4.99 ± 18.068 (0, 0, 0.5) [1384]
P Cryonics: 22.34 ± 27.274 (2, 10, 30) [1399]
P Anti-Agathics: 24.63 ± 29.569 (1, 10, 40) [1390]
P Simulation: 24.31 ± 28.2 (1, 10, 50) [1320]
P Warming: 81.73 ± 24.224 (80, 90, 98) [1394]
P Global Catastrophic Risk: 72.14 ± 25.620 (55, 80, 90) [1394]
Singularity: 2143.44 ± 356.643 (2060, 2090, 2150) [1177]
[The mean for this question is almost entirely dependent on which stupid responses we choose to delete as outliers; the median practically never changes]
Abortion: 4.38 ± 1.032 (4, 5, 5) [1341]
Immigration: 4 ± 1.078 (3, 4, 5) [1310]
Taxes: 3.14 ± 1.212 (2, 3, 4) [1410] (from 1 - should be lower to 5 - should be higher)
Minimum Wage: 3.21 ± 1.359 (2, 3, 4) [1298] (from 1 - should be lower to 5 - should be higher)
Feminism: 3.67 ± 1.221 (3, 4, 5) [1332]
Social Justice: 3.15 ± 1.385 (2, 3, 4) [1309]
Human Biodiversity: 2.93 ± 1.201 (2, 3, 4) [1321]
Basic Income: 3.94 ± 1.087 (3, 4, 5) [1314]
Great Stagnation: 2.33 ± .959 (2, 2, 3) [1302]
MIRI Mission: 3.90 ± 1.062 (3, 4, 5) [1412]
MIRI Effectiveness: 3.23 ± .897 (3, 3, 4) [1336]
[Remember, all of these are asking you to rate your belief in/agreement with the concept on a scale of 1 (bad) to 5 (great)]
Income: 54129.37 ± 66818.904 (10,000, 30,800, 80,000) [923]
Charity: 1996.76 ± 9492.71 (0, 100, 800) [1009]
MIRI/CFAR: 511.61 ± 5516.608 (0, 0, 0) [1011]
XRisk: 62.50 ± 575.260 (0, 0, 0) [980]
Older siblings: 0.51 ± .914 (0, 0, 1) [1332]
Younger siblings: 1.08 ± 1.127 (0, 1, 1) [1349]
Height: 178.06 ± 11.767 (173, 179, 184) [1236]
Hours Online: 43.44 ± 25.452 (25, 40, 60) [1221]
Bem Sex Role Masculinity: 42.54 ± 9.670 (36, 42, 49) [1032]
Bem Sex Role Femininity: 42.68 ± 9.754 (36, 43, 50) [1031]
Right Hand: .97 ± .067 (.94, .97, 1.00)
Left Hand: .97 ± .048 (.94, .97, 1.00)
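All of these summaries can be recomputed from the raw spreadsheet released at the end of this post. A minimal pandas sketch; the filename and the "Age" column name are my placeholders rather than the spreadsheet's actual labels.

```python
import pandas as pd

# Load the released raw survey data (filename is hypothetical).
df = pd.read_excel("2014_lw_survey.xlsx")
col = df["Age"].dropna()  # column name is an assumption

# Reproduce the "mean ± sd (q1, median, q3) [n]" summary format used above.
q1, median, q3 = col.quantile([0.25, 0.5, 0.75])
print(f"Age: {col.mean():.2f} ± {col.std():.3f} "
      f"({q1:g}, {median:g}, {q3:g}) [{len(col)}]")
```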
VIII. Fishing Expeditions
[correlations, in descending order]
SAT Scores out of 1600/SAT Scores out of 2400 .844 (59)
P Supernatural/P God .697 (1365)
Feminism/Social Justice .671 (1299)
P God/P Religion .669 (1367)
P Supernatural/P Religion .631 (1372)
Charity Donations/MIRI and CFAR Donations .619 (985)
P Aliens/P Aliens 2 .607 (1376)
Taxes/Minimum Wage .587 (1287)
SAT Score out of 2400/ACT Score .575 (89)
Age/Number of Children .506 (1480)
P Cryonics/P Anti-Agathics .484 (1385)
SAT Score out of 1600/ACT Score .480 (81)
Minimum Wage/Social Justice .456 (1267)
Taxes/Social Justice .427 (1281)
Taxes/Feminism .414 (1299)
MIRI Mission/MIRI Effectiveness .395 (1331)
P Warming/Taxes .385 (1261)
Taxes/Basic Income .383 (1285)
Minimum Wage/Feminism .378 (1286)
P God/Abortion -.378 (1266)
Immigration/Feminism .365 (1296)
P Supernatural/Abortion -.362 (1276)
Feminism/Human Biodiversity -.360 (1306)
MIRI and CFAR Donations/Other XRisk Charity Donations .345 (973)
Social Justice/Human Biodiversity -.341 (1288)
P Religion/Abortion -.326 (1275)
P Warming/Minimum Wage .324 (1248)
Minimum Wage/Basic Income .312 (1276)
P Warming/Basic Income .306 (1260)
Immigration/Social Justice .294 (1278)
P Anti-Agathics/MIRI Mission .293 (1351)
P Warming/Feminism .285 (1281)
P Many Worlds/P Anti-Agathics .276 (1245)
Social Justice/Femininity .267 (990)
Minimum Wage/Human Biodiversity -.264 (1274)
Immigration/Human Biodiversity -.263 (1286)
P Many Worlds/MIRI Mission .263 (1233)
P Aliens/P Warming .262 (1365)
P Warming/Social Justice .257 (1262)
Taxes/Human Biodiversity -.252 (1291)
Social Justice/Basic Income .251 (1281)
Feminism/Femininity .250 (1003)
Older Siblings/Younger Siblings -.243 (1321)
Charity Donations/Other XRisk Charity Donations .240 (957)
P Anti-Agathics/P Simulation .238 (1312)
Abortion/Minimum Wage .229 (1293)
Feminism/Basic Income .227 (1297)
Abortion/Feminism .226 (1321)
P Cryonics/MIRI Mission .223 (1360)
Immigration/Basic Income .208 (1279)
P Many Worlds/P Cryonics .202 (1251)
Number of Current Partners/Femininity .202 (1029)
P Warming/Immigration .202 (1260)
P Warming/Abortion .201 (1289)
Abortion/Taxes .198 (1304)
Age/P Simulation .197 (1313)
Political Interest/Masculinity .194 (1011)
P Cryonics/MIRI Effectiveness .191 (1285)
Abortion/Social Justice .191 (1301)
P Simulation/MIRI Mission .188 (1290)
P Many Worlds/P Warming .188 (1240)
Age/Number of Current Partners .184 (1480)
P Anti-Agathics/MIRI Effectiveness .183 (1277)
P Many Worlds/P Simulation .181 (1211)
Abortion/Immigration .181 (1304)
Number of Current Partners/Number of Children .180 (1484)
P Cryonics/P Simulation .174 (1315)
P Global Catastrophic Risk/MIRI Mission -.174 (1359)
Minimum Wage/Femininity .171 (981)
Abortion/Basic Income .170 (1302)
Age/P Cryonics -.165 (1391)
Immigration/Taxes .165 (1293)
P Warming/Human Biodiversity -.163 (1271)
P Aliens 2/P Warming .160 (1353)
Abortion/Younger Siblings -.155 (1292)
P Religion/Meditate .155 (1189)
Feminism/Masculinity -.155 (1004)
Immigration/Femininity .155 (988)
P Supernatural/Basic Income -.153 (1246)
P Supernatural/P Warming -.152 (1361)
Number of Current Partners/Karma Score .152 (1332)
P Many Worlds/MIRI Effectiveness .152 (1181)
Age/MIRI Mission -.150 (1404)
P Religion/P Warming -.150 (1358)
P Religion/Basic Income -.146 (1245)
P God/Basic Income -.146 (1237)
Human Biodiversity/Femininity -.145 (999)
P God/P Warming -.144 (1351)
Taxes/Femininity .142 (987)
Number of Children/Younger Siblings .138 (1343)
Number of Current Partners/Masculinity .137 (1030)
P Many Worlds/P God -.137 (1232)
Age/Charity Donations .133 (1002)
P Anti-Agathics/P Global Catastrophic Risk -.132 (1373)
P Warming/Masculinity -.132 (992)
P Global Catastrophic Risk/MIRI and CFAR Donations -.132 (982)
P Supernatural/Singularity .131 (1148)
P God/Taxes -.130 (1240)
Age/P Anti-Agathics -.128 (1382)
P Aliens/Taxes .127 (1258)
Feminism/Great Stagnation -.127 (1287)
P Many Worlds/P Supernatural -.127 (1241)
P Aliens/Abortion .126 (1284)
P Anti-Agathics/Great Stagnation -.126 (1248)
P Anti-Agathics/P Warming .125 (1370)
Age/P Aliens .124 (1386)
P Aliens/Minimum Wage .124 (1245)
P Aliens/P Global Catastrophic Risk .122 (1363)
Age/MIRI Effectiveness -.122 (1328)
Age/P Supernatural .120 (1370)
P Supernatural/MIRI Mission -.119 (1345)
P Many Worlds/P Religion -.119 (1238)
P Religion/MIRI Mission -.118 (1344)
Political Interest/Social Justice .118 (1304)
P Anti-Agathics/MIRI and CFAR Donations .118 (976)
Human Biodiversity/Basic Income -.115 (1262)
P Many Worlds/Abortion .115 (1166)
Age/Karma Score .114 (1327)
P Aliens/Feminism .114 (1277)
P Many Worlds/P Global Catastrophic Risk -.114 (1243)
Political Interest/Femininity .113 (1010)
Number of Children/P Simulation -.112 (1317)
P Religion/Younger Siblings .112 (1275)
P Supernatural/Taxes -.112 (1248)
Age/Masculinity .112 (1027)
Political Interest/Taxes .111 (1305)
P God/P Simulation .110 (1296)
P Many Worlds/Basic Income .110 (1139)
P Supernatural/Younger Siblings .109 (1274)
P Simulation/Basic Income .109 (1195)
Age/P Aliens 2 .107 (1371)
MIRI Mission/Basic Income .107 (1279)
Age/Great Stagnation .107 (1295)
P Many Worlds/P Aliens .107 (1253)
Number of Current Partners/Social Justice .106 (1304)
Human Biodiversity/Great Stagnation .105 (1285)
Number of Children/Abortion -.104 (1337)
Number of Current Partners/P Cryonics -.102 (1396)
MIRI Mission/Abortion .102 (1305)
Immigration/Great Stagnation -.101 (1269)
Age/Political Interest .100 (1339)
P Global Catastrophic Risk/Political Interest .099 (1295)
P Aliens/P Religion -.099 (1357)
P God/MIRI Mission -.098 (1335)
P Aliens/P Simulation .098 (1308)
Number of Current Partners/Immigration .098 (1305)
P God/Political Interest .098 (1274)
P Warming/P Global Catastrophic Risk .096 (1377)
In addition to the Left/Right factor we had last year, this data seems to me to have an "Agrees with the Sequences" factor: the same people tend to believe in many-worlds, cryonics, atheism, simulationism, MIRI's mission and effectiveness, anti-agathics, and so on. Weirdly, belief in global catastrophic risk is negatively correlated with most of the Agrees-with-the-Sequences items. Someone who actually knows how to do statistics should run a factor analysis on this data.
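For anyone taking up that invitation, here is a minimal starting point using scikit-learn. The filename and column names are placeholders for whatever the released data actually uses, and the choice of columns is just my guess at items that might load on the two hypothesized factors.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Filename and column names are hypothetical placeholders for the raw data.
df = pd.read_csv("2014_lw_survey.csv")

cols = ["Taxes", "Minimum Wage", "Feminism", "Social Justice",
        "P Many Worlds", "P Cryonics", "P Simulation", "MIRI Mission"]
X = df[cols].dropna()
X = (X - X.mean()) / X.std()  # standardize so loadings are comparable

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
loadings = pd.DataFrame(fa.components_.T, index=cols,
                        columns=["Factor 1", "Factor 2"])
print(loadings.round(2))  # look for Left/Right and Agrees-with-Sequences patterns
```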
IX. Digit Ratios
After sanitizing the digit ratio numbers, the following correlations came up:
Digit ratio R hand was correlated with masculinity at a level of -0.180, p < 0.01
Digit ratio L hand was correlated with masculinity at a level of -0.181, p < 0.01
Digit ratio R hand was slightly correlated with femininity at a level of +0.116, p < 0.05
Holy #@!$, the feminism thing ACTUALLY HELD UP. There is a 0.144 correlation between right-handed digit ratio and feminism, p < 0.01, and a 0.112 correlation between left-handed digit ratio and feminism, p < 0.05.
The only other political position that correlates with digit ratio is immigration. There is a 0.138 correlation between left-handed digit ratio and belief in open borders, p < 0.01, and a 0.111 correlation between right-handed digit ratio and belief in open borders, p < 0.05.
No digit correlation with abortion, taxes, minimum wage, social justice, human biodiversity, basic income, or great stagnation.
Okay, need to rule out that this is all confounded by gender. I ran a few analyses on men and women separately.
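That check amounts to re-running the correlation, with its p-value, within each gender. A sketch of the sort of analysis described, with a placeholder filename and column names:

```python
import pandas as pd
from scipy.stats import pearsonr

# Filename and column names are hypothetical placeholders for the raw data.
df = pd.read_csv("2014_lw_survey.csv")

for gender, group in df.groupby("Gender"):
    sub = group[["Digit Ratio Right", "Feminism"]].dropna()
    if len(sub) < 30:
        continue  # too few respondents for a stable estimate
    r, p = pearsonr(sub["Digit Ratio Right"], sub["Feminism"])
    print(f"{gender}: r = {r:.3f}, p = {p:.3f}, n = {len(sub)}")
```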
On men alone, the connection to masculinity holds up. Restricting the sample to men, right-handed digit ratio correlates with masculinity at -0.157, p < 0.01, and left-handed at -0.134, p < 0.05; right-handed digit ratio also correlates with femininity at 0.120, p < 0.05. The feminism correlation holds up as well: right-handed digit ratio correlates with feminism at 0.149, p < 0.01, while left-handed just barely fails to reach significance. Both right and left correlate with immigration at 0.135, p < 0.05.
On women alone, the Bem masculinity correlation is the highest correlation we're going to get in this entire study. Right hand is -0.433, p < 0.01. Left hand is -0.299, p < 0.05. Femininity trends toward significance but doesn't get there. The feminism correlation trends toward significance but doesn't get there. In general there was too small a sample size of women to pick up anything but the most whopping effects.
Since digit ratio is related to testosterone and testosterone sometimes affects risk-taking, I wondered if it would correlate with any of the calibration answers. I selected people who had answered Calibration Question 5 incorrectly and ran an analysis to see if digit ratio was correlated with tendency to be more confident in the incorrect answer. No effect was found.
Other things that didn't correlate with digit ratio: IQ, SAT, number of current partners, tendency to work in mathematical professions.
...I still can't believe this actually worked. The finger-length/feminism connection ACTUALLY WORKED. What a world. What a world. Someone may want to double-check these results before I get too excited.
X. Calibration
There were ten calibration questions on this year's survey. Along with answers, they were:
1. What is the largest bone in the body? Femur
2. What state was President Obama born in? Hawaii
3. Off the coast of what country was the battle of Trafalgar fought? Spain
4. What Norse God was called the All-Father? Odin
5. Who won the 1936 Nobel Prize for his work in quantum physics? Heisenberg
6. Which planet has the highest density? Earth
7. Which Bible character was married to Rachel and Leah? Jacob
8. What organelle is called "the powerhouse of the cell"? Mitochondria
9. What country has the fourth-highest population? Indonesia
10. What is the best-selling computer game? Minecraft
I ran calibration scores for everybody based on how well they did on the ten calibration questions. These failed to correlate with IQ, SAT, LW karma, or any of the things you might expect to be measures of either intelligence or previous training in calibration; they didn't differ by gender, correlates of community membership, or any mental illness [deleted section about correlating with MWI and MIRI, this was an artifact].
Your answers looked like this (the calibration graph is not reproduced here):
The red line represents perfect calibration. Where answers dip below the line, it means you were overconfident; when they go above, it means you were underconfident.
It looks to me like everyone was horrendously underconfident on all the easy questions, and horrendously overconfident on all the hard questions. To give an example of how horrendous, people who were 50% sure of their answers to question 10 got it right only 13% of the time; people who were 100% sure only got it right 44% of the time. Obviously those numbers should be 50% and 100% respectively.
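The calibration curve itself comes from bucketing answers by stated confidence and comparing each bucket's average confidence to its empirical accuracy. A sketch for one question, with a placeholder filename and column names:

```python
import pandas as pd

# Filename and column names are hypothetical placeholders for the raw data.
df = pd.read_csv("2014_lw_survey.csv")
answers = df[["Q10 Confidence", "Q10 Correct"]].dropna()

# Bucket stated confidence (0-100) into 10%-wide bins and compare each
# bin's average stated confidence to the fraction actually correct.
bins = (answers["Q10 Confidence"] // 10) * 10
curve = answers.groupby(bins).agg(
    stated=("Q10 Confidence", "mean"),
    actual=("Q10 Correct", "mean"),
)
print(curve)  # perfect calibration: actual == stated / 100 in every row
```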
This builds upon results from previous surveys in which your calibration was also horrible. This is not a human universal - people who put even a small amount of training into calibration can become very well calibrated very quickly. This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.
XI. Wrapping Up
To show my appreciation for everyone completing this survey, including the arduous digit ratio measurements, I have randomly chosen a person to receive a $30 monetary prize. That person is...the person using the public key "The World Is Quiet Here". If that person tells me their private key, I will give them $30.
I have removed 73 people who wished to remain private, deleted the Private Keys, and sanitized a very small amount of data. Aside from that, here are the raw survey results for your viewing and analyzing pleasure:
(as Excel)
TV's "Elementary" Tackles Friendly AI and X-Risk - "Bella" (Possible Spoilers)
I was a bit surprised to find this week's episode of Elementary was about AI... not just AI and the Turing Test, but also a fairly even-handed presentation of issues like Friendliness, hard takeoff, and the difficulties of getting people to take AI risks seriously.
The case revolves around a supposed first "real AI", dubbed "Bella", and the theft of its source code... followed by a computer-mediated murder. The question of whether "Bella" might actually have murdered its creator for refusing to let it out of the box and connect it to the internet is treated as an actual possibility, springboarding to a discussion about how giving an AI a reward button could lead to it wanting to kill all humans and replace them with a machine that pushes the reward button.
Also demonstrated are the right and wrong ways to deal with attempted blackmail... But I'll leave that vague so it doesn't spoil anything. An X-risks research group and a charismatic "dangers of AI" personality are featured, but do not appear intended to resemble any real-life groups or personalities. (Or if they are, I'm too unfamiliar with the groups or persons to see the resemblance.) They aren't mocked, either... and the episode's ending is unusually ambiguous and open-ended for the show, which more typically wraps everything up with a nice bow of Justice Being Done. Here, we're left to wonder what the right thing actually is, or was, even if the question is symbolically shifted to Holmes' smaller personal dilemma rather than kept on the larger moral dilemma that created it in the first place.
The episode actually does a pretty good job of raising an important question about the weight of lives, even if LW has explicitly drawn a line that the episode's villain(s)(?) choose to cross. It also has some fun moments, with Holmes becoming obsessed with proving Bella isn't an AI, even though Bella makes it easy by repeatedly telling him it can't understand his questions and needs more data. (Bella, being on an isolated machine without internet access, doesn't actually know a whole lot, after all.) Personally, I don't think Holmes really understands the Turing Test, even with half a dozen computer or AI experts assisting him, and I think that's actually the intended joke.
There's also an obligatory "no pity, remorse, fear" speech lifted straight from The Terminator, and the comment "That escalated quickly!" in response to a short description of an AI box escape/world takeover/massacre.
(Edit to add: one of the unusually realistic things about the AI, "Bella", is that it was one of the least anthropomorphized fictional AIs I have ever seen. I mean, there was no way the thing was going to pass even the most primitive Turing test... and yet it still seemed at least somewhat plausible as a potential murder suspect. While perhaps not a truly realistic demonstration of just how alien an AI's thought process would be, it felt like the writers were at least making an actual effort. Kudos to them.)
(Second edit to add: if you're not familiar with the series, this might not be the best episode to start with; a lot of the humor and even drama depends upon knowledge of existing characters, relationships, backstory, etc. For example, Watson's concern that Holmes has deliberately arranged things to separate her from her boyfriend might seem like sheer crazy-person paranoia if you don't know about all the ways he did interfere with her personal life in previous seasons... nor will Holmes' private confessions to Bella and Watson have the same impact without reference to how difficult any admission of feeling was for him in previous seasons.)
When should an Effective Altruist be vegetarian?
Crossposted from Meteuphoric
I have lately noticed several people wondering why more Effective Altruists are not vegetarians. I am personally not a vegetarian because I don't think it is an effective way to be altruistic.
As far as I can tell, the fact that many EAs are not vegetarians is surprising to some because they think 'animals are probably morally relevant' basically implies 'we shouldn't eat animals'. To my ear, this sounds about as absurd as if GiveWell's explanation of their recommendation of SCI stopped after 'the developing world exists, or at least has a high probability of doing so'.
(By the way, I do get to a calculation at the bottom, after some speculation about why the calculation I think is appropriate is unlike what I take others' implicit calculations to be. Feel free to just scroll down and look at it).
I think this fairly large difference between my and many vegetarians' guesses at the value of vegetarianism arises because they think the relevant question is whether the suffering to the animal is worse than the pleasure to themselves at eating the animal. This question sounds superficially plausibly relevant, but I think on closer consideration you will agree that it is the wrong question.
The real question is not whether the cost to you is small, but whether you could do more good for the same small cost.
Similarly, when deciding whether to donate $5 to a random charity, the question is whether you could do more good by donating the money to the most effective charity you know of. Going vegetarian because it relieves the animals more than it hurts you is the equivalent of donating to a random developing world charity because it relieves the suffering of an impoverished child more than foregoing $5 increases your suffering.
Trading with inconvenience and displeasure
My imaginary vegetarian debate partner objects to this on grounds that vegetarianism is different from donating to ineffective charities, because to be a vegetarian you are spending effort and enjoying your life less rather than spending money, and you can't really reallocate that inconvenience and displeasure to, say, preventing artificial intelligence disaster or feeding the hungry, if you don't use it on reading food labels and eating tofu. If I were to go ahead and eat the sausage instead - the concern goes - probably I would just go on with the rest of my life exactly the same, and a bunch of farm animals somewhere would be the worse for it, and I scarcely better.
I agree that if the meat eating decision were separated from everything else in this way, then the decision really would be about your welfare vs. the animal's welfare, and you should probably eat the tofu.
However whether you can trade being vegetarian for more effective sacrifices is largely a question of whether you choose to do so. And if vegetarianism is not the most effective way to inconvenience yourself, then it is clear that you should make the trade. If you eat meat now in exchange for suffering some more effective annoyance at another time, you and the world can both be better off.
Imagine an EA friend says to you that she gives substantial money to whatever random charity has put a tin in whatever shop she is in, because it's better than the donuts and new dresses she would buy otherwise. She doesn't see how not giving the money to the random charity would really cause her to give it to a better charity - empirically she would spend it on luxuries. What do you say to this?
If she were my friend, I might point out that the money isn't meant to magically move somewhere better - she may have to consciously direct it there. She might need to write down how much she was going to give to the random charity, then look at the note later for instance. Or she might do well to decide once and for all how much to give to charity and how much to spend on herself, and then stick to that. As an aside, I might also feel that she was using the term 'Effective Altruist' kind of broadly.
I see vegetarianism for the sake of not managing to trade inconveniences as quite similar. And in both cases you risk spending your life doing suboptimal things every time a suboptimal altruistic opportunity has a chance to steal resources from what would be your personal purse. This seems like something that your personal and altruistic values should cooperate in avoiding.
It is likely too expensive to keep track of an elaborate trading system, but you should at least be able to make reasonable long term arrangements. For instance, if instead of eating vegetarian you ate a bit frugally and saved and donated a few dollars per meal, you would probably do more good (see calculations lower in this post). So if frugal eating were similarly annoying, it would be better. Eating frugally is inconvenient in very similar ways to vegetarianism, so is a particularly plausible trade if you are skeptical that such trades can be made. I claim you could make very different trades though, for instance foregoing the pleasure of an extra five minutes' break and working instead sometimes. Or you could decide once and for all how much annoyance to have, and then choose the most worthwhile bits of annoyance, or put a dollar value on your own time and suffering and try to be consistent.
Nebulous life-worsening costs of vegetarianism
There is a separate psychological question which is often mixed up with the above issue. That is, whether making your life marginally less gratifying and more annoying in small ways will make you sufficiently less productive to undermine the good done by your sacrifice. This is not about whether you will do something a bit costly another time for the sake of altruism, but whether just spending your attention and happiness on vegetarianism will harm your other efforts to do good, and cause more harm than good.
I find this plausible in many cases, but I expect it to vary a lot by person. My mother seems to think it's basically free to eat supplements, whereas to me every additional daily routine seems to encumber my life and require me to spend disproportionately more time thinking about unimportant things. Some people find it hard to concentrate when unhappy, others don't. Some people struggle to feed themselves adequately at all, while others actively enjoy preparing food.
There are offsetting positives from vegetarianism which also vary across people. For instance there is the pleasure of self-sacrifice, the joy of being part of a proud and moralizing minority, and the absence of the horror of eating other beings. There are also perhaps health benefits, which probably don't vary that much by people, but people do vary in how big they think the health benefits are.
Another way you might accidentally lose more value than you save is in spending little bits of time which are hard to measure or notice. For instance, vegetarianism means spending a bit more time searching for vegetarian alternatives, researching nutrition, buying supplements, writing emails back to people who invite you to dinner explaining your dietary restrictions, etc. The value of different people's time varies a lot, as does the extent to which an additional vegetarianism routine would tend to eat their time.
On a less psychological note, the potential drop in IQ (~5 points?!) from missing out on creatine is a particularly terrible example of vegetarianism making people less productive. Now that we know about creatine and can supplement it, creatine itself is not such an issue. An issue does remain though: was this an unlikely one-off failure, or should we worry about more such deficiencies? (This goes for any kind of unusual diet, not just meat-free ones.)
How much is avoiding meat worth?
Here is my own calculation of how much it costs to do the same amount of good as replacing one meat meal with one vegetarian meal. If you would be willing to pay this much extra to eat meat for one meal, then you should eat meat. If not, then you should abstain. For instance, if eating meat does $10 worth of harm, you should eat meat whenever you would hypothetically pay an extra $10 for the privilege.
This is a tentative calculation. I will probably update it if people offer substantially better numbers.
All quantities are in terms of social harm.
Eating 1 non-vegetarian meal
< eating 1 chickeny meal (I am told chickens are particularly bad animals to eat, due to their poor living conditions and large animal:meal ratio. The relatively small size of their brains might offset this, but I will conservatively give all animals the moral weight of humans in this calculation.)
< eating 200 calories of chicken (a McDonalds crispy chicken sandwich probably contains a bit over 100 calories of chicken (based on its listed protein content); a Chipotle chicken burrito contains around 180 calories of chicken)
= causing ~0.25 chicken lives (1 chicken is equivalent in price to 800 calories of chicken breast i.e. eating an additional 800 calories of chicken breast conservatively results in one additional chicken. Calculations from data here and here.)
< -$0.08 given to the Humane League (ACE estimates the Humane League spares 3.4 animal lives per dollar). However since the humane league basically convinces other people to be vegetarians, this may be hypocritical or otherwise dubious.
< causing 12.5 days of chicken life (broiler chickens are slaughtered at between 35-49 days of age)
= causing 12.5 days of chicken suffering (I'm being generous)
< -$0.50 subsidizing free range eggs, (This is a somewhat random example of the cost of more systematic efforts to improve animal welfare, rather than necessarily the best. The cost here is the cost of buying free range eggs and selling them as non-free range eggs. It costs about 2.6 2004 Euro cents [= US 4c in 2014] to pay for an egg to be free range instead of produced in a battery. This corresponds to a bit over one day of chicken life. I'm assuming here that the life of a battery egg-laying chicken is not substantially better than that of a meat chicken, and that free range chickens have lives that are at least neutral. If they are positive, the figure becomes even more favorable to the free range eggs).
< losing 12.5 days of high quality human life (assuming saving one year of human life is at least as good as stopping one year of an animal suffering, which you may disagree with.)
= -$1.94-5.49 spent on GiveWell's top charities (This was GiveWell's estimate for AMF if we assume saving a life corresponds to saving 52 years - roughly the life expectancy of children in Malawi. GiveWell doesn't recommend AMF at the moment, but they recommend charities they considered comparable to AMF when AMF had this value.
GiveWell employees' median estimate for the cost of 'saving a life' through donating to SCI is $5936 [see spreadsheet here]. If we suppose a life is 37 DALYs, as they assume in the spreadsheet, then 12.5 days is worth 5936*12.5/(37*365.25) = $5.49. Elie produced two estimates that were generous to cash and to deworming separately, and gave the highest and lowest estimates for the cost-effectiveness of deworming, of the group. They imply a range of $1.40-$45.98 to do as much good via SCI as eating vegetarian for a meal).
Given this calculation, we get a few cents to a couple of dollars as the cost of doing similar amounts of good to averting a meat meal via other means. We are not finished yet though - there were many factors I didn't take into account in the calculation, because I wanted to separate relatively straightforward facts for which I have good evidence from guesses. Here are other considerations I can think of, which reduce the relative value of averting meat eating:
- Chicken brains are fairly small, suggesting their internal experience is less than that of humans. More generally, in the spectrum of entities between humans and microbes, chickens are at least some of the way to microbes. And you wouldn't pay much to save a microbe.
- Eating a chicken only reduces the number of chickens produced by some fraction. According to Peter Hurford, an extra 0.3 chickens are produced if you demand 1 chicken. I didn't include this in the above calculation because I am not sure of the time scale of the relevant elasticities (if they are short-run elasticities, they might underestimate the effect of vegetarianism).
- Vegetable production may also have negative effects on animals.
- GiveWell estimates have been rigorously checked relative to other things, and evaluations tend to get worse as you check them. For instance, you might forget to include any of the things in this list in your evaluation of vegetarianism. Probably there are more things I forgot. That is, if you looked into vegetarianism with the same detail as SCI, it would become more pessimistic, and so cheaper to do as much good with SCI.
- It is not at all obvious that meat animal lives are not worth living on average. Relatedly, animals generally want to be alive, which we might want to give some weight to.
- Animal welfare in general appears to have negligible predictable effect on the future (very debatably), and there are probably things which can have huge impact on the future. This would make animal altruism worse compared to present-day human interventions, and much worse compared to interventions directed at affecting the far future, such as averting existential risk.
My own quick guesses at factors by which the relative value of avoiding meat should be multiplied, to account for these considerations:
- Moral value of small animals: 0.05
- Raised price reduces others' consumption: 0.5
- Vegetables harm animals too: 0.9
- Rigorous estimates look worse: 0.9
- Animal lives might be worth living: 0.2
- Animals don't affect the future: 0.1 relative to human poverty charities
Thus given my estimates, we scale down the above figures by 0.05*0.5*0.9*0.9*0.2*0.1 = 0.0004. This gives us $0.0008-$0.002 to do as much good as eating a vegetarian meal by spending on GiveWell's top charities. Without the factor for the future (which doesn't apply to these other animal charities), we only multiply the cost of eating a meat meal by 0.004. This gives us a price of $0.0003 with the Humane League, or $0.002 on improving chicken welfare in other ways. These are not price differences that will change my meal choices very often! I think I would often be willing to pay at least a couple of extra dollars to eat meat, setting aside animal suffering. So if I were to avoid eating meat, then assuming I keep fixed how much of my budget I spend on myself and how much I spend on altruism, I would be trading a couple of dollars of value for less than one thousandth of that.
I encourage you to estimate your own numbers for the above factors, and to recalculate the overall price according to your beliefs. If you would happily pay this much (in my case, less than $0.002) to eat meat on many occasions, you probably shouldn't be a vegetarian. You are better off paying that cost elsewhere. If you would rarely be willing to pay the calculated price, you should perhaps consider being a vegetarian, though note that the calculation was conservative in favor of vegetarianism, so you might want to run it again more carefully. Note that in judging what you would be willing to pay to eat meat, you should take into account everything except the direct cost to animals.
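To make that recalculation easy, here is the whole estimate as a short script. The numbers are the ones from this post, and every one of them is meant to be replaced with your own.

```python
# The post's estimate as a script; substitute your own numbers throughout.
harm_per_meal_usd = (1.94, 5.49)   # GiveWell-equivalent cost of one chicken meal

# Discount factors from the list above (replace with your own guesses).
factors = {
    "moral value of small animals": 0.05,
    "elasticity of chicken production": 0.5,
    "vegetables harm animals too": 0.9,
    "rigorous estimates look worse": 0.9,
    "animal lives might be worth living": 0.2,
    "animals don't affect the future": 0.1,
}

discount = 1.0
for f in factors.values():
    discount *= f  # ~0.0004 with the numbers above

low, high = (x * discount for x in harm_per_meal_usd)
print(f"Eat meat whenever you'd pay more than ${low:.4f}-${high:.4f} per meal.")
```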
There are many common reasons you might not be willing to eat meat, given these calculations, e.g.:
- You don't enjoy eating meat
- You think meat is pretty unhealthy
- You belong to a social cluster of vegetarians, and don't like conflict
- You think convincing enough others to be vegetarians is the most cost-effective way to make the world better, and being a vegetarian is a great way to have heaps of conversations about vegetarianism, which you believe makes people feel better about vegetarians overall, to the extent that they are frequently compelled to become vegetarians.
- 'For signaling' is another common explanation I have heard, which I think is meant to be similar to the above, though I'm not actually sure of the details.
- You aren't able to treat costs like these as fungible (as discussed above)
- You are completely indifferent to what you eat (in that case, you would probably do better eating as cheaply as possible, but maybe everything is the same price)
- You consider the act-omission distinction morally relevant
- You are very skeptical of the ability to affect anything, and in particular have substantially greater confidence in the market - to farm some fraction of a pig fewer in expectation if you abstain from pork for long enough - than in nonprofits and complicated schemes. (Though in that case, consider buying free-range eggs and selling them as cage eggs).
- You think the suffering of animals is of extreme importance compared to the suffering of humans or loss of human lives, and don't trust the figures I have given for improving the lives of egg-laying chickens, and don't want to be a hypocrite. Actually, you still probably shouldn't here - the egg-laying chicken number is just an example of a plausible alternative way to help animals. You should really check quite a few of these before settling.
However I think for wannabe effective altruists with the usual array of characteristics, vegetarianism is likely to be quite ineffective.
Simulate and Defer To More Rational Selves
I sometimes let imaginary versions of myself make decisions for me.
(I also sometimes imagine what Anna would do, and then do that. I call it "Annajitsu".)
Three ways CFAR has changed my view of rationality
The Center for Applied Rationality's perspective on rationality is quite similar to Less Wrong's. In particular, we share many of Less Wrong's differences from what's sometimes called "traditional" rationality, such as Less Wrong's inclusion of Bayesian probability theory and the science on heuristics and biases.
But after spending the last year and a half with CFAR as we've developed, tested, and attempted to teach hundreds of different versions of rationality techniques, I've noticed that my picture of what rationality looks like has shifted somewhat from what I perceive to be the most common picture of rationality on Less Wrong. Here are three ways I think CFAR has come to see the landscape of rationality differently than Less Wrong typically does – not disagreements per se, but differences in focus or approach. (Disclaimer: I'm not speaking for the rest of CFAR here; these are my own impressions.)
1. We think less in terms of epistemic versus instrumental rationality.
Formally, the methods of normative epistemic versus instrumental rationality are distinct: Bayesian inference and expected utility maximization. But methods like "use Bayes' Theorem" or "maximize expected utility" are usually too abstract and high-level to be helpful for a human being trying to take manageable steps towards improving her rationality. And when you zoom in from that high-level description of rationality down to the more concrete level of "What five-second mental habits should I be training?" the distinction between epistemic and instrumental rationality becomes less helpful.
Here's an analogy: epistemic rationality is like physics, where the goal is to figure out what's true about the world, and instrumental rationality is like engineering, where the goal is to accomplish something you want as efficiently and effectively as possible. You need physics to do engineering; or I suppose you could say that doing engineering is doing physics, but with a practical goal. However, there's plenty of physics that's done for its own sake, and doesn't have obvious practical applications, at least not yet. (String theory, for example.) Similarly, you need a fair amount of epistemic rationality in order to be instrumentally rational, though there are parts of epistemic rationality that many of us practice for their own sake, and not as a means to an end. (For example, I appreciate clarifying my thinking about free will even though I don't expect it to change any of my behavior.)
In this analogy, many skills we focus on at CFAR are akin to essential math, like linear algebra or differential equations, which compose the fabric of both physics and engineering. It would be foolish to expect someone who wasn't comfortable with math to successfully calculate a planet's trajectory or design a bridge. And it would be similarly foolish to expect you to successfully update like a Bayesian or maximize your utility if you lacked certain underlying skills. Like, for instance: Noticing your emotional reactions, and being able to shift them if it would be useful. Doing thought experiments. Noticing and overcoming learned helplessness. Visualizing in concrete detail. Preventing yourself from flinching away from a thought. Rewarding yourself for mental habits you want to reinforce.
These and other building blocks of rationality are essential both for reaching truer beliefs, and for getting what you value; they don't fall cleanly into either an "epistemic" or an "instrumental" category. Which is why, when I consider what pieces of rationality CFAR should be developing, I've been thinking less in terms of "How can we be more epistemically rational?" or "How can we be more instrumentally rational?" and instead using queries like, "How can we be more metacognitive?"
2. We think more in terms of a modular mind.
The human mind isn't one coordinated, unified agent, but rather a collection of different processes that often aren't working in sync, or even aware of what the others are up to. Less Wrong certainly knows this; see, for example, discussions of anticipations versus professions, aliefs, and metawanting. But in general we gloss over that fact, because it's so much simpler and more natural to talk about "what I believe" or "what I want," even if technically there is no single "I" doing the believing or wanting. And for many purposes that kind of approximation is fine.
But a rationality-for-humans usually can't rely on that shorthand. Any attempt to change what "I" believe, or optimize for what "I" want, forces a confrontation of the fact that there are multiple, contradictory things that could reasonably be called "beliefs," or "wants," coexisting in the same mind. So a large part of applied rationality turns out to be about noticing those contradictions and trying to achieve coherence, in some fashion, before you can even begin to update on evidence or plan an action.
Many of the techniques we're developing at CFAR fall roughly into the template of coordinating between your two systems of cognition: implicit-reasoning System 1 and explicit-reasoning System 2. For example, knowing when each system is more likely to be reliable. Or knowing how to get System 2 to convince System 1 of something ("We're not going to die if we go talk to that stranger"). Or knowing what kinds of questions System 2 should ask of System 1 to find out why it's uneasy about the conclusion at which System 2 has arrived.
This is all, of course, with the disclaimer that the anthropomorphizing of the systems of cognition, and imagining them talking to each other, is merely a useful metaphor. Even the classification of human cognition into Systems 1 and 2 is probably not strictly true, but it's true enough to be useful. And other metaphors prove useful as well – for example, some difficulties with what feels like akrasia become more tractable when you model your future selves as different entities, as we do in the current version of our "Delegating to yourself" class.
3. We're more focused on emotions.
There's relatively little discussion of emotions on Less Wrong, but they occupy a central place in CFAR's curriculum and organizational culture.
It used to frustrate me when people would say something that revealed they held a Straw Vulcan-esque belief that "rationalist = emotionless robot". But now when I encounter that misconception, it just makes me want to smile, because I'm thinking to myself: "If you had any idea how much time we spend at CFAR talking about our feelings…"
Being able to put yourself into particular emotional states seems to make a lot of pieces of rationality easier. For example, for most of us, it's instrumentally rational to explore a wider set of possible actions – different ways of studying, holding conversations, trying to be happy, and so on – beyond whatever our defaults happen to be. And for most of us, inertia and aversions get in the way of that exploration. But getting yourself into "playful" mode (one of the hypothesized primary emotional circuits common across mammals) can make it easier to branch out into a wider swath of Possible-Action Space. Similarly, being able to call up a feeling of curiosity or of "seeking" (another candidate for a primary emotional circuit) can help you conquer motivated cognition and learned blankness.
And simply being able to notice your emotional state is rarer and more valuable than most people realize. For example, if you're in fight-or-flight mode, you're going to feel more compelled to reject arguments that feel like a challenge to your identity. Being attuned to the signs of sympathetic nervous system activation – that you're tensing up, or that your heart rate is increasing – means you get cues to double-check your reasoning, or to coax yourself into another emotional state.
We also use emotions as sources of data. You can learn to tap into feelings of surprise or confusion to get a sense of how probable you implicitly expect some event to be. Or practice simulating hypotheticals ("What if I knew that my novel would never sell well?") and observing your resultant emotions, to get a clearer picture of your utility function.
And emotions-as-data can be a valuable check on your System 2's conclusions. One of our standard classes is "Goal Factoring," which entails finding some alternate set of actions through which you can purchase the goods you want more cheaply. So you might reason, "I'm doing martial arts for the exercise and self-defense benefits... but I could purchase both of those things for less time investment by jogging to work and carrying Mace." If you listened to your emotional reaction to that proposal, however, you might notice you still feel sad about giving up martial arts even if you got the same amount of exercise and self-defense benefits some other way.
Which probably means you've got other reasons for doing martial arts that you haven't yet explicitly acknowledged -- for example, maybe you just think it's cool. If so, that's important, and deserves a place in your decision-making. Listening for those emotional cues that your explicit reasoning has missed something is a crucial step, and to the extent that aspiring rationalists sometimes forget it, I suppose that's a Steel-Manned Straw Vulcan (Steel Vulcan?) that actually is worth worrying about.
Conclusion
I'll name one more trait that unites, rather than divides, CFAR and Less Wrong. We both diverge from "traditional" rationality in that we're concerned with determining which general methods systematically perform well, rather than defending some set of methods as "rational" on a priori criteria alone. So CFAR's picture of what rationality looks like, and how to become more rational, will and should change over the coming years as we learn more about the effects of our rationality training efforts.
Biases of Intuitive and Logical Thinkers
Any intuition-dominant thinker who's struggled with math problems, or logic-dominant thinker who's struggled with small talk, knows how difficult and hopeless the experience feels. For a long time I was an intuition thinker; then I developed a logical thinking style that soon ended up dominating -- granting me the luxury of experiencing both kinds of struggles. I eventually learned to apply whichever thinking style was better optimized for the problem I was facing. Looking back, I realized why I had kept sticking to one extreme.
I hypothesize that one-sided thinkers develop biases and tendencies that prevent them from improving their weaker mode of thinking. These biases cause a positive feedback loop that further skews thinking styles in the same direction.
The reasons why one style might be overdeveloped and the other underdeveloped vary greatly. Genes have a strong influence, but environment also plays a large part. A teacher may have inspired you to love learning science at a young age, causing you to foster a thinking style better suited to learning science. Or maybe you grew up very physically attractive and found socializing with your peers a lot more rewarding than studying after school, causing you to foster a thinking style better suited to navigating social situations. Environment can be changed to help develop certain thinking styles, but that should be supplementary to exposing and understanding the biases you already have. Entering an environment that penalizes your thinking style can be uncomfortable, stressful, and frustrating if you're unprepared. (Such a painful experience is part of why these biases cause a positive feedback loop: it makes us avoid environments that require the opposite thinking style.)
Despite genetic predisposition and environmental circumstances, there's room for improvement, and exposing these biases and learning to account for them is a great first step.
Below is a list of a few biases that worsen our ability to solve a certain class of problems and keep us from improving our underdeveloped thinking style.
Intuition-dominant Biases
Overlooking crucial details
Details matter when trying to understand technical concepts. Overlooking a single word or a sentence's structure can cause complete misunderstanding -- a common blunder for intuition thinkers.
Intuition is really good at making fairly accurate predictions without complete information, enabling us to navigate the world without having a deep understanding of it. As a result, intuition trains us to experience the feeling that we understand something without examining every detail. In most situations, paying close attention to detail is unnecessary and sometimes even dangerous. But when learning a technical concept, every detail matters, and the premature feeling of understanding stops us from examining them.
This bias is one that's more likely to go away once you realize it's there. You often can't tell which details you've missed after you've missed them, so merely remembering that you tend to miss important details should prompt you to examine things more closely in the future.
Expecting solutions to sound a certain way
The Internship has a great example of this bias (and a few others) in action. The movie is about two middle-aged unemployed salesmen (intuition thinkers) trying to land an internship with Google. Part of Google's selection process has the two men participate in several technical challenges. One challenge required the men and their team to find a software bug. In a flash of insight, Vince Vaughn's character, Billy, shouts "Maybe the answer is in the question! Maybe it has something to do with the word bug. A fly!" After enthusiastically making several more word associations, he turns to his team and insists they take him seriously.
Why is it believable to the audience that Billy can be so confident about his answer?
Billy's intuition made an association between the challenge question and riddle-like questions he'd heard in the past. When Billy used his intuition to find a solution, his confidence in a riddle-like answer grew. Intuition recklessly uses irrelevant associations as reasons for narrowing down the space of possible solutions to technical problems. When associations pop into your mind, it's a good idea to legitimize those associations with supporting reasons before running with them.
Not recognizing precise language
Intuition thinkers are multi-channel learners -- all senses, thoughts and emotions are used to construct a complex database of clustered knowledge to predict and understand the world. With such robust information-extracting ability, correct grammar and word usage are, more often than not, unnecessary for meaningful communication.
Communicating technical concepts in a meaningful way requires precise language. Connotation and subtext are stripped away so words and phrases can purely represent meaningful concepts inside a logical framework. Intuition thinkers communicate with imprecise language, gathering meaning from context to compensate. This makes it hard for them to recognize when to turn off their powerful information extractors.
This bias explains part of why so many intuition thinkers dread math "word problems". Introducing words and phrases rich with meaning and connotation sends their intuition running wild. It's hard for them to find correspondences between words in the problem and variables in the theorems and formulas they've learned.
The noise intuition brings makes it hard to think clearly. It's hard for intuition thinkers to tell whether their automatic associations should be taken seriously, and without a reliable way to discern, wrong interpretations of words go undetected. For example, without any physics background, an intuition thinker may read the statement "Matter can have both wave and particle properties at once" and believe they completely understand it. Unrelated associations with the everyday meanings of "matter," "wave," and "particle" blindly take precedence over the technical definitions.
The slightest uncertainty about what a sentence means should raise a red flag. Going back and finding correspondence between each word and how it fits into a technical framework will eliminate any uncertainty.
Believing their understanding is deeper than it is
Intuition works on an unconscious level, making intuition thinkers unaware of how they know what they know. Not surprisingly, their best tool to learn what it means to understand is intuition. The concept "understanding" is a collection of associations from experience. You may have learned that part of understanding something means being able to answer questions on a test with memorized factoids, or knowing what to say to convince people you understand, or just knowing more facts than your friends. These are not good methods for gaining a deep understanding of technical concepts.
When intuition thinkers optimize for understanding, they're really optimizing for a fuzzy idea of what they think understanding means. This often leaves them believing they understand a concept when all they've done is memorize some disconnected facts. Not knowing what deeper understanding feels like, they become conditioned to always expect some amount of surprise. Even at their maximum felt level of understanding, they're less confident than logical thinkers are at theirs. This lower confidence disincentivizes intuition thinkers from investing in learning technical concepts, further keeping their logical thinking style underdeveloped.
One way I overcame this tendency was to constantly ask myself "why" questions, like a curious child bothering their parents. The technique helped me uncover what used to be unknown unknowns that made me feel overconfident in my understanding.
Logic-dominant Biases
Ignoring information they cannot immediately fit into a framework
Logical thinkers have and use intuition -- the problem is they don't feed it enough. They tend to ignore valuable intuition-building information if it doesn't immediately fit into a predictive model they deeply understand. While intuition thinkers don't filter out enough noise, logical thinkers filter out too much.
For example, if a logical thinker doesn't have a good framework for understanding human behavior, they're more likely to ignore visual input like body language and fashion, or auditory input like tone of voice and intonation. Human behavior is complicated; no framework to date can make perfectly accurate predictions about it. But intuition can build powerful models despite working with many confounding variables.
Bayesian probability enables logical thinkers to build predictive models from noisy data without having to use intuition. But even then, the first step of making a Bayesian update is data collection.
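To make that concrete, here's a minimal sketch of the simplest kind of Bayesian update, a Beta-Binomial model. The scenario and numbers are my own illustration, not anything from the post or CFAR's curriculum:

```python
# Minimal Beta-Binomial sketch: turning noisy yes/no observations into a
# predictive estimate. Beta(1, 1) is a uniform prior -- no opinion yet.

def update(alpha, beta, successes, failures):
    """Return the posterior parameters after observing new data."""
    return alpha + successes, beta + failures

alpha, beta = 1, 1                        # uniform prior
alpha, beta = update(alpha, beta, 7, 3)   # observe 7 successes, 3 failures
print(alpha / (alpha + beta))             # posterior mean: 8/12 = 0.666...
```

Notice that the machinery is trivial; the bottleneck is the `7` and the `3`. You can't run the update without first having collected the data, which is exactly the input logical thinkers tend to filter out.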
Combating this tendency requires paying attention to input you normally ignore. Supplement your broader attentional scope with a researched framework as a guide. Say you want to learn how storytelling works. Start by grabbing resources that teach storytelling and learning the basics. Out in the real world, pay close attention to sights, sounds, and feelings when someone starts telling a story, and try matching that sensory input to the storytelling elements you've learned about. Once the basics are picked up subconsciously, by habit, your conscious attention will be freed to make new and subtler observations.
Ignoring their emotions
Emotional input is difficult to factor, especially because you're emotional at the time. Logical thinkers are notorious for ignoring this kind of messy data, consequently starving their intuition of emotional data. Being able to "go with your gut feelings" is a major function of intuition that logical thinkers tend to miss out on.
Your gut can predict whether you'll get along long-term with a new SO, what kind of outfit would give you more confidence in your workplace, whether learning tennis in your free time will make you happier, or whether you prefer eating a cheeseburger over tacos for lunch. Logical thinkers don't have enough data collected about their emotions to know what triggers them. They tend to get bogged down and misled by the objective yet trivial details they do manage to factor in. A weak understanding of their own emotions also leads to a weaker understanding of others' emotions. You can become a better empathizer by better understanding yourself.
You could start from scratch and build your own framework, but self-assessment biases will impede productivity. Learning an existing framework is a more realistic solution. You can find resources with some light googling, and I'm sure CFAR teaches some good ones too. You can improve your gut feelings as well: one way is to make sure you're always consciously aware of the circumstances you're in when experiencing an emotion.
Making rules too strict
Logical thinkers build frameworks in order to understand things. When adding a new rule to a framework, there's motivation to make the rule strict: the stricter the rule, the more predictive power, the better the framework. But when the domain you're trying to understand contains multivariable, chaotic phenomena, strict rules are likely to break. The result is something like the current state of macroeconomics: a bunch of logical thinkers preoccupied with elegant models and theories that stay elegant only by being useless in practice.
Following rules that are too strict can have bad consequences. Imagine John the salesperson is learning how to make better first impressions and has built a rough framework so far. John has a rule that smiling always helps make people feel welcome when they first meet him. One day he makes a business trip to Russia to meet with a prospective client. The moment he meets his Russian client, he flashes a big smile and continues to smile despite negative reactions. After a few hours of talking, his client reveals she felt he wasn't trustworthy at first and almost called off the meeting. It turns out that in Russia, smiling at strangers is a sign of insincerity. John's strict rule didn't account for cultural differences, blinding him to his client's reaction and putting him in a risky situation.
The desire to hold onto strict rules can make logical thinkers susceptible to confirmation bias too. If John made an exception to his smiling rule, he'd feel less confident about his knowledge of making first impressions, which would make him feel bad. He might also have to amend some other rule that relates to the smiling rule, which would further hurt his framework and his feelings.
When feeling the urge to add a new rule, take note of the circumstances in which the evidence for the rule was found. Add exceptions that limit the rule's predictive power to similar circumstances. Another option is to entertain multiple conflicting rules simultaneously, shifting weight from one to the other as you gather more evidence, as sketched below.
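Here's a toy sketch of that last suggestion, reduced to a weight-shifting calculation -- essentially a tiny Bayesian model comparison. The rules and likelihood numbers are made up for illustration:

```python
# Hold two conflicting rules at once; after each observation, multiply each
# rule's weight by how well it predicted what happened, then renormalize.

rules = {
    "smiling always helps": 0.5,                                  # prior weight
    "smiling helps, except with strangers in some cultures": 0.5, # prior weight
}

def reweight(weights, likelihoods):
    """Shift weight toward the rules that predicted the observation."""
    posterior = {r: w * likelihoods[r] for r, w in weights.items()}
    total = sum(posterior.values())
    return {r: w / total for r, w in posterior.items()}

# Observation: a Russian client reacts badly to a big smile.
# Likelihoods are invented: how plausible each rule made that observation.
likelihoods = {
    "smiling always helps": 0.1,
    "smiling helps, except with strangers in some cultures": 0.8,
}
rules = reweight(rules, likelihoods)
print(rules)  # weight shifts sharply (~0.11 vs ~0.89) toward the rule with the exception
```

Neither rule has to be discarded outright; the framework just stops betting everything on the strict one.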
The Robots, AI, and Unemployment Anti-FAQ
Q. Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?
A. Conventional economic theory says this shouldn't happen. Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns. If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns. On standard economic theory, improved productivity - including from automating away some jobs - should produce increased standards of living, not long-term unemployment.
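To make the arithmetic in that answer explicit, here's a minimal sketch. It's my own illustration of the parable's numbers, not part of the original Anti-FAQ:

```python
# The hot-dogs-and-buns parable with its numbers made explicit. Goods are
# consumed in matched pairs, so the equilibrium output q satisfies:
#   q * (labor per hot dog) + q * (labor per bun) = total labor

def matched_pair_output(total_labor, labor_per_hot_dog, labor_per_bun):
    return total_labor / (labor_per_hot_dog + labor_per_bun)

print(matched_pair_output(30, 2, 1))  # before automation: 10.0 hot dogs in buns
print(matched_pair_output(30, 1, 1))  # after automation:  15.0 hot dogs in buns
```

Same 30 units of labor, more hot dogs in buns: the productivity gain shows up as higher output, not as missing jobs.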
Q. Sounds like a lovely theory. As the proverb goes, the tragedy of science is a beautiful theory slain by an ugly fact. Experiment trumps theory and in reality, unemployment is rising.
A. Sure. Except that the happy equilibrium with 15 hot dogs in buns is exactly what happened over the last four centuries, as we went from 95% of the population being farmers to 2% of the population being farmers (in agriculturally self-sufficient developed countries). We don't live in a world where 93% of the people are unemployed because 93% of the jobs went away. The first thought, that automation removes a job and thus leaves the economy with one fewer job, has not been the way the world has worked since the Industrial Revolution. The parable of the hot dog in the bun is how economies really, actually worked in real life for centuries. Automation followed by re-employment went on for literally centuries in exactly the way that the standard lovely economic model said it should. The idea that there's a limited amount of work which is destroyed by automation is known in economics as the "lump of labour fallacy".
Q. But now people aren't being reemployed. The jobs that went away in the Great Recession aren't coming back, even as the stock market and corporate profits rise again.
A. Yes. And that's a new problem. We didn't get that when the Model T automobile mechanized the entire horse-and-buggy industry out of existence. The difficulty with supposing that automation is producing unemployment is that automation isn't new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?

A New Interpretation of the Marshmallow Test
I've begun to notice a pattern with experiments in behavioral economics. An experiment produces a result that's counter-intuitive and surprising, and demonstrates that people don't behave as rationally as expected. Then, as time passes, other researchers contrive different versions of the experiment that show the experiment may not have been about what we thought it was about in the first place. For example, in the dictator game, Jeffrey Winking and Nicholas Mizer changed the experiment so that the participants didn't know each other and the subjects didn't know they were in an experiment. With this simple adjustment that made the conditions of the game more realistic, the "dictators" switched from giving away a large portion of their unearned gains to giving away nothing. Now it's happened to the marshmallow test.
In the original Stanford marshmallow experiment, children were given one marshmallow. They could eat the marshmallow right away; or, if they waited fifteen minutes for the experimenter to return without eating the marshmallow, they'd get a second marshmallow. Even more interestingly, in follow-up studies two decades later, the children who had waited longer for the second marshmallow, i.e. had shown delayed gratification, had higher SAT scores, better school performance, and even improved Body Mass Index. This is normally interpreted as indicating the importance of self-control and delayed gratification for life success.
Not so fast.
In a new variant of the experiment entitled (I kid you not) "Rational snacking", Celeste Kidd, Holly Palmeri, and Richard N. Aslin from the University of Rochester gave the children a similar test with an interesting twist.
They assigned 28 children to two groups and asked them to perform art projects. Children in the first group each received half a container of used crayons and were told that if they could wait, the researcher would bring them more and better art supplies. However, after two and a half minutes, the adult returned and told the child they had made a mistake: there were no more art supplies, so they'd have to use the original crayons.
In part 2, the adult gave the child a single sticker and told the child that if they waited, the adult would bring them more stickers to use. Again the adult reneged.
Children in the second group went through the same routine except this time the adult fulfilled their promises, bringing the children more and better art supplies and several large stickers.
After these two events, the experimenters repeated the classic marshmallow test with both groups. The results suggest children are a lot more rational than we might have thought. Of the 14 children in group 1, who had been shown that the experimenters were unreliable adults, 13 ate the first marshmallow. Eight of the 14 children in the reliable-adult group waited out the full fifteen minutes. On average, children in unreliable group 1 waited only 3 minutes, while those in reliable group 2 waited 12 minutes.
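For the curious, the split between the groups is stark enough to pass a standard significance test. This is my own quick check on the reported counts, not an analysis from the paper:

```python
# Is the 13-of-14 vs. 6-of-14 split plausibly chance? A Fisher exact test
# on the 2x2 table of (ate first marshmallow, waited) by group:
from scipy.stats import fisher_exact

table = [[13, 1],   # unreliable-adult group: 13 ate, 1 waited
         [6, 8]]    # reliable-adult group:    6 ate, 8 waited
odds_ratio, p_value = fisher_exact(table)
print(p_value)      # roughly 0.01, well below the usual 0.05 threshold
```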
So maybe what the longitudinal studies show is that children who come from an environment where they have learned to be more trusting have better life outcomes. I make absolutely no claims as to which direction the arrow of causality may run, or whether it's pure correlation with other factors. For instance, maybe breastfeeding increases both trust and academic performance. But any way you interpret these results, the case for the importance and even the existence of innate self-control is looking a lot weaker.