2016 LessWrong Diaspora Survey Analysis: Part Four (Politics, Calibration & Probability, Futurology, Charity & Effective Altruism)
Politics
The LessWrong survey has a very involved section dedicated to politics. In previous analyses the benefits of this weren't fully realized. In the 2016 analysis we can look not just at the political affiliation of a respondent, but at what beliefs are associated with a given affiliation. The charts below summarize most of the results.
Political Opinions By Political Affiliation

Miscellaneous Politics
There were also some other questions in this section which aren't covered by the above charts.
Voting
| Group | Turnout |
|---|---|
| LessWrong | 68.9% |
| Australia | 91% |
| Brazil | 78.90% |
| Britain | 66.4% |
| Canada | 68.3% |
| Finland | 70.1% |
| France | 79.48% |
| Germany | 71.5% |
| India | 66.3% |
| Israel | 72% |
| New Zealand | 77.90% |
| Russia | 65.25% |
| United States | 54.9% |
Calibration And Probability Questions
Calibration Questions
I just couldn't analyze these, sorry guys. I put many hours into trying to get them into a decent format I could even read and that sucked up an incredible amount of time. It's why this part of the survey took so long to get out. Thankfully another LessWrong user, Houshalter, has kindly done their own analysis.
All my calibration questions were meant to satisfy a few essential properties:
- They should be 'self-contained', i.e., something you can reasonably answer, or at least try to answer, with a 5th-grade science education and normal life experience.
- They should, at least to a certain extent, be Fermi Estimable.
- They should progressively scale in difficulty so you can see whether somebody understands basic probability or not. (e.g., in an 'or' question, do they put a probability of less than 50% on being right?)
At least one person requested a workbook, so I might write more in the future. I'll obviously write more for the survey.
Probability Questions
| Question | Mean | Median | Mode | Stdev |
|---|---|---|---|---|
| Please give the obvious answer to this question, so I can automatically throw away all surveys that don't follow the rules: What is the probability of a fair coin coming up heads? | 49.821 | 50.0 | 50.0 | 3.033 |
| What is the probability that the Many Worlds interpretation of quantum mechanics is more or less correct? | 44.599 | 50.0 | 50.0 | 29.193 |
| What is the probability that non-human, non-Earthly intelligent life exists in the observable universe? | 75.727 | 90.0 | 99.0 | 31.893 |
| ...in the Milky Way galaxy? | 45.966 | 50.0 | 10.0 | 38.395 |
| What is the probability that supernatural events (including God, ghosts, magic, etc) have occurred since the beginning of the universe? | 13.575 | 1.0 | 1.0 | 27.576 |
| What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe? | 15.474 | 1.0 | 1.0 | 27.891 |
| What is the probability that any of humankind's revealed religions is more or less correct? | 10.624 | 0.5 | 1.0 | 26.257 |
| What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then? | 21.225 | 10.0 | 5.0 | 26.782 |
| What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time? | 25.263 | 10.0 | 1.0 | 30.510 |
| What is the probability that our universe is a simulation? | 25.256 | 10.0 | 50.0 | 28.404 |
| What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions? | 83.307 | 90.0 | 90.0 | 23.167 |
| What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity? | 76.310 | 80.0 | 80.0 | 22.933 |
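Summary statistics like those in the table above are straightforward to reproduce. Here is a minimal sketch (not the actual analysis code) using Python's `statistics` module; `responses` is a made-up sample, not real survey data.

```python
import statistics

# A made-up column of probability responses (percentages), standing in
# for one question's answers from the survey data.
responses = [50.0, 50.0, 45.0, 60.0, 50.0, 40.0, 55.0]

summary = {
    "mean": statistics.mean(responses),
    "median": statistics.median(responses),
    "mode": statistics.mode(responses),
    "stdev": statistics.stdev(responses),  # sample standard deviation
}
```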
The probability questions are probably the area of the survey I put the least effort into. My plan for next year is to overhaul these sections entirely and try including some Tetlock-esque forecasting questions, a link to some advice on how to make good predictions, etc.
Futurology
This section got a bit of a facelift this year, with new questions on cryonics, genetic engineering, and technological unemployment in addition to those from previous years.
Cryonics
Interestingly enough, of those who think it will work with enough confidence to say 'yes', only 14 are actually signed up for cryonics.
sqlite> select count(*) from data where CryonicsNow="Yes" and Cryonics="Yes - signed up or just finishing up paperwork";
14
sqlite> select count(*) from data where CryonicsNow="Yes" and (Cryonics="Yes - signed up or just finishing up paperwork" OR Cryonics="No - would like to sign up but unavailable in my area" OR Cryonics="No - would like to sign up but haven't gotten around to it" OR Cryonics="No - would like to sign up but can't afford it");
34
LessWrongers seem to be very bullish on the underlying physics of cryonics even if they're not as enthusiastic about current methods in use.
The Brain Preservation Foundation also did an analysis of cryonics responses to the LessWrong Survey.
Singularity
SingularityYear
By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a singularity is so unlikely you don't even want to condition on it, leave this question blank.
Mean: 8.110300081581755e+16
Median: 2080.0
Mode: 2100.0
Stdev: 2.847858859055733e+18
I didn't bother to filter out the silly answers for this. Obviously it's a bit hard to see without filtering out the uber-large answers, but the median doesn't seem to have changed much from the 2014 survey.
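If one did want to filter the silly answers, a simple approach is to drop anything outside a plausible window before computing the statistics. This is just a sketch: the window (2016 to 3000) is my own arbitrary choice, and `answers` is a made-up sample with one huge joke entry mixed in.

```python
import statistics

# Made-up sample of "Singularity year" answers, one absurd entry included.
answers = [2040.0, 2060.0, 2080.0, 2100.0, 2100.0, 2500.0, 8.1e16]

filtered = [y for y in answers if 2016 <= y <= 3000]
mean_filtered = statistics.mean(filtered)  # no longer dominated by the outlier
median_all = statistics.median(answers)    # the median was robust all along
```

This also illustrates why the median above looked sane while the mean did not: a single absurd value drags the mean arbitrarily far but barely moves the median.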
Genetic Engineering
Well that's fairly overwhelming.
I find it amusing how the strict "No" group shrinks considerably after this question.
This question is too important to leave unanswered, so I'll compute it manually. Unfortunately I can't easily remove the 'excluded' entries so that we're dealing with the exact same distribution, but only 13 or so responses are filtered out anyway.
sqlite> select count(*) from data where GeneticImprovement="Yes";
1100
>>> 1100 + 176 + 262 + 84
1622
>>> 1100 / 1622
0.6781750924784217
67.8% are willing to genetically engineer their children for improvements.
These numbers go about how you would expect, with people being progressively less interested the more 'shallow' a genetic change is seen as.
All three of these seem largely consistent with people's personal preferences about modification. Were I so inclined, I could do a deeper analysis that takes survey respondents row by row and looks at the correlation between preference for one's own children and preference for others'.
Technological Unemployment
LudditeFallacy
Do you think the Luddite's Fallacy is an actual fallacy?
Yes: 443 (30.936%)
No: 989 (69.064%)
We can use this as an overall measure of worry about technological unemployment, which would seem to be high among the LW demographic.
UnemploymentYear
By what year do you think the majority of people in your country will have trouble finding employment for automation related reasons? If you think this is something that will never happen leave this question blank.
Mean: 2102.9713740458014
Median: 2050.0
Mode: 2050.0
Stdev: 1180.2342850727339
This question is flawed because you can't distinguish answers of "never happen" from people who just didn't see it. Still, it's an interesting question that would be fun to compare against the estimates for the Singularity.
EndOfWork
Do you think the "end of work" would be a good thing?
Yes: 1238 (81.287%)
No: 285 (18.713%)
Fairly overwhelming consensus, but with a significant minority of people who have a dissenting opinion.
EndOfWorkConcerns
If machines end all or almost all employment, what are your biggest worries? Pick two.
| Worry | Count | Percent |
|---|---|---|
| People will just idle about in destructive ways | 513 | 16.71% |
| People need work to be fulfilled and if we eliminate work we'll all feel deep existential angst | 543 | 17.687% |
| The rich are going to take all the resources for themselves and leave the rest of us to starve or live in poverty | 1066 | 34.723% |
| The machines won't need us, and we'll starve to death or be otherwise liquidated | 416 | 13.55% |
The plurality of worries are about elites who refuse to share their wealth.
Existential Risk
XRiskType
Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?
Nuclear war: +4.800% 326 (20.6%)
Asteroid strike: -0.200% 64 (4.1%)
Unfriendly AI: +1.000% 271 (17.2%)
Nanotech / grey goo: -2.000% 18 (1.1%)
Pandemic (natural): +0.100% 120 (7.6%)
Pandemic (bioengineered): +1.900% 355 (22.5%)
Environmental collapse (including global warming): +1.500% 252 (16.0%)
Economic / political collapse: -1.400% 136 (8.6%)
Other: 35 (2.217%)
Significantly more people are worried about nuclear war than last year. An effect of new respondents, or of the geopolitical situation? Who knows.
Charity And Effective Altruism
Charitable Giving
Income
What is your approximate annual income in US dollars (non-Americans: convert at www.xe.com)? Obviously you don't need to answer this question if you don't want to. Please don't include commas or dollar signs.
Sum: 66054140.47384
Mean: 64569.052271593355
Median: 40000.0
Mode: 30000.0
Stdev: 107297.53606321265
IncomeCharityPortion
How much money, in number of dollars, have you donated to charity over the past year? (non-Americans: convert to dollars at http://www.xe.com/ ). Please don't include commas or dollar signs in your answer. For example, 4000
Sum: 2389900.6530000004
Mean: 2914.5129914634144
Median: 353.0
Mode: 100.0
Stdev: 9471.962766896671
XriskCharity
How much money have you donated to charities aiming to reduce existential risk (other than MIRI/CFAR) in the past year?
Sum: 169300.89
Mean: 1991.7751764705883
Median: 200.0
Mode: 100.0
Stdev: 9219.941506342007
CharityDonations
How much have you donated in US dollars to the following charities in the past year? (Non-americans: convert to dollars at http://www.xe.com/) Please don't include commas or dollar signs in your answer. Options starting with "any" aren't the name of a charity but a category of charity.
| Charity | Sum | Mean | Median | Mode | Stdev |
|---|---|---|---|---|---|
| Against Malaria Foundation | 483935.027 | 1905.256 | 300.0 | None | 7216.020 |
| Schistosomiasis Control Initiative | 47908.0 | 840.491 | 200.0 | 1000.0 | 1618.785 |
| Deworm the World Initiative | 28820.0 | 565.098 | 150.0 | 500.0 | 1432.712 |
| GiveDirectly | 154410.177 | 1429.723 | 450.0 | 50.0 | 3472.082 |
| Any kind of animal rights charity | 83130.47 | 1093.821 | 154.235 | 500.0 | 2313.493 |
| Any kind of bug rights charity | 1083.0 | 270.75 | 157.5 | None | 353.396 |
| Machine Intelligence Research Institute | 141792.5 | 1417.925 | 100.0 | 100.0 | 5370.485 |
| Any charity combating nuclear existential risk | 491.0 | 81.833 | 75.0 | 100.0 | 68.060 |
| Any charity combating global warming | 13012.0 | 245.509 | 100.0 | 10.0 | 365.542 |
| Center For Applied Rationality | 127101.0 | 3177.525 | 150.0 | 100.0 | 12969.096 |
| Strategies for Engineered Negligible Senescence Research Foundation | 9429.0 | 554.647 | 100.0 | 20.0 | 1156.431 |
| Wikipedia | 12765.5 | 53.189 | 20.0 | 10.0 | 126.444 |
| Internet Archive | 2975.04 | 80.406 | 30.0 | 50.0 | 173.791 |
| Any campaign for political office | 38443.99 | 366.133 | 50.0 | 50.0 | 1374.305 |
| Other | 564890.46 | 1661.442 | 200.0 | 100.0 | 4670.805 |
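The number of donors behind each row isn't printed, but since the table's mean is just sum divided by count, it can be backed out. A quick sketch, with the values copied from the MIRI row above:

```python
# Implied donor count for the MIRI row: mean = sum / count, so
# count = sum / mean.
miri_sum = 141792.5
miri_mean = 1417.925
implied_donors = miri_sum / miri_mean  # about 100 donors
```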
This table is interesting given the recent debates about how much money certain causes are 'taking up' in Effective Altruism.
Effective Altruism
Vegetarian
Do you follow any dietary restrictions related to animal products?
Yes, I am vegan: 54 (3.4%)
Yes, I am vegetarian: 158 (10.0%)
Yes, I restrict meat some other way (pescetarian, flexitarian, try to only eat ethically sourced meat): 375 (23.7%)
No: 996 (62.9%)
EAKnowledge
Do you know what Effective Altruism is?
Yes: 1562 (89.3%)
No but I've heard of it: 114 (6.5%)
No: 74 (4.2%)
EAIdentity
Do you self-identify as an Effective Altruist?
Yes: 665 (39.233%)
No: 1030 (60.767%)
The distribution given by the 2014 survey results does not sum to one, so it's difficult to determine whether Effective Altruism's membership actually went up, but if we take the numbers at face value it experienced an 11.13% increase in membership.
EACommunity
Do you participate in the Effective Altruism community?
Yes: 314 (18.427%)
No: 1390 (81.573%)
Same issue as above; taking the numbers at face value, community participation went up by 5.727%.
EADonations
Has Effective Altruism caused you to make donations you otherwise wouldn't?
Yes: 666 (39.269%)
No: 1030 (60.731%)
Wowza!
Effective Altruist Anxiety
EAAnxiety
Have you ever had any kind of moral anxiety over Effective Altruism?
Yes: 501 (29.6%)
Yes but only because I worry about everything: 184 (10.9%)
No: 1008 (59.5%)
There's an ongoing debate in Effective Altruism about what kind of rhetorical strategy is best for getting people on board and whether Effective Altruism is causing people significant moral anxiety.
It certainly appears to be. But is moral anxiety effective? Let's look:
Sample Size: 244
Average amount of money donated by people anxious about EA who aren't EAs: 257.5409836065574
Sample Size: 679
Average amount of money donated by people who aren't anxious about EA who aren't EAs: 479.7501384388807
Sample Size: 249
Average amount of money donated by EAs anxious about EA: 1841.5292369477913
Sample Size: 314
Average amount of money donated by EAs not anxious about EA: 1837.8248407643312
It seems fairly conclusive that anxiety is not a good way to get people to donate more than they already are, but is it a good way to get people to become Effective Altruists?
Sample Size: 1685
P(Effective Altruist): 0.3940652818991098
P(EA Anxiety): 0.29554896142433235
P(Effective Altruist | EA Anxiety): 0.5
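The raw counts behind these probabilities aren't printed above; the integers below are inferred from the reported probabilities and the sample size, so treat them as an illustration of the arithmetic rather than the actual analysis code.

```python
n = 1685          # sample size
ea = 664          # self-identified Effective Altruists
anxious = 498     # respondents reporting EA-related anxiety
both = 249        # anxious respondents who also identify as EAs

p_ea = ea / n                        # P(Effective Altruist)
p_anxious = anxious / n              # P(EA Anxiety)
p_ea_given_anxious = both / anxious  # P(Effective Altruist | EA Anxiety)
```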
Maybe. There is of course an argument to be made that sufficient good done by causing people anxiety outweighs feeding into people's scrupulosity, but it can be discussed after I get through explaining it on the phone to wealthy PR-conscious donors and telling the local all-kill shelter where I want my shipment of dead kittens.
EAOpinion
What's your overall opinion of Effective Altruism?
Positive: 809 (47.6%)
Mostly Positive: 535 (31.5%)
No strong opinion: 258 (15.2%)
Mostly Negative: 75 (4.4%)
Negative: 24 (1.4%)
EA appears to be doing a pretty good job of getting people to like them.
Interesting Tables
| Affiliation | Income | Charity Contributions | % Income Donated To Charity | Total Survey Charity % | Sample Size |
|---|---|---|---|---|---|
| Anarchist | 1677900.0 | 72386.0 | 4.314% | 3.004% | 50 |
| Communist | 298700.0 | 19190.0 | 6.425% | 0.796% | 13 |
| Conservative | 1963000.04 | 62945.04 | 3.207% | 2.612% | 38 |
| Futarchist | 1497494.1099999999 | 166254.0 | 11.102% | 6.899% | 31 |
| Left-Libertarian | 9681635.613839999 | 416084.0 | 4.298% | 17.266% | 245 |
| Libertarian | 11698523.0 | 214101.0 | 1.83% | 8.885% | 190 |
| Moderate | 3225475.0 | 90518.0 | 2.806% | 3.756% | 67 |
| Neoreactionary | 1383976.0 | 30890.0 | 2.232% | 1.282% | 28 |
| Objectivist | 399000.0 | 1310.0 | 0.328% | 0.054% | 10 |
| Other | 3150618.0 | 85272.0 | 2.707% | 3.539% | 132 |
| Pragmatist | 5087007.609999999 | 266836.0 | 5.245% | 11.073% | 131 |
| Progressive | 8455500.440000001 | 368742.78 | 4.361% | 15.302% | 217 |
| Social Democrat | 8000266.54 | 218052.5 | 2.726% | 9.049% | 237 |
| Socialist | 2621693.66 | 78484.0 | 2.994% | 3.257% | 126 |
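As a sanity check, the "% Income Donated To Charity" column is just the ratio of the two preceding columns. A sketch with the values copied from the Anarchist row of the table above:

```python
# Anarchist row: total income and total charity contributions.
income = 1677900.0
donations = 72386.0
pct_of_income = donations / income * 100  # matches the 4.314% in the table
```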
| Community | Count | % In Community | Sample Size |
|---|---|---|---|
| LessWrong | 136 | 38.418% | 354 |
| LessWrong Meetups | 109 | 50.463% | 216 |
| LessWrong Facebook Group | 83 | 48.256% | 172 |
| LessWrong Slack | 22 | 39.286% | 56 |
| SlateStarCodex | 343 | 40.98% | 837 |
| Rationalist Tumblr | 175 | 49.716% | 352 |
| Rationalist Facebook | 89 | 58.94% | 151 |
| Rationalist Twitter | 24 | 40.0% | 60 |
| Effective Altruism Hub | 86 | 86.869% | 99 |
| Good Judgement(TM) Open | 23 | 74.194% | 31 |
| PredictionBook | 31 | 51.667% | 60 |
| Hacker News | 91 | 35.968% | 253 |
| #lesswrong on freenode | 19 | 24.675% | 77 |
| #slatestarcodex on freenode | 9 | 24.324% | 37 |
| #chapelperilous on freenode | 2 | 18.182% | 11 |
| /r/rational | 117 | 42.545% | 275 |
| /r/HPMOR | 110 | 47.414% | 232 |
| /r/SlateStarCodex | 93 | 37.959% | 245 |
| One or more private 'rationalist' groups | 91 | 47.15% | 193 |
| Affiliation | EA Income | EA Charity | Sample Size |
|---|---|---|---|
| Anarchist | 761000.0 | 57500.0 | 18 |
| Futarchist | 559850.0 | 114830.0 | 15 |
| Left-Libertarian | 5332856.0 | 361975.0 | 112 |
| Libertarian | 2725390.0 | 114732.0 | 53 |
| Moderate | 583247.0 | 56495.0 | 22 |
| Other | 1428978.0 | 69950.0 | 49 |
| Pragmatist | 1442211.0 | 43780.0 | 43 |
| Progressive | 4004097.0 | 304337.78 | 107 |
| Social Democrat | 3423487.45 | 149199.0 | 93 |
| Socialist | 678360.0 | 34751.0 | 41 |
Wikipedia usage survey results
Contents
- Summary
- Background
- Previous surveys
- Motivation
- Survey questions for the first survey
- Survey questions for the second survey
- Survey questions for the third survey (Google Consumer Surveys)
- Results
- S1Q1: number of Wikipedia pages read per week
- S1Q2: affinity for Wikipedia in search results
- S1Q3: section vs whole page
- S1Q4: search functionality on Wikipedia and surprise at lack of Wikipedia pages
- S1Q5: behavior on pages
- S2Q1: number of Wikipedia pages read per week
- S2Q2: multiple-choice of articles read
- S2Q3: free response of articles read
- S2Q4: free response of surprise at lack of Wikipedia pages
- S3Q1 (Google Consumer Surveys)
- Summaries of responses (exports for SurveyMonkey, weblink for Google Consumer Surveys)
- Survey-making lessons
- Further questions
- Further reading
- Acknowledgements
- Document source and versions
- License
Summary
The summary is not intended to be comprehensive. It highlights the most important takeaways you should get from this post.
-
Vipul Naik and I are interested in understanding how people use Wikipedia. One reason is that we are getting more people to work on editing and adding content to Wikipedia. We want to understand the impact of these edits, so that we can direct efforts more strategically. We are also curious!
-
From May to July 2016, we conducted two surveys of people’s Wikipedia usage. We collected survey responses from audience segments including Slate Star Codex readers, Vipul’s Facebook friends, and a few audiences through SurveyMonkey Audience and Google Consumer Surveys. Our survey questions measured how heavily people use Wikipedia, what sort of pages they read or expected to find, the relation between their search habits and Wikipedia, and other actions they took within Wikipedia.
-
Different audience segments responded very differently to the survey. Notably, the SurveyMonkey audience (which is closer to being representative of the general population) appears to use Wikipedia a lot less than Vipul’s Facebook friends and Slate Star Codex readers. Their consumption of Wikipedia is also more passive: they are less likely to explicitly seek Wikipedia pages when searching for a topic, and less likely to engage in additional actions on Wikipedia pages. Even the college-educated SurveyMonkey audience used Wikipedia very little.
-
This is tentative evidence that Wikipedia consumption is skewed towards a certain profile of people (and Vipul’s Facebook friends and Slate Star Codex readers sample much more heavily from that profile). Even more tentatively, these heavy users tend to be more “elite” and influential. This tentatively led us to revise upward our estimates of the social value of a Wikipedia pageview.
-
This was my first exercise in survey construction. I learned a number of lessons about survey design in the process.
-
All the survey questions, as well as the breakdown of responses for each of the audience segments, are described in this post. Links to PDF exports of response summaries are at the end of the post.
Background
At the end of May 2016, Vipul Naik and I created a Wikipedia usage survey to gauge the usage habits of Wikipedia readers and editors. SurveyMonkey allows the use of different “collectors” (i.e. survey URLs that keep results separate), so we circulated several different URLs among four locations to see how different audiences would respond. The audiences were as follows:
- SurveyMonkey’s United States audience with no demographic filters (62 responses, 54 of which are full responses)
- Vipul Naik’s timeline (post asking people to take the survey; 70 responses, 69 of which are full responses). For background on Vipul’s timeline audience, see his page on how he uses Facebook.
- The Wikipedia Analytics mailing list (email linking to the survey; 7 responses, 6 of which are full responses). Note that due to the small size of this group, the results below should not be trusted, except possibly when the votes are decisive.
- Slate Star Codex (post that links to the survey; 618 responses, 596 of which are full responses). While Slate Star Codex isn’t the same as LessWrong, we think there is significant overlap in the two sites’ audiences (see e.g. the recent LessWrong diaspora survey results).
- In addition, although not an actual audience with a separate URL, several of the tables we present below will include an “H” group; this is the heavy users group of people who responded by saying they read 26 or more articles per week on Wikipedia. This group has 179 people: 164 from Slate Star Codex, 11 from Vipul’s timeline, and 4 from the Analytics mailing list.
We ran the survey from May 30 to July 9, 2016 (although only the Slate Star Codex survey had a response past June 1).
After we looked at the survey responses on the first day, Vipul and I decided to create a second survey to focus on the parts of the first survey that interested us the most. The second survey was only circulated among SurveyMonkey’s audiences: we used SurveyMonkey’s US audience with no demographic filters (54 responses), as well as a US audience of ages 18–29 with a college or graduate degree (50 responses). We ran the survey on the unfiltered audience again because the wording of our first question had changed and we wanted a new baseline. We then chose to filter for young college-educated people for two reasons. First, our prediction was that more educated people would be more likely to read Wikipedia; SurveyMonkey’s demographic data does not include education, and we hadn’t yet seen the Pew Internet Research surveys discussed in the next section, so we were relying on our intuition and some demographic data from past surveys. Second, young people in our first survey gave more informative free-form responses, so we expected the same in survey 2 (SurveyMonkey’s demographic data does include age).
We ran a third survey on Google Consumer Surveys with a single question that was a word-to-word replica of the first question from the second survey. The main motivation here was that on Google Consumer Surveys, a single-question survey costs only 10 cents per response, so it was possible to get to a large number of responses at relatively low cost, and achieve more confidence in the tentative conclusions we had drawn from the SurveyMonkey surveys.
Previous surveys
Several demographic surveys regarding Wikipedia have been conducted, targeting both editors and users. The surveys we found most helpful were the following:
- The 2010 Wikipedia survey by the Collaborative Creativity Group and the Wikimedia Foundation. The explanation before the bottom table on page 7 of the overview PDF has “Contributors show slightly but significantly higher education levels than readers”, which provides weak evidence that more educated people are more likely to engage with Wikipedia.
- The Global South User Survey 2014 by the Wikimedia Foundation
- Pew Internet Research’s 2011 survey: “Education level continues to be the strongest predictor of Wikipedia use. The collaborative encyclopedia is most popular among internet users with at least a college degree, 69% of whom use the site.” (page 3)
- Pew Internet Research’s 2007 survey
Note that we found the Pew Internet Research surveys after conducting our own two surveys (and during the write-up of this document).
Motivation
Vipul and I ultimately want to get a better sense of the value of a Wikipedia pageview (one way to measure the impact of content creation), and one way to do this is to understand how people are using Wikipedia. As we focus on getting more people to work on editing Wikipedia – thus causing more people to read the content we pay and help to create – it becomes more important to understand what people are doing on the site.
For some previous discussion, see also Vipul’s answers to the following Quora questions:
- What are the various parameters that affect the value of a pageview?
- What’s the relative social value of 1 Quora pageview (as measured by Quora stats http://www.quora.com/stats) and 1 Wikipedia pageview (as measured at, say, Wikipedia article traffic statistics)?
Wikipedia allows relatively easy access to pageview data (especially by using tools developed for this purpose, including one that Vipul made), and there are some surveys that provide demographic data (see “Previous surveys” above). However, after looking around, it was apparent that the kind of information our survey was designed to find was not available.
I should also note that we were driven by curiosity about how people use Wikipedia.
Survey questions for the first survey
For reference, here are the survey questions for the first survey. A dummy/mock-up version of the survey can be found here: https://www.surveymonkey.com/r/PDTTBM8.
The survey introduction said the following:
This survey is intended to gauge Wikipedia use habits. This survey has 3 pages with 5 questions total (3 on the first page, 1 on the second page, 1 on the third page). Please try your best to answer all of the questions, and make a guess if you’re not sure.
And the actual questions:
-
How many distinct Wikipedia pages do you read per week on average?
- less than 1
- 1 to 10
- 11 to 25
- 26 or more
-
On a search engine (e.g. Google) results page, do you explicitly seek Wikipedia pages, or do you passively click on Wikipedia pages only if they show up at the top of the results?
- I explicitly seek Wikipedia pages
- I have a slight preference for Wikipedia pages
- I just click on what is at the top of the results
-
Do you usually read a particular section of a page or the whole article?
- Particular section
- Whole page
-
How often do you do the following? (Choices: Several times per week, About once per week, About once per month, About once per several months, Never/almost never.)
- Use the search functionality on Wikipedia
- Be surprised that there is no Wikipedia page on a topic
-
For what fraction of pages you read do you do the following? (Choices: For every page, For most pages, For some pages, For very few pages, Never. These were displayed in a random order for each respondent, but displayed in alphabetical order here.)
- Check (click or hover over) at least one citation to see where the information comes from on a page you are reading
- Check how many pageviews a page is getting (on an external site or through the Pageview API)
- Click through/look for at least one cited source to verify the information on a page you are reading
- Edit a page you are reading because of grammatical/typographical errors on the page
- Edit a page you are reading to add new information
- Look at the “See also” section for additional articles to read
- Look at the editing history of a page you are reading
- Look at the editing history solely to see if a particular user wrote the page
- Look at the talk page of a page you are reading
- Read a page mostly for the “Criticisms” or “Reception” (or similar) section, to understand different views on the subject
- Share the page with a friend/acquaintance/coworker
For the SurveyMonkey audience, there were also some demographic questions (age, gender, household income, US region, and device type).
Survey questions for the second survey
For reference, here are the survey questions for the second survey. A dummy/mock-up version of the survey can be found here: https://www.surveymonkey.com/r/28BW78V.
The survey introduction said the following:
This survey is intended to gauge Wikipedia use habits. Please try your best to answer all of the questions, and make a guess if you’re not sure.
This survey has 4 questions across 3 pages.
In this survey, “Wikipedia page” refers to a Wikipedia page in any language (not just the English Wikipedia).
And the actual questions:
-
How many distinct Wikipedia pages do you read (at least one sentence of) per week on average?
- Fewer than 1
- 1 to 10
- 11 to 25
- 26 or more
-
Which of these articles have you read (at least one sentence of) on Wikipedia (select all that apply)? (These were displayed in a random order except the last option for each respondent, but displayed in alphabetical order except the last option here.)
- Adele
- Barack Obama
- Bernie Sanders
- China
- Donald Trump
- Hillary Clinton
- India
- Japan
- Justin Bieber
- Justin Trudeau
- Katy Perry
- Taylor Swift
- The Beatles
- United States
- World War II
- None of the above
-
What are some of the Wikipedia articles you have most recently read (at least one sentence of)? Feel free to consult your browser’s history.
-
Recall a time when you were surprised that a topic did not have a Wikipedia page. What were some of these topics?
Survey questions for the third survey (Google Consumer Surveys)
This survey had exactly one question. The wording of the question was exactly the same as that of the first question of the second survey.
-
How many distinct Wikipedia pages do you read (at least one sentence of) per week on average?
- Fewer than 1
- 1 to 10
- 11 to 25
- 26 or more
One slight difference was that whereas in the second survey, the order of the options was fixed, the third survey did a 50/50 split between that order and the exact reverse order. Such splitting is a best practice to deal with any order-related biases, while still preserving the logical order of the options. You can read more on the questionnaire design page of the Pew Research Center.
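A hypothetical sketch of what such a 50/50 order split looks like (the actual split is handled by Google Consumer Surveys, not by code like this):

```python
import random

# Half of respondents see the options in the canonical order, half in
# the exact reverse; no other permutations, so the logical ordering of
# the options is preserved either way.
OPTIONS = ["Fewer than 1", "1 to 10", "11 to 25", "26 or more"]

def option_order(rng: random.Random) -> list:
    """Return the option order shown to a single respondent."""
    return OPTIONS if rng.random() < 0.5 else list(reversed(OPTIONS))
```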
Results
In this section we present the highlights from each of the survey questions. If you prefer to dig into the data yourself, there are also some exported PDFs below provided by SurveyMonkey. Most of the inferences can be made using these PDFs, but there are some cases where additional filters are needed to deduce certain percentages.
We use the notation “SnQm” to mean “survey n question m”.
S1Q1: number of Wikipedia pages read per week
Here is a table that summarizes the data for Q1:
| Response | SM | V | SSC | AM |
|---|---|---|---|---|
| less than 1 | 42% | 1% | 1% | 0% |
| 1 to 10 | 45% | 40% | 37% | 29% |
| 11 to 25 | 13% | 43% | 36% | 14% |
| 26 or more | 0% | 16% | 27% | 57% |
Here are some highlights from the first question that aren’t apparent from the table:
-
Of the people who read fewer than 1 distinct Wikipedia page per week (26 people), 68% were female even though females were only 48% of the respondents. (Note that gender data is only available for the SurveyMonkey audience.)
-
Filtering for high household income ($150k or more; 11 people) in the SurveyMonkey audience, only 2 read fewer than 1 page per week, although most (7) of the responses still fall in the “1 to 10” category.
The comments indicated that this question was flawed in several ways: we didn’t specify which language Wikipedias count nor what it meant to “read” an article (the whole page, a section, or just a sentence?). One comment questioned the “low” ceiling of 26; in fact, I had initially made the cutoffs 1, 10, 100, 500, and 1000, but Vipul suggested the final cutoffs because he argued they would make it easier for people to answer (without having to look it up in their browser history). It turned out this modification was reasonable because the “26 or more” group was a minority.
S1Q2: affinity for Wikipedia in search results
We asked Q2, “On a search engine (e.g. Google) results page, do you explicitly seek Wikipedia pages, or do you passively click on Wikipedia pages only if they show up at the top of the results?”, to see to what extent people preferred Wikipedia in search results. The main implication of this for people who do content creation on Wikipedia is that if people do explicitly seek Wikipedia pages (for whatever reason), it makes sense to give them more of what they want. On the other hand, if people don’t prefer Wikipedia, it makes sense to update in favor of diversifying one’s content creation efforts while still keeping in mind that raw pageviews indicate that content will be read more if placed on Wikipedia (see for instance Brian Tomasik’s experience, which is similar to my own, or gwern’s page comparing Wikipedia with other wikis).
The following table summarizes our results.
| Response | SM | V | SSC | AM | H |
|---|---|---|---|---|---|
| Explicitly seek Wikipedia | 19% | 60% | 63% | 57% | 79% |
| Slight preference for Wikipedia | 29% | 39% | 34% | 43% | 20% |
| Just click on top results | 52% | 1% | 3% | 0% | 1% |
One error on my part was that I didn’t include an option for people who avoided Wikipedia or did something else. This became apparent from the comments. For this reason, the “Just click on top results” option might be inflated. In addition, some comments indicated a mixed strategy of preferring Wikipedia for general overviews while avoiding it for specific inquiries, so allowing multiple selections might have been better for this question.
S1Q3: section vs whole page
This question is relevant for Vipul and me because the work Vipul funds is mainly whole-page creation. If people are mostly reading the introduction or a particular section like the “Criticisms” or “Reception” section (see S1Q5), then that forces us to consider spending more time on those sections, or to strengthen those sections on weak existing pages.
Responses to this question were fairly consistent across different audiences, as can be seen in the following table.
| Response | SM | V | SSC | AM |
|---|---|---|---|---|
| Section | 73% | 80% | 74% | 86% |
| Whole | 34% | 23% | 33% | 29% |
Note that people were allowed to select more than one option for this question. The comments indicate that several people do a combination, where they read the introductory portion of an article, then narrow down to the section of their interest.
S1Q4: search functionality on Wikipedia and surprise at lack of Wikipedia pages
We asked about whether people use the search functionality on Wikipedia because we wanted to know more about people’s article discovery methods. The data is summarized in the following table.
| Response | SM | V | SSC | AM | H |
|---|---|---|---|---|---|
| Several times per week | 8% | 14% | 32% | 57% | 55% |
| About once per week | 19% | 17% | 21% | 14% | 15% |
| About once per month | 15% | 13% | 14% | 0% | 3% |
| About once per several months | 13% | 12% | 9% | 14% | 5% |
| Never/almost never | 45% | 43% | 24% | 14% | 23% |
Many people noted here that rather than using Wikipedia’s search functionality, they use Google with “wiki” attached to their query, DuckDuckGo’s “!w” expression, or some browser configuration to allow a quick search on Wikipedia.
To get a fuller picture of people’s content discovery methods, we should have asked about other methods as well. We did ask about the “See also” section in S1Q5.
Next, we asked how often people are surprised that there is no Wikipedia page on a topic to gauge to what extent people notice a “gap” between how Wikipedia exists today and how it could exist. We were curious about what articles people specifically found missing, so we followed up with S2Q4.
| Response | SM | V | SSC | AM | H |
|---|---|---|---|---|---|
| Several times per week | 2% | 0% | 2% | 29% | 6% |
| About once per week | 8% | 22% | 18% | 14% | 34% |
| About once per month | 18% | 36% | 34% | 29% | 31% |
| About once per several months | 21% | 22% | 27% | 0% | 19% |
| Never/almost never | 52% | 20% | 19% | 29% | 10% |
Two comments on this question (out of 59) – both from the SSC group – specifically bemoaned deletionism, with one comment calling deletionism “a cancer killing Wikipedia”.
S1Q5: behavior on pages
This question was intended to gauge how often people perform an action for a specific page; as such, the frequencies are expressed in page-relative terms.
The following table presents the scores for each response, which are weighted by the number of responses. The scores range from 1 (for every page) to 5 (never); in other words, the lower the number, the more frequently one does the thing.
| Response | SM | V | SSC | AM | H |
|---|---|---|---|---|---|
| Check ≥1 citation | 3.57 | 2.80 | 2.91 | 2.67 | 2.69 |
| Look at “See also” | 3.65 | 2.93 | 2.92 | 2.67 | 2.76 |
| Read mostly for “Criticisms” or “Reception” | 4.35 | 3.12 | 3.34 | 3.83 | 3.14 |
| Click through ≥1 source to verify information | 3.80 | 3.07 | 3.47 | 3.17 | 3.36 |
| Share the page | 4.11 | 3.72 | 3.86 | 3.67 | 3.79 |
| Look at the talk page | 4.31 | 4.28 | 4.03 | 3.00 | 3.86 |
| Look at the editing history | 4.35 | 4.32 | 4.12 | 3.33 | 3.92 |
| Edit a page for grammatical/typographical errors | 4.50 | 4.41 | 4.22 | 3.67 | 4.02 |
| Edit a page to add new information | 4.61 | 4.55 | 4.49 | 3.83 | 4.34 |
| Look at editing history to verify author | 4.50 | 4.65 | 4.48 | 3.67 | 4.73 |
| Check how many pageviews a page is getting | 4.63 | 4.88 | 4.96 | 3.17 | 4.92 |
The table above provides a good ranking of how often people perform these actions on pages, but not the distribution information (which would require three dimensions to present fully). In general, the more common actions (scores of 2.5–4) had responses that clustered among “For some pages”, “For very few pages”, and “Never”, while the less common actions (scores above 4) had responses that clustered mainly in “Never”.
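The scores above are weighted means of coded responses. As a minimal sketch (the response counts below are hypothetical illustrations, not the actual survey data), the computation looks like:

```python
# Minimal sketch of how the per-action scores above can be computed:
# responses are coded 1 ("for every page") through 5 ("never"), and the
# score is the response-weighted mean, so lower means more frequent.

def frequency_score(counts):
    """counts maps response code (1-5) to number of respondents."""
    total = sum(counts.values())
    return sum(code * n for code, n in counts.items()) / total

# Hypothetical response counts for one action (not actual survey data):
example = {1: 2, 2: 5, 3: 20, 4: 25, 5: 10}
score = frequency_score(example)  # roughly 3.58
```

A score of 3.58 would sit in the "more common actions" cluster described below.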
One comment (out of 43) – from the SSC group, but a different individual from the two in S1Q4 – bemoaned deletionism.
S2Q1: number of Wikipedia pages read per week
Note the wording changes on this question for the second survey: “less” was changed to “fewer”, the clarification “at least one sentence of” was added, and we explicitly allowed any language. We have also presented the survey 1 results for the SurveyMonkey audience in the corresponding rows, but note that because of the change in wording, the correspondence isn’t exact.
| Response | SM | CEYP | S1SM |
|---|---|---|---|
| Fewer than 1 | 37% | 32% | 42% |
| 1 to 10 | 48% | 64% | 45% |
| 11 to 25 | 7% | 2% | 13% |
| 26 or more | 7% | 2% | 0% |
Comparing SM with S1SM, we see that, probably because of the wording change, the percentages have drifted in the direction of more pages read. It might be surprising that the young educated audience seems to have a smaller fraction of heavy users than the general population. However, note that each group only had ~50 responses, and that we have no education information for the SM group.
S2Q2: multiple-choice of articles read
Our intention with this question was to see if people’s stated or recalled article frequencies matched the actual, revealed popularity of the articles. Therefore we present the pageview data along with the percentage of people who said they had read an article.
| Response | SM | CEYP | 2016 | 2015 |
|---|---|---|---|---|
| None | 37% | 40% | — | — |
| World War II | 17% | 22% | 2.6 | 6.5 |
| Barack Obama | 17% | 20% | 3.0 | 7.7 |
| United States | 17% | 18% | 4.3 | 9.6 |
| Donald Trump | 15% | 18% | 14.0 | 6.6 |
| Taylor Swift | 9% | 18% | 1.7 | 5.3 |
| Bernie Sanders | 17% | 16% | 4.3 | 3.8 |
| Japan | 11% | 16% | 1.6 | 3.7 |
| Adele | 6% | 16% | 2.0 | 4.0 |
| Hillary Clinton | 19% | 14% | 2.8 | 1.5 |
| China | 13% | 14% | 1.9 | 5.2 |
| The Beatles | 11% | 14% | 1.4 | 3.0 |
| Katy Perry | 9% | 12% | 0.8 | 2.4 |
|  | 15% | 10% | 3.0 | 9.0 |
| India | 13% | 10% | 2.4 | 6.4 |
| Justin Bieber | 4% | 8% | 1.6 | 3.0 |
| Justin Trudeau | 9% | 6% | 1.1 | 3.0 |
Below are four plots of the data. Note that r_s denotes Spearman’s rank correlation coefficient. Spearman’s rank correlation coefficient is used instead of Pearson’s r because the former is less affected by outliers. Note also that the percentage of respondents who viewed a page counts each respondent once, whereas the number of pageviews does not have this restriction (i.e. duplicate pageviews count), so we wouldn’t expect the relationship to be entirely linear even if the survey audiences were perfectly representative of the general population.
SM vs 2016 pageviews
SM vs 2015 pageviews
CEYP vs 2016 pageviews
CEYP vs 2015 pageviews
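Spearman's r_s can be computed as Pearson's r on the ranks of the data, which is what makes it robust to outliers like the Donald Trump pageview spike. Here is a plain-Python sketch (the two input lists are hypothetical stand-ins, not the survey numbers, and this simple version does not handle tied ranks):

```python
# Spearman's rank correlation = Pearson's r computed on ranks.
# Note: no averaging of tied ranks, so avoid ties with this version.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def spearman(xs, ys):
    return pearson(ranks(xs), ranks(ys))

percent_said_read = [17, 15, 9, 11]         # hypothetical percentages
pageviews_millions = [2.6, 14.0, 1.7, 1.6]  # hypothetical pageviews
r_s = spearman(percent_said_read, pageviews_millions)  # 0.6
```

A production analysis would use a library implementation with proper tie handling, but the idea is the same.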
S2Q3: free response of articles read
The most common response was along the lines of “None”, “I don’t know”, “I don’t remember”, or similar. Among the more useful responses were:
- News stories (e.g. Death of Harambe, “WikiLeaks scandal” – unclear which page this is, since there are several pages on various aspects of WikiLeaks)
- Popular culture:
- People including Megan Fox, LeBron James, Rita Hayworth
- Works including Aladdin and the King of Thieves, X-Men: Apocalypse
- More traditional encyclopedic information (e.g. Emerald ash borer, Spain, Siphonophorae, Scolopendra gigantea)
S2Q4: free response of surprise at lack of Wikipedia pages
As with the previous question, the most common response was along the lines of “None”, “I don’t know”, “I don’t remember”, “Doesn’t happen”, or similar.
The most useful responses were classes of things: “particular words”, “French plays/books”, “Random people”, “obscure people”, “Specific list pages of movie genres”, “Foreign actors”, “various insect pages”, and so forth.
S3Q1 (Google Consumer Surveys)
The survey was circulated to a target size of 500 in the United States (no demographic filters), and received 501 responses.
Since there was only one question, but we obtained data filtered by demographics in many different ways, we present this table with the columns denoting responses and the rows denoting the audience segments. We also include the S1Q1SM, S2Q1SM, and S2Q1CEYP responses for easy comparison. Note that S1Q1SM did not include the “at least one sentence of” caveat. We believe that adding this caveat would push people’s estimates upward.
If you view the Google Consumer Surveys results online you will also see the 95% confidence intervals for each of the segments. Note that percentages in a row may not add up to 100% due to rounding or due to people entering “Other” responses. For the entire GCS audience, every pair of options had a statistically significant difference, but for some subsegments, this was not true.
| Audience segment | Fewer than 1 | 1 to 10 | 11 to 25 | 26 or more |
|---|---|---|---|---|
| S1Q1SM (N = 62) | 42% | 45% | 13% | 0% |
| S2Q1SM (N = 54) | 37% | 48% | 7% | 7% |
| S2Q1CEYP (N = 50) | 32% | 64% | 2% | 2% |
| GCS all (N = 501) | 47% | 35% | 12% | 6% |
| GCS male (N = 205) | 41% | 38% | 16% | 5% |
| GCS female (N = 208) | 52% | 34% | 10% | 5% |
| GCS 18–24 (N = 54) | 33% | 46% | 13% | 7% |
| GCS 25–34 (N = 71) | 41% | 37% | 16% | 7% |
| GCS 35–44 (N = 69) | 51% | 35% | 10% | 4% |
| GCS 45–54 (N = 77) | 46% | 40% | 12% | 3% |
| GCS 55–64 (N = 69) | 57% | 32% | 7% | 4% |
| GCS 65+ (N = 50) | 52% | 24% | 18% | 4% |
| GCS Urban (N = 176) | 44% | 35% | 14% | 7% |
| GCS Suburban (N = 224) | 50% | 34% | 10% | 6% |
| GCS Rural (N = 86) | 44% | 35% | 14% | 6% |
| GCS $0–24K (N = 49) | 41% | 37% | 16% | 6% |
| GCS $25–49K (N = 253) | 53% | 30% | 10% | 6% |
| GCS $50–74K (N = 132) | 42% | 39% | 13% | 6% |
| GCS $75–99K (N = 37) | 43% | 35% | 11% | 11% |
| GCS $100–149K (N = 11) | 9% | 64% | 18% | 9% |
| GCS $150K+ (N = 4) | 25% | 75% | 0% | 0% |
We can see that the overall GCS data vindicates the broad conclusions we drew from SurveyMonkey data. Moreover, most GCS segments with a sufficiently large number of responses (50 or more) display a trend similar to the overall data. One exception is that younger audiences seem slightly less likely to use Wikipedia very little (i.e. to fall in the “Fewer than 1” category), while older audiences seem slightly more likely to use Wikipedia very little.
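As a rough sketch of the kind of interval and significance arithmetic involved (GCS's exact method isn't documented here, so treat this normal-approximation version as an assumption, not their actual procedure):

```python
import math

def ci95(p, n):
    """95% normal-approximation confidence interval for a proportion."""
    se = math.sqrt(p * (1 - p) / n)
    return (p - 1.96 * se, p + 1.96 * se)

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# "GCS all" (N = 501): 47% vs 35% for the two largest options, treating
# the options as independent samples (a simplification, since they come
# from the same respondents).
z = two_prop_z(0.47, 501, 0.35, 501)
significant = abs(z) > 1.96  # True at the 5% level
```

With the small subsegments (e.g. N = 50), the same difference would often fall short of significance, which matches the caveat above.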
Summaries of responses (exports for SurveyMonkey, weblink for Google Consumer Surveys)
SurveyMonkey allows exporting of response summaries. Here are the exports for each of the audiences.
- Survey 1, SurveyMonkey’s audience
- Survey 1, Vipul’s timeline
- Survey 1, Wikipedia Analytics mailing list
- Survey 1, Slate Star Codex
- Survey 1, Heavy users
- Survey 2, no demographic filters
- Survey 2, educated young people
The Google Consumer Surveys survey results are available online at https://www.google.com/insights/consumersurveys/view?survey=o3iworx2rcfixmn2x5shtlppci&question=1&filter=&rw=1.
Survey-making lessons
Not having any experience designing surveys, and wanting some rough results quickly, I decided not to look into survey-making best practices beyond the feedback from Vipul. As the first survey progressed, it became clear that there were several deficiencies in that survey:
- Question 1 did not specify what counts as reading a page.
- We did not specify which language Wikipedias we were considering (multiple people noted that they read Wikipedias in languages other than English).
- Question 2 did not include an option for people who avoid Wikipedia or do something else entirely.
- We did not include an option to allow people to release their survey results.
Further questions
The two surveys we’ve done so far provide some insight into how people use Wikipedia, but we are still far from understanding the value of Wikipedia pageviews. Some remaining questions:
- Could it be possible that even on non-obscure topics, most of the views are by “elites” (i.e. those with outsized impact on the world)? This could mean pageviews are more valuable than previously thought.
- On S2Q1, why did our data show that CEYP was less engaged with Wikipedia than SM? Is this a limitation of the small number of responses or of SurveyMonkey’s audiences?
Further reading
- “The great decline in Wikipedia pageviews (condensed version)” by Vipul Naik
- “In Defense Of Inclusionism” by gwern
Acknowledgements
Thanks to Vipul Naik for collaboration on this project and feedback while writing this document, and for supplying the summary section, and thanks to Ethan Bashkansky for reviewing the document. All imperfections are my own.
The writing of this document was sponsored by Vipul Naik. Vipul Naik also paid SurveyMonkey (for the cost of SurveyMonkey Audience) and Google Consumer Surveys.
Document source and versions
The source files used to compile this document are available in a GitHub Gist. The Git repository of the Gist contains all versions of this document since its first publication.
This document is available in the following formats:
- As an HTML file at http://lesswrong.com/r/discussion/lw/nru/wikipedia_usage_survey_results/
- As a PDF file at http://files.issarice.com/wikipedia-survey-results.pdf
License
This document is released to the public domain.
2016 LessWrong Diaspora Survey Analysis: Part Three (Mental Health, Basilisk, Blogs and Media)
2016 LessWrong Diaspora Survey Analysis
Overview
- Results and Dataset
- Meta
- Demographics
- LessWrong Usage and Experience
- LessWrong Criticism and Successorship
- Diaspora Community Analysis
- Mental Health Section
- Basilisk Section/Analysis
- Blogs and Media analysis (You are here)
- Politics
- Calibration Question And Probability Question Analysis
- Charity And Effective Altruism Analysis
Mental Health
We decided to move the Mental Health section up closer in the survey this year so that the data could inform accessibility decisions.
| Condition | Base Rate | LessWrong Rate | LessWrong Self dx Rate | Combined LW Rate | Base/LW Rate Spread | Relative Risk |
|---|---|---|---|---|---|---|
| Depression | 17% | 25.37% | 27.04% | 52.41% | +8.37 | 1.492 |
| Obsessive Compulsive Disorder | 2.3% | 2.7% | 5.6% | 8.3% | +0.4 | 1.173 |
| Autism Spectrum Disorder | 1.47% | 8.2% | 12.9% | 21.1% | +6.73 | 5.578 |
| Attention Deficit Disorder | 5% | 13.6% | 10.4% | 24% | +8.6 | 2.719 |
| Bipolar Disorder | 3% | 2.2% | 2.8% | 5% | -0.8 | 0.733 |
| Anxiety Disorder(s) | 29% | 13.7% | 17.4% | 31.1% | -15.3 | 0.472 |
| Borderline Personality Disorder | 5.9% | 0.6% | 1.2% | 1.8% | -5.3 | 0.101 |
| Schizophrenia | 1.1% | 0.8% | 0.4% | 1.2% | -0.3 | 0.727 |
| Substance Use Disorder | 10.6% | 1.3% | 3.6% | 4.9% | -9.3 | 0.122 |
Base rates are taken from Wikipedia, US rates were favored over global rates where immediately available.
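The derived columns in the table follow directly from the three rates. A minimal sketch, shown for the depression row:

```python
# combined = clinical + self-diagnosed rate, spread = clinical - base,
# relative risk = clinical / base.

def derived_columns(base, clinical, self_dx):
    return {
        "combined": clinical + self_dx,
        "spread": clinical - base,
        "relative_risk": clinical / base,
    }

depression = derived_columns(base=17.0, clinical=25.37, self_dx=27.04)
# depression["relative_risk"] rounds to 1.492, matching the table
```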
Accessibility Suggestions
Of the conditions we asked about, LessWrongers are at significantly elevated risk for three: autism, ADHD, and depression.
LessWrong probably doesn't need to concern itself with being more accessible to those with autism as it likely already is. Depression is a complicated disorder with no clear interventions that can be easily implemented as site or community policy. It might be helpful to encourage looking more at positive trends in addition to negative ones, but the community already seems to do a fairly good job of this. (We could definitely use some more of it though.)
Attention Deficit Disorder - Public Service Announcement
That leaves ADHD, which we might be able to do something about, starting with this:
A lot of LessWrong stuff ends up falling into the same genre as productivity advice or 'self help'. If you have trouble with getting yourself to work, find yourself reading these things and completely unable to implement them, it's entirely possible that you have a mental health condition which impacts your executive function.
The best overview I've been able to find on ADD is this talk from Russell Barkley.
30 Essential Ideas For Parents
Ironically enough, this is a long talk, over four hours in total. Barkley is an entertaining speaker and the talk is absolutely fascinating. If you're even mildly interested in the subject I wholeheartedly recommend it. Many people who have ADHD just assume that they're lazy, or not trying hard enough, or just haven't found the 'magic bullet' yet. It never even occurs to them that they might have it because they assume that adult ADHD looks like childhood ADHD, or that ADHD is a thing that psychiatrists made up so they can give children powerful stimulants.
ADD is real, if you're in the demographic that takes this survey there's a decent enough chance you have it.
Attention Deficit Disorder - Accessibility
So with that in mind, is there anything else we can do?
Yes, write better.
Scott Alexander has written a blog post with writing advice for non-fiction, and the interesting thing about it is just how much of the advice is what I would tell you to do if your audience has ADD.
- Reward the reader quickly and often. If your prose isn't rewarding to read it won't be read.
- Make sure the overall article has good sectioning and indexing; people might be looking for only a particular thing, and they won't want to wade through everything else to get it. Sectioning also gives the impression of progress and reduces eye strain.
- Use good data visualization to compress information and take away mental effort where possible. Take for example the condition table above. It saves space and provides additional context. Instead of a long vertical wall of text with sections for each condition, it removes:
  - The extraneous information of how many people said they did not have a condition.
  - The space that would be used by creating a section for each condition. In fact the specific improvement of the table is that it takes extra advantage of space in the horizontal plane as well as the vertical plane.

  And instead of just presenting the raw data, it also adds:
  - The normal rate of incidence for each condition, so that the reader understands the extent to which rates are abnormal or unexpected.
  - Easy comparison between the clinically diagnosed, self diagnosed, and combined rates of the condition in the LW demographic. This preserves the value of the original raw data presentation while also easing the mental arithmetic of how many people claim to have a condition.
  - Percentage spread between the clinically diagnosed and the base rate, which saves the effort of figuring out the difference between the two values.
  - Relative risk between the clinically diagnosed and the base rate, which saves the effort of figuring out how much more or less likely a LessWronger is to have a given condition.

  Add all that together and you've created a compelling presentation that significantly improves on the 'naive' raw data presentation.
- Use visuals in general; they help draw and maintain interest.
None of these are solely for the benefit of people with ADD. ADD is an exaggerated profile of normal human behavior. Following this kind of advice makes your article more accessible to everybody, which should be more than enough incentive if you intend to have an audience.1
Roko's Basilisk
This year we finally added a Basilisk question! In fact, it kind of turned into a whole Basilisk section. A fairly common question about this year's survey is why the Basilisk section is so large. The basic reason is that asking only one or two questions about it would leave the results open to rampant speculation in one direction or another. By making the section comprehensive and covering every base, we've gotten data about as complete as we'd want on the Basilisk phenomenon.
Basilisk Knowledge
Do you know what Roko's Basilisk thought experiment is?
Yes: 1521 73.2%
No but I've heard of it: 158 7.6%
No: 398 19.2%
Basilisk Etiology
Where did you read Roko's argument for the Basilisk?
Roko's post on LessWrong: 323 20.2%
Reddit: 171 10.7%
XKCD: 61 3.8%
LessWrong Wiki: 234 14.6%
A news article: 71 4.4%
Word of mouth: 222 13.9%
RationalWiki: 314 19.6%
Other: 194 12.1%
Basilisk Correctness
Do you think Roko's argument for the Basilisk is correct?
Yes: 75 5.1%
Yes but I don't think its logical conclusions apply for other reasons: 339 23.1%
No: 1055 71.8%
Basilisks And Lizardmen
One of the biggest mistakes I made with this year's survey was not including "Do you believe Barack Obama is a hippopotamus?" as a control question in this section.2 Five percent is just outside of the infamous lizardman constant. This was the biggest survey surprise for me. I thought there was no way that 'yes' could go above a couple of percentage points. As far as I can tell this result is not caused by brigading, but I've by no means investigated the matter so thoroughly that I would rule it out.
Higher?
Of course, we also shouldn't forget to investigate the hypothesis that the number might be higher than 5%. After all, somebody who thinks the Basilisk is correct could skip the questions entirely so they don't face potential stigma. So how many people skipped the questions but filled out the rest of the survey?
Eight people refused to answer whether they'd heard of Roko's Basilisk but went on to answer the depression question immediately after the Basilisk section. This gives us a decent proxy for how many people skipped the section and took the rest of the survey. So if we're pessimistic the number is a little higher, but it pays to keep in mind that there are other reasons to want to skip this section. (It is also possible that people took the survey up until they got to the Basilisk section and then quit so they didn't have to answer it, but this seems unlikely.)
Of course this assumes people are being strictly truthful with their survey answers. It's also plausible that people who think the Basilisk is correct said they'd never heard of it and then went on with the rest of the survey. So the number could in theory be quite large. My hunch is that it's not. I personally know quite a few LessWrongers and I'm fairly sure none of them would tell me that the Basilisk is 'correct'. (In fact I'm fairly sure they'd all be offended at me even asking the question.) Since 5% is one in twenty I'd think I'd know at least one or two people who thought the Basilisk was correct by now.
Lower?
One partial explanation for the surprisingly high rate here is that ten percent of the people who said yes by their own admission didn't know what they were saying yes to. Eight people said they've heard of the Basilisk but don't know what it is, and that it's correct. The lizardman constant also plausibly explains a significant portion of the yes responses, but that explanation relies on you already having a prior belief that the rate should be low.
Basilisk-Like Danger
Do you think Basilisk-like thought experiments are dangerous?
Yes, I think they're dangerous for decision theory reasons: 63 4.2%
Yes I think they're dangerous for social reasons (eg. A cult might use them): 194 12.8%
Yes I think they're dangerous for decision theory and social reasons: 136 9%
Yes I think they're socially dangerous because they make everybody involved look foolish: 253 16.7%
Yes I think they're dangerous for other reasons: 54 3.6%
No: 809 53.4%
Most people don't think Basilisk-like thought experiments are dangerous at all. Of those who think they are, most think they're socially dangerous as opposed to a raw decision theory threat. The 4.2% figure for a pure decision theory threat is interesting because it lines up with the 5% figure for Basilisk Correctness in the previous question.
P(Decision Theory Danger | Basilisk Belief) = 26.6%
P(Decision Theory And Social Danger | Basilisk Belief) = 21.3%
So of the people who say the Basilisk is correct, only half of them believe it is a decision theory based danger at all. (In theory this could be because they believe the Basilisk is a good thing and therefore not dangerous, but I refuse to lose that much faith in humanity.3)
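The conditional probabilities above are just cross-tab counts divided by the number of believers. As a trivial sketch (the counts below are hypothetical stand-ins, since the exact cross-tabulated counts aren't given in the text):

```python
# Conditional probability from cross-tabulated survey counts:
# P(A | B) = count(A and B) / count(B).

def cond_prob(joint_count, given_count):
    return joint_count / given_count

# Hypothetical: if 20 of the 75 respondents who answered "yes" to
# correctness also picked the pure decision-theory danger option:
p = cond_prob(20, 75)  # roughly 0.267
```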
Basilisk Anxiety
Have you ever felt any sort of anxiety about the Basilisk?
Yes: 142 8.8%
Yes but only because I worry about everything: 189 11.8%
No: 1275 79.4%
20.6% of respondents have felt some kind of Basilisk Anxiety. It should be noted that the exact wording of the question permits any anxiety, even for a second. And as we'll see in the next question that nuance is very important.
Degree Of Basilisk Worry
What is the longest span of time you've spent worrying about the Basilisk?
I haven't: 714 47%
A few seconds: 237 15.6%
A minute: 298 19.6%
An hour: 176 11.6%
A day: 40 2.6%
Two days: 16 1.05%
Three days: 12 0.79%
A week: 12 0.79%
A month: 5 0.32%
One to three months: 2 0.13%
Three to six months: 0 0.0%
Six to nine months: 0 0.0%
Nine months to a year: 1 0.06%
Over a year: 1 0.06%
Years: 4 0.26%
These numbers provide some pretty sobering context for the previous ones. Of all the people who worried about the Basilisk, 93.8% didn't worry about it for more than an hour. The next 3.65% didn't worry about it for more than a day or two. The next 1.9% didn't worry about it for more than a month and the last .7% or so have worried about it for longer.
Current Basilisk Worry
Are you currently worrying about the Basilisk?
Yes: 29 1.8%
Yes but only because I worry about everything: 60 3.7%
No: 1522 94.5%
Also encouraging. We should expect a small number of people to be worried at this question just because the section is basically the words "Basilisk" and "worry" repeated over and over, so it's probably a bit scary to some people. But these numbers are much lower than the "Have you ever worried" ones and back up the previous inference that Basilisk anxiety is mostly a transitory phenomenon.
One article on the Basilisk asked the question of whether or not it was just a "referendum on autism". It's a good question and now I have an answer for you, as per the table below:
| Condition | Worried | Worried But They Worry About Everything | Combined Worry |
|---|---|---|---|
| Baseline (in the respondent population) | 8.8% | 11.8% | 20.6% |
| ASD | 7.3% | 17.3% | 24.7% |
| OCD | 10.0% | 32.5% | 42.5% |
| Anxiety Disorder | 6.9% | 20.3% | 27.3% |
| Schizophrenia | 0.0% | 16.7% | 16.7% |
The short answer: Autism raises your chances of Basilisk anxiety, but anxiety disorders and OCD especially raise them much more. Interestingly enough, schizophrenia seems to bring the chances down. This might just be an effect of small sample size, but my expectation was the opposite. (People who are really obsessed with Roko's Basilisk seem to present with schizophrenic symptoms at any rate.)
Before we move on, there's one last elephant in the room to contend with. The philosophical theory underlying the Basilisk is the CEV conception of friendly AI primarily espoused by Eliezer Yudkowsky, which has led many critics to speculate on all kinds of relationships between Eliezer Yudkowsky and the Basilisk. These speculations naturally extend to Eliezer Yudkowsky's Machine Intelligence Research Institute, a project to develop 'Friendly Artificial Intelligence' that does not implement a naive goal function which eats everything else humans actually care about once it's given sufficient optimization power.
The general thrust of these accusations is that MIRI, intentionally or not, profits from belief in the Basilisk. I think MIRI gets picked on enough, so I'm not thrilled about adding another log to the hefty pile of criticism they deal with. However this is a serious accusation which is plausible enough to be in the public interest for me to look at.
| Belief | Donated to MIRI |
|---|---|
| Believe It's Incorrect | 5.2% |
| Believe It's Structurally Correct | 5.6% |
| Believe It's Correct | 12.0% |
Basilisk belief does appear to make you twice as likely to donate to MIRI. It's important to note from the perspective of earlier investigation that thinking it is "structurally correct" appears to make you about as likely as if you don't think it's correct, implying that both of these options mean about the same thing.
| Belief | Mean | Median | Mode | Stdev | Total Donated |
|---|---|---|---|---|---|
| Believe It's Incorrect | 1365.590 | 100.0 | 100.0 | 4825.293 | 75107.5 |
| Believe It's Structurally Correct | 2644.736 | 110.0 | 20.0 | 9147.299 | 50250.0 |
| Believe It's Correct | 740.555 | 300.0 | 300.0 | 1152.541 | 6665.0 |
Take these numbers with a grain of salt, it only takes one troll to plausibly lie about their income to ruin it for everybody else.
Interestingly enough, if you sum all three total donated counts and divide by a hundred, you find that five percent of the sum is about what was donated by the Basilisk group. ($6601 to be exact) So even though the modal and median donations of Basilisk believers are higher, they donate about as much as would be naively expected by assuming donations among groups are equal.4
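That arithmetic can be checked directly from the totals in the table above:

```python
# Sanity check of the "five percent" observation using the Total Donated
# column: the Basilisk-believer group's share of all donations is close
# to 5%, the same as its share of respondents.

totals = {
    "incorrect": 75107.5,
    "structurally_correct": 50250.0,
    "correct": 6665.0,
}
believer_share = totals["correct"] / sum(totals.values())
# believer_share * 100 is roughly 5.0
```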
| Anxiety | Donated to MIRI |
|---|---|
| Never Worried | 4.3% |
| Worried But They Worry About Everything | 11.1% |
| Worried | 11.3% |
In contrast to the correctness question, merely having worried about the Basilisk at any point in time doubles your chances of donating to MIRI. My suspicion is that these people are not, as a general rule, donating because of the Basilisk per se. If you're the sort of person who is even capable of worrying about the Basilisk in principle, you're probably the kind of person who is likely to worry about AI risk in general and donate to MIRI on that basis. This hypothesis is probably unfalsifiable with the survey information I have, because Basilisk-risk is a subset of AI risk. This means that anytime somebody indicates on the survey that they're worried about AI risk this could be because they're worried about the Basilisk or because they're worried about more general AI risk.
| Anxiety | Mean | Median | Mode | Stdev | Total Donated |
|---|---|---|---|---|---|
| Never Worried | 1033.936 | 100.0 | 100.0 | 3493.373 | 56866.5 |
| Worried But They Worry About Everything | 227.047 | 75.0 | 300.0 | 438.861 | 4768.0 |
| Worried | 4539.25 | 90.0 | 10.0 | 11442.675 | 72628.0 |
| Combined Worry |  |  |  |  | 77396.0 |
Take these numbers with a grain of salt; it only takes one troll plausibly lying about their income to ruin it for everybody else.
This particular analysis is probably the strongest evidence in the set for the hypothesis that MIRI profits (though not necessarily through any involvement on their part) from the Basilisk. People who worried from an unendorsed perspective donate less on average than everybody else. The modal donation among people who've worried about the Basilisk is ten dollars, which would seem like a surefire way to get tortured if we're going with the hypothesis that these are people who believe the Basilisk is real and are concerned about it. So this implies that they don't believe it, which supports my earlier hypothesis that people capable of feeling anxiety about the Basilisk are the core demographic that donates to MIRI anyway.
Of course, donors don't need to believe in the Basilisk for MIRI to profit from it. If exposing people to the concept of the Basilisk makes them twice as likely to donate but they don't end up actually believing the argument that would arguably be the ideal outcome for MIRI from an Evil Plot perspective. (Since after all, pursuing a strategy which involves Basilisk belief would actually incentivize torture from the perspective of the acausal game theories MIRI bases its FAI on, which would be bad.)
But frankly this is veering into very speculative territory. I don't think there's an evil plot, nor am I convinced that MIRI is profiting from Basilisk belief in a way that outweighs the resulting lost donations and damage to their cause.5 If anybody would like to assert otherwise I invite them to 'put up or shut up' with hard evidence. The world has enough criticism based on idle speculation and you're peeing in the pool.
Blogs and Media
Since this was the LessWrong diaspora survey, I felt it would be in order to reach out a bit to ask not just where the community is at but what it's reading. I went around to various people I knew and asked them about blogs for this section. However, the picks were largely based on my mental 'map' of the blogs that are commonly read/linked in the community, with a handful of suggestions thrown in. The same method was used for stories.
Blogs Read
LessWrong
Regular Reader: 239 13.4%
Sometimes: 642 36.1%
Rarely: 537 30.2%
Almost Never: 272 15.3%
Never: 70 3.9%
Never Heard Of It: 14 0.7%
SlateStarCodex (Scott Alexander)
Regular Reader: 1137 63.7%
Sometimes: 264 14.7%
Rarely: 90 5%
Almost Never: 61 3.4%
Never: 51 2.8%
Never Heard Of It: 181 10.1%
[These two results together pretty much confirm the results I talked about in part two of the survey analysis. A supermajority of respondents are 'regular readers' of SlateStarCodex. By contrast LessWrong itself doesn't even have a quarter of SlateStarCodex's readership.]
Overcoming Bias (Robin Hanson)
Regular Reader: 206 11.751%
Sometimes: 365 20.821%
Rarely: 391 22.305%
Almost Never: 385 21.962%
Never: 239 13.634%
Never Heard Of It: 167 9.527%
Minding Our Way (Nate Soares)
Regular Reader: 151 8.718%
Sometimes: 134 7.737%
Rarely: 139 8.025%
Almost Never: 175 10.104%
Never: 214 12.356%
Never Heard Of It: 919 53.06%
Agenty Duck (Brienne Yudkowsky)
Regular Reader: 55 3.181%
Sometimes: 132 7.634%
Rarely: 144 8.329%
Almost Never: 213 12.319%
Never: 254 14.691%
Never Heard Of It: 931 53.846%
Eliezer Yudkowsky's Facebook Page
Regular Reader: 325 18.561%
Sometimes: 316 18.047%
Rarely: 231 13.192%
Almost Never: 267 15.248%
Never: 361 20.617%
Never Heard Of It: 251 14.335%
Luke Muehlhauser (Eponymous)
Regular Reader: 59 3.426%
Sometimes: 106 6.156%
Rarely: 179 10.395%
Almost Never: 231 13.415%
Never: 312 18.118%
Never Heard Of It: 835 48.49%
Gwern.net (Gwern Branwen)
Regular Reader: 118 6.782%
Sometimes: 281 16.149%
Rarely: 292 16.782%
Almost Never: 224 12.874%
Never: 230 13.218%
Never Heard Of It: 595 34.195%
Siderea (Sibylla Bostoniensis)
Regular Reader: 29 1.682%
Sometimes: 49 2.842%
Rarely: 59 3.422%
Almost Never: 104 6.032%
Never: 183 10.615%
Never Heard Of It: 1300 75.406%
Ribbon Farm (Venkatesh Rao)
Regular Reader: 64 3.734%
Sometimes: 123 7.176%
Rarely: 111 6.476%
Almost Never: 150 8.751%
Never: 150 8.751%
Never Heard Of It: 1116 65.111%
Bayesed And Confused (Michael Rupert)
Regular Reader: 2 0.117%
Sometimes: 10 0.587%
Rarely: 24 1.408%
Almost Never: 68 3.988%
Never: 167 9.795%
Never Heard Of It: 1434 84.106%
[This was the 'troll' answer to catch out people who claim to read everything.]
The Unit Of Caring (Anonymous)
Regular Reader: 281 16.452%
Sometimes: 132 7.728%
Rarely: 126 7.377%
Almost Never: 178 10.422%
Never: 216 12.646%
Never Heard Of It: 775 45.375%
GiveWell Blog (Multiple Authors)
Regular Reader: 75 4.438%
Sometimes: 197 11.657%
Rarely: 243 14.379%
Almost Never: 280 16.568%
Never: 412 24.379%
Never Heard Of It: 482 28.521%
Thing Of Things (Ozy Frantz)
Regular Reader: 363 21.166%
Sometimes: 201 11.72%
Rarely: 143 8.338%
Almost Never: 171 9.971%
Never: 176 10.262%
Never Heard Of It: 661 38.542%
The Last Psychiatrist (Anonymous)
Regular Reader: 103 6.023%
Sometimes: 94 5.497%
Rarely: 164 9.591%
Almost Never: 221 12.924%
Never: 302 17.661%
Never Heard Of It: 826 48.304%
Hotel Concierge (Anonymous)
Regular Reader: 29 1.711%
Sometimes: 35 2.065%
Rarely: 49 2.891%
Almost Never: 88 5.192%
Never: 179 10.56%
Never Heard Of It: 1315 77.581%
The View From Hell (Sister Y)
Regular Reader: 34 1.998%
Sometimes: 39 2.291%
Rarely: 75 4.407%
Almost Never: 137 8.049%
Never: 250 14.689%
Never Heard Of It: 1167 68.566%
Xenosystems (Nick Land)
Regular Reader: 51 3.012%
Sometimes: 32 1.89%
Rarely: 64 3.78%
Almost Never: 175 10.337%
Never: 364 21.5%
Never Heard Of It: 1007 59.48%
I tried my best to have representation from multiple sections of the diaspora; if you look at the different blogs you can probably guess which ones represent which section.
Stories Read
Harry Potter And The Methods Of Rationality (Eliezer Yudkowsky)
Whole Thing: 1103 61.931%
Partially And Intend To Finish: 145 8.141%
Partially And Abandoned: 231 12.97%
Never: 221 12.409%
Never Heard Of It: 81 4.548%
Significant Digits (Alexander D)
Whole Thing: 123 7.114%
Partially And Intend To Finish: 105 6.073%
Partially And Abandoned: 91 5.263%
Never: 333 19.26%
Never Heard Of It: 1077 62.29%
Three Worlds Collide (Eliezer Yudkowsky)
Whole Thing: 889 51.239%
Partially And Intend To Finish: 35 2.017%
Partially And Abandoned: 36 2.075%
Never: 286 16.484%
Never Heard Of It: 489 28.184%
The Fable of the Dragon-Tyrant (Nick Bostrom)
Whole Thing: 728 41.935%
Partially And Intend To Finish: 31 1.786%
Partially And Abandoned: 15 0.864%
Never: 205 11.809%
Never Heard Of It: 757 43.606%
The World of Null-A (A. E. van Vogt)
Whole Thing: 92 5.34%
Partially And Intend To Finish: 18 1.045%
Partially And Abandoned: 25 1.451%
Never: 429 24.898%
Never Heard Of It: 1159 67.266%
[Wow, I never would have expected this many people to have read this. I mostly included it on a lark because of its historical significance.]
Synthesis (Sharon Mitchell)
Whole Thing: 6 0.353%
Partially And Intend To Finish: 2 0.118%
Partially And Abandoned: 8 0.47%
Never: 217 12.75%
Never Heard Of It: 1469 86.31%
[This was the 'troll' option to catch people who just say they've read everything.]
Worm (Wildbow)
Whole Thing: 501 28.843%
Partially And Intend To Finish: 168 9.672%
Partially And Abandoned: 184 10.593%
Never: 430 24.755%
Never Heard Of It: 454 26.137%
Pact (Wildbow)
Whole Thing: 138 7.991%
Partially And Intend To Finish: 59 3.416%
Partially And Abandoned: 148 8.57%
Never: 501 29.01%
Never Heard Of It: 881 51.013%
Twig (Wildbow)
Whole Thing: 55 3.192%
Partially And Intend To Finish: 132 7.661%
Partially And Abandoned: 65 3.772%
Never: 560 32.501%
Never Heard Of It: 911 52.873%
Ra (Sam Hughes)
Whole Thing: 269 15.558%
Partially And Intend To Finish: 80 4.627%
Partially And Abandoned: 95 5.495%
Never: 314 18.161%
Never Heard Of It: 971 56.16%
My Little Pony: Friendship Is Optimal (Iceman)
Whole Thing: 424 24.495%
Partially And Intend To Finish: 16 0.924%
Partially And Abandoned: 65 3.755%
Never: 559 32.293%
Never Heard Of It: 667 38.533%
Friendship Is Optimal: Caelum Est Conterrens (Chatoyance)
Whole Thing: 217 12.705%
Partially And Intend To Finish: 16 0.937%
Partially And Abandoned: 24 1.405%
Never: 411 24.063%
Never Heard Of It: 1040 60.89%
Ender's Game (Orson Scott Card)
Whole Thing: 1177 67.219%
Partially And Intend To Finish: 22 1.256%
Partially And Abandoned: 43 2.456%
Never: 395 22.559%
Never Heard Of It: 114 6.511%
[This is the most read story according to survey respondents, beating HPMOR by 5%.]
The Diamond Age (Neal Stephenson)
Whole Thing: 440 25.346%
Partially And Intend To Finish: 37 2.131%
Partially And Abandoned: 55 3.168%
Never: 577 33.237%
Never Heard Of It: 627 36.118%
Consider Phlebas (Iain Banks)
Whole Thing: 302 17.507%
Partially And Intend To Finish: 52 3.014%
Partially And Abandoned: 47 2.725%
Never: 439 25.449%
Never Heard Of It: 885 51.304%
The Metamorphosis Of Prime Intellect (Roger Williams)
Whole Thing: 226 13.232%
Partially And Intend To Finish: 10 0.585%
Partially And Abandoned: 24 1.405%
Never: 322 18.852%
Never Heard Of It: 1126 65.925%
Accelerando (Charles Stross)
Whole Thing: 293 17.045%
Partially And Intend To Finish: 46 2.676%
Partially And Abandoned: 66 3.839%
Never: 425 24.724%
Never Heard Of It: 889 51.716%
A Fire Upon The Deep (Vernor Vinge)
Whole Thing: 343 19.769%
Partially And Intend To Finish: 31 1.787%
Partially And Abandoned: 41 2.363%
Never: 508 29.28%
Never Heard Of It: 812 46.801%
I also did a k-means cluster analysis of the data to try to determine demographics, and the ultimate conclusion I drew from it is that I need to do more analysis. Which I would do, except that the initial analysis was a whole bunch of work, and jumping further down the rabbit hole in the hopes of reaching an oasis probably isn't in the best interests of myself or my readers.
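For readers curious what such a pass looks like, here is a minimal standard-library sketch of k-means of the kind described above. It is not the actual analysis pipeline, and the 0/1 feature rows are invented stand-ins, not survey data:

```python
# Minimal k-means sketch: cluster respondents by media-consumption features.
# All data below is made up for illustration.
import random
random.seed(0)

def kmeans(points, k, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster (keep old if empty).
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# e.g. rows of (read HPMOR?, read Worm?, reads SSC?) encoded as 0/1
demo = [(1, 1, 1), (1, 0, 1), (0, 0, 0), (0, 1, 0), (1, 1, 1)]
centers, clusters = kmeans(demo, 2)
print(len(clusters))  # 2
```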
Footnotes
-
This is a general trend I notice with accessibility. Not always, but very often measures taken to help a specific group end up having positive effects for others as well. Many of the accessibility suggestions of the W3C are things you wish every website did.↩
-
I hadn't read this particular SSC post at the time I compiled the survey, but I was already familiar with the concept of a lizardman constant and should have accounted for it.↩
-
I've been informed by a member of the freenode #lesswrong IRC channel that this is in fact Roko's opinion, because you can 'timelessly trade with the future superintelligence for rewards, not just punishment' according to a conversation they had with him last summer. Remember kids: Don't do drugs, including Max Tegmark.↩
-
You might think that this conflicts with the hypothesis that the true rate of Basilisk belief is lower than 5%. It does a bit, but you also need to remember that these people are in the LessWrong demographic, which means regardless of what the Basilisk belief question means we should naively expect them to donate five percent of the MIRI donation pot.↩
-
That is to say, it does seem plausible that MIRI 'profits' from Basilisk belief based on this data, but I'm fairly sure any profit is outweighed by the significant opportunity cost associated with it. I should also take this moment to remind the reader that the original Basilisk argument was supposed to prove that CEV is a flawed concept from the perspective of not having deleterious outcomes for people, so MIRI using it as a way to justify donating to them would be weird.↩
2016 LessWrong Diaspora Survey Analysis: Part Two (LessWrong Use, Successorship, Diaspora)
2016 LessWrong Diaspora Survey Analysis
Overview
- Results and Dataset
- Meta
- Demographics
- LessWrong Usage and Experience
- LessWrong Criticism and Successorship
- Diaspora Community Analysis (You are here)
- Mental Health Section
- Basilisk Section/Analysis
- Blogs and Media analysis
- Politics
- Calibration Question And Probability Question Analysis
- Charity And Effective Altruism Analysis
Introduction
Before it was the LessWrong survey, the 2016 survey was a small project I was working on as market research for a website I'm creating called FortForecast. As I was discussing the idea with others, particularly Eliot, he made the suggestion that since he's doing LW 2.0 and I'm doing a site that targets the LessWrong demographic, why don't I go ahead and do the LessWrong Survey? Because of that, this year's survey had a lot of questions oriented around what you would want to see in a successor to LessWrong and what you think is wrong with the site.
LessWrong Usage and Experience
How Did You Find LessWrong?
Been here since it was started in the Overcoming Bias days: 171 8.3%
Referred by a link: 275 13.4%
HPMOR: 542 26.4%
Overcoming Bias: 80 3.9%
Referred by a friend: 265 12.9%
Referred by a search engine: 131 6.4%
Referred by other fiction: 14 0.7%
Slate Star Codex: 241 11.7%
Reddit: 55 2.7%
Common Sense Atheism: 19 0.9%
Hacker News: 47 2.3%
Gwern: 22 1.1%
Other: 191 9.308%
How do you use Less Wrong?
I lurk, but never registered an account: 1120 54.4%
I've registered an account, but never posted: 270 13.1%
I've posted a comment, but never a top-level post: 417 20.3%
I've posted in Discussion, but not Main: 179 8.7%
I've posted in Main: 72 3.5%
[54.4% lurkers.]
How often do you comment on LessWrong?
I have commented more than once a week for the past year.: 24 1.2%
I have commented more than once a month for the past year but less than once a week.: 63 3.1%
I have commented but less than once a month for the past year.: 225 11.1%
I have not commented this year.: 1718 84.6%
[You could probably snarkily title this one "LW usage in one statistic". It's a pretty damning portrait of the site's vitality: a whopping 84.6% of people have not commented a single time this year.]
How Long Since You Last Posted On LessWrong?
I wrote one today.: 12 0.637%
Within the last three days.: 13 0.69%
Within the last week.: 22 1.168%
Within the last month.: 58 3.079%
Within the last three months.: 75 3.981%
Within the last six months.: 68 3.609%
Within the last year.: 84 4.459%
Within the last five years.: 295 15.658%
Longer than five years.: 15 0.796%
I've never posted on LW.: 1242 65.924%
[A supermajority of people have never posted on LW; 5.574% have within the last month.]
About how much of the Sequences have you read?
Never knew they existed until this moment: 215 10.3%
Knew they existed, but never looked at them: 101 4.8%
Some, but less than 25% : 442 21.2%
About 25%: 260 12.5%
About 50%: 283 13.6%
About 75%: 298 14.3%
All or almost all: 487 23.3%
[10.3% of people taking the survey have never heard of the sequences. 36.3% have not read a quarter of them.]
Do you attend Less Wrong meetups?
Yes, regularly: 157 7.5%
Yes, once or a few times: 406 19.5%
No: 1518 72.9%
[However the in-person community seems to be non-dead.]
Is physical interaction with the Less Wrong community otherwise a part of your everyday life, for example do you live with other Less Wrongers, or you are close friends and frequently go out with them?
Yes, all the time: 158 7.6%
Yes, sometimes: 258 12.5%
No: 1652 79.9%
About the same number say they hang out with LWers 'all the time' as say they go to meetups. I wonder if people just double counted themselves here. Or they may go to meetups and have other interactions with LWers outside of that. Or it could be a coincidence and these are different demographics. Let's find out.
P(Community part of daily life | Meetups) = 40%
Significant overlap, but definitely not exclusive overlap. I'll go ahead and chalk this one up to coincidence.
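For reference, the conditional probability above is just (respondents who answered yes to both questions) divided by (respondents who attend meetups). A toy version, with invented responses standing in for the real survey rows:

```python
# Toy illustration of the P(daily life | meetups) calculation.
# Each row is (meetup answer, daily-life answer); data is made up.
rows = [
    ("Yes, regularly", "Yes, all the time"),
    ("Yes, once or a few times", "No"),
    ("No", "No"),
    ("No", "Yes, sometimes"),
    ("Yes, regularly", "Yes, sometimes"),
]
attends = [r for r in rows if r[0].startswith("Yes")]
both = [r for r in attends if r[1].startswith("Yes")]
p = len(both) / len(attends)  # on the real data this came out to 40%
print(round(p, 2))  # 0.67
```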
Have you ever been in a romantic relationship with someone you met through the Less Wrong community?
Yes: 129 6.2%
I didn't meet them through the community but they're part of the community now: 102 4.9%
No: 1851 88.9%
LessWrong Usage Differences Between 2016 and 2014 Surveys
How do you use Less Wrong?
I lurk, but never registered an account: +19.300% 1125 54.400%
I've registered an account, but never posted: -1.600% 271 13.100%
I've posted a comment, but never a top-level post: -7.600% 419 20.300%
I've posted in Discussion, but not Main: -5.100% 179 8.700%
I've posted in Main: -3.300% 73 3.500%
About how much of the sequences have you read?
Never knew they existed until this moment: +3.300% 217 10.400%
Knew they existed, but never looked at them: +2.100% 103 4.900%
Some, but less than 25%: +3.100% 442 21.100%
About 25%: +0.400% 260 12.400%
About 50%: -0.400% 284 13.500%
About 75%: -1.800% 299 14.300%
All or almost all: -5.000% 491 23.400%
Do you attend Less Wrong meetups?
Yes, regularly: -2.500% 160 7.700%
Yes, once or a few times: -2.100% 407 19.500%
No: +7.100% 1524 72.900%
Is physical interaction with the Less Wrong community otherwise a part of your everyday life, for example do you live with other Less Wrongers, or you are close friends and frequently go out with them?
Yes, all the time: +0.200% 161 7.700%
Yes, sometimes: -0.300% 258 12.400%
No: +2.400% 1659 79.800%
Have you ever been in a romantic relationship with someone you met through the Less Wrong community?
Yes: +0.800% 132 6.300%
I didn't meet them through the community but they're part of the community now: -0.400% 102 4.900%
No: +1.600% 1858 88.800%
Write Ins
In a bit of a silly oversight I forgot to ask survey participants what was good about the community, so the following is going to be a pretty one-sided picture. Below are the complete write-ins respondents submitted.
Issues With LessWrong At Its Peak
Philosophical Issues With LessWrong At Its Peak [Part One]
Philosophical Issues With LessWrong At Its Peak [Part Two]
Community Issues With LessWrong At Its Peak [Part One]
Community Issues With LessWrong At Its Peak [Part Two]
Issues With LessWrong Now
Philosophical Issues With LessWrong Now [Part One]
Philosophical Issues With LessWrong Now [Part Two]
Community Issues With LessWrong Now [Part One]
Community Issues With LessWrong Now [Part Two]
Peak Philosophy Issue Tallies
| Label | Code | Tally |
|---|---|---|
| Arrogance | A | 16 |
| Bad Aesthetics | BA | 3 |
| Bad Norms | BN | 3 |
| Bad Politics | BP | 5 |
| Bad Tech Platform | BTP | 1 |
| Cultish | C | 5 |
| Cargo Cult | CC | 3 |
| Doesn't Accept Criticism | DAC | 3 |
| Don't Know Where to Start | DKWS | 5 |
| Damaged Me Mentally | DMM | 1 |
| Esoteric | E | 3 |
| Eliezer Yudkowsky | EY | 6 |
| Improperly Indexed | II | 7 |
| Impossible Mission | IM | 4 |
| Insufficient Social Support | ISS | 1 |
| Jargon | J | |
| Literal Cult | LC | 1 |
| Lack of Rigor | LR | 14 |
| Misfocused | M | 13 |
| Mixed Bag | MB | 3 |
| Nothing | N | 13 |
| Not Enough Jargon | NEJ | 1 |
| Not Enough Roko's Basilisk | NERB | 1 |
| Not Enough Theory | NET | 1 |
| No Intuition | NI | 6 |
| Not Progressive Enough | NPE | 7 |
| Narrow Scholarship | NS | 20 |
| Other | O | 3 |
| Personality Cult | PC | 10 |
| None of the Above | NOA | |
| Quantum Mechanics Sequence | QMS | 2 |
| Reinvention | R | 10 |
| Rejects Expertise | RE | 5 |
| Spoiled | S | 7 |
| Small Competent Authorship | SCA | 6 |
| Suggestion For Improvement | SFI | 1 |
| Socially Incompetent | SI | 9 |
| Stupid Philosophy | SP | 4 |
| Too Contrarian | TC | 2 |
| Typical Mind | TM | 1 |
| Too Much Roko's Basilisk | TMRB | 1 |
| Too Much Theory | TMT | 14 |
| Too Progressive | TP | 2 |
| Too Serious | TS | 2 |
| Unwelcoming | U | 8 |
Well, those are certainly some results. Top answers are:
Narrow Scholarship: 20
Arrogance: 16
Too Much Theory: 14
Lack of Rigor: 14
Misfocused: 13
Nothing: 13
Reinvention (reinvents the wheel too much): 10
Personality Cult: 10
So condensing a bit: Pay more attention to mainstream scholarship and ideas, try to do better about intellectual rigor, be more practical and focus on results, be more humble. (Labeled Dataset)
Peak Community Issue Tallies
| Label | Code | Tally |
|---|---|---|
| Arrogance | A | 7 |
| Assumes Reader Is Male | ARIM | 1 |
| Bad Aesthetics | BA | 1 |
| Bad At PR | BAP | 5 |
| Bad Norms | BN | 5 |
| Bad Politics | BP | 2 |
| Cultish | C | 9 |
| Cliqueish Tendencies | CT | 1 |
| Diaspora | D | 1 |
| Defensive Attitude | DA | 1 |
| Doesn't Accept Criticism | DAC | 3 |
| Dunning Kruger | DK | 1 |
| Elitism | E | 3 |
| Eliezer Yudkowsky | EY | 2 |
| Groupthink | G | 11 |
| Insufficiently Indexed | II | 9 |
| Impossible Mission | IM | 1 |
| Imposter Syndrome | IS | 1 |
| Jargon | J | 2 |
| Lack of Rigor | LR | 1 |
| Mixed Bag | MB | 1 |
| Nothing | N | 5 |
| ??? | NA | 1 |
| Not Big Enough | NBE | 3 |
| Not Enough of A Cult | NEAC | 1 |
| Not Enough Content | NEC | 7 |
| Not Enough Community Infrastructure | NECI | 10 |
| Not Enough Meetups | NEM | 5 |
| No Goals | NG | 2 |
| Not Nerdy Enough | NNE | 3 |
| None Of the Above | NOA | 1 |
| Not Progressive Enough | NPE | 3 |
| Not Rational | NR | 3 |
| NRx (Neoreaction) | NRx | 1 |
| Narrow Scholarship | NS | 4 |
| Not Stringent Enough | NSE | 3 |
| Parochialism | P | 1 |
| Pickup Artistry | PA | 2 |
| Personality Cult | PC | 7 |
| Reinvention | R | 1 |
| Recurring Arguments | RA | 3 |
| Rejects Expertise | RE | 2 |
| Sequences | S | 2 |
| Small Competent Authorship | SCA | 5 |
| Suggestion For Improvement | SFI | 1 |
| Spoiled Issue | SI | 9 |
| Socially INCOMpetent | SINCOM | 2 |
| Too Boring | TB | 1 |
| Too Contrarian | TC | 10 |
| Too COMbative | TCOM | 4 |
| Too Cis/Straight/Male | TCSM | 5 |
| Too Intolerant of Cranks | TIC | 1 |
| Too Intolerant of Politics | TIP | 2 |
| Too Long Winded | TLW | 2 |
| Too Many Idiots | TMI | 3 |
| Too Much Math | TMM | 1 |
| Too Much Theory | TMT | 12 |
| Too Nerdy | TN | 6 |
| Too Rigorous | TR | 1 |
| Too Serious | TS | 1 |
| Too Tolerant of Cranks | TTC | 1 |
| Too Tolerant of Politics | TTP | 3 |
| Too Tolerant of POSers | TTPOS | 2 |
| Too Tolerant of PROGressivism | TTPROG | 2 |
| Too Weird | TW | 2 |
| Unwelcoming | U | 12 |
| UTILitarianism | UTIL | 1 |
Top Answers:
Unwelcoming: 12
Too Much Theory: 12
Groupthink: 11
Not Enough Community Infrastructure: 10
Too Contrarian: 10
Insufficiently Indexed: 9
Cultish: 9
Again condensing a bit: Work on being less intimidating/aggressive/etc to newcomers, spend less time on navel gazing and more time on actually doing things and collecting data, work on getting the structures in place that will onboard people into the community, stop being so nitpicky and argumentative, spend more time on getting content indexed in a form where people can actually find it, be more accepting of outside viewpoints and remember that you're probably more likely to be wrong than you think. (Labeled Dataset)
One last note before we finish up: these tallies are a very rough executive summary. The tagging process basically involves trying to fit points into clusters and is prone to inaccuracy through laziness, adding another category being undesirable, square-peg-into-round-hole fitting, and my personal political biases. So take these with a grain of salt; if you really want to know what people wrote in, my advice would be to read through the write-in sets I have above in HTML format. If you want to evaluate for yourself how well I tagged things you can see the labeled datasets above.
I won't bother tallying the "issues now" sections; all you really need to know is that they're basically the same as the first sections, except with lots more "It's dead." comments and, from eyeballing it, a higher proportion of people arguing that LessWrong has been taken over by the left/social justice, along with complaints about effective altruism. (I infer that the complaints about being taken over by the left are mostly referring to effective altruism.)
Traits Respondents Would Like To See In A Successor Community
Philosophically
Attention Paid To Outside Sources
More: 1042 70.933%
Same: 414 28.182%
Less: 13 0.885%
Self Improvement Focus
More: 754 50.706%
Same: 598 40.215%
Less: 135 9.079%
AI Focus
More: 184 12.611%
Same: 821 56.271%
Less: 454 31.117%
Political
More: 330 22.837%
Same: 770 53.287%
Less: 345 23.875%
Academic/Formal
More: 455 31.885%
Same: 803 56.272%
Less: 169 11.843%
In summary, people want a site that will engage with outside ideas, acknowledge where it borrows from, focus on practical self improvement, less on AI and AI risk, and tighten its academic rigor. They could go either way on politics but the epistemic direction is clear.
Community
Intense Environment
More: 254 19.644%
Same: 830 64.192%
Less: 209 16.164%
Focused On 'Real World' Action
More: 739 53.824%
Same: 563 41.005%
Less: 71 5.171%
Experts
More: 749 55.605%
Same: 575 42.687%
Less: 23 1.707%
Data Driven/Testing Of Ideas
More: 1107 78.344%
Same: 291 20.594%
Less: 15 1.062%
Social
More: 583 43.507%
Same: 682 50.896%
Less: 75 5.597%
This largely backs up what I said about the previous results. People want a more practical, more active, more social and more empirical LessWrong with outside expertise and ideas brought into the fold. They could go either way on it being more intense but the epistemic trend is still clear.
Write Ins
Diaspora Communities
So where did the party go? We got twice as many respondents this year as last when we opened up the survey to the diaspora, which means that the LW community is alive and kicking; it's just not on LessWrong.
LessWrong
Yes: 353 11.498%
No: 1597 52.02%
LessWrong Meetups
Yes: 215 7.003%
No: 1735 56.515%
LessWrong Facebook Group
Yes: 171 5.57%
No: 1779 57.948%
LessWrong Slack
Yes: 55 1.792%
No: 1895 61.726%
SlateStarCodex
Yes: 832 27.101%
No: 1118 36.417%
[SlateStarCodex by far has the highest proportion of active LessWrong users, over twice that of LessWrong itself, and more than LessWrong and Tumblr combined.]
Rationalist Tumblr
Yes: 350 11.401%
No: 1600 52.117%
[I'm actually surprised that Tumblr doesn't just beat LessWrong itself outright; they're only a tenth of a percentage point behind, though, and if current trends continue I suspect that by 2017 Tumblr will have a large lead over the main LW site.]
Rationalist Facebook
Yes: 150 4.886%
No: 1800 58.632%
[Eliezer Yudkowsky currently resides here.]
Rationalist Twitter
Yes: 59 1.922%
No: 1891 61.596%
Effective Altruism Hub
Yes: 98 3.192%
No: 1852 60.326%
FortForecast
Yes: 4 0.13%
No: 1946 63.388%
[I included this as a 'troll' option to catch people who just check every box. Relatively few people seem to have done that, but having the option here lets me know one way or the other.]
Good Judgement(TM) Open
Yes: 29 0.945%
No: 1921 62.573%
PredictionBook
Yes: 59 1.922%
No: 1891 61.596%
Omnilibrium
Yes: 8 0.261%
No: 1942 63.257%
Hacker News
Yes: 252 8.208%
No: 1698 55.309%
#lesswrong on freenode
Yes: 76 2.476%
No: 1874 61.042%
#slatestarcodex on freenode
Yes: 36 1.173%
No: 1914 62.345%
#hplusroadmap on freenode
Yes: 4 0.13%
No: 1946 63.388%
#chapelperilous on freenode
Yes: 10 0.326%
No: 1940 63.192%
[Since people keep asking me, this is a postrational channel.]
/r/rational
Yes: 274 8.925%
No: 1676 54.593%
/r/HPMOR
Yes: 230 7.492%
No: 1720 56.026%
[Given that the story is long over, this is pretty impressive. I'd have expected it to be dead by now.]
/r/SlateStarCodex
Yes: 244 7.948%
No: 1706 55.57%
One or more private 'rationalist' groups
Yes: 192 6.254%
No: 1758 57.264%
[I almost wish I hadn't included this option, it'd have been fascinating to learn more about these through write ins.]
Of all the parties who seem like plausible candidates at the moment, Scott Alexander seems the most capable of undiasporaing the community. In practice he's very busy, so he would need a dedicated team of relatively autonomous people to help him. Scott could court guest posts and start to scale up under the SSC brand, and I think he would fairly easily end up with the lion's share of the free-floating LWers that way.
Before I call a hearse for LessWrong, there is a glimmer of hope left:
Would you consider rejoining LessWrong?
I never left: 668 40.6%
Yes: 557 33.8%
Yes, but only under certain conditions: 205 12.5%
No: 216 13.1%
A significant fraction of people say they'd be interested in an improved version of the site. And of course there were write-ins for conditions to rejoin. What did people say they'd need to rejoin the site?
Rejoin Condition Write Ins [Part One]
Rejoin Condition Write Ins [Part Two]
Rejoin Condition Write Ins [Part Three]
Rejoin Condition Write Ins [Part Four]
Rejoin Condition Write Ins [Part Five]
Feel free to read these yourselves (they're not long), but I'll go ahead and summarize: It's all about the content. Content, content, content. No amount of usability improvements, A/B testing or clever trickery will let you get around content. People are overwhelmingly clear about this; they need a reason to come to the site and right now they don't feel like they have one. That means priority number one for somebody trying to revitalize LessWrong is figuring out how to deal with this.
Let's recap.
Future Improvement Wishlist Based On Survey Results
Philosophical
- Pay more attention to mainstream scholarship and ideas.
- Improved intellectual rigor.
- Acknowledge sources borrowed from.
- Be more practical and focus on results.
- Be more humble.
Community
- Be less intimidating/aggressive/etc. to newcomers.
- Structures that will onboard people into the community.
- Stop being so nitpicky and argumentative.
- Spend more time on getting content indexed in a form where people can actually find it.
- More accepting of outside viewpoints.
While that list seems reasonable, it's quite hard to put into practice. Rigor, as the name implies, requires high effort from participants. Frankly, it's not fun. And getting people to do un-fun things without paying them is difficult. If LessWrong is serious about its goal of 'advancing the art of human rationality' then it needs to figure out a way to do real investigation into the subject, not just have people 'discuss', as though the potential for Rationality is within all of us just waiting to be brought out by the right conversation.
I personally haven't been a LW regular in a long time. Assuming the points about pedantry, sniping, "well actually"-ism and the like are true, those habits need to stop for the site to move forward. Personally, I'm a huge fan of Scott Alexander's comment policy: All comments must be at least two of true, kind, or necessary.
-
True and kind - Probably won't drown out the discussion signal, will help significantly decrease the hostility of the atmosphere.
-
True and necessary - Sometimes what you have to say isn't nice, but it needs to be said. This is the common core of free speech arguments for saying mean things and they're not wrong. However, something being true isn't necessarily enough to make it something you should say. In fact, in some situations saying mean things to people entirely unrelated to their arguments is known as the ad hominem fallacy.
-
Kind and necessary - The infamous 'hugbox' is essentially a place where people go to hear things which are kind but not necessarily true. I don't think anybody wants a hugbox, but occasionally it can be important to say things that might not be true but are needed for the sake of tact, reconciliation, or to prevent greater harm.
If people took that seriously and really gave it some thought before they used their keyboard, I think the on-site LessWrong community would be a significant part of the way to not driving people off as soon as they arrive.
More importantly, in places like the LessWrong Slack I see this sort of happy-go-lucky attitude about site improvement: "Oh that sounds nice, we should do that," without the accompanying mountain of work to actually make 'that' happen. I'm not sure people really understand the dynamics of what it means to 'revive' a website in severe decay. When you decide to 'revive' a dying site, what you're really doing once you're past a certain point is refounding the site. So the question you should be asking yourself isn't "Can I fix the site up a bit so it isn't quite so stale?". It's "Could I have founded this site?", and if the answer is no you should seriously question whether to make the time investment.
Whether or not LessWrong lives to see another day basically depends on the level of ground game its last users and administrators can muster up. And if it's not enough, it won't.
Virtus junxit mors non separabit!
LessWrong Survey - invitation for suggestions
Given that it's been a while since the last survey (http://lesswrong.com/lw/lhg/2014_survey_results/), it's now time to open the floor to suggestions for improving it. If you have a question you think should be on the survey, please suggest it (ideally with reasons why, predictions as to the result, or other useful commentary about the question).
Alternatively, suggest questions that should not be included in the next survey, with similar reasoning as to why.
The survey is now up (2016-03-26): http://lesswrong.com/lw/nfk/lesswrong_2016_survey/
[Link] 2015 modafinil user survey
I am running, in collaboration with ModafinilCat, a survey of modafinil users asking about their experiences, side-effects, sourcing, efficacy, and demographics:
https://docs.google.com/forms/d/1ZNyGHl6vnHD62spZyHIqyvNM_Ts_82GvZQVdAr2LrGs/viewform
This is something of a followup to the LW surveys which find substantial modafinil use, and Yvain's 2014 nootropics survey. I hope the results will be useful; the legal questions should help reduce uncertainty there, and the genetics questions (assuming any responses) may be interesting too.
A survey of the top posters on lesswrong
In a post recently someone mentioned that there was a list of "Top 15" posters by karma. That inspired me to send all of them this note:
Hi,
I am messaging you (now) because you are one of the 15 top contributors of the past 30 days of LW.
I was wondering if you do any time tracking, or if you have any idea how much time you spend on LW (e.g. via RescueTime).
I have made the choice to spend more of my time engaging with LW and am wondering how much you (and your other top peers) spend. And also why?
Maybe you want to rate each of these out of 10; the reasons you partake in LW discussions:
- Make the world better (raising the sanity waterline etc)
- Fun (spend my spare time here)
- Friends (here because my Real-Life is here; and so I come to hang with my friends - or my internet friends hang out here)
- Gather rationality (maybe you still gather rationality from LW; maybe you have gathered most of what you can and now are creating your own craft)
- here for new ideas (LW being a good place to share new ideas)
- here to advertise an idea (promoting a line of thinking from elsewhere - could be anything from; more Effective Altruism; to this book)
- Here to create my own craft. (from the craft and the community)
- other? (add as many others as you like)
In addition do you think people (others) should participate more or less in the ongoing conversation? (or stay about as much as they are?) And would you give any particular message to others?
Do you feel like your time spent is effective?
I wonder if this small sample, once gathered, will yield anything useful. With your permission I would like to publish your responses (anonymised for your protection), whether something interesting comes out or nothing does (publishing the null result).
Please add as many comments as you can :).
I'd also like to thank you for being a part of keeping the community active. I find it a good garden with many friends.
Sincerely, E
(Disclaimer: I have no affiliation to rescue time I just like their tracking system)
As of the time of this post I have received 10 replies. All of them arrived within about 2-3 days; I waited an extra week or two and there were no more.
The funny thing about asking for something is that people don't always answer in the way that you want them to. (Mostly I blame myself and the way I asked, but I think it's quite funny that several replies did not include a rating out of 10.)
1. Make the world better.
As was pointed out to me in one of the responses: "Mostly this is low because of ambiguity over 'the world'". Responses were 0, 2, 6, y, y; I assume the other 5 were 0.
2. Fun
Several replies described LW as the most productive time sink they could think of. Replies were y, y, y, 10, 10, 8. One person said they use LW as procrastination; another said "it's a reasonably interesting way of killing time".
3. Friends
Answers: y, 0, "4 - more like acquaintances", 5. Some people mentioned local meetups, but also that they don't interact online with those people. I suppose if you are here for friends you are kinda doing it wrong; "here to not get yelled at and to understand things" is a more accurate description. "I treat LW like a social club and a general place to hang out."
4. Learn rationality
y, "y but doubt it", "5 - a bit", 5. I expected most of the top posters to have already reached a level of rationality where they would be searching elsewhere for it. I assume the others would be 0 or close to it.
5. New ideas
3, 4, 8 (assuming the other 7 are 0). I guess the top posters don't think innovation happens here. Which is interesting, because I think it does.
6. Advertise ideas
1, "6 - generally" (assuming the other 8 are 0). I was concerned that the active members might be pushing an agenda or something. It's entirely possible, but it seems not to be the case.
7. Create craft
8, 7. I would have thought someone motivated to increase the craft of rationality would be here for that purpose. I guess not.
" I'm not sure to what extent I'm creating my own craft, but it's a good question. At the very least I'm acquiring a better ability to ask whether something makes sense. "
8. Other
Two people mentioned that this is a place of quality and high thinking; they are here for the reasonableness (or lack of unreasonableness) of the participants.
Open questions:
Effective time: most responses were along the lines of "better than other rubbish on the internet" and "least bad time sink I can think of".
More or fewer posts: two people suggested more; one suggested fewer but of higher quality. They all understand the predicament of the thing.
Time tracking: several people track, and others estimate between 30 minutes and 3 hours a day.
Bear in mind that the top posting positions are selected on multiple factors, including whether or not people have time, not just their effectiveness or their *most rational* status. I don't believe this selection of people has said anything much more helpful than the following quotes:
" LW gives you the opportunity to share your ideas with a large number of smart people who will help you discard or sharpen them without you having to go to the trouble of maintaining a personal blog. A good post has the opportunity to deliver a lot of value to some very smart and altruistically motivated people. Becoming a respected LW contributor takes a lot of intelligence, thought, research, writing skill, and hard work. Many try but few succeed. "
" I am pretty much an internet discussion board addict"
"Suppose LW is just a forum where a bunch of smart people hang out and talk about whatever interests them, which is frequently potentially-important (effective altruism, AI safety) or intellectually interesting (decision theory, maths of AI safety) or practically useful (akrasia, best-textbooks-on-X threads). That seems to me like enough to be valuable"
"My karma comes from thousands of comments, not from meaningful articles."
"I feel there is a power law distribution to LW contributor value with some people like Eliezer, Yvain, and lukeprog making many high-quality posts. So I think the most important thing is for people like that to get “discovered”. It may take some leveling up for them to get to that point though, and encouragement for them to spend time cranking out lots of posts that are high-quality."
" I feel like if we gave top LW posters more recognition that could incentivize the production of more great content, and becoming a top poster with a high % upvoted genuinely seems like a strong challenge and an indicator of superior potential, if achieved, to me."
"As a rule, though, I do not believe that LW has much to do with refining human rationality."
"I think that written reflection is a useful way to engage with new ideas. LW provides a venue to discuss ideas with smart people who care about published evidence."
"I post on Less Wrong primarily because I'm a forum-poster, and this is the forum most relevant to my interests. If I stopped finding forum-posting satisfying, or found a more relevant forum, I'd probably move there and only rarely check LW."
"I think people should participate more. I view LW as a forum and not as a library."
In summary: what I think I have gathered.
The top posters don't think they or LessWrong are effective at changing the world; however, this is a nice place to hang out. I don't know what an effective place would look like, but it is almost certainly not this place. I don't see LW as being worth quitting or shutting down without a *better* alternative. As a place striving to propagate rationality, that is debatable. As a garden of healthy discussions and reasonable people remembering that their opposing factions are also reasonable people with different ideas, this place deserves a medal. If only we could hone the essence of reasonableness and share it around to people. I feel that might be the value of LessWrong.
LW is a system built of people staying "while it's good"; as soon as it is no longer as nice a garden, they will be gone.
I hope this helps someone else as well as me.
In light of the discussions about improving this place; I hope this helps contribute to the discussion.
LW survey: Effective Altruists and donations
Analysis of 2013-2014 LessWrong survey results on how much more self-identified EAers donate
Tally of LessWrong experience on Alcohol
As a follow-up post to http://lesswrong.com/lw/m2r/lesswrong_experience_on_alcohol/, I tallied the responses.
In rough categories:
Doesn't drink: 11
Drinks: 19
Drinks heavily: 4
Disclaimers: I had to make judgements about people who don't like alcohol and drink very rarely (but are not morally opposed to the thought of it), and about how much regular drinking would put someone into the "drinks heavily" category. I think I did an okay job of it.
I wonder if LW (and other bodies) could make money for itself using the same click-through tactics used for book buying, but with online alcohol stores. Drink responsibly!
I will try to update this tally if any more responses are received.
I hope the following question finds its way onto the lesswrong survey:
How regularly do you drink?
daily (or almost daily)
5 days a week
3 days a week
twice a week
once a week
a few times a month
less than 12 times a year
less than 2 times a year
never
(and possibly)
Have your drinking habits changed since last year?
I drink more
I drink the same
I drink different things but about the same amount of drinks
I drink less
(follow up post on spice preferences coming in a few hours)
(Edit 2, 25/4/15: added some commenters to the tally)
How has lesswrong changed your life?
I've been wondering what effect joining lesswrong and reading the sequences has on people.
How has lesswrong changed your life?
What have you done differently?
What have you done?
LW Supplement use survey
I've put together a very basic survey using Google Forms, inspired by NancyLebovitz's recent discussion post on supplement use.
Survey includes options for "other" and "do not use supplements." Results are anonymous and you can view all the results once you have filled it in, or use this link.
Link to the Survey
Feedback from Less Wrong Community
Hi from Castify.
We're continuing our work to turn the Less Wrong sequences into audio. Could you spare about 30 seconds to help us decide which one to do next?
A Quick and Dirty Survey: Textbook Learning
Hello, folks. I'm one of those long-time lurkers.
I've decided to conduct, as the title suggests, a quick and dirty survey in hopes of better understanding a problem I have (or rather, whether or not what I have is actually a problem).
Here's some context: I'm a Physics & Mathematics major, currently taking multi-variable. Lately, I've been unsatisfied with my understanding and usage of mathematics—mainly calculus. I've decided to go through what's been recommended as a much more rigorous Calculus textbook, Calculus by Michael Spivak. So far I'm really enjoying it, but it's taking me a long time to get through the exercises. I can be very meticulous about things like this and want to do every exercise through every chapter; I feel that there's benefit to actually doing them regardless of whether or not I look at the problem and think "Yeah, I can do this." Sometimes actually doing the problem is much more difficult than it seems, and I learn a lot from doing them. When flipping through the exercises, I also notice that—regardless of how well I think I know the material—there ends up being a section of exercises focused on something I've never heard of before; something very clever or, I think, mathematically enlightening, that's dependent on the exercises before it.
I'm somewhat embarrassed to admit that the exercises of the first chapter alone had taken me hours upon hours upon hours of combined work. I consider myself slow when it comes to reading mathematics and physics literature—I have to carefully comb through all the concepts and equations and structure them intuitively in a way I see fit. I hate not having a very fundamental understanding of the things I'm working with.
At the same time, I read/hear people who apparently are familiar with multiple textbooks on the same subject. Familiar enough to judge whether or not it is a good textbook. Familiar enough to place how they fit on a hierarchy of textbooks on the same subject. I think "At the rate I'm going, it will take me a very long time to get through this."
So...
Here's (what I think is) my issue: I don't know whether or not I'm taking too long. Am I doing things inefficiently? Is there a better way to choose which exercises I do and don't work through so that I learn a similar amount of material in less time? Or is it just fine that I'm taking this long? Am I slow and inefficient or am I just new to this process of working through a textbook cover-to-cover, which is supposed to take a very long time anyway?
I spend more time than I should learning about learning, instead of learning the material itself. I find myself using up lots of time trying to figure out how to learn more efficiently, how to think more efficiently, how to work more efficiently, and such things—as opposed to actually learning and actually thinking and actually working, which ends up being an inefficient use of my time. I think part of this problem stems from the fact that I don't have much of a comparison point for when I can say "OK, I'm satisfied and can stop focusing on improving how I do this act—and just do it already." I want to solve that issue now.
Which brings us to...
Here's my attempted solution: A survey! I assume many people here at LessWrong have worked through a science or mathematics textbook on their own. Mainly I'd like to gauge whether or not you thought you were taking a very long time, how long it took you, etc. I'd also like to know what your approach was: Did you perform every exercise, or skim through the book finding things you knew you didn't know? Did you skip around or go from the first chapter to the last? Do you have any advice on how one should approach a given textbook?
Here's the survey: https://docs.google.com/forms/d/1S4_-7_dxgmgprMbNhL1dNmX_0Zq9QrA9lpTl9ZHHxMI/viewform
I'm not sure how interested anyone but me is in this, but on a later date I could make another post showing the data. I considered checking "Publish and show a link to the results of this form", but I wasn't sure if that kept everyone anonymous or not. Also, feel more than free to post any criticism, shortcomings, improvements, etc. Have I left anything out? Is there anything you'd like to see me add? This is my first attempt at a survey like this and I'd appreciate any feedback (though I know it's not necessarily a rigorous survey, just a quick data-collection, I suppose).
I strongly encourage the posting of any textbook-reading tips or guidelines in the comments. I left that out of the survey so that anyone who's interested has immediate access to tips.
Here's an edit: Thanks for all the responses, everyone. My original question was sufficiently answered (that is, it doesn't seem like I'm taking too long; there were only a few survey takers, but between the comments and the survey answers, I'm not going at an extraordinarily slow rate). There's also some very solid advice on different methods I might try to optimize my learning process. One that especially hit home was the suggestion that the large amounts of time spent "learning about learning" are such because it feels more comfortable than actually learning the material. In short, it's a safety blanket that makes me feel like I'm doing something productive when I'm really just avoiding what needs to be done. Some other useful pieces of advice are:
- Try being open to learning a broader range of materials without necessarily mastering each one. It might be the case that you need to know one thing in order to master the other, and need to know the other in order to master the one—trying to master either of them in isolation ends up being somewhat futile. Not everything needs to be structured "brick by brick". (This was a lesson I found useful when I first learned that a number raised to the one-half power is the square root of that number: trying to master it in terms of the rules I already knew ended up in a thought like, "... Two to the third power is two times two times two. Two to the one-half power is two... times two one-half times?")
- Though it may be uncomfortable at first, it could make learning easier to try the exercises before reading the chapter super-carefully; trying them before you feel ready to try them. You don't necessarily have to fully comprehend all of the proofs in the chapter to get through some exercises.
- Textbooks might just be the wrong way to go in the first place. Try resources like Wikipedia, math blogs, and math forums.
- "Don't use the answer key unless you've spent a significant amount of time trying to find the answer yourself!" (This may seem obvious, but a few years ago, I'd spend a couple of minutes on the problem, not understand it, look to the answer key, and wonder why I wasn't learning anything.)
- Skip exercises when you feel you could solve them, but randomly check whether this estimate is correct by doing the problem anyway. (I like this one a lot).
- Talk to a professor!
- It may be the case that you learn well via just reading, and not spending so much time on the exercises.
Here are some websites/blogs mentioned:
(Blog) Math for Programmers - http://steve-yegge.blogspot.com/2006/03/math-for-programmers.html
(Blog) Annoying Precision - http://qchu.wordpress.com/
(Math Forum) Mathematics - http://math.stackexchange.com/
Excellent, excellent stuff, though. Thank you. :) There's a lot of material and advice for me to work with—while simultaneously making sure I don't avoid my work by hiding under the guise of productivity.
Participation in the LW Community Associated with Less Bias
Summary
CFAR included 5 questions on the 2012 LW Survey which were adapted from the heuristics and biases literature, based on five different cognitive biases or reasoning errors. LWers, on the whole, showed less bias than is typical in the published research (on all 4 questions where this was testable), but did show clear evidence of bias on 2-3 of those 4 questions. Further, those with closer ties to the LW community (e.g., those who had read more of the sequences) showed significantly less bias than those with weaker ties (on 3 out of 4-5 questions where that was testable). These results all held when controlling for measures of intelligence.
METHOD & RESULTS
Being less susceptible to cognitive biases or reasoning errors is one sign of rationality (see the work of Keith Stanovich & his colleagues, for example). You'd hope that a community dedicated to rationality would be less prone to these biases, so I selected 5 cognitive biases and reasoning errors from the heuristics & biases literature to include on the LW survey. There are two possible patterns of results which would point in this direction:
- high scores: LWers show less bias than other populations that have answered these questions (like students at top universities)
- correlation with strength of LW exposure: those who have read the sequences (or have been around LW a long time, have high karma, attend meetups, make posts) score better than those who have not.
The 5 biases were selected in part because they can be tested with everyone answering the same questions; I also preferred biases that haven't been discussed in detail on LW. On some questions there is a definitive wrong answer and on others there is reason to believe that a bias will tend to lead people towards one answer (so that, even though there might be good reasons for a person to choose that answer, in the aggregate it is evidence of bias if more people choose that answer).
This is only one quick, rough survey. If the results are as predicted, that could be because LW makes people more rational, or because LW makes people more familiar with the heuristics & biases literature (including how to avoid falling for the standard tricks used to test for biases), or because the people who are attracted to LW are already unusually rational (or just unusually good at avoiding standard biases). Susceptibility to standard biases is just one angle on rationality. Etc.
Here are the question-by-question results, in brief. The next section contains the exact text of the questions, and more detailed explanations.
Question 1 was a disjunctive reasoning task, which had a definitive correct answer. Only 13% of undergraduates got the answer right in the published paper that I took it from. 46% of LWers got it right, which is much better but still a very high error rate. Accuracy was 58% for those high in LW exposure vs. 31% for those low in LW exposure. So for this question, that's:
1. LWers biased: yes
2. LWers less biased than others: yes
3. Less bias with more LW exposure: yes
Question 2 was a temporal discounting question; in the original paper about half the subjects chose money-now (which reflects a very high discount rate). Only 8% of LWers did; that did not leave much room for differences among LWers (and there was only a weak & nonsignificant trend in the predicted direction). So for this question:
1. LWers biased: not really
2. LWers less biased than others: yes
3. Less bias with more LW exposure: n/a (or no)
Question 3 was about the law of large numbers. Only 22% got it right in Tversky & Kahneman's original paper. 84% of LWers did: 93% of those high in LW exposure, 75% of those low in LW exposure. So:
1. LWers biased: a bit
2. LWers less biased than others: yes
3. Less bias with more LW exposure: yes
Question 4 was based on the decoy effect aka asymmetric dominance aka attraction effect (but missing a control condition). I don't have numbers from the original study (and there is no correct answer) so I can't really answer 1 or 2 for this question, but there was a difference based on LW exposure: 57% vs. 44% selecting the less bias related answer.
1. LWers biased: n/a
2. LWers less biased than others: n/a
3. Less bias with more LW exposure: yes
Question 5 was an anchoring question. The original study found an effect (measured by slope) of 0.55 (though it was less transparent about the randomness of the anchor; transparent studies w. other questions have found effects around 0.3 on average). For LWers there was a significant anchoring effect but it was only 0.14 in magnitude, and it did not vary based on LW exposure (there was a weak & nonsignificant trend in the wrong direction).
1. LWers biased: yes
2. LWers less biased than others: yes
3. Less bias with more LW exposure: no
One thing you might wonder: how much of this is just intelligence? There were several questions on the survey about performance on IQ tests or SATs. Controlling for scores on those tests, all of the results about the effects of LW exposure held up nearly as strongly. Intelligence test scores were also predictive of lower bias, independent of LW exposure, and those two relationships were almost the same in magnitude. If we extrapolate the relationship between IQ scores and the 5 biases to someone with an IQ of 100 (on either of the 2 IQ measures), they are still less biased than the participants in the original study, which suggests that the "LWers less biased than others" effect is not based solely on IQ.
MORE DETAILED RESULTS
There were 5 questions related to strength of membership in the LW community which I standardized and combined into a single composite measure of LW exposure (LW use, sequence reading, time in community, karma, meetup attendance); this was the main predictor variable I used (time per day on LW also seems related, but I found out while analyzing last year's survey that it doesn't hang together with the others or associate the same way with other variables). I analyzed the results using a continuous measure of LW exposure, but to simplify reporting, I'll give the results below by comparing those in the top third on this measure of LW exposure with those in the bottom third.
There were 5 intelligence-related measures which I combined into a single composite measure of Intelligence (SAT out of 2400, SAT out of 1600, ACT, previously-tested IQ, extra credit IQ test); I used this to control for intelligence and to compare the effects of LW exposure with the effects of Intelligence (for the latter, I did a similar split into thirds). Sample sizes: 1101 people answered at least one of the CFAR questions; 1099 of those answered at least one LW exposure question and 835 of those answered at least one of the Intelligence questions. Further details about method available on request.
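For concreteness, the standardize-and-average construction used for these composites can be sketched as follows (the column names and numbers are illustrative, not the survey's actual data, and the real analysis also had to handle respondents who skipped some measures):

```python
from statistics import mean, stdev

def zscores(xs):
    """Standardize a column: subtract the mean, divide by the sample stdev."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def composite(columns):
    """Average each respondent's z-scores across several measures."""
    standardized = [zscores(col) for col in columns]
    return [mean(vals) for vals in zip(*standardized)]

# Hypothetical respondents, ordered from weak to strong LW ties.
karma = [0, 50, 400, 1200]
sequences_read = [10, 25, 75, 100]  # percent of sequences read
print(composite([karma, sequences_read]))
```

Because each column is standardized before averaging, a measure with a large raw scale (like karma) doesn't dominate the composite.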
Here are the results, question by question.
Question 1: Jack is looking at Anne, but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person?
- Yes
- No
- Cannot be determined
This is a "disjunctive reasoning" question, which means that getting the correct answer requires using "or". That is, it requires considering multiple scenarios. In this case, either Anne is married or Anne is unmarried. If Anne is married then married Anne is looking at unmarried George; if Anne is unmarried then married Jack is looking at unmarried Anne. So the correct answer is "yes". A study by Toplak & Stanovich (2002) of students at a large Canadian university found that only 13% correctly answered "yes" while 86% answered "cannot be determined" (2% answered "no").
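The case split can be checked mechanically; here is a small sketch with the question's scenario hardcoded:

```python
def married_looking_at_unmarried(anne_married):
    """Is some married person looking at some unmarried person?"""
    married = {"Jack": True, "Anne": anne_married, "George": False}
    looking = [("Jack", "Anne"), ("Anne", "George")]
    return any(married[a] and not married[b] for a, b in looking)

# Both branches of the disjunction come out True, so the answer is "yes"
# regardless of Anne's marital status.
print(all(married_looking_at_unmarried(m) for m in (True, False)))  # True
```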
On this LW survey, 46% of participants correctly answered "yes"; 54% chose "cannot be determined" (and 0.4% said "no"). Further, correct answers were much more common among those high in LW exposure: 58% of those in the top third of LW exposure answered "yes", vs. only 31% of those in the bottom third. The effect remains nearly as big after controlling for Intelligence (the gap between the top third and the bottom third shrinks from 27% to 24% when Intelligence is included as a covariate). The effect of LW exposure is very close in magnitude to the effect of Intelligence; 60% of those in the top third in Intelligence answered correctly vs. 37% of those in the bottom third.
original study: 13%
weakly-tied LWers: 31%
strongly-tied LWers: 58%
Question 2: Would you prefer to receive $55 today or $75 in 60 days?
This is a temporal discounting question. Preferring $55 today implies an extremely (and, for most people, implausibly) high discount rate, is often indicative of a pattern of discounting that involves preference reversals, and is correlated with other biases. The question was used in a study by Kirby (2009) of undergraduates at Williams College (with a delay of 61 days instead of 60; I took it from a secondary source that said "60" without checking the original), and based on the graph of parameter values in that paper it looks like just under half of participants chose the larger later option of $75 in 61 days.
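To see just how steep a discount rate the $55-now choice implies, annualize the required growth; a back-of-the-envelope sketch, assuming simple annual compounding over the 61-day delay:

```python
# Someone indifferent between $55 now and $75 in 61 days is implicitly
# demanding 75/55 growth over 61 days; annualized, that rate is enormous.
ratio = 75 / 55
annual_rate = ratio ** (365 / 61) - 1
print(f"{annual_rate:.0%}")  # roughly 540% per year
```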
LW survey participants almost uniformly showed a low discount rate: 92% chose $75 in 61 days. This is near ceiling, which didn't leave much room for differences among LWers. For LW exposure, top third vs. bottom third was 93% vs. 90%, and this relationship was not statistically significant (p=.15); for Intelligence it was 96% vs. 91% and the relationship was statistically significant (p=.007). (EDITED: I originally described the Intelligence result as nonsignificant.)
original study: ~47%
weakly-tied LWers: 90%
strongly-tied LWers: 93%
Question 3: A certain town is served by two hospitals. In the larger hospital, about 45 babies are born each day. In the smaller one, about 15 babies are born each day. Although the overall proportion of girls is about 50%, the actual proportion at either hospital may be greater or less on any day. At the end of a year, which hospital will have the greater number of days on which more than 60% of the babies born were girls?
- The larger hospital
- The smaller hospital
- Neither - the number of these days will be about the same
This is a statistical reasoning question, which requires applying the law of large numbers. In Tversky & Kahneman's (1974) original paper, only 22% of participants correctly chose the smaller hospital; 57% said "about the same" and 22% chose the larger hospital.
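A quick simulation makes the law-of-large-numbers point concrete (a sketch with a fixed seed; the exact counts are simulation artifacts, but the ordering is robust):

```python
import random

rng = random.Random(0)

def extreme_days(births_per_day, days=365, threshold=0.6):
    """Count the days on which more than `threshold` of births are girls."""
    count = 0
    for _ in range(days):
        girls = sum(rng.random() < 0.5 for _ in range(births_per_day))
        if girls / births_per_day > threshold:
            count += 1
    return count

# The smaller hospital's daily proportions fluctuate more around 50%,
# so it records far more >60%-girl days over a year.
print("small:", extreme_days(15), "large:", extreme_days(45))
```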
On the LW survey, 84% of people correctly chose the smaller hospital; 15% said "about the same" and only 1% chose the larger hospital. Further, this was strongly correlated with strength of LW exposure: 93% of those in the top third answered correctly vs. 75% of those in the bottom third. As with #1, controlling for Intelligence barely changed this gap (shrinking it from 18% to 16%), and the measure of Intelligence produced a similarly sized gap: 90% for the top third vs. 79% for the bottom third.
original study: 22%
weakly-tied LWers: 75%
strongly-tied LWers: 93%
Question 4: Imagine that you are a doctor, and one of your patients suffers from migraine headaches that last about 3 hours and involve intense pain, nausea, dizziness, and hyper-sensitivity to bright lights and loud noises. The patient usually needs to lie quietly in a dark room until the headache passes. This patient has a migraine headache about 100 times each year. You are considering three medications that you could prescribe for this patient. The medications have similar side effects, but differ in effectiveness and cost. The patient has a low income and must pay the cost because her insurance plan does not cover any of these medications. Which medication would you be most likely to recommend?
- Drug A: reduces the number of headaches per year from 100 to 30. It costs $350 per year.
- Drug B: reduces the number of headaches per year from 100 to 50. It costs $100 per year.
- Drug C: reduces the number of headaches per year from 100 to 60. It costs $100 per year.
This question is based on research on the decoy effect (aka "asymmetric dominance" or the "attraction effect"). Drug C is obviously worse than Drug B (it is strictly dominated by it) but it is not obviously worse than Drug A, which tends to make B look more attractive by comparison. This is normally tested by comparing responses to the three-option question with a control group that gets a two-option question (removing option C), but I cut a corner and only included the three-option question. The assumption is that more-biased people would make similar choices to unbiased people in the two-option question, and would be more likely to choose Drug B on the three-option question. The model behind that assumption is that there are various reasons for choosing Drug A and Drug B; the three-option question gives biased people one more reason to choose Drug B but other than that the reasons are the same (on average) for more-biased people and unbiased people (and for the three-option question and the two-option question).
Based on the discussion on the original survey thread, this assumption might not be correct. Cost-benefit reasoning seems to favor Drug A (and those with more LW exposure or higher intelligence might be more likely to run the numbers). Part of the problem is that I didn't update the costs for inflation - the original problem appears to be from 1995 which means that the real price difference was over 1.5 times as big then.
I don't know the results from the original study; I found this particular example online (and edited it heavily for length) with a reference to Chapman & Malik (1995), but after looking for that paper I see that it's listed on Chapman's CV as only a "published abstract".
49% of LWers chose Drug A (the one that is more likely for unbiased reasoners), vs. 50% for Drug B (which benefits from the decoy effect) and 1% for Drug C (the decoy). There was a strong effect of LW exposure: 57% of those in the top third chose Drug A vs. only 44% of those in the bottom third. Again, this gap remained nearly the same when controlling for Intelligence (shrinking from 14% to 13%), and differences in Intelligence were associated with a similarly sized effect: 59% for the top third vs. 44% for the bottom third.
original study: ??
weakly-tied LWers: 44%
strongly-tied LWers: 57%
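The standard test described above, comparing the decoy condition against a two-option control, comes down to a two-proportion comparison. A minimal sketch of that analysis, using entirely made-up counts (the survey ran only the three-option version, so no real control-group data exist):

```python
import math

# Hypothetical counts, for illustration only.
# Control group: Drug A vs. Drug B.  Treatment group: A, B, and decoy C.
control_b, control_n = 40, 100      # 40% choose B without the decoy
treatment_b, treatment_n = 50, 100  # 50% choose B with the decoy present

# Two-proportion z-test: did adding decoy C raise Drug B's share?
p1 = control_b / control_n
p2 = treatment_b / treatment_n
p_pool = (control_b + treatment_b) / (control_n + treatment_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treatment_n))
z = (p2 - p1) / se
print(f"B share: {p1:.0%} -> {p2:.0%}, z = {z:.2f}")
```

With a control group, a significantly positive z would indicate the decoy shifted choices toward Drug B; without one, the survey has to fall back on the between-respondent comparisons reported above.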
Question 5: Get a random three digit number (000-999) from http://goo.gl/x45un and enter the number here.
Treat the three digit number that you just wrote down as a length, in feet. Is the height of the tallest redwood tree in the world more or less than the number that you wrote down?
What is your best guess about the height of the tallest redwood tree in the world (in feet)?
This is an anchoring question; if there are anchoring effects then people's responses will be positively correlated with the random number they were given (and a regression analysis can estimate the size of the effect to compare with published results, which used two groups instead of a random number).
Asking a question with the answer in feet was a mistake which generated a great deal of controversy and discussion. Dealing with unfamiliar units could interfere with answers in various ways so the safest approach is to look at only the US respondents; I'll also see if there are interaction effects based on country.
The question is from a paper by Jacowitz & Kahneman (1995), who provided anchors of 180 ft. and 1200 ft. to two groups and found mean estimates of 282 ft. and 844 ft., respectively. One natural way of expressing the strength of an anchoring effect is as a slope (change in estimates divided by change in anchor values), which in this case is 562/1020 = 0.55. However, that study did not explicitly lead participants through the randomization process like the LW survey did. The classic Tversky & Kahneman (1974) anchoring question did use an explicit randomization procedure (spinning a wheel of fortune; though it was actually rigged to create two groups) and found a slope of 0.36. Similarly, several studies by Ariely & colleagues (2003) which used the participant's Social Security number to explicitly randomize the anchor value found slopes averaging about 0.28.
There was a significant anchoring effect among US LWers (n=578), but it was much weaker, with a slope of only 0.14 (p=.0025). That means that getting a random number that is 100 higher led to estimates that were 14 ft. higher, on average. LW exposure did not moderate this effect (p=.88); looking at the pattern of results, if anything the anchoring effect was slightly higher among the top third (slope of 0.17) than among the bottom third (slope of 0.09). Intelligence did not moderate the results either (slope of 0.12 for both the top third and bottom third). It's not relevant to this analysis, but in case you're curious, the median estimate was 350 ft. and the actual answer is 379.3 ft. (115.6 meters).
Among non-US LWers (n=397), the anchoring effect was slightly smaller in magnitude compared with US LWers (slope of 0.08), and not significantly different from the US LWers or from zero.
original study: slope of 0.55 (0.36 and 0.28 in similar studies)
weakly-tied LWers: slope of 0.09
strongly-tied LWers: slope of 0.17
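The slope analysis described above is an ordinary least-squares regression of estimates on anchor values. A minimal sketch with simulated data (the respondent counts match the survey, but the numbers themselves are invented):

```python
import numpy as np

# Simulated data: each respondent gets a random 000-999 anchor, then
# estimates the redwood's height.  The anchoring slope is the OLS
# coefficient of estimate on anchor; a slope of 0.14 is built in here.
rng = np.random.default_rng(0)
anchors = rng.integers(0, 1000, size=578)                # random anchors
estimates = 300 + 0.14 * anchors + rng.normal(0, 50, size=578)

slope, intercept = np.polyfit(anchors, estimates, deg=1)
print(f"anchoring slope = {slope:.3f}")                  # ~0.14 by construction

# For the classic two-group design the slope is just a difference ratio,
# e.g. Jacowitz & Kahneman's (844 - 282) / (1200 - 180) = 0.55.
two_group_slope = (844 - 282) / (1200 - 180)
```

The explicit-randomization design has the advantage that the anchor is continuous, so the slope comes straight out of the regression rather than from only two group means.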
If we break the LW exposure variable down into its 5 components, every one of the five is strongly predictive of lower susceptibility to bias. We can combine the first four CFAR questions into a composite measure of unbiasedness, by taking the percentage of questions on which a person gave the "correct" answer (the answer suggestive of lower bias). Each component of LW exposure is correlated with lower bias on that measure, with r ranging from 0.18 (meetup attendance) to 0.23 (LW use), all p < .0001 (time per day on LW is uncorrelated with unbiasedness, r=0.03, p=.39). For the composite LW exposure variable the correlation is 0.28; another way to express this relationship is that people one standard deviation above average on LW exposure got 75% of CFAR questions "correct" while those one standard deviation below average got 61% "correct". Alternatively, focusing on sequence-reading, the accuracy rates were:
75% Nearly all of the Sequences (n = 302)
70% About 75% of the Sequences (n = 186)
67% About 50% of the Sequences (n = 156)
64% About 25% of the Sequences (n = 137)
64% Some, but less than 25% (n = 210)
62% Know they existed, but never looked at them (n = 19)
57% Never even knew they existed until this moment (n = 89)
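The composite analysis above boils down to scoring each respondent by the fraction of the four CFAR questions answered in the less-biased direction, then correlating that score with exposure. A toy sketch with invented data (the logistic link and its coefficients are assumptions for illustration, not the survey's model):

```python
import numpy as np

# Invented data: exposure is a standardized LW-exposure score, and the
# chance of a "correct" (less-biased) answer rises with exposure.
rng = np.random.default_rng(1)
n = 1000
exposure = rng.normal(0, 1, n)
p_correct = 1 / (1 + np.exp(-(0.6 + 0.4 * exposure)))   # logistic link

answers = rng.random((n, 4)) < p_correct[:, None]       # 4 CFAR questions
unbiasedness = answers.mean(axis=1)                     # fraction "correct"

r = np.corrcoef(exposure, unbiasedness)[0, 1]
print(f"correlation of exposure with unbiasedness: r = {r:.2f}")
```

Note that averaging only four binary items makes the composite noisy, which attenuates the observed correlation relative to the underlying relationship.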
Another way to summarize the results: on 4 of the 5 questions (all but question 4 on the decoy effect) we can make comparisons to the results of previous research, and in all 4 cases LWers were much less susceptible to the bias or reasoning error. On 1 of the 5 questions (question 2 on temporal discounting) there was a ceiling effect which made it extremely difficult to find differences within LWers; on 3 of the other 4, LWers with a strong connection to the LW community were much less susceptible to the bias or reasoning error than those with weaker ties.
REFERENCES
Ariely, Loewenstein, & Prelec (2003), "Coherent Arbitrariness: Stable demand curves without stable preferences"
Chapman & Malik (1995), "The attraction effect in prescribing decisions and consumer choice"
Jacowitz & Kahneman (1995), "Measures of Anchoring in Estimation Tasks"
Kirby (2009), "One-year temporal stability of delay-discount rates"
Toplak & Stanovich (2002), "The Domain Specificity and Generality of Disjunctive Reasoning: Searching for a Generalizable Critical Thinking Skill"
Tversky & Kahneman (1974), "Judgment under Uncertainty: Heuristics and Biases"
[Poll] Less Wrong and Mainstream Philosophy: How Different are We?
Despite LessWrong being (IMO) a philosophy blog, many Less Wrongers tend to disparage mainstream philosophy and emphasize the divergence between our beliefs and theirs. But how different are we really? My intention with this post is to quantify that difference.
The questions I will post as comments to this article are from the 2009 PhilPapers Survey. If you answer "other" on any of the questions, then please reply to that comment to elaborate on your answer. Later, I'll post another article comparing the answers I obtain from Less Wrongers with those given by professional philosophers. This should give us some indication of the differences in belief between Less Wrong and mainstream philosophy.
Glossary
analytic-synthetic distinction, A-theory and B-theory, atheism, compatibilism, consequentialism, contextualism, correspondence theory of truth, deontology, egalitarianism, empiricism, Humeanism, libertarianism, mental content externalism, moral realism, moral motivation internalism and externalism, naturalism, nominalism, Newcomb's problem, physicalism, Platonism, rationalism, relativism, scientific realism, trolley problem, theism, virtue ethics
Note
Thanks pragmatist, for attaching short (mostly accurate) descriptions of the philosophical positions under the poll comments.
[link] One-question survey from Robin Hanson
As many of you probably know, Robin Hanson is writing a book, and it will be geared toward a popular audience. He wants a term that encompasses both humans and AI, so he's soliciting your opinions on the matter. Here's the link: http://www.quicksurveys.com/tqsruntime.aspx?surveyData=AYtdr2WMwCzB981F0qkivSNwbj1tn+xvU6rnauc83iU=
H/T Bryan Caplan at EconLog.
Take Part in CFAR Rationality Surveys
Posted By: Dan Keys, CFAR Survey Coordinator
The Center for Applied Rationality is trying to develop better methods for measuring and studying the benefits of rationality. We want to be able to test if this rationality stuff actually works.
One way that the Less Wrong community can help us with this process is by taking part in online surveys, which we can use for a variety of purposes including:
- seeing what rationality techniques people actually use in their day-to-day lives
- developing & testing measures of how rational people are, and seeing if potential rationality measures correlate with the other variables that you'd expect them to
- comparing people who attend a minicamp with others in the LW community, so that we can learn what value-added the minicamps provide beyond what you get elsewhere
- trying out some of the rationality techniques that we are trying to teach, so we can see how they work
We have a couple of surveys ready to go now which cover some of these bullet points, and will be developing other surveys over the coming months.
If you're interested in taking part in online surveys for CFAR, please go here to fill out a brief form with your contact info; then we will contact you about participating in specific surveys.
If you have previously filled out a form like this one to participate in CFAR surveys, then we already have your information so you don't need to sign up again.
Questions/Issues can be posted in the comments here, PMed to me, or emailed to us at CFARsurveys@gmail.com.
[Poll] Who looks better in your eyes?
This is a thread where I'm trying to figure out a few things about signalling on LessWrong, and I need some information, so please answer the poll immediately after reading about the two individuals. The two individuals:
A. Sees that an interpretation of reality shared by others is not correct, but tries to pretend otherwise for personal gain and/or safety.
B. Fails to see that an interpretation of reality shared by others is flawed. He is therefore perfectly honest in sharing that interpretation of reality with others. The reward regime for outward behaviour is the same as with A.
To add a trivial inconvenience matching the inconvenience of answering the poll before reading on, my comments on what I think the two individuals signal, what the trade-off is, and what I speculate the results might be here versus the general population are behind this link.
Survey: Risks from AI
Related to: lesswrong.com/lw/fk/survey_results/
I am currently emailing experts in order to raise awareness of, and estimate the academic perception of, risks from AI, and to ask them for permission to publish and discuss their responses. User:Thomas suggested also asking you, everyone who is reading lesswrong.com, and I thought this was a great idea. If I ask experts to answer questions publicly, and publish and discuss their answers here on LW, I think it is only fair to do the same.
Answering the questions below will help the SIAI, and everyone interested in mitigating risks from AI, to estimate the effectiveness with which those risks are communicated.
Questions:
- Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer 'never' if you believe such a milestone will never be reached.
- What probability do you assign to the possibility of a negative/extremely negative Singularity as a result of badly done AI?
- What probability do you assign to the possibility of a human level AGI to self-modify its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?
- Does friendly AI research, as being conducted by the SIAI, currently require less/no more/little more/much more/vastly more support?
- Do risks from AI outweigh other existential risks, e.g. advanced nanotechnology? Please answer with yes/no/don't know.
- Can you think of any milestone such that if it were ever reached you would expect human‐level machine intelligence to be developed within five years thereafter?
Note: Please do not downvote comments that are solely answering the above questions.
Sandberg, A. and Bostrom, N. (2011): Machine Intelligence Survey
As some readers may recall, we had a conference this January about intelligence, and in particular the future of machine intelligence. We did a quick survey among participants about their estimates of when and how human-level machine intelligence would be developed. Now we can announce the results: Sandberg, A. and Bostrom, N. (2011): Machine Intelligence Survey, Technical Report #2011-1, Future of Humanity Institute, Oxford University.
[...]
The median estimate of when there will be 50% chance of human level machine intelligence was 2050.
People estimated 10% chance of AI in 2028, and 90% chance in 2150.
[...]
All in all, it was a small study of a self-selected group, so it doesn't prove anything in particular. But it fits in with earlier studies like Ben Goertzel, Seth Baum, Ted Goertzel, How Long Till Human-Level AI? and Bruce Klein, When will AI surpass human-level intelligence? - people who tend to answer this kind of survey seem to have fairly similar mental models.
Link: Machine Intelligence Survey (PDF)
[Link] A review of proposals toward safe AI
Eliezer Yudkowsky set out to define more precisely what it means for an entity to have "what people really want" as a goal. Coherent Extrapolated Volition was his proposal. Though CEV was never meant as more than a working proposal, his write-up provides the best insights to date into the challenges of the Friendly AI problem, the pitfalls, and possible paths to a solution.
[...]
Ben Goertzel responded with Coherent Aggregated Volition, a simplified variant of CEV. In CAV, the entity’s goal is a balance between the desires of all humans, but it looks at the volition of humans directly, without extrapolation to a wiser future. This omission is not just to make the computation easier (it is still quite intractable), but rather to show some respect to humanity’s desires as they are, without extrapolation to a hypothetical improved morality.
[...]
Stuart Armstrong's "Chaining God" is a different approach, aimed at the problem of interacting with and trusting the good will of an ultraintelligence so far beyond us that we have nothing in common with it. A succession of AIs, of gradually increasing intelligence, each guarantees the trustworthiness of one which is slightly smarter than it. This resembles Yudkowsky's idea of a self-improving machine which verifies that its next stage has the same goals, but the successive levels of intelligence remain active simultaneously, so that they can continue to verify Friendliness.
Ray Kurzweil thinks that we will achieve safe ultraintelligence by gradually becoming that ultraintelligence. We will merge with the rising new intelligence, whether by interfacing with computers or by uploading our brains to a computer substrate.
Link: adarti.blogspot.com/2011/04/review-of-proposals-toward-safe-ai.html