HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family

25 ingres 08 October 2017 03:43AM

Let's talk about the LessWrong Survey.

First and foremost, if you took the survey and hit 'submit', your information was saved and you don't have to take it again.

Your data is safe; nobody took it or anything like that. If you took the survey and hit the submit button, this post isn't for you.

For the rest of you, I'll put it plainly: I screwed up.

This LessWrong Survey had the lowest turnout since Scott's original survey in 2009. I'll admit I'm not entirely sure why that is, but I have a hunch and most of the footprints lead back to me. The causes I can finger seem to be the diaspora, poor software, poor advertising, and excessive length.

The Diaspora

As it stands, this year's LessWrong survey got about 300 completed responses. This can be compared with the previous one in 2016, which got over 1600. I think one critical difference between this survey and the last was its name. Last year the survey focused on figuring out where the 'Diaspora' was and what venues had gotten users now that LessWrong was sort of the walking dead. It accomplished that well I think, and part of the reason why is that I titled it the LessWrong Diaspora Survey. That magic word got far-off venues to promote it even when I hadn't asked them to. The survey was posted by Scott Alexander, Ozy Frantz, and others to their respective blogs, and pretty much everyone 'involved in LessWrong' to one degree or another felt like it was meant for them to take. By contrast, this survey was focused on LessWrong's recovery and revitalization, so I dropped the word Diaspora from it, and this seems to have caused a ton of confusion. Many people I interviewed to ask why they hadn't taken the survey flat out told me that even though they were sitting in a chatroom dedicated to SSC, and they'd read the sequences, the survey wasn't about them because they had no affiliation with LessWrong. That certainly wasn't the intent I was trying to communicate.

Poor Software

When I first did the survey in 2016, taking over from Scott, I faced a fairly simple problem: how did I want to host the survey? I could do it the way Scott had done it, using Google Forms as a survey engine, but this made me wary for a few reasons. One was that I didn't really have a Google account set up that I'd feel comfortable hosting the survey from; another was that I had been unimpressed with what I'd seen from the Google Forms software up to that point in terms of keeping data sanitized on entry. More importantly, it kind of bothered me that I'd be basically handing your data over to Google. This dataset includes a large number of personal questions that I'm not sure most people want Google to have definitive answers on. Moreover, I figured: why the heck do I need Google for this anyway? This is essentially just a webform backed by a datastore, i.e. some of the simplest networking technology known to man in 2016. But I didn't want to write it myself, and I shouldn't have needed to: this is the sort of thing there should be a dozen good self-hosted solutions for.

There should be, but there's really only LimeSurvey. If I had to give this post an alternate title, it would be "LimeSurvey: An anti endorsement".

I could go on for pages about what's wrong with LimeSurvey, but it can probably be summed up as "the software is bloated and resists customization". It's slow, it uses slick graphics but fails to entirely deliver on functionality, and its inner workings are kind of baroque; it's the sort of thing I probably should have rejected on principle and written my own replacement for. However, at the time the survey was incredibly overdue, so I felt it would be better to just get out something expedient since everyone was already waiting for it anyway. And the thing is, in 2016 it went well. We got over 3000 responses, counting both partial and complete. So walking away from that victory and going into 2017, I didn't really think too hard about the choice to continue using it.

A couple of things changed between 2016 and our running the survey in 2017:

Hosting - My hosting provider, a single individual who sets up strong networking architectures in his basement, had gotten a lot busier since 2016 and wasn't immediately available to handle any issues. The 2016 survey had a number of birthing pains, and his dedicated attention was part of the reason why we were able to make it go at all. Since he wasn't here this time, I was more on my own in fixing things.

Myself - I had also gotten a lot busier since 2016. I didn't have nearly as much slack as I did the last time around. So I was sort of relying on having done the whole process in 2016 to insulate me from problems once I opened the thing up.

Both of these would prove disastrous. When I started the survey this time it was slow, it had a variety of bugs and issues I had only limited time to fix, and the issues just kept coming, even more than in 2016, as though it had decided that now, when I truly didn't have the energy to spare, was when things should break down. These mostly weren't show-stopping bugs; they were minor annoyances. But every minor annoyance reduced turnout, and by leaving them unfixed I was slowly bleeding through the pool of potential respondents.

The straw that finally broke the camel's back for me was when I woke up to find that this message was being shown to most users coming to take the survey:

[Image: message shown to survey respondents telling them their responses 'cannot be saved'.]

"Your responses cannot be saved"? This error meant for when someone had messed up cookies was telling users a vicious lie: That the survey wasn't working right now and there was no point in them taking it.

Looking at this in horror and outrage, after encountering problem after problem mixed with low turnout, I finally pulled the plug.

Poor Advertising

As one email to me mentioned, the 2017 survey didn't even get promoted to the main section of the LessWrong website. This time there were no links from Scott Alexander, nor from the myriad small stakeholders that made it work last time. I'm not blaming them or anything, but as a consequence many of the people I interviewed to ask why they hadn't taken the survey had not even heard it existed. Certainly this had to have been significantly responsible for the reduced turnout compared to last time.

Excessive Length

Of all the things people complained about when I interviewed them on why they hadn't taken the survey, this was easily the most common response. "It's too long."

This year I made the mistake of moving back to a single page format. The problem with a single page format is that it makes it clear to respondents just how long the survey really is. It's simply too long to expect most people to complete it. And before I start getting suggestions for it in the comments, the problem isn't actually that it needs to be shortened, per se. The problem is that to investigate every question we might want to know about the community, it really needs to be broken into more than one survey. Especially when there are stakeholders involved who would like to see a particular section added to satisfy some questions they have.

Right now I'm exploring the possibility of setting up a site similar to yourmorals so that the survey can be effectively broken up and hosted in a way where users can sign in and take different portions of it at their leisure. Further gamification could be added to help make it a little more fun for people. Which leads into...

The Survey Is Too Much Work For One Person

What we need isn't a guardian of the survey; it's really more like a survey committee. I would be perfectly willing (and plan) to chair such a committee, but I frankly need help. Writing the survey, hosting it without flaws, theming it so that it looks nice, writing any new code or web things so that we can host it without bugs, comprehensively analyzing the thing: it's a damn lot of work to do it right, and so far I've kind of been relying on the generosity of my friends for it. If there are other people who really care about the survey and my ability to do it, consider this my recruiting call for you to come and help. You can mail me here on LessWrong, post in the comments, or email me at jd@fortforecast.com. If that's something you would be interested in, I could really use the assistance.

What Now?

Honestly? I'm not sure. The way I see it my options look something like:

Call It A Day And Analyze What I've Got - N=300 is nothing to sneeze at, theoretically I could just call this whole thing a wash and move on to analysis.

Try And Perform An Emergency Migration - For example, I could try to set this up again on Google Forms. Having investigated that option, there's no 'import' button on Google Forms, so the survey would need to be re-entered manually for all hundred-and-a-half questions.

Fix Some Of The Errors In LimeSurvey And Try Again On Different Hosting - I considered doing this too, but it seemed to me like the software was so clunky that there was simply no reasonable expectation this wouldn't happen again. LimeSurvey also has poor separation between being able to edit the survey and view the survey results, so I couldn't delegate the work to someone else because that could theoretically violate users' privacy.

These seem to me like the only things that are possible for this survey cycle; at any rate, an extension of time would be required for another round. In the long run I would like to organize a project to write new software from scratch that fixes these issues and gives us a site multiple stakeholders can submit surveys to, including ones that might be too niche for the current LessWrong Survey format.

I welcome other suggestions in the comments; consider this my SOS.

 

2017 LessWrong Survey

21 ingres 13 September 2017 06:26AM

The 2017 LessWrong Survey is here! This year we're interested in community response to the LessWrong 2.0 initiative. I've also gone through and fixed as many of the bugs reported on the last survey as I could find, and reintroduced items that were missing from the 2016 edition. Furthermore, new items have been introduced in multiple sections and some cut in others to make room. You can now export your survey results after finishing by choosing the 'print my results' option on the page displayed after submission. The survey will run from today until the 15th of October.

You can take the survey below; thanks for your time. (It's back in single page format, so please allow a few seconds for it to load):

Click here to take the survey

Requesting Questions For A 2017 LessWrong Survey

6 ingres 09 April 2017 12:48AM

It's been twelve months since the last LessWrong Survey, which means we're due for a new one. But before I can put out a new survey in earnest, I feel obligated to solicit questions from community members and check in on any ideas that might be floating around for what we should ask.

The basic format of the thread isn't too complex: just pitch questions. For the best chance of inclusion, however, you should include:

  • A short cost/benefit analysis of including the question. Keep in mind that some questions are too invasive or embarrassing to be reasonably included. Other questions might leak too many bits. There is limited survey space and some items might be too marginal to include at the cost of others.
  • An example of a useful analysis that could be done with the question(s), especially an interesting analysis in concert with other questions. e.g. It's best to start with a larger question like "how does parental religious denomination affect the cohort's current religion?" and then translate that into concrete questions about religion.
  • Some idea of how the question can be done without using write-ins. Unfortunately write-in questions add massive amounts of man-hours to the total analysis time for a survey and make it harder to get out a final product when all is said and done.

The last survey included 148 questions; some sections will not be repeated in the 2017 survey, which gives us an estimate of our question budget. I would prefer not to go over 150 questions and, if at all possible, to come in at well under that. Removed sections are:

  • The Basilisk section on the last survey provided adequate information on the phenomenon it was surveying, and I do not currently plan to include it again on the 2017 survey. This frees up six questions.
  • The LessWrong Feedback portion of the last survey also provided adequate information, and I would prefer to replace it on the 2017 survey with a section measuring the site's recovery, if any. This frees up 19 questions.

I also plan to do significant reform to multiple portions of the survey. I'm particularly interested in making changes to:

  • The politics section. In particular I would like to update the questions about feelings on political issues with new entries and overhaul some of the options on various questions.
  • I handled the calibration section poorly last year, and would like to replace it this year with an easily scored set of questions. To be more specific (a sketch of one possible scoring rule follows this list):
    • Good calibration questions should be Fermi estimable with no more than a standard 5th grade education. They should not rely on particular hidden knowledge or overly specific information. e.g. "Who wrote the Foundation novels?" is a terrible calibration question and "What is the height of the Eiffel Tower in meters, within a multiple of 1.5?" is decent.
    • Good calibration questions should have a measurable distance component, so that even if an answer is wrong (as the vast majority of answers will be) it can still be scored.
    • A measure of distance should get proportionately smaller the closer an answer is to being correct and proportionately larger the further it is from being correct.
    • It should be easily (or at least sanely) calculable by programmatic methods.
  • The probabilities section is probably due for some revision; I know in previous years I haven't even answered it because I found the wording of some questions too confusing to even consider.
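
For concreteness, here is a minimal sketch (in Python) of the kind of distance-based scoring rule I have in mind, using the Eiffel Tower example above. Treat it as an illustration rather than a spec: the log-ratio distance and the factor-of-1.5 tolerance are assumptions on my part.

import math

def ratio_distance(guess, truth):
    # Distance in log-space: 0 for an exact answer, and it grows
    # symmetrically whether the guess is too high or too low.
    return abs(math.log(guess / truth))

def within_factor(guess, truth, factor=1.5):
    # Treat an answer as 'correct' if it lands within a multiple of
    # `factor` of the true value.
    return ratio_distance(guess, truth) <= math.log(factor)

# The Eiffel Tower is roughly 300 meters tall:
print(ratio_distance(450, 300))   # ~0.405, right at the factor-of-1.5 boundary
print(within_factor(450, 300))    # True
print(within_factor(1000, 300))   # False, but the distance still grades how wrong it is

A rule like this gives every wrong answer a graded score, shrinks that score as the guess approaches the true value, and is trivial to compute programmatically.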

So for maximum chances of inclusion, it would be best to keep these proposed reforms in mind with your suggestions.

(Note: If you have suggestions on questions to eliminate, I'd be glad to hear those too.)

2016 LessWrong Diaspora Survey Analysis: Part Four (Politics, Calibration & Probability, Futurology, Charity & Effective Altruism)

11 ingres 10 September 2016 03:51AM

Politics

The LessWrong survey has a very involved section dedicated to politics. In previous analysis the benefits of this weren't fully realized. In the 2016 analysis we can look at not just the political affiliation of a respondent, but what beliefs are associated with a certain affiliation. The charts below summarize most of the results.

Political Opinions By Political Affiliation

[Charts omitted: political opinions broken down by political affiliation.]

Miscellaneous Politics

There were also some other questions in this section which aren't covered by the above charts.

PoliticalInterest

On a scale from 1 (not interested at all) to 5 (extremely interested), how would you describe your level of interest in politics?

1: 67 (2.182%)

2: 257 (8.371%)

3: 461 (15.016%)

4: 595 (19.381%)

5: 312 (10.163%)

Voting

Did you vote in your country's last major national election? (LW Turnout Versus General Election Turnout By Country)
Group Turnout
LessWrong 68.9%
Australia 91%
Brazil 78.90%
Britain 66.4%
Canada 68.3%
Finland 70.1%
France 79.48%
Germany 71.5%
India 66.3%
Israel 72%
New Zealand 77.90%
Russia 65.25%
United States 54.9%
Numbers taken from Wikipedia, accurate as of the last general election in each country listed at time of writing.

AmericanParties

If you are an American, what party are you registered with?

Democratic Party: 358 (24.5%)

Republican Party: 72 (4.9%)

Libertarian Party: 26 (1.8%)

Other third party: 16 (1.1%)

Not registered for a party: 451 (30.8%)

(option for non-Americans who want an option): 541 (37.0%)

Calibration And Probability Questions

Calibration Questions

I just couldn't analyze these, sorry guys. I put many hours into trying to get them into a decent format I could even read and that sucked up an incredible amount of time. It's why this part of the survey took so long to get out. Thankfully another LessWrong user, Houshalter, has kindly done their own analysis.

All my calibration questions were meant to satisfy a few essential properties:

  1. They should be 'self contained', i.e. something you can reasonably answer, or at least try to answer, with a 5th grade science education and normal life experience.
  2. They should, at least to a certain extent, be Fermi Estimable.
  3. They should progressively scale in difficulty so you can see whether somebody understands basic probability or not. (eg. In an 'or' question do they put a probability of less than 50% of being right?)

At least one person requested a workbook, so I might write more in the future. I'll obviously write more for the survey.

Probability Questions

Question Mean Median Mode Stdev
Please give the obvious answer to this question, so I can automatically throw away all surveys that don't follow the rules: What is the probability of a fair coin coming up heads? 49.821 50.0 50.0 3.033
What is the probability that the Many Worlds interpretation of quantum mechanics is more or less correct? 44.599 50.0 50.0 29.193
What is the probability that non-human, non-Earthly intelligent life exists in the observable universe? 75.727 90.0 99.0 31.893
...in the Milky Way galaxy? 45.966 50.0 10.0 38.395
What is the probability that supernatural events (including God, ghosts, magic, etc) have occurred since the beginning of the universe? 13.575 1.0 1.0 27.576
What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe? 15.474 1.0 1.0 27.891
What is the probability that any of humankind's revealed religions is more or less correct? 10.624 0.5 1.0 26.257
What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then? 21.225 10.0 5.0 26.782
What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time? 25.263 10.0 1.0 30.510
What is the probability that our universe is a simulation? 25.256 10.0 50.0 28.404
What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions? 83.307 90.0 90.0 23.167
What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity? 76.310 80.0 80.0 22.933

 

The probability questions are probably the area of the survey I put the least effort into. My plan for next year is to overhaul these sections entirely, try including some Tetlock-esque forecasting questions, add a link to some advice on how to make good predictions, etc.

Futurology

This section got a bit of a facelift this year, with new questions on cryonics, genetic engineering, and technological unemployment in addition to the previous year's.

Cryonics

Cryonics

Are you signed up for cryonics?

Yes - signed up or just finishing up paperwork: 48 (2.9%)

No - would like to sign up but unavailable in my area: 104 (6.3%)

No - would like to sign up but haven't gotten around to it: 180 (10.9%)

No - would like to sign up but can't afford it: 229 (13.8%)

No - still considering it: 557 (33.7%)

No - and do not want to sign up for cryonics: 468 (28.3%)

Never thought about it / don't understand: 68 (4.1%)

CryonicsNow

Do you think cryonics, as currently practiced by Alcor/Cryonics Institute, will work?

Yes: 106 (6.6%)

Maybe: 1041 (64.4%)

No: 470 (29.1%)

Interestingly enough, of those who think it will work with enough confidence to say 'yes', only 14 are actually signed up for cryonics.

sqlite> select count(*) from data where CryonicsNow="Yes" and Cryonics="Yes - signed up or just finishing up paperwork";

14

sqlite> select count(*) from data where CryonicsNow="Yes" and (Cryonics="Yes - signed up or just finishing up paperwork" OR Cryonics="No - would like to sign up but unavailable in my area" OR Cryonics="No - would like to sign up but haven't gotten around to it" OR Cryonics="No - would like to sign up but can't afford it");

34

CryonicsPossibility

Do you think cryonics works in principle?

Yes: 802 (49.3%)

Maybe: 701 (43.1%)

No: 125 (7.7%)

LessWrongers seem to be very bullish on the underlying physics of cryonics even if they're not as enthusiastic about current methods in use.

The Brain Preservation Foundation also did an analysis of cryonics responses to the LessWrong Survey.

Singularity

SingularityYear

By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a singularity is so unlikely you don't even want to condition on it, leave this question blank.

Mean: 8.110300081581755e+16

Median: 2080.0

Mode: 2100.0

Stdev: 2.847858859055733e+18

I didn't bother to filter out the silly answers for this.

Obviously it's a bit hard to see without filtering out the uber-large answers, but the median doesn't seem to have changed much from the 2014 survey.
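
If anyone wants to redo this with a filter, below is a rough sketch of what I mean. The CSV filename is a placeholder, the column name SingularityYear matches the label used in this post, and the cutoff years are arbitrary assumptions.

import pandas as pd

df = pd.read_csv("2016_lw_survey_public.csv")   # placeholder filename
years = pd.to_numeric(df["SingularityYear"], errors="coerce").dropna()

# Treat anything outside a (debatable) plausible window as a joke answer.
sane = years[(years >= 2016) & (years <= 3000)]
print(sane.mean(), sane.median(), sane.std())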

Genetic Engineering

ModifyOffspring

Would you ever consider having your child genetically modified for any reason?

Yes: 1552 (95.921%)

No: 66 (4.079%)

Well that's fairly overwhelming.

GeneticTreament

Would you be willing to have your child genetically modified to prevent them from getting an inheritable disease?

Yes: 1387 (85.5%)

Depends on the disease: 207 (12.8%)

No: 28 (1.7%)

I find it amusing how the strict "No" group shrinks considerably after this question.

GeneticImprovement

Would you be willing to have your child genetically modified for improvement purposes? (eg. To heighten their intelligence or reduce their risk of schizophrenia.)

Yes : 0 (0.0%)

Maybe a little: 176 (10.9%)

Depends on the strength of the improvements: 262 (16.2%)

No: 84 (5.2%)

Yes, I know 'yes' is bugged; I don't know what causes this bug and despite my best efforts I couldn't track it down. There is also an issue here where 'reduce their risk of schizophrenia' is offered as an example, which might confuse people, but the actual science cuts closer to that than it does to a clean separation between disease risk and 'improvement'.

 

This question is too important to just not have an answer to, so I'll do it manually. Unfortunately I can't easily remove the 'excluded' entries so that we're dealing with the exact same distribution, but only 13 or so responses are filtered out anyway.

sqlite> select count(*) from data where GeneticImprovement="Yes";

1100

>>> 1100 + 176 + 262 + 84
1622
>>> 1100 / 1622
0.6781750924784217

67.8% are willing to genetically engineer their children for improvements.

GeneticCosmetic

Would you be willing to have your child genetically modified for cosmetic reasons? (eg. To make them taller or have a certain eye color.)

Yes: 500 (31.0%)

Maybe a little: 381 (23.6%)

Depends on the strength of the improvements: 277 (17.2%)

No: 455 (28.2%)

These numbers go about how you would expect, with people being progressively less interested the more 'shallow' a genetic change is seen as.


GeneticOpinionD

What's your overall opinion of other people genetically modifying their children for disease prevention purposes?

Positive: 1177 (71.7%)

Mostly Positive: 311 (19.0%)

No strong opinion: 112 (6.8%)

Mostly Negative: 29 (1.8%)

Negative: 12 (0.7%)

GeneticOpinionI

What's your overall opinion of other people genetically modifying their children for improvement purposes?

Positive: 737 (44.9%)

Mostly Positive: 482 (29.4%)

No strong opinion: 273 (16.6%)

Mostly Negative: 111 (6.8%)

Negative: 38 (2.3%)

GeneticOpinionC

What's your overall opinion of other people genetically modifying their children for cosmetic reasons?

Positive: 291 (17.7%)

Mostly Positive: 290 (17.7%)

No strong opinion: 576 (35.1%)

Mostly Negative: 328 (20.0%)

Negative: 157 (9.6%)

All three of these seem largely consistent with people's personal preferences about modification. Were I inclined, I could do a deeper analysis that actually takes survey respondents row by row and looks at the correlation between preference for one's own children and preference for others.
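
For anyone who wants to try that, here is a minimal sketch of the row-by-row version. The column names match the question labels used in this post; the CSV filename is a placeholder.

import pandas as pd

df = pd.read_csv("2016_lw_survey_public.csv")   # placeholder filename

# For each answer about modifying your own children for cosmetic reasons,
# show the distribution of opinions about other people doing the same.
table = pd.crosstab(df["GeneticCosmetic"], df["GeneticOpinionC"], normalize="index")
print(table.round(3))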

Technological Unemployment

LudditeFallacy

Do you think the Luddite's Fallacy is an actual fallacy?

Yes: 443 (30.936%)

No: 989 (69.064%)

We can use this as an overall measure of worry about technological unemployment, which would seem to be high among the LW demographic.

UnemploymentYear

By what year do you think the majority of people in your country will have trouble finding employment for automation related reasons? If you think this is something that will never happen leave this question blank.

Mean: 2102.9713740458014

Median: 2050.0

Mode: 2050.0

Stdev: 1180.2342850727339

Question is flawed because you can't distinguish answers of "never happen" from people who just didn't see it.

Interesting question that would be fun to take a look at in comparison to the estimates for the singularity.

EndOfWork

Do you think the "end of work" would be a good thing?

Yes: 1238 (81.287%)

No: 285 (18.713%)

Fairly overwhelming consensus, but with a significant minority of people who have a dissenting opinion.

EndOfWorkConcerns

If machines end all or almost all employment, what are your biggest worries? Pick two.

Question Count Percent
People will just idle about in destructive ways 513 16.71%
People need work to be fulfilled and if we eliminate work we'll all feel deep existential angst 543 17.687%
The rich are going to take all the resources for themselves and leave the rest of us to starve or live in poverty 1066 34.723%
The machines won't need us, and we'll starve to death or be otherwise liquidated 416 13.55%
Question is flawed because it demanded the user 'pick two' instead of up to two.

The plurality of worries are about elites who refuse to share their wealth.

Existential Risk

XRiskType

Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100? (The signed percentage next to each option is the change from the previous survey.)

Nuclear war: +4.800% 326 (20.6%)

Asteroid strike: -0.200% 64 (4.1%)

Unfriendly AI: +1.000% 271 (17.2%)

Nanotech / grey goo: -2.000% 18 (1.1%)

Pandemic (natural): +0.100% 120 (7.6%)

Pandemic (bioengineered): +1.900% 355 (22.5%)

Environmental collapse (including global warming): +1.500% 252 (16.0%)

Economic / political collapse: -1.400% 136 (8.6%)

Other: 35 (2.217%)

Significantly more people worried about Nuclear War than last year. Effect of new respondents, or geopolitical situation? Who knows.

Charity And Effective Altruism

Charitable Giving

Income

What is your approximate annual income in US dollars (non-Americans: convert at www.xe.com)? Obviously you don't need to answer this question if you don't want to. Please don't include commas or dollar signs.

Sum: 66054140.47384

Mean: 64569.052271593355

Median: 40000.0

Mode: 30000.0

Stdev: 107297.53606321265

IncomeCharityPortion

How much money, in number of dollars, have you donated to charity over the past year? (non-Americans: convert to dollars at http://www.xe.com/ ). Please don't include commas or dollar signs in your answer. For example, 4000

Sum: 2389900.6530000004

Mean: 2914.5129914634144

Median: 353.0

Mode: 100.0

Stdev: 9471.962766896671

XriskCharity

How much money have you donated to charities aiming to reduce existential risk (other than MIRI/CFAR) in the past year?

Sum: 169300.89

Mean: 1991.7751764705883

Median: 200.0

Mode: 100.0

Stdev: 9219.941506342007

CharityDonations

How much have you donated in US dollars to the following charities in the past year? (Non-americans: convert to dollars at http://www.xe.com/) Please don't include commas or dollar signs in your answer. Options starting with "any" aren't the name of a charity but a category of charity.

Question Sum Mean Median Mode Stdev
Against Malaria Foundation 483935.027 1905.256 300.0 None 7216.020
Schistosomiasis Control Initiative 47908.0 840.491 200.0 1000.0 1618.785
Deworm the World Initiative 28820.0 565.098 150.0 500.0 1432.712
GiveDirectly 154410.177 1429.723 450.0 50.0 3472.082
Any kind of animal rights charity 83130.47 1093.821 154.235 500.0 2313.493
Any kind of bug rights charity 1083.0 270.75 157.5 None 353.396
Machine Intelligence Research Institute 141792.5 1417.925 100.0 100.0 5370.485
Any charity combating nuclear existential risk 491.0 81.833 75.0 100.0 68.060
Any charity combating global warming 13012.0 245.509 100.0 10.0 365.542
Center For Applied Rationality 127101.0 3177.525 150.0 100.0 12969.096
Strategies for Engineered Negligible Senescence Research Foundation 9429.0 554.647 100.0 20.0 1156.431
Wikipedia 12765.5 53.189 20.0 10.0 126.444
Internet Archive 2975.04 80.406 30.0 50.0 173.791
Any campaign for political office 38443.99 366.133 50.0 50.0 1374.305
Other 564890.46 1661.442 200.0 100.0 4670.805
"Bug Rights" charity was supposed to be a troll fakeout but apparently...

This table is interesting given the recent debates about how much money certain causes are 'taking up' in Effective Altruism.

Effective Altruism

Vegetarian

Do you follow any dietary restrictions related to animal products?

Yes, I am vegan: 54 (3.4%)

Yes, I am vegetarian: 158 (10.0%)

Yes, I restrict meat some other way (pescetarian, flexitarian, try to only eat ethically sourced meat): 375 (23.7%)

No: 996 (62.9%)

EAKnowledge

Do you know what Effective Altruism is?

Yes: 1562 (89.3%)

No but I've heard of it: 114 (6.5%)

No: 74 (4.2%)

EAIdentity

Do you self-identify as an Effective Altruist?

Yes: 665 (39.233%)

No: 1030 (60.767%)

The distribution given by the 2014 survey results does not sum to one, so it's difficult to determine whether Effective Altruism's membership actually went up or not, but if we take the numbers at face value it experienced an 11.13% increase in membership.

EACommunity

Do you participate in the Effective Altruism community?

Yes: 314 (18.427%)

No: 1390 (81.573%)

Same issue as the last question: taking the numbers at face value, community participation went up by 5.727%.

EADonations

Has Effective Altruism caused you to make donations you otherwise wouldn't?

Yes: 666 (39.269%)

No: 1030 (60.731%)

Wowza!

Effective Altruist Anxiety

EAAnxiety

Have you ever had any kind of moral anxiety over Effective Altruism?

Yes: 501 (29.6%)

Yes but only because I worry about everything: 184 (10.9%)

No: 1008 (59.5%)


There's an ongoing debate in Effective Altruism about what kind of rhetorical strategy is best for getting people on board and whether Effective Altruism is causing people significant moral anxiety.

It certainly appears to be. But is moral anxiety effective? Let's look:

Sample Size: 244
Average amount of money donated by people anxious about EA who aren't EAs: 257.5409836065574

Sample Size: 679
Average amount of money donated by people who aren't anxious about EA who aren't EAs: 479.7501384388807

Sample Size: 249
Average amount of money donated by EAs anxious about EA: 1841.5292369477913

Sample Size: 314
Average amount of money donated by EAs not anxious about EA: 1837.8248407643312

It seems fairly conclusive that anxiety is not a good way to get people to donate more than they already are, but is it a good way to get people to become Effective Altruists?

Sample Size: 1685
P(Effective Altruist): 0.3940652818991098
P(EA Anxiety): 0.29554896142433235
P(Effective Altruist | EA Anxiety): 0.5

Maybe. There is of course an argument to be made that sufficient good done by causing people anxiety outweighs feeding into people's scrupulosity, but it can be discussed after I get through explaining it on the phone to wealthy PR-conscious donors and telling the local all-kill shelter where I want my shipment of dead kittens.
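
For reference, here is a rough sketch of how numbers like the ones above can be pulled out of the dataset. The column names match the question labels used in this post, the filename is a placeholder, and this simplified version only counts the plain "Yes" anxiety answers rather than reproducing my exact groupings.

import pandas as pd

df = pd.read_csv("2016_lw_survey_public.csv")   # placeholder filename
donated = pd.to_numeric(df["IncomeCharityPortion"], errors="coerce")

is_ea = df["EAIdentity"] == "Yes"
anxious = df["EAAnxiety"] == "Yes"   # ignores the "worry about everything" option

# Average donation among non-EAs who are anxious about EA:
print(donated[~is_ea & anxious].mean())

# P(Effective Altruist | EA Anxiety):
print(is_ea[anxious].mean())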

EAOpinion

What's your overall opinion of Effective Altruism?

Positive: 809 (47.6%)

Mostly Positive: 535 (31.5%)

No strong opinion: 258 (15.2%)

Mostly Negative: 75 (4.4%)

Negative: 24 (1.4%)

EA appears to be doing a pretty good job of getting people to like them.

Interesting Tables

Charity Donations By Political Affilation
Affiliation Income Charity Contributions % Income Donated To Charity Total Survey Charity % Sample Size
Anarchist 1677900.0 72386.0 4.314% 3.004% 50
Communist 298700.0 19190.0 6.425% 0.796% 13
Conservative 1963000.04 62945.04 3.207% 2.612% 38
Futarchist 1497494.1099999999 166254.0 11.102% 6.899% 31
Left-Libertarian 9681635.613839999 416084.0 4.298% 17.266% 245
Libertarian 11698523.0 214101.0 1.83% 8.885% 190
Moderate 3225475.0 90518.0 2.806% 3.756% 67
Neoreactionary 1383976.0 30890.0 2.232% 1.282% 28
Objectivist 399000.0 1310.0 0.328% 0.054% 10
Other 3150618.0 85272.0 2.707% 3.539% 132
Pragmatist 5087007.609999999 266836.0 5.245% 11.073% 131
Progressive 8455500.440000001 368742.78 4.361% 15.302% 217
Social Democrat 8000266.54 218052.5 2.726% 9.049% 237
Socialist 2621693.66 78484.0 2.994% 3.257% 126


Number Of Effective Altruists In The Diaspora Communities
Community Count % In Community Sample Size
LessWrong 136 38.418% 354
LessWrong Meetups 109 50.463% 216
LessWrong Facebook Group 83 48.256% 172
LessWrong Slack 22 39.286% 56
SlateStarCodex 343 40.98% 837
Rationalist Tumblr 175 49.716% 352
Rationalist Facebook 89 58.94% 151
Rationalist Twitter 24 40.0% 60
Effective Altruism Hub 86 86.869% 99
Good Judgement(TM) Open 23 74.194% 31
PredictionBook 31 51.667% 60
Hacker News 91 35.968% 253
#lesswrong on freenode 19 24.675% 77
#slatestarcodex on freenode 9 24.324% 37
#chapelperilous on freenode 2 18.182% 11
/r/rational 117 42.545% 275
/r/HPMOR 110 47.414% 232
/r/SlateStarCodex 93 37.959% 245
One or more private 'rationalist' groups 91 47.15% 193


Effective Altruist Donations By Political Affiliation
Affiliation EA Income EA Charity Sample Size
Anarchist 761000.0 57500.0 18
Futarchist 559850.0 114830.0 15
Left-Libertarian 5332856.0 361975.0 112
Libertarian 2725390.0 114732.0 53
Moderate 583247.0 56495.0 22
Other 1428978.0 69950.0 49
Pragmatist 1442211.0 43780.0 43
Progressive 4004097.0 304337.78 107
Social Democrat 3423487.45 149199.0 93
Socialist 678360.0 34751.0 41

2016 LessWrong Diaspora Survey Analysis: Part Three (Mental Health, Basilisk, Blogs and Media)

15 ingres 25 June 2016 03:40AM


Mental Health

We decided to move the Mental Health section up closer in the survey this year so that the data could inform accessibility decisions.

LessWrong Mental Health As Compared To Base Rates In The General Population
Condition Base Rate LessWrong Rate LessWrong Self dx Rate Combined LW Rate Base/LW Rate Spread Relative Risk
Depression 17% 25.37% 27.04% 52.41% +8.37 1.492
Obsessive Compulsive Disorder 2.3% 2.7% 5.6% 8.3% +0.4 1.173
Autism Spectrum Disorder 1.47% 8.2% 12.9% 21.1% +6.73 5.578
Attention Deficit Disorder 5% 13.6% 10.4% 24% +8.6 2.719
Bipolar Disorder 3% 2.2% 2.8% 5% -0.8 0.733
Anxiety Disorder(s) 29% 13.7% 17.4% 31.1% -15.3 0.472
Borderline Personality Disorder 5.9% 0.6% 1.2% 1.8% -5.3 0.101
Schizophrenia 1.1% 0.8% 0.4% 1.2% -0.3 0.727
Substance Use Disorder 10.6% 1.3% 3.6% 4.9% -9.3 0.122

Base rates are taken from Wikipedia; US rates were favored over global rates where immediately available.
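
For clarity, the last two columns are simple arithmetic on the first two. Using the depression row as a worked example:

base_rate = 17.0     # general population rate, in percent
lw_rate = 25.37      # clinically diagnosed rate among respondents, in percent

spread = lw_rate - base_rate          # +8.37 percentage points
relative_risk = lw_rate / base_rate   # ~1.492

print(spread, relative_risk)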

Accessibility Suggestions

So of the conditions we asked about, LessWrongers are at significant extra risk for three of them: Autism, ADHD, Depression.

LessWrong probably doesn't need to concern itself with being more accessible to those with autism as it likely already is. Depression is a complicated disorder with no clear interventions that can be easily implemented as site or community policy. It might be helpful to encourage looking more at positive trends in addition to negative ones, but the community already seems to do a fairly good job of this. (We could definitely use some more of it though.)

Attention Deficit Disorder - Public Service Announcement

That leaves ADHD, which we might be able to do something about, starting with this:

A lot of LessWrong stuff ends up falling into the same genre as productivity advice or 'self help'. If you have trouble with getting yourself to work, find yourself reading these things and completely unable to implement them, it's entirely possible that you have a mental health condition which impacts your executive function.

The best overview I've been able to find on ADD is this talk from Russell Barkley.

30 Essential Ideas For Parents

Ironically enough, this is a long talk, over four hours in total. Barkley is an entertaining speaker and the talk is absolutely fascinating. If you're even mildly interested in the subject I wholeheartedly recommend it. Many people who have ADHD just assume that they're lazy, or not trying hard enough, or just haven't found the 'magic bullet' yet. It never even occurs to them that they might have it, because they assume that adult ADHD looks like childhood ADHD, or that ADHD is a thing psychiatrists made up so they can give children powerful stimulants.

ADD is real, and if you're in the demographic that takes this survey there's a decent enough chance you have it.

Attention Deficit Disorder - Accessibility

So with that in mind, is there anything else we can do?

Yes, write better.

Scott Alexander has written a blog post with writing advice for non-fiction, and the interesting thing about it is just how much of the advice is what I would tell you to do if your audience has ADD.

  • Reward the reader quickly and often. If your prose isn't rewarding to read it won't be read.

  • Make sure the overall article has good sectioning and indexing, people might be only looking for a particular thing and they won't want to wade through everything else to get it. Sectioning also gives the impression of progress and reduces eye strain.

  • Use good data visualization to compress information, take away mental effort where possible. Take for example the condition table above. It saves space and provides additional context. Instead of a long vertical wall of text with sections for each condition, it removes:

    • The extraneous information of how many people said they did not have a condition.

    • The space that would be used by creating a section for each condition. In fact the specific improvement of the table is that it takes extra advantage of space in the horizontal plane as well as the vertical plane.

    And instead of just presenting the raw data, it also adds:

    • The normal rate of incidence for each condition, so that the reader understands the extent to which rates are abnormal or unexpected.

    • Easy comparison between the clinically diagnosed, self diagnosed, and combined rates of the condition in the LW demographic. This preserves the value of the original raw data presentation while also easing the mental arithmetic of how many people claim to have a condition.

    • Percentage spread between the clinically diagnosed and the base rate, which saves the effort of figuring out the difference between the two values.

    • Relative risk between the clinically diagnosed and the base rate, which saves the effort of figuring out how much more or less likely a LessWronger is to have a given condition.

    Add all that together and you've created a compelling presentation that significantly improves on the 'naive' raw data presentation.

  • Use visuals in general, they help draw and maintain interest.

None of these are solely for the benefit of people with ADD. ADD is an exaggerated profile of normal human behavior. Following this kind of advice makes your article more accessible to everybody, which should be more than enough incentive if you intend to have an audience.1

Roko's Basilisk

This year we finally added a Basilisk question! In fact, it kind of turned into a whole Basilisk section. A fairly common question about this year's survey is why the Basilisk section is so large. The basic reason is that asking only one or two questions about it would leave the results open to rampant speculation in one direction or another. By making the section comprehensive and covering every base, we've gotten about as complete a picture of the Basilisk phenomenon as we'd want.

Basilisk Knowledge
Do you know what Roko's Basilisk thought experiment is?

Yes: 1521 73.2%
No but I've heard of it: 158 7.6%
No: 398 19.2%

Basilisk Etiology
Where did you read Roko's argument for the Basilisk?

Roko's post on LessWrong: 323 20.2%
Reddit: 171 10.7%
XKCD: 61 3.8%
LessWrong Wiki: 234 14.6%
A news article: 71 4.4%
Word of mouth: 222 13.9%
RationalWiki: 314 19.6%
Other: 194 12.1%

Basilisk Correctness
Do you think Roko's argument for the Basilisk is correct?

Yes: 75 5.1%
Yes but I don't think it's logical conclusions apply for other reasons: 339 23.1%
No: 1055 71.8%

Basilisks And Lizardmen

One of the biggest mistakes I made with this year's survey was not including "Do you believe Barack Obama is a hippopotamus?" as a control question in this section.2 Five percent is just outside the infamous lizardman constant. This was the biggest survey surprise for me; I thought there was no way that 'yes' could go above a couple of percentage points. As far as I can tell this result is not caused by brigading, but I've by no means investigated the matter so thoroughly that I would rule it out.

Higher?

Of course, we also shouldn't forget to investigate the hypothesis that the number might be higher than 5%. After all, somebody who thinks the Basilisk is correct could skip the questions entirely so they don't face potential stigma. So how many people skipped the questions but filled out the rest of the survey?

Eight people refused to answer whether they'd heard of Roko's Basilisk but went on to answer the depression question immediately after the Basilisk section. This gives us a decent proxy for how many people skipped the section and took the rest of the survey. So if we're pessimistic the number is a little higher, but it pays to keep in mind that there are other reasons to want to skip this section. (It is also possible that people took the survey up until they got to the Basilisk section and then quit so they didn't have to answer it, but this seems unlikely.)

Of course this assumes people are being strictly truthful with their survey answers. It's also plausible that people who think the Basilisk is correct said they'd never heard of it and then went on with the rest of the survey. So the number could in theory be quite large. My hunch is that it's not. I personally know quite a few LessWrongers and I'm fairly sure none of them would tell me that the Basilisk is 'correct'. (In fact I'm fairly sure they'd all be offended at me even asking the question.) Since 5% is one in twenty I'd think I'd know at least one or two people who thought the Basilisk was correct by now.

Lower?

One partial explanation for the surprisingly high rate here is that ten percent of the people who said yes by their own admission didn't know what they were saying yes to. Eight people said they've heard of the Basilisk but don't know what it is, and that it's correct. The lizardman constant also plausibly explains a significant portion of the yes responses, but that explanation relies on you already having a prior belief that the rate should be low.


Basilisk-Like Danger
Do you think Basilisk-like thought experiments are dangerous?

Yes, I think they're dangerous for decision theory reasons: 63 4.2%
Yes I think they're dangerous for social reasons (eg. A cult might use them): 194 12.8%
Yes I think they're dangerous for decision theory and social reasons: 136 9%
Yes I think they're socially dangerous because they make everybody involved look foolish: 253 16.7%
Yes I think they're dangerous for other reasons: 54 3.6%
No: 809 53.4%

Most people don't think Basilisk-Like thought experiments are dangerous at all. Of those that think they are, most of them think they're socially dangerous as opposed to a raw decision theory threat. The 4.2% number for pure decision theory threat is interesting because it lines up with the 5% number in the previous question for Basilisk Correctness.

P(Decision Theory Danger | Basilisk Belief) = 26.6%
P(Decision Theory And Social Danger | Basilisk Belief) = 21.3%

So of the people who say the Basilisk is correct, only half of them believe it is a decision theory based danger at all. (In theory this could be because they believe the Basilisk is a good thing and therefore not dangerous, but I refuse to lose that much faith in humanity.3)

Basilisk Anxiety
Have you ever felt any sort of anxiety about the Basilisk?

Yes: 142 8.8%
Yes but only because I worry about everything: 189 11.8%
No: 1275 79.4%

20.6% of respondents have felt some kind of Basilisk Anxiety. It should be noted that the exact wording of the question permits any anxiety, even for a second. And as we'll see in the next question that nuance is very important.

Degree Of Basilisk Worry
What is the longest span of time you've spent worrying about the Basilisk?

I haven't: 714 47%
A few seconds: 237 15.6%
A minute: 298 19.6%
An hour: 176 11.6%
A day: 40 2.6%
Two days: 16 1.05%
Three days: 12 0.79%
A week: 12 0.79%
A month: 5 0.32%
One to three months: 2 0.13%
Three to six months: 0 0.0%
Six to nine months: 0 0.0%
Nine months to a year: 1 0.06%
Over a year: 1 0.06%
Years: 4 0.26%

These numbers provide some pretty sobering context for the previous ones. Of all the people who worried about the Basilisk, 93.8% didn't worry about it for more than an hour. The next 3.65% didn't worry about it for more than a day or two. The next 1.9% didn't worry about it for more than a month and the last .7% or so have worried about it for longer.

Current Basilisk Worry
Are you currently worrying about the Basilisk?

Yes: 29 1.8%
Yes but only because I worry about everything: 60 3.7%
No: 1522 94.5%

Also encouraging. We should expect a small number of people to be worried at this question just because the section is basically the words "Basilisk" and "worry" repeated over and over, so it's probably a bit scary to some people. But these numbers are much lower than the "have you ever worried" ones and back up the previous inference that Basilisk anxiety is mostly a transitory phenomenon.

One article on the Basilisk asked whether it was just a "referendum on autism". It's a good question, and now I have an answer for you, as per the table below:

Mental Health Conditions Versus Basilisk Worry
Condition Worried Worried But They Worry About Everything Combined Worry
Baseline (in the respondent population) 8.8% 11.8% 20.6%
ASD 7.3% 17.3% 24.7%
OCD 10.0% 32.5% 42.5%
AnxietyDisorder 6.9% 20.3% 27.3%
Schizophrenia 0.0% 16.7% 16.7%

 

The short answer: Autism raises your chances of Basilisk anxiety, but anxiety disorders and OCD especially raise them much more. Interestingly enough, schizophrenia seems to bring the chances down. This might just be an effect of small sample size, but my expectation was the opposite. (People who are really obsessed with Roko's Basilisk seem to present with schizophrenic symptoms at any rate.)

Before we move on, there's one last elephant in the room to contend with. The philosophical theory underlying the Basilisk is the CEV conception of friendly AI primarily espoused by Eliezer Yudkowsky, which has led many critics to speculate on all kinds of relationships between Eliezer Yudkowsky and the Basilisk. This of course extends to Eliezer Yudkowsky's Machine Intelligence Research Institute, a project to develop 'Friendly Artificial Intelligence' that does not implement a naive goal function which eats everything else humans actually care about once it's given sufficient optimization power.

The general thrust of these accusations is that MIRI, intentionally or not, profits from belief in the Basilisk. I think MIRI gets picked on enough, so I'm not thrilled about adding another log to the hefty pile of criticism they deal with. However, this is a serious accusation, and plausible enough that it's in the public interest for me to look at it.

 

Percentage Of People Who Donate To MIRI Versus Basilisk Belief
Belief Percentage
Believe It's Incorrect 5.2%
Believe It's Structurally Correct 5.6%
Believe It's Correct 12.0%

Basilisk belief does appear to make you twice as likely to donate to MIRI. It's important to note, from the perspective of the earlier investigation, that thinking it is "structurally correct" appears to make you about as likely to donate as not thinking it's correct, implying that both of these options mean about the same thing.

 

Sum Money Donated To MIRI Versus Basilisk Belief
Belief Mean Median Mode Stdev Total Donated
Believe It's Incorrect 1365.590 100.0 100.0 4825.293 75107.5
Believe It's Structurally Correct 2644.736 110.0 20.0 9147.299 50250.0
Believe It's Correct 740.555 300.0 300.0 1152.541 6665.0

Take these numbers with a grain of salt, it only takes one troll to plausibly lie about their income to ruin it for everybody else.

Interestingly enough, if you sum all three total-donated counts and divide by a hundred, you find that five percent of the sum is about what was donated by the Basilisk group ($6601 to be exact). So even though the modal and median donations of Basilisk believers are higher, they donate about as much as would be naively expected by assuming donations among groups are equal.4

 

Percentage Of People Who Donate To MIRI Versus Basilisk Worry
Anxiety Percentage
Never Worried 4.3%
Worried But They Worry About Everything 11.1%
Worried 11.3%

In contrast to the correctness question, merely having worried about the Basilisk at any point in time doubles your chances of donating to MIRI. My suspicion is that these people are not, as a general rule, donating because of the Basilisk per se. If you're the sort of person who is even capable of worrying about the Basilisk in principle, you're probably the kind of person who is likely to worry about AI risk in general and donate to MIRI on that basis. This hypothesis is probably unfalsifiable with the survey information I have, because Basilisk-risk is a subset of AI risk. This means that anytime somebody indicates on the survey that they're worried about AI risk this could be because they're worried about the Basilisk or because they're worried about more general AI risk.

 

Sum Money Donated To MIRI Versus Basilisk Worry
Anxiety Mean Median Mode Stdev Total Donated
Never Worried 1033.936 100.0 100.0 3493.373 56866.5
Worried But They Worry About Everything 227.047 75.0 300.0 438.861 4768.0
Worried 4539.25 90.0 10.0 11442.675 72628.0
Combined Worry         77396.0

Take these numbers with a grain of salt, it only takes one troll to plausibly lie about their income to ruin it for everybody else.

This particular analysis is probably the strongest evidence in the set for the hypothesis that MIRI profits (though not necessarily through any involvement on their part) from the Basilisk. People who worried from an unendorsed perspective donate less on average than everybody else. The modal donation among people who've worried about the Basilisk is ten dollars, which seems like a surefire way to get yourself tortured if we're going with the hypothesis that these are people who believe the Basilisk is a real thing and are concerned about it. So this implies that they don't, which supports my earlier hypothesis that people who are capable of feeling anxiety about the Basilisk are the core demographic to donate to MIRI anyway.

Of course, donors don't need to believe in the Basilisk for MIRI to profit from it. If exposing people to the concept of the Basilisk makes them twice as likely to donate but they don't end up actually believing the argument, that would arguably be the ideal outcome for MIRI from an Evil Plot perspective. (Since, after all, pursuing a strategy which involves Basilisk belief would actually incentivize torture from the perspective of the acausal game theories MIRI bases its FAI on, which would be bad.)

But frankly this is veering into very speculative territory. I don't think there's an evil plot, nor am I convinced that MIRI is profiting from Basilisk belief in a way that outweighs the resulting lost donations and damage to their cause.5 If anybody would like to assert otherwise I invite them to 'put up or shut up' with hard evidence. The world has enough criticism based on idle speculation and you're peeing in the pool.

Blogs and Media

Since this was the LessWrong Diaspora survey, I felt it would be in order to reach out a bit and ask not just where the community is at but what it's reading. I went around to various people I knew and asked them about blogs for this section. However, the picks were largely based on my mental 'map' of the blogs that are commonly read/linked in the community, with a handful of suggestions thrown in. The same method was used for stories.

Blogs Read

LessWrong
Regular Reader: 239 13.4%
Sometimes: 642 36.1%
Rarely: 537 30.2%
Almost Never: 272 15.3%
Never: 70 3.9%
Never Heard Of It: 14 0.7%

SlateStarCodex (Scott Alexander)
Regular Reader: 1137 63.7%
Sometimes: 264 14.7%
Rarely: 90 5%
Almost Never: 61 3.4%
Never: 51 2.8%
Never Heard Of It: 181 10.1%

[These two results together pretty much confirm the results I talked about in part two of the survey analysis. A supermajority of respondents are 'regular readers' of SlateStarCodex. By contrast, LessWrong itself doesn't even have a quarter of SlateStarCodex's readership.]

Overcoming Bias (Robin Hanson)
Regular Reader: 206 11.751%
Sometimes: 365 20.821%
Rarely: 391 22.305%
Almost Never: 385 21.962%
Never: 239 13.634%
Never Heard Of It: 167 9.527%

Minding Our Way (Nate Soares)
Regular Reader: 151 8.718%
Sometimes: 134 7.737%
Rarely: 139 8.025%
Almost Never: 175 10.104%
Never: 214 12.356%
Never Heard Of It: 919 53.06%

Agenty Duck (Brienne Yudkowsky)
Regular Reader: 55 3.181%
Sometimes: 132 7.634%
Rarely: 144 8.329%
Almost Never: 213 12.319%
Never: 254 14.691%
Never Heard Of It: 931 53.846%

Eliezer Yudkowsky's Facebook Page
Regular Reader: 325 18.561%
Sometimes: 316 18.047%
Rarely: 231 13.192%
Almost Never: 267 15.248%
Never: 361 20.617%
Never Heard Of It: 251 14.335%

Luke Muehlhauser (Eponymous)
Regular Reader: 59 3.426%
Sometimes: 106 6.156%
Rarely: 179 10.395%
Almost Never: 231 13.415%
Never: 312 18.118%
Never Heard Of It: 835 48.49%

Gwern.net (Gwern Branwen)
Regular Reader: 118 6.782%
Sometimes: 281 16.149%
Rarely: 292 16.782%
Almost Never: 224 12.874%
Never: 230 13.218%
Never Heard Of It: 595 34.195%

Siderea (Sibylla Bostoniensis)
Regular Reader: 29 1.682%
Sometimes: 49 2.842%
Rarely: 59 3.422%
Almost Never: 104 6.032%
Never: 183 10.615%
Never Heard Of It: 1300 75.406%

Ribbon Farm (Venkatesh Rao)
Regular Reader: 64 3.734%
Sometimes: 123 7.176%
Rarely: 111 6.476%
Almost Never: 150 8.751%
Never: 150 8.751%
Never Heard Of It: 1116 65.111%

Bayesed And Confused (Michael Rupert)
Regular Reader: 2 0.117%
Sometimes: 10 0.587%
Rarely: 24 1.408%
Almost Never: 68 3.988%
Never: 167 9.795%
Never Heard Of It: 1434 84.106%

[This was the 'troll' answer to catch out people who claim to read everything.]

The Unit Of Caring (Anonymous)
Regular Reader: 281 16.452%
Sometimes: 132 7.728%
Rarely: 126 7.377%
Almost Never: 178 10.422%
Never: 216 12.646%
Never Heard Of It: 775 45.375%

GiveWell Blog (Multiple Authors)
Regular Reader: 75 4.438%
Sometimes: 197 11.657%
Rarely: 243 14.379%
Almost Never: 280 16.568%
Never: 412 24.379%
Never Heard Of It: 482 28.521%

Thing Of Things (Ozy Frantz)
Regular Reader: 363 21.166%
Sometimes: 201 11.72%
Rarely: 143 8.338%
Almost Never: 171 9.971%
Never: 176 10.262%
Never Heard Of It: 661 38.542%

The Last Psychiatrist (Anonymous)
Regular Reader: 103 6.023%
Sometimes: 94 5.497%
Rarely: 164 9.591%
Almost Never: 221 12.924%
Never: 302 17.661%
Never Heard Of It: 826 48.304%

Hotel Concierge (Anonymous)
Regular Reader: 29 1.711%
Sometimes: 35 2.065%
Rarely: 49 2.891%
Almost Never: 88 5.192%
Never: 179 10.56%
Never Heard Of It: 1315 77.581%

The View From Hell (Sister Y)
Regular Reader: 34 1.998%
Sometimes: 39 2.291%
Rarely: 75 4.407%
Almost Never: 137 8.049%
Never: 250 14.689%
Never Heard Of It: 1167 68.566%

Xenosystems (Nick Land)
Regular Reader: 51 3.012%
Sometimes: 32 1.89%
Rarely: 64 3.78%
Almost Never: 175 10.337%
Never: 364 21.5%
Never Heard Of It: 1007 59.48%

I tried my best to have representation from multiple sections of the diaspora; if you look at the different blogs you can probably guess which ones represent which section.

Stories Read

Harry Potter And The Methods Of Rationality (Eliezer Yudkowsky)
Whole Thing: 1103 61.931%
Partially And Intend To Finish: 145 8.141%
Partially And Abandoned: 231 12.97%
Never: 221 12.409%
Never Heard Of It: 81 4.548%

Significant Digits (Alexander D)
Whole Thing: 123 7.114%
Partially And Intend To Finish: 105 6.073%
Partially And Abandoned: 91 5.263%
Never: 333 19.26%
Never Heard Of It: 1077 62.29%

Three Worlds Collide (Eliezer Yudkowsky)
Whole Thing: 889 51.239%
Partially And Intend To Finish: 35 2.017%
Partially And Abandoned: 36 2.075%
Never: 286 16.484%
Never Heard Of It: 489 28.184%

The Fable of the Dragon-Tyrant (Nick Bostrom)
Whole Thing: 728 41.935%
Partially And Intend To Finish: 31 1.786%
Partially And Abandoned: 15 0.864%
Never: 205 11.809%
Never Heard Of It: 757 43.606%

The World of Null-A (A. E. van Vogt)
Whole Thing: 92 5.34%
Partially And Intend To Finish: 18 1.045%
Partially And Abandoned: 25 1.451%
Never: 429 24.898%
Never Heard Of It: 1159 67.266%

[Wow, I never would have expected this many people to have read this. I mostly included it on a lark because of its historical significance.]

Synthesis (Sharon Mitchell)
Whole Thing: 6 0.353%
Partially And Intend To Finish: 2 0.118%
Partially And Abandoned: 8 0.47%
Never: 217 12.75%
Never Heard Of It: 1469 86.31%

[This was the 'troll' option to catch people who just say they've read everything.]

Worm (Wildbow)
Whole Thing: 501 28.843%
Partially And Intend To Finish: 168 9.672%
Partially And Abandoned: 184 10.593%
Never: 430 24.755%
Never Heard Of It: 454 26.137%

Pact (Wildbow)
Whole Thing: 138 7.991%
Partially And Intend To Finish: 59 3.416%
Partially And Abandoned: 148 8.57%
Never: 501 29.01%
Never Heard Of It: 881 51.013%

Twig (Wildbow)
Whole Thing: 55 3.192%
Partially And Intend To Finish: 132 7.661%
Partially And Abandoned: 65 3.772%
Never: 560 32.501%
Never Heard Of It: 911 52.873%

Ra (Sam Hughes)
Whole Thing: 269 15.558%
Partially And Intend To Finish: 80 4.627%
Partially And Abandoned: 95 5.495%
Never: 314 18.161%
Never Heard Of It: 971 56.16%

My Little Pony: Friendship Is Optimal (Iceman)
Whole Thing: 424 24.495%
Partially And Intend To Finish: 16 0.924%
Partially And Abandoned: 65 3.755%
Never: 559 32.293%
Never Heard Of It: 667 38.533%

Friendship Is Optimal: Caelum Est Conterrens (Chatoyance)
Whole Thing: 217 12.705%
Partially And Intend To Finish: 16 0.937%
Partially And Abandoned: 24 1.405%
Never: 411 24.063%
Never Heard Of It: 1040 60.89%

Ender's Game (Orson Scott Card)
Whole Thing: 1177 67.219%
Partially And Intend To Finish: 22 1.256%
Partially And Abandoned: 43 2.456%
Never: 395 22.559%
Never Heard Of It: 114 6.511%

[This is the most-read story according to survey respondents, beating HPMOR by about 5%.]

The Diamond Age (Neal Stephenson)
Whole Thing: 440 25.346%
Partially And Intend To Finish: 37 2.131%
Partially And Abandoned: 55 3.168%
Never: 577 33.237%
Never Heard Of It: 627 36.118%

Consider Phlebas (Iain Banks)
Whole Thing: 302 17.507%
Partially And Intend To Finish: 52 3.014%
Partially And Abandoned: 47 2.725%
Never: 439 25.449%
Never Heard Of It: 885 51.304%

The Metamorphosis Of Prime Intellect (Roger Williams)
Whole Thing: 226 13.232%
Partially And Intend To Finish: 10 0.585%
Partially And Abandoned: 24 1.405%
Never: 322 18.852%
Never Heard Of It: 1126 65.925%

Accelerando (Charles Stross)
Whole Thing: 293 17.045%
Partially And Intend To Finish: 46 2.676%
Partially And Abandoned: 66 3.839%
Never: 425 24.724%
Never Heard Of It: 889 51.716%

A Fire Upon The Deep (Vernor Vinge)
Whole Thing: 343 19.769%
Partially And Intend To Finish: 31 1.787%
Partially And Abandoned: 41 2.363%
Never: 508 29.28%
Never Heard Of It: 812 46.801%

I also did a k-means cluster analysis of the data to try and pin down demographic clusters, and the main conclusion I drew from it is that I need to do more analysis. Which I would do, except that the initial analysis was already a whole lot of work, and jumping further down the rabbit hole in the hope of reaching an oasis probably isn't in the best interests of myself or my readers.
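
For anyone who wants to poke at this themselves, the clustering step is only a few lines once the readership answers are recoded as numbers. The sketch below is a minimal version of the approach, not my actual analysis code; the filename and column names are placeholders, and it assumes each answer has been mapped to an ordinal value (0 = 'Never Heard Of It' up through 5 = 'Regular Reader').

# Minimal k-means sketch over recoded blog readership answers.
# Filename, column names, and the 0-5 ordinal recode are assumptions,
# not the survey's actual question codes.
import pandas as pd
from sklearn.cluster import KMeans

df = pd.read_csv("survey_public.csv")
blog_cols = ["LessWrong", "SlateStarCodex", "OvercomingBias"]  # etc.
X = df[blog_cols].dropna()

km = KMeans(n_clusters=4, n_init=10, random_state=0)
X = X.assign(cluster=km.fit_predict(X))

# Characterize each cluster by its average readership level per blog.
print(X.groupby("cluster").mean())

The hard (and labor-intensive) part is interpreting what the resulting clusters mean, which is the rabbit hole I'm declining to go down here.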

Footnotes


  1. This is a general trend I notice with accessibility. Not always, but very often measures taken to help a specific group end up having positive effects for others as well. Many of the accessibility suggestions of the W3C are things you wish every website did.

  2. I hadn't read this particular SSC post at the time I compiled the survey, but I was already familiar with the concept of a lizardman constant and should have accounted for it.

  3. I've been informed by a member of the freenode #lesswrong IRC channel that this is in fact Roko's opinion, because you can 'timelessly trade with the future superintelligence for rewards, not just punishment' according to a conversation they had with him last summer. Remember kids: Don't do drugs, including Max Tegmark.

  4. You might think that this conflicts with the hypothesis that the true rate of Basilisk belief is lower than 5%. It does a bit, but you also need to remember that these people are in the LessWrong demographic, which means that regardless of what the Basilisk belief question is really measuring, we should naively expect them to account for about five percent of the MIRI donation pot.

  5. That is to say, it does seem plausible that MIRI 'profits' from Basilisk belief based on this data, but I'm fairly sure any profit is outweighed by the significant opportunity cost associated with it. I should also take this moment to remind the reader that the original Basilisk argument was supposed to show that CEV is a flawed concept precisely because it could produce deleterious outcomes for people, so MIRI using it as a way to justify donations would be weird.

2016 LessWrong Diaspora Survey Analysis: Part Two (LessWrong Use, Successorship, Diaspora)

28 ingres 10 June 2016 07:40PM

2016 LessWrong Diaspora Survey Analysis

Overview

  • Results and Dataset
  • Meta
  • Demographics
  • LessWrong Usage and Experience
  • LessWrong Criticism and Successorship
  • Diaspora Community Analysis (You are here)
  • Mental Health Section
  • Basilisk Section/Analysis
  • Blogs and Media analysis
  • Politics
  • Calibration Question And Probability Question Analysis
  • Charity And Effective Altruism Analysis

Introduction

Before it was the LessWrong survey, the 2016 survey was a small project I was working on as market research for a website I'm creating called FortForecast. As I was discussing the idea with others, particularly Eliot, he suggested that since he's doing LW 2.0 and I'm doing a site that targets the LessWrong demographic, why don't I go ahead and do the LessWrong Survey? Because of that, this year's survey had a lot of questions oriented around what you would want to see in a successor to LessWrong and what you think is wrong with the site.

LessWrong Usage and Experience

How Did You Find LessWrong?

Been here since it was started in the Overcoming Bias days: 171 8.3%
Referred by a link: 275 13.4%
HPMOR: 542 26.4%
Overcoming Bias: 80 3.9%
Referred by a friend: 265 12.9%
Referred by a search engine: 131 6.4%
Referred by other fiction: 14 0.7%
Slate Star Codex: 241 11.7%
Reddit: 55 2.7%
Common Sense Atheism: 19 0.9%
Hacker News: 47 2.3%
Gwern: 22 1.1%
Other: 191 9.308%

How do you use Less Wrong?

I lurk, but never registered an account: 1120 54.4%
I've registered an account, but never posted: 270 13.1%
I've posted a comment, but never a top-level post: 417 20.3%
I've posted in Discussion, but not Main: 179 8.7%
I've posted in Main: 72 3.5%

[54.4% lurkers.]

How often do you comment on LessWrong?

I have commented more than once a week for the past year.: 24 1.2%
I have commented more than once a month for the past year but less than once a week.: 63 3.1%
I have commented but less than once a month for the past year.: 225 11.1%
I have not commented this year.: 1718 84.6%

[You could probably snarkily title this one "LW usage in one statistic". It's a pretty damning portrait of the site's vitality: a whopping 84.6% of people have not commented a single time this year.]

How Long Since You Last Posted On LessWrong?

I wrote one today.: 12 0.637%
Within the last three days.: 13 0.69%
Within the last week.: 22 1.168%
Within the last month.: 58 3.079%
Within the last three months.: 75 3.981%
Within the last six months.: 68 3.609%
Within the last year.: 84 4.459%
Within the last five years.: 295 15.658%
Longer than five years.: 15 0.796%
I've never posted on LW.: 1242 65.924%

[A supermajority of people have never posted on LW; 5.574% have posted within the last month.]

About how much of the Sequences have you read?

Never knew they existed until this moment: 215 10.3%
Knew they existed, but never looked at them: 101 4.8%
Some, but less than 25% : 442 21.2%
About 25%: 260 12.5%
About 50%: 283 13.6%
About 75%: 298 14.3%
All or almost all: 487 23.3%

[10.3% of people taking the survey have never heard of the sequences. 36.3% have not read a quarter of them.]

Do you attend Less Wrong meetups?

Yes, regularly: 157 7.5%
Yes, once or a few times: 406 19.5%
No: 1518 72.9%

[However, the in-person community seems to be non-dead.]

Is physical interaction with the Less Wrong community otherwise a part of your everyday life, for example do you live with other Less Wrongers, or you are close friends and frequently go out with them?

Yes, all the time: 158 7.6%
Yes, sometimes: 258 12.5%
No: 1652 79.9%

About the same number say they hang out with LWers 'all the time' as say they go to meetups. I wonder if people just double-counted themselves here. Or they may go to meetups and have other interactions with LWers outside of that. Or it could be a coincidence and these are different demographics. Let's find out.

P(Community part of daily life | Meetups) = 40%

Significant overlap, but definitely not exclusive overlap. I'll go ahead and chalk this one up to coincidence.
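
(If you want to reproduce that kind of overlap figure from the public dataset, it's only a couple of lines once the two columns are in hand. This is a sketch; the filename and column names below are hypothetical stand-ins for the actual question codes.)

# Sketch of the overlap calculation; "Meetups" and "PhysicalInteraction"
# are hypothetical stand-ins for the real question codes in the dataset.
import pandas as pd

df = pd.read_csv("survey_public.csv")

attends = df["Meetups"].isin(["Yes, regularly", "Yes, once or a few times"])
daily = df["PhysicalInteraction"].isin(["Yes, all the time", "Yes, sometimes"])

# P(community part of daily life | attends meetups)
print((attends & daily).sum() / attends.sum())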

Have you ever been in a romantic relationship with someone you met through the Less Wrong community?

Yes: 129 6.2%
I didn't meet them through the community but they're part of the community now: 102 4.9%
No: 1851 88.9%

LessWrong Usage Differences Between 2016 and 2014 Surveys

How do you use Less Wrong?

I lurk, but never registered an account: +19.300% 1125 54.400%
I've registered an account, but never posted: -1.600% 271 13.100%
I've posted a comment, but never a top-level post: -7.600% 419 20.300%
I've posted in Discussion, but not Main: -5.100% 179 8.700%
I've posted in Main: -3.300% 73 3.500%

About how much of the sequences have you read?

Never knew they existed until this moment: +3.300% 217 10.400%
Knew they existed, but never looked at them: +2.100% 103 4.900%
Some, but less than 25%: +3.100% 442 21.100%
About 25%: +0.400% 260 12.400%
About 50%: -0.400% 284 13.500%
About 75%: -1.800% 299 14.300%
All or almost all: -5.000% 491 23.400%

Do you attend Less Wrong meetups?

Yes, regularly: -2.500% 160 7.700%
Yes, once or a few times: -2.100% 407 19.500%
No: +7.100% 1524 72.900%

Is physical interaction with the Less Wrong community otherwise a part of your everyday life, for example do you live with other Less Wrongers, or you are close friends and frequently go out with them?

Yes, all the time: +0.200% 161 7.700%
Yes, sometimes: -0.300% 258 12.400%
No: +2.400% 1659 79.800%

Have you ever been in a romantic relationship with someone you met through the Less Wrong community?

Yes: +0.800% 132 6.300%
I didn't meet them through the community but they're part of the community now: -0.400% 102 4.900%
No: +1.600% 1858 88.800%

Write Ins

In a bit of a silly oversight I forgot to ask survey participants what was good about the community, so the following is going to be a pretty one-sided picture. Below are the complete write-ins respondents submitted.

Issues With LessWrong At Its Peak

Philosophical Issues With LessWrong At Its Peak [Part One]
Philosophical Issues With LessWrong At Its Peak [Part Two]
Community Issues With LessWrong At Its Peak [Part One]
Community Issues With LessWrong At Its Peak [Part Two]

Issues With LessWrong Now

Philosophical Issues With LessWrong Now [Part One]
Philosophical Issues With LessWrong Now [Part Two]
Community Issues With LessWrong Now [Part One]
Community Issues With LessWrong Now [Part Two]

Peak Philosophy Issue Tallies

Philosophy Issues (Sample Size: 233)
Label Code Tally
Arrogance A 16
Bad Aesthetics BA 3
Bad Norms BN 3
Bad Politics BP 5
Bad Tech Platform BTP 1
Cultish C 5
Cargo Cult CC 3
Doesn't Accept Criticism DAC 3
Don't Know Where to Start DKWS 5
Damaged Me Mentally DMM 1
Esoteric E 3
Eliezer Yudkowsky EY 6
Improperly Indexed II 7
Impossible Mission IM 4
Insufficient Social Support ISS 1
Jargon  
Literal Cult LC 1
Lack of Rigor LR 14
Misfocused M 13
Mixed Bag MB 3
Nothing N 13
Not Enough Jargon NEJ 1
Not Enough Roko's Basilisk NERB 1
Not Enough Theory NET 1
No Intuition NI 6
Not Progressive Enough NPE 7
Narrow Scholarship NS 20
Other O 3
Personality Cult PC 10
None of the Above  
Quantum Mechanics Sequence QMS 2
Reinvention R 10
Rejects Expertise RE 5
Spoiled S 7
Small Competent Authorship SCA 6
Suggestion For Improvement SFI 1
Socially Incompetent SI 9
Stupid Philosophy SP 4
Too Contrarian TC 2
Typical Mind TM 1
Too Much Roko's Basilisk TMRB 1
Too Much Theory TMT 14
Too Progressive TP 2
Too Serious TS 2
Unwelcoming U 8

Well, those are certainly some results. Top answers are:

Narrow Scholarship: 20
Arrogance: 16
Too Much Theory: 14
Lack of Rigor: 14
Misfocused: 13
Nothing: 13
Reinvention (reinvents the wheel too much): 10
Personality Cult: 10

So condensing a bit: Pay more attention to mainstream scholarship and ideas, try to do better about intellectual rigor, be more practical and focus on results, be more humble. (Labeled Dataset)

Peak Community Issue Tallies

Community Issues (Sample Size: 227)
Label Code Tally
Arrogance A 7
Assumes Reader Is Male ARIM 1
Bad Aesthetics BA 1
Bad At PR BAP 5
Bad Norms BN 5
Bad Politics BP 2
Cultish C 9
Cliqueish Tendencies CT 1
Diaspora D 1
Defensive Attitude DA 1
Doesn't Accept Criticism DAC 3
Dunning Kruger DK 1
Elitism E 3
Eliezer Yudkowsky EY 2
Groupthink G 11
Insufficiently Indexed II 9
Impossible Mission IM 1
Imposter Syndrome IS 1
Jargon J 2
Lack of Rigor LR 1
Mixed Bag MB 1
Nothing N 5
??? NA 1
Not Big Enough NBE 3
Not Enough of A Cult NEAC 1
Not Enough Content NEC 7
Not Enough Community Infrastructure NECI 10
Not Enough Meetups NEM 5
No Goals NG 2
Not Nerdy Enough NNE 3
None Of the Above NOA 1
Not Progressive Enough NPE 3
Not Rational NR 3
NRx (Neoreaction) NRx 1
Narrow Scholarship NS 4
Not Stringent Enough NSE 3
Parochialism P 1
Pickup Artistry PA 2
Personality Cult PC 7
Reinvention R 1
Recurring Arguments RA 3
Rejects Expertise RE 2
Sequences S 2
Small Competent Authorship SCA 5
Suggestion For Improvement SFI 1
Spoiled Issue SI 9
Socially INCOMpetent SINCOM 2
Too Boring TB 1
Too Contrarian TC 10
Too COMbative TCOM 4
Too Cis/Straight/Male TCSM 5
Too Intolerant of Cranks TIC 1
Too Intolerant of Politics TIP 2
Too Long Winded TLW 2
Too Many Idiots TMI 3
Too Much Math TMM 1
Too Much Theory TMT 12
Too Nerdy TN 6
Too Rigorous TR 1
Too Serious TS 1
Too Tolerant of Cranks TTC 1
Too Tolerant of Politics TTP 3
Too Tolerant of POSers TTPOS 2
Too Tolerant of PROGressivism TTPROG 2
Too Weird TW 2
Unwelcoming U 12
UTILitarianism UTIL 1

Top Answers:

Unwelcoming: 12
Too Much Theory: 12
Groupthink: 11
Not Enough Community Infrastructure: 10
Too Contrarian: 10
Insufficiently Indexed: 9
Cultish: 9

Again condensing a bit: Work on being less intimidating/aggressive/etc to newcomers, spend less time on navel gazing and more time on actually doing things and collecting data, work on getting the structures in place that will onboard people into the community, stop being so nitpicky and argumentative, spend more time on getting content indexed in a form where people can actually find it, be more accepting of outside viewpoints and remember that you're probably more likely to be wrong than you think. (Labeled Dataset)

One last note before we finish up: these tallies are a very rough executive summary. The tagging process basically involves trying to fit points into clusters, and it's prone to inaccuracy through laziness, reluctance to add yet another category, square-peg-into-round-hole fitting, and my personal political biases. So take these with a grain of salt; if you really want to know what people wrote in, my advice is to read through the write-in sets above in HTML format. If you want to evaluate for yourself how well I tagged things, you can see the labeled datasets above.
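
(For the curious, reproducing the tallies from a labeled dataset is the easy part; something like the sketch below, where the filename and the 'code' column name are assumptions about how the labeled CSV is laid out, not its actual schema.)

# Rebuild the tallies from a hand-labeled write-in file.
# The filename and the 'code' column name are assumptions about the
# labeled dataset's layout.
import csv
from collections import Counter

with open("peak_philosophy_issues_labeled.csv") as f:
    codes = [row["code"] for row in csv.DictReader(f)]

for code, tally in Counter(codes).most_common():
    print(code, tally)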

I won't bother tallying the "issues now" sections; all you really need to know is that they're basically the same as the peak sections, except with lots more "It's dead." comments and, from eyeballing it, a higher proportion of people arguing that LessWrong has been taken over by the left/social justice, plus complaints about effective altruism. (I infer that the complaints about being taken over by the left are mostly referring to effective altruism.)

Traits Respondents Would Like To See In A Successor Community

Philosophically

Attention Paid To Outside Sources
More: 1042 70.933%
Same: 414 28.182%
Less: 13 0.885%

Self Improvement Focus
More: 754 50.706%
Same: 598 40.215%
Less: 135 9.079%

AI Focus
More: 184 12.611%
Same: 821 56.271%
Less: 454 31.117%

Political
More: 330 22.837%
Same: 770 53.287%
Less: 345 23.875%

Academic/Formal
More: 455 31.885%
Same: 803 56.272%
Less: 169 11.843%

In summary, people want a site that will engage with outside ideas, acknowledge where it borrows from, focus more on practical self-improvement and less on AI and AI risk, and tighten its academic rigor. They could go either way on politics, but the epistemic direction is clear.

Community

Intense Environment
More: 254 19.644%
Same: 830 64.192%
Less: 209 16.164%

Focused On 'Real World' Action
More: 739 53.824%
Same: 563 41.005%
Less: 71 5.171%

Experts
More: 749 55.605%
Same: 575 42.687%
Less: 23 1.707%

Data Driven/Testing Of Ideas
More: 1107 78.344%
Same: 291 20.594%
Less: 15 1.062%

Social
More: 583 43.507%
Same: 682 50.896%
Less: 75 5.597%

This largely backs up what I said about the previous results. People want a more practical, more active, more social and more empirical LessWrong, with outside expertise and ideas brought into the fold. They could go either way on it being more intense, but the epistemic trend is still clear.

Write Ins

Diaspora Communities

So where did the party go? We got twice as many respondents this year as last when we opened up the survey to the diaspora, which means that the LW community is alive and kicking; it's just not on LessWrong.

LessWrong
Yes: 353 11.498%
No: 1597 52.02%

LessWrong Meetups
Yes: 215 7.003%
No: 1735 56.515%

LessWrong Facebook Group
Yes: 171 5.57%
No: 1779 57.948%

LessWrong Slack
Yes: 55 1.792%
No: 1895 61.726%

SlateStarCodex
Yes: 832 27.101%
No: 1118 36.417%

[SlateStarCodex has by far the highest proportion of active members among respondents, over twice that of LessWrong itself, and more than LessWrong and Rationalist Tumblr combined.]

Rationalist Tumblr
Yes: 350 11.401%
No: 1600 52.117%

[I'm actually surprised that Tumblr doesn't just beat LessWrong itself outright. It's only a tenth of a percentage point behind, though, and if current trends continue I suspect that by 2017 Tumblr will have a large lead over the main LW site.]

Rationalist Facebook
Yes: 150 4.886%
No: 1800 58.632%

[Eliezer Yudkowsky currently resides here.]

Rationalist Twitter
Yes: 59 1.922%
No: 1891 61.596%

Effective Altruism Hub
Yes: 98 3.192%
No: 1852 60.326%

FortForecast
Yes: 4 0.13%
No: 1946 63.388%

[I included this as a 'troll' option to catch people who just check every box. Relatively few people seem to have done that, but having the option here lets me know one way or the other.]

Good Judgement(TM) Open
Yes: 29 0.945%
No: 1921 62.573%

PredictionBook
Yes: 59 1.922%
No: 1891 61.596%

Omnilibrium
Yes: 8 0.261%
No: 1942 63.257%

Hacker News
Yes: 252 8.208%
No: 1698 55.309%

#lesswrong on freenode
Yes: 76 2.476%
No: 1874 61.042%

#slatestarcodex on freenode
Yes: 36 1.173%
No: 1914 62.345%

#hplusroadmap on freenode
Yes: 4 0.13%
No: 1946 63.388%

#chapelperilous on freenode
Yes: 10 0.326%
No: 1940 63.192%

[Since people keep asking me, this is a postrational channel.]

/r/rational
Yes: 274 8.925%
No: 1676 54.593%

/r/HPMOR
Yes: 230 7.492%
No: 1720 56.026%

[Given that the story is long over, this is pretty impressive. I'd have expected it to be dead by now.]

/r/SlateStarCodex
Yes: 244 7.948%
No: 1706 55.57%

One or more private 'rationalist' groups
Yes: 192 6.254%
No: 1758 57.264%

[I almost wish I hadn't included this option; it'd have been fascinating to learn more about these groups through write-ins.]

Of all the parties who seem like plausible candidates at the moment, Scott Alexander seems the most capable of pulling the diaspora back together. In practice he's very busy, so he would need a dedicated team of relatively autonomous people to help him. Scott could court guest posts and start to scale up under the SSC brand, and I think he would fairly easily end up with the lion's share of the free-floating LWers that way.

Before I call a hearse for LessWrong, there is a glimmer of hope left:

Would you consider rejoining LessWrong?

I never left: 668 40.6%
Yes: 557 33.8%
Yes, but only under certain conditions: 205 12.5%
No: 216 13.1%

A significant fraction of people say they'd be interested in an improved version of the site. And of course there were write-ins for conditions to rejoin. What did people say they'd need in order to come back?

Rejoin Condition Write Ins [Part One]
Rejoin Condition Write Ins [Part Two]
Rejoin Condition Write Ins [Part Three]
Rejoin Condition Write Ins [Part Four]
Rejoin Condition Write Ins [Part Five]

Feel free to read these yourselves (they're not long), but I'll go ahead and summarize: It's all about the content. Content, content, content. No amount of usability improvements, A/B testing or clever trickery will let you get around content. People are overwhelmingly clear about this; they need a reason to come to the site and right now they don't feel like they have one. That means priority number one for anybody trying to revitalize LessWrong is figuring out how to deal with this.

Let's recap.

Future Improvement Wishlist Based On Survey Results

Philosophical

  • Pay more attention to mainstream scholarship and ideas.
  • Improved intellectual rigor.
  • Acknowledge sources borrowed from.
  • Be more practical and focus on results.
  • Be more humble.

Community

  • Be less intimidating and aggressive to newcomers.
  • Structures that will onboard people into the community.
  • Stop being so nitpicky and argumentative.
  • Spend more time on getting content indexed in a form where people can actually find it.
  • More accepting of outside viewpoints.

While that list seems reasonable, it's quite hard to put into practice. Rigor, as the name implies, requires high effort from participants. Frankly, it's not fun. And getting people to do un-fun things without paying them is difficult. If LessWrong is serious about its goal of 'advancing the art of human rationality' then it needs to figure out a way to do real investigation into the subject, not just have people 'discuss', as though the potential for rationality is within all of us just waiting to be brought out by the right conversation.

I personally haven't been a LW regular in a long time. Assuming the points about pedantry, sniping, "well actually"-ism and the like are true, those habits need to stop for the site to move forward. I'm a huge fan of Scott Alexander's comment policy: all comments must be at least two of true, kind, or necessary.

  • True and kind - Probably won't drown out the discussion signal, will help significantly decrease the hostility of the atmosphere.

  • True and necessary - Sometimes what you have to say isn't nice, but it needs to be said. This is the common core of free speech arguments for saying mean things, and they're not wrong. However, something being true isn't necessarily enough to make it something you should say. In fact, attacking people in ways entirely unrelated to their arguments has a name: the ad hominem fallacy.

  • Kind and necessary - The infamous 'hugbox' is essentially a place where people go to hear things which are kind but not necessarily true. I don't think anybody wants a hugbox, but occasionally it can be important to say things that might not be true but are needed for the sake of tact, reconciliation, or to prevent greater harm.

If people took that seriously and really gave it some thought before they used their keyboard, I think the on-site LessWrong community would be a significant part of the way to not driving people off as soon as they arrive.

More importantly, in places like the LessWrong Slack I see this sort of happy-go-lucky attitude about site improvement: "Oh, that sounds nice, we should do that," without the accompanying mountain of work to actually make 'that' happen. I'm not sure people really understand the dynamics of what it means to 'revive' a website in severe decay. When you decide to 'revive' a dying site, what you're really doing once you're past a certain point is refounding the site. So the question you should be asking yourself isn't "Can I fix the site up a bit so it isn't quite so stale?" It's "Could I have founded this site?", and if the answer is no you should seriously question whether to make the time investment.

Whether or not LessWrong lives to see another day basically depends on the level of ground game its last users and administrators can muster up. And if it's not enough, it won't.

Virtus junxit mors non separabit! ("What virtue has joined, death will not separate.")

2016 LessWrong Diaspora Survey Results

32 ingres 14 May 2016 05:38PM

Foreword:

As we wrap up the 2016 survey, I'd like to start by thanking everybody who took
the time to fill it out. This year we had 3083 respondents, more than twice the
number we had last year. (Source: http://lesswrong.com/lw/lhg/2014_survey_results/)
This seems consistent with the hypothesis that the LW community hasn't declined
in population so much as migrated into different communities. Being the *diaspora*
survey I had expectations for more responses than usual, but twice as many was
far beyond them.

Before we move on to the survey results, I feel obligated to put a few affairs
in order regarding what should be done next time. The copyright situation
for the survey was ambiguous this year, and to prevent that from happening again
I'm pleased to announce that this year's survey questions will be released jointly
by me and Scott Alexander as Creative Commons licensed content. We haven't
finalized the details of this yet so expect it sometime this month.

I would also be remiss not to mention the large amount of feedback we received
on the survey, some of which led to actionable recommendations I'm going to
preserve here for whoever does it next:

- Put free response form at the very end to suggest improvements/complain.

- Fix metaethics question in general, lots of options people felt were missing.

- Clean up definitions of political affiliations in the short politics section.
  In particular, 'Communist' has an overly aggressive/negative definition.

- Possibly completely overhaul short politics section.

- Everywhere that a non-answer is taken as an answer should be changed so that
  a non-answer means what it ought to: no answer or opinion. "Absence of a signal
  should never be used as a signal." - Julian Bigelow, 1947

- Give a definition for the singularity on the question asking when you think it
  will occur.

- Ask if people are *currently* suffering from depression. Possibly add more
  probing questions on depression in general since the rates are so extraordinarily
  high.

- Include a link to what cisgender means on the gender question.

- Specify if the income question is before or after taxes.

- Add charity questions about time donated.

- Add "ineligible to vote" option to the voting question.

- Adding some way for those who are pregnant to indicate it on the number of
  children question would be nice. It might be onerous however so don't feel
  obligated. (Remember that it's more important to have a smooth survey than it
  is to catch every edge case.)

And read this thread: http://lesswrong.com/lw/nfk/lesswrong_2016_survey/,
it's full of suggestions, corrections and criticism.

Without further ado,

Basic Results:

2016 LessWrong Diaspora Survey Questions (PDF Format)

2016 LessWrong Diaspora Survey Results (PDF Format, Missing 23 Responses)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Included)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Excluded)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Included, 13 Responses Filtered, Percentages)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Excluded, 13 Responses Filtered, Percentages)

2016 LessWrong Diaspora Survey Results Complete (HTML Format, Null Entries Excluded)

Our report system is currently on the fritz and isn't calculating numeric questions. If I'd known this earlier I'd have prepared the results for said questions ahead of time. Instead they'll be coming out later today or tomorrow. (EDIT: These results are now in the text format survey results.)

 

Philosophy and Community Issues At LessWrong's Peak (Write Ins)

Peak Philosophy Issues Write Ins (Part One)

Peak Philosophy Issues Write Ins (Part Two)

Peak Community Issues Write Ins (Part One)

Peak Community Issues Write Ins (Part Two)


Philosophy and Community Issues Now (Write Ins)

Philosophy Issues Now Write Ins (Part One)

Philosophy Issues Now Write Ins (Part Two)

Community Issues Now Write Ins (Part One)

Community Issues Now Write Ins (Part Two)

 

Rejoin Conditions

Rejoin Condition Write Ins (Part One)

Rejoin Condition Write Ins (Part Two)

Rejoin Condition Write Ins (Part Three)

Rejoin Condition Write Ins (Part Four)

Rejoin Condition Write Ins (Part Five)

 

CC-Licensed Machine Readable Survey and Public Data

2016 LessWrong Diaspora Survey Structure (License)

2016 LessWrong Diaspora Survey Public Dataset

(Note for people looking to work with the dataset: My survey analysis code repository includes a sqlite converter, examples, and more coming soon. It's a great way to get up and running with the dataset really quickly.)
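
If you'd rather roll your own instead of using the converter, getting the public CSV into sqlite is only a few lines. This is a sketch; the CSV filename, database name, and table name below are assumptions, not the converter's actual defaults.

# Quick import of the public CSV into sqlite for ad hoc queries.
# The CSV filename, database name, and table name are assumptions.
import sqlite3
import pandas as pd

df = pd.read_csv("lw_survey_public.csv")
with sqlite3.connect("survey.db") as conn:
    df.to_sql("data", conn, if_exists="replace", index=False)
    # Example query: total number of respondents.
    print(conn.execute("SELECT count(*) FROM data").fetchone()[0])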

In depth analysis:

Analysis Posts

Part One: Meta and Demographics

Part Two: LessWrong Use, Successorship, Diaspora

Part Three: Mental Health, Basilisk, Blogs and Media

Part Four: Politics, Calibration & Probability, Futurology, Charity & Effective Altruism

Aggregated Data

Effective Altruism and Charitable Giving Analysis

Mental Health Stats By Diaspora Community (Including self dxers)

How Diaspora Communities Compare On Mental Health Stats (I suspect these charts are subtly broken somehow, will investigate later)

Improved Mental Health Charts By Obormot (Using public survey data)

Improved Mental Health Charts By Anonymous (Using full survey data)

Political Opinions By Political Affiliation

Political Opinions By Political Affiliation Charts (By anonymous)

Blogs And Media Demographic Clusters

Blogs And Media Demographic Clusters (HTML Format, Impossible Answers Excluded)

Calibration Question And Brier Score Analysis

More coming soon!

Survey Analysis Code

Some notes:

1. FortForecast on the communities section, Bayesed And Confused on the blogs section, and Synthesis on the stories section were all 'troll' answers designed to catch people who just put down everything. (Somebody noted that the three 'FortForecast' users had the entire DSM split up between them; that's why.) A sketch of what filtering on these answers might look like follows these notes.

2. Lots of people asked me for a list of all those cool blogs and stories and communities on the survey; they're included in the survey questions PDF above.
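
As a rough illustration of how that kind of filtering works, the sketch below drops anyone who picked a 'troll' answer; the filename and column names are hypothetical stand-ins for the actual question codes, and the real filtering criteria may differ.

# Sketch of filtering out 'check everything' respondents via the troll options.
# Column names are hypothetical stand-ins for the actual question codes.
import pandas as pd

df = pd.read_csv("survey_public.csv")

troll = (
    (df["ActiveMemberships_FortForecast"] == "Yes")
    | (df["Blogs_BayesedAndConfused"] == "Regular Reader")
    | (df["Stories_Synthesis"] == "Whole Thing")
)
print("Flagged respondents:", int(troll.sum()))
clean = df[~troll]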

Public TODO:

1. Add more in-depth analyses, and fix the ones that decided to suddenly break at the last minute (or that I suspect were always broken).

2. Add a compatibility mode so that the current question codes are converted to older ones for 3rd party analysis that rely on them.

If anybody would like to help with these, write to jd@fortforecast.com

2016 LessWrong Diaspora Survey Analysis: Part One (Meta and Demographics)

19 ingres 14 May 2016 06:09AM

2016 LessWrong Diaspora Survey Analysis

Overview

  • Results and Dataset
  • Meta
  • Demographics (You are here)
  • LessWrong Usage and Experience
  • LessWrong Criticism and Successorship
  • Diaspora Community Analysis
  • What it all means for LW 2.0
  • Mental Health Section
  • Basilisk Section/Analysis
  • Blogs and Media analysis
  • Politics
  • Calibration Question And Probability Question Analysis
  • Charity And Effective Altruism Analysis

Survey Meta

Introduction

Hello everybody, this is part one in a series of posts analyzing the 2016 LessWrong Diaspora Survey. The survey ran from March 24th to May 1st and had 3083 respondents.

Almost two thousand eight hundred and fifty hours were spent surveying this year and you've all waited nearly two months from the first survey response to the results writeup. While the results have been available for over a week, they haven't seen widespread dissemination in large part because they lacked a succinct summary of their contents.

When we started the survey in March I posted this graph showing the dropoff in question responses over time:

So it seems only reasonable to post the same graph with this year's survey data:

(I should note that this analysis counts certain things as questions that the other chart does not, so it says there are many more questions than the previous survey when in reality there are about as many as last year.)

2016 Diaspora Survey Stats

Survey hours spent in total: 2849.818888888889

Average number of minutes spent on survey: 102.14404619673437

Median number of minutes spent on survey: 39.775

Mode minutes spent on survey: 20.266666666666666

The takeaway here seems to be that some people take a long time with the survey, raising the average; most people's survey time is somewhere below the forty-five minute mark. LessWrong does a very long survey, and I wanted to make sure that investment was rewarded with a deep, detailed analysis. The analysis weighs in at over four thousand lines of Python code; I hope it's worth the wait.
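
For reference, those duration figures are just summary statistics over per-respondent completion times. A minimal sketch of how you'd compute them, assuming the dataset exposes start and submit timestamps; the filename and column names here are hypothetical:

# Summary statistics for time spent on the survey.
# The filename and the 'startdate'/'submitdate' column names are assumptions.
import pandas as pd

df = pd.read_csv("survey_public.csv", parse_dates=["startdate", "submitdate"])
minutes = (df["submitdate"] - df["startdate"]).dt.total_seconds() / 60

print("Total hours:", minutes.sum() / 60)
print("Mean minutes:", minutes.mean())
print("Median minutes:", minutes.median())
print("Mode minutes:", minutes.round(2).mode().iloc[0])  # mode needs rounding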

Credits

I'd like to thank people who contributed to the analysis effort:

Bartosz Wroblewski

Kuudes on #lesswrong

Obormot on #lesswrong

Two anonymous contributors

And anybody else who I may have forgotten. Thanks again to Scott Alexander, who wrote the majority of the survey and ran it in 2014, and who has also been generous enough to license his part of the survey under a creative commons license along with mine.


Demographics

Age

The 2014 survey gave these numbers for age:

Age: 27.67 + 8.679 (22, 26, 31) [1490]

In 2016 the numbers were:

Mean: 28.108772669759592
Median: 26.0
Mode: 23.0

Most LWers are in their early to mid-twenties, with some older LWers bringing up the average. The average is close enough to the 2014 figure that we can probably say the LW demographic is in their 20s or 30s as a general rule.

Sex and Gender

In 2014 our gender ratio looked like this:

Female: 179, 11.9%
Male: 1311, 87.2%

In 2016 the proportion of women in the community went up by over four percentage points:

Male: 2021 83.5%
Female: 393 16.2%

One hypothesis on why this happened is that the 2016 survey focused on the diaspora rather than just LW. Diaspora communities plausibly have marginally higher rates of female membership. If I had more time I would write an analysis investigating the demographics of each diaspora community, but to answer this particular question I think a couple of SQL queries are illustrative:

(Note: ActiveMemberships one and two are 'LessWrong' and 'LessWrong Meetups' respectively.)
sqlite> select count(birthsex) from data where (ActiveMemberships_1 = "Yes" OR ActiveMemberships_2 = "Yes") AND birthsex="Male";
425
sqlite> select count(birthsex) from data where (ActiveMemberships_1 = "Yes" OR ActiveMemberships_2 = "Yes") AND birthsex="Female";
66
>>> 66 / (425 + 66)
0.13441955193482688

Well, maybe. Of course, before we wring our hands too much on this question it pays to remember that assigned sex at birth isn't the whole story. The gender question in 2014 had these results:

F (cisgender): 150, 10.0%
F (transgender MtF): 24, 1.6%
M (cisgender): 1245, 82.8%
M (transgender FtM): 5, 0.3%
Other: 64, 4.3%

In 2016:

F (cisgender): 321 13.3%
F (transgender MtF): 65 2.7%
M (cisgender): 1829 76%
M (transgender FtM): 23 1%
Other: 156 6.48%

Some things to note here. 16.2% of respondents were assigned female at birth but only 13.3% still identify as women. 1% are transmen, but where did the other 1.9% go? Presumably into the 'Other' field. Let's find out.

sqlite> select count(birthsex) from data where birthsex = "Female" AND gender = "Other";
57
sqlite> select count(*) from data;
3083
>>> 57 / 3083
0.018488485241647746

Seems to be the case. In general the proportion of men is down 6.1% from 2014. We also gained 1.1% transwomen and 0.7% transmen in 2016. Moving away from binary genders, this survey's nonbinary gender count gained nearly 2.2% in proportion. This means that over one in twenty LWers identified as a nonbinary gender, making it a larger demographic than binary transgender LWers! As exciting as that may sound to some ears, the numbers tell one story and the write-ins tell quite another.

It pays to keep in mind that nonbinary genders are a common troll option for people who want to write in criticism of the question. A quick look at the write-ins accompanying the 'Other' option indicates that this is what many people used it for, but by no means all. At 156 responses, that's small enough to be worth doing a quick manual tally.

"Other" Genders, Sample Size: 156
Classification Count
Agender 35
Esoteric 6
Female 6
Male 21
Male-To-Female 1
Nonbinary 55
Objection on Basis Gender Doesn't Exist 6
Objection on Basis Gender Is Binary 7
In Process of Transitioning 2
Refusal 7
Undecided 10

So depending on your comfort zone as to what constitutes a countable gender, there are 90 to 96 valid 'other' answers in the survey dataset. (Labeled dataset)

>>> 90 / 3083
0.029192345118391177

With some cleanup the number trails the binary transgender one by the greater part of a percentage point, but only just. I bet that if you went through and did the same sort of tally on the 2014 survey results you'd find that the proportion of valid nonbinary gender write-ins has gone up between then and now.

Some interesting 'esoteric' answers: Attack Helocopter, Blackstar, Elizer, spiderman, Agenderfluid

For the rest of this section I'm going to just focus on differences between the 2016 and 2014 surveys.

2014 Demographics Versus 2016 Demographics

Country

United States: -1.000% 1298 53.700%
United Kingdom: -0.100% 183 7.600%
Canada: +0.100% 144 6.000%
Australia: +0.300% 141 5.800%
Germany: -0.600% 85 3.500%
Russia: +0.700% 57 2.400%
Finland: -0.300% 25 1.000%
New Zealand: -0.200% 26 1.100%
India: -0.100% 24 1.000%
Brazil: -0.300% 16 0.700%
France: +0.400% 34 1.400%
Israel: +0.200% 29 1.200%
Other: 354 14.646%

[Summing up all the changes shows that nearly 1% of the shift is unaccounted for. My hypothesis is that this 1% went into countries not in the list; this can't be easily confirmed because the 2014 analysis does not list the 'Other' country percentage.]

Race

Asian (East Asian): -0.600% 80 3.300%
Asian (Indian subcontinent): +0.300% 60 2.500%
Middle Eastern: 0.000% 14 0.600%
Black: -0.300% 12 0.500%
White (non-Hispanic): -0.300% 2059 85.800%
Hispanic: +0.300% 57 2.400%
Other: +1.200% 108 4.500%

Sexual Orientation

Heterosexual: -5.000% 1640 70.400%
Homosexual: +1.300% 103 4.400%
Bisexual: +4.000% 428 18.400%
Other: +3.880% 144 6.180%

[LessWrong got 5.3% more gay, 9.1% if you're more loose with the definition. Before we start any wild speculation: the 2014 question included asexuality as an option and it got 3.9% of the responses. We spun this off into a separate question on the 2016 survey, which should explain a significant portion of the change.]

Are you asexual?

Yes: 171 7.4%
No: 2129 92.6%

[Scott said in 2014 that he'd probably 'vastly undercounted' our asexual readers; a near doubling in our count would seem to support this.]

Relationship Style

Prefer monogamous: -0.900% 1190 50.900%
Prefer polyamorous: +3.100% 426 18.200%
Uncertain/no preference: -2.100% 673 28.800%
Other: +0.426% 45 1.926%

[Polyamorous gained three points; presumably the drop in uncertain people went into that bin.]

Number of Partners

0: -2.300% 1094 46.800%
1: -0.400% 1039 44.400%
2: +1.200% 107 4.600%
3: +0.900% 46 2.000%
4: +0.100% 15 0.600%
5: +0.200% 8 0.300%
Lots and lots: +1.000% 29 1.200%

Relationship Goals

...and seeking more relationship partners: +0.200% 577 24.800%
...and possibly open to more relationship partners: -0.300% 716 30.800%
...and currently not looking for more relationship partners: +1.300% 1034 44.400%

Are you married?

Yes: 443 19%
No: 1885 81%

[This question appeared in a different form on the previous survey. Marriage went up by 0.8% since then.]

Who do you currently live with most of the time?

Alone: -2.200% 487 20.800%
With parents and/or guardians: +0.100% 476 20.300%
With partner and/or children: +2.100% 687 29.400%
With roommates: -2.000% 619 26.500%

[This would seem to line up with the result that single LWers went down by 2.3%.]

How many children do you have?

Sum: 598 or greater
0: +5.400% 2042 87.000%
1: +0.500% 115 4.900%
2: +0.100% 124 5.300%
3: +0.900% 48 2.000%
4: -0.100% 7 0.300%
5: +0.100% 6 0.300%
6: 0.000% 2 0.100%
Lots and lots: 0.000% 3 0.100%

[Interestingly enough, childless LWers went up by 5.4%. This would seem incongruous with the previous results. Not sure how to investigate though.]

Are you planning on having more children?

Yes: -5.400% 720 30.700%
Uncertain: +3.900% 755 32.200%
No: +2.800% 869 37.100%

[This is an interesting result: either nearly 4% of LWers are suddenly less enthusiastic about having kids, or new entrants to the survey are less likely to want them and less sure whether they do. Possibly both.]

Work Status

Student: -5.402% 968 31.398%
Academics: +0.949% 205 6.649%
Self-employed: +4.223% 309 10.023%
Independently wealthy: +0.762% 42 1.362%
Non-profit work: +1.030% 152 4.930%
For-profit work: -1.756% 954 30.944%
Government work: +0.479% 135 4.379%
Homemaker: +1.024% 47 1.524%
Unemployed: +0.495% 228 7.395%

[The most interesting result here is the 5.4% drop in students; either LWers who were students no longer are, or new survey entrants aren't students.]

Profession

Art: +0.800% 51 2.300%
Biology: +0.300% 49 2.200%
Business: -0.800% 72 3.200%
Computers (AI): +0.700% 79 3.500%
Computers (other academic, computer science): -0.100% 156 7.000%
Computers (practical): -1.200% 681 30.500%
Engineering: +0.600% 150 6.700%
Finance / Economics: +0.500% 116 5.200%
Law: -0.300% 50 2.200%
Mathematics: -1.500% 147 6.600%
Medicine: +0.100% 49 2.200%
Neuroscience: +0.100% 28 1.300%
Philosophy: 0.000% 54 2.400%
Physics: -0.200% 91 4.100%
Psychology: 0.000% 48 2.100%
Other: +2.199% 277 12.399%
Other "hard science": -0.500% 26 1.200%
Other "social science": -0.200% 48 2.100%

[The largest profession growth for LWers in 2016 was art; that, or it's a consequence of new survey entrants.]

What is your highest education credential earned?

None: -0.700% 96 4.200%
High School: +3.600% 617 26.700%
2 year degree: +0.200% 105 4.500%
Bachelor's: -1.600% 815 35.300%
Master's: -0.500% 415 18.000%
JD/MD/other professional degree: 0.000% 66 2.900%
PhD: -0.700% 145 6.300%
Other: +0.288% 39 1.688%

[Hm, the academic credentials of LWers seem to have gone down some since the last survey. As usual this may also be the result of new survey entrants.]


Footnotes

  1. The 2850-hour estimate of survey hours is very naive. It measures the time between starting and turning in the survey; a person didn't necessarily sit there during all that time. For example, this could easily include people who spent multiple days doing other things before finally finishing their survey.

  2. The Apache helicopter image is licensed under the Open Government License, which requires attribution. That particular edit was done by Wubbles on the LW Slack.

  3. The first published draft of this post made a basic stats error calculating the proportion of women in active memberships one and two: it divided the number of women by the number of men rather than by the number of men and women.