I'm pleased to announce the first annual survey of effective altruists. It's a short survey of around 40 questions (mostly multiple choice) which several collaborators and I have put a great deal of work into, and we would be very grateful if you took it. I'll offer $250 of my own money to one participant.
Take the survey at http://survey.effectivealtruismhub.com/
The survey should yield some interesting results, such as EAs' political and religious views, what actions they take, and the causes they favour and donate to. It will also enable useful applications, to be launched immediately afterwards, such as a map of EAs with contact details and a cause-neutral register of planned donations or pledges which can be verified each year. I'll also provide an open platform for follow-up surveys and other actions people can take. If you'd like to suggest questions, email me or comment.
Anonymised results will be shared publicly and will not belong to any individual or organisation. Robust privacy practices will be followed, with clear opt-ins and opt-outs.
I'd like to thank Jacy Anthis, Ben Landau-Taylor, David Moss and Peter Hurford for their help.
Other surveys' results, and predictions for this one
Other surveys have had intriguing results. For example, Joey Savoie and Xio Kikauka interviewed 42 often highly active EAs over Skype, and found that they generally had left-leaning parents, donated 10% on average, and were altruistic before becoming EAs. The time respondents spent on EA activities was correlated with the percentage they donated (0.4), the time their parents spent volunteering (0.3), and the percentage of their friends who were EAs (0.3).
80,000 Hours also released a questionnaire and, while it was mainly focused on their impact, it yielded a list of the careers people plan to pursue: 16% for academia, 9% each for finance and software engineering, and 8% each for medicine and non-profits.
I'd be curious to hear people's predictions about the results of this survey. You might enjoy reading or sharing them here. For my part, I'd imagine we have few conservatives or even libertarians, are over 70% male, and have directed most of our donations to poverty charities.
Hm, I upvoted the comment, but I don't completely agree with the linked article.
It suggests that when fans of something worry about it becoming too popular, they are objecting to the loss of a positional good. That's just one possible explanation. Sometimes the fact that X becomes widely popular changes X, and there are people who genuinely preferred the original version. As a simple example, imagine that tomorrow a million new readers came to LW; would that be a good thing or a bad thing? It depends on what happens to LW. If the quality of debate remained the same, then it would obviously be a huge win, and anyone who resented it would be guilty of caring too much about their positional good. On the other hand, the new people could easily shift LW towards the popular (in the sense of frequent in the population) stuff, so we would get a lot of nonsense sprinkled with LW buzzwords.
I can imagine leftist groups believing they are working "more meta than thou": solving a problem which, taken in isolation, doesn't seem so important (compared with the causes effective altruists care about), but which would start a huge cascade of improvements afterwards (their model of the world says so; yours doesn't). Distributing mosquito nets instead is not an improvement according to their model.
That doesn't explain why the new X ends up looking like an extreme version of the popular version of X rather than like the original X.