I'm pleased to announce the first annual survey of effective altruists. It's a short survey of around 40 questions (generally multiple choice) that several collaborators and I have put a great deal of work into, and we would be very grateful if you took it. I'll offer $250 of my own money to one participant.
Take the survey at http://survey.effectivealtruismhub.com/
The survey should yield some interesting results, such as data on EAs' political and religious views, the actions they take, and the causes they favour and donate to. It will also enable useful applications, to be launched immediately afterwards, such as a map of EAs with contact details and a cause-neutral register of planned donations or pledges which can be verified each year. I'll also provide an open platform for follow-up surveys and other actions people can take. If you'd like to suggest questions, email me or comment.
Anonymised results will be shared publicly and will not belong to any individual or organisation. The most robust privacy practices will be followed, with clear opt-ins and opt-outs.
I'd like to thank Jacy Anthis, Ben Landau-Taylor, David Moss and Peter Hurford for their help.
Other surveys' results, and predictions for this one
Other surveys have had intriguing results. For example, Joey Savoie and Xio Kikauka interviewed 42 often highly active EAs over Skype, and found that they generally had left-leaning parents, donated 10% on average, and were altruistic before becoming EAs. The time they spent on EA activities was correlated with the percentage they donated (r = 0.4), the time their parents spent volunteering (r = 0.3), and the percentage of their friends who were EAs (r = 0.3).
80,000 Hours also released a questionnaire and, while this was mainly focused on their impact, it yielded a list of which careers people plan to pursue: 16% for academia, 9% each for finance and software engineering, and 8% each for medicine and non-profits.
I'd be curious to hear people's predictions as to what the results of this survey will be; you might enjoy reading or sharing them here. For my part, I'd imagine we have few conservatives or even libertarians, are over 70% male, and have directed most of our donations to poverty charities.
I mean, the measure is going to be something like an EEG or an MRI, where we determine the amount of activity in some brain region. But measuring the electrical properties of that region is just an engineering problem: the units are the same from person to person, and maybe even the range is the same from person to person. That doesn't establish the ethical principle that all people deserve equal consideration (or, in the case of range or variance differences, that neural activity determines how much consideration one deserves).
It's not obvious to me that all agents deserve the same level of moral consideration (i.e. I am open to the possibility of utility monsters), but it is obvious to me that some ways of determining who should be the utility monsters are bad (generally because they're easily hacked or provide unproductive incentives).
Well, it's not like people would go around maximizing the amount of this particular pattern of neural activity in the world: they would go around maximizing pleasure in the-kinds-of-agents-they-care-about, where the pattern is just a way of measuring and establishing which kinds of interventions actually do increase pleasure. (We are talking about humans, not FAI design, right?) If there are ways of hacking the pattern, or of producing it in ways that don't actually correlate with pleasure of the kind we care about, then those can be identified and ignored.