2016 LessWrong Diaspora Survey Analysis: Part Four (Politics, Calibration & Probability, Futurology, Charity & Effective Altruism)
Politics
The LessWrong survey has a very involved section dedicated to politics. In previous analyses the benefits of this weren't fully realized. In the 2016 analysis we can look not just at the political affiliation of a respondent, but at what beliefs are associated with a given affiliation. The charts below summarize most of the results.
Political Opinions By Political Affiliation

Miscellaneous Politics
There were also some other questions in this section which aren't covered by the above charts.
Voting
| Group | Turnout |
|---|---|
| LessWrong | 68.9% |
| Australia | 91% |
| Brazil | 78.90% |
| Britain | 66.4% |
| Canada | 68.3% |
| Finland | 70.1% |
| France | 79.48% |
| Germany | 71.5% |
| India | 66.3% |
| Israel | 72% |
| New Zealand | 77.90% |
| Russia | 65.25% |
| United States | 54.9% |
Calibration And Probability Questions
Calibration Questions
I just couldn't analyze these, sorry guys. I put many hours into trying to get them into a decent format I could even read and that sucked up an incredible amount of time. It's why this part of the survey took so long to get out. Thankfully another LessWrong user, Houshalter, has kindly done their own analysis.
All my calibration questions were meant to satisfy a few essential properties:
- They should be 'self contained', i.e., something you can reasonably answer, or at least try to answer, with a 5th grade science education and normal life experience.
- They should, at least to a certain extent, be Fermi estimable.
- They should progressively scale in difficulty so you can see whether somebody understands basic probability. (e.g., in an 'or' question, do they assign a probability of less than 50% to being right?)
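As a sketch of the kind of analysis these properties enable, here is a minimal calibration check over (stated probability, correct?) pairs. The data and the bucketing scheme are invented for illustration; this is not the survey's actual method:

```python
# Minimal calibration check: bucket answers by stated probability and
# compare against actual accuracy. Data below is made up for illustration.
from collections import defaultdict

answers = [(0.5, True), (0.6, False), (0.9, True), (0.9, True),
           (0.3, False), (0.7, True), (0.1, False), (0.8, True)]

buckets = defaultdict(list)
for p, correct in answers:
    buckets[round(p, 1)].append(correct)

for p in sorted(buckets):
    hits = buckets[p]
    print(f"stated {p:.0%}: actual {sum(hits) / len(hits):.0%} ({len(hits)} answers)")

# Brier score: mean squared distance between stated probability and outcome
# (lower is better-calibrated).
brier = sum((p - c) ** 2 for p, c in answers) / len(answers)
```

A well-calibrated respondent's bucket accuracies track the stated probabilities; the Brier score compresses that into one number.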
At least one person requested a workbook, so I might write more in the future. I'll obviously write more for the survey.
Probability Questions
| Question | Mean | Median | Mode | Stdev |
|---|---|---|---|---|
| Please give the obvious answer to this question, so I can automatically throw away all surveys that don't follow the rules: What is the probability of a fair coin coming up heads? | 49.821 | 50.0 | 50.0 | 3.033 |
| What is the probability that the Many Worlds interpretation of quantum mechanics is more or less correct? | 44.599 | 50.0 | 50.0 | 29.193 |
| What is the probability that non-human, non-Earthly intelligent life exists in the observable universe? | 75.727 | 90.0 | 99.0 | 31.893 |
| ...in the Milky Way galaxy? | 45.966 | 50.0 | 10.0 | 38.395 |
| What is the probability that supernatural events (including God, ghosts, magic, etc) have occurred since the beginning of the universe? | 13.575 | 1.0 | 1.0 | 27.576 |
| What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe? | 15.474 | 1.0 | 1.0 | 27.891 |
| What is the probability that any of humankind's revealed religions is more or less correct? | 10.624 | 0.5 | 1.0 | 26.257 |
| What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then? | 21.225 | 10.0 | 5.0 | 26.782 |
| What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time? | 25.263 | 10.0 | 1.0 | 30.510 |
| What is the probability that our universe is a simulation? | 25.256 | 10.0 | 50.0 | 28.404 |
| What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions? | 83.307 | 90.0 | 90.0 | 23.167 |
| What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity? | 76.310 | 80.0 | 80.0 | 22.933 |
The probability questions are probably the area of the survey I put the least effort into. My plan for next year is to overhaul these sections entirely and try including some Tetlock-esque forecasting questions, a link to some advice on how to make good predictions, etc.
Futurology
This section got a bit of a facelift this year, with new questions on cryonics, genetic engineering, and technological unemployment in addition to the previous years' questions.
Cryonics
Interestingly enough, of those who think it will work with enough confidence to say 'yes', only 14 are actually signed up for cryonics.
sqlite> select count(*) from data where CryonicsNow="Yes" and Cryonics="Yes - signed up or just finishing up paperwork";
14
sqlite> select count(*) from data where CryonicsNow="Yes" and (Cryonics="Yes - signed up or just finishing up paperwork" OR Cryonics="No - would like to sign up but unavailable in my area" OR Cryonics="No - would like to sign up but haven't gotten around to it" OR Cryonics="No - would like to sign up but can't afford it");
34
LessWrongers seem to be very bullish on the underlying physics of cryonics even if they're not as enthusiastic about current methods in use.
The Brain Preservation Foundation also did an analysis of cryonics responses to the LessWrong Survey.
Singularity
SingularityYear
By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a singularity is so unlikely you don't even want to condition on it, leave this question blank.
Mean: 8.110300081581755e+16
Median: 2080.0
Mode: 2100.0
Stdev: 2.847858859055733e+18
I didn't bother to filter out the silly answers for this. Obviously it's a bit hard to see without filtering out the uber-large answers, but the median doesn't seem to have changed much from the 2014 survey.
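For anyone re-running this on the dataset, a minimal outlier filter might look like the following; the sample answers and the [2016, 3000] cutoff window are my own arbitrary choices, not anything the survey used:

```python
# Drop implausible Singularity-year answers before computing summary stats.
# The raw values and the [2016, 3000] window are illustrative assumptions.
from statistics import mean, median

raw = [2030, 2045, 2080, 2100, 2500, 8.1e16, 1998, 2200]
filtered = [y for y in raw if 2016 <= y <= 3000]

print(mean(filtered), median(filtered))
```

Unlike the mean, the median is already robust to a handful of absurd answers, which is why it stays sensible even without filtering.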
Genetic Engineering
Well that's fairly overwhelming.
I find it amusing how the strict "No" group shrinks considerably after this question.
This question is too important to not have an answer to, so I'll do it manually. Unfortunately I can't easily remove the 'excluded' entries so that we're dealing with the exact same distribution, but only 13 or so responses are filtered out anyway.
sqlite> select count(*) from data where GeneticImprovement="Yes";
1100
>>> 1100 + 176 + 262 + 84
1622
>>> 1100 / 1622
0.6781750924784217
67.8% are willing to genetically engineer their children for improvements.
These numbers go about how you would expect, with people being progressively less interested the more 'shallow' a genetic change is seen as.
All three of these seem largely consistent with people's personal preferences about modification. Were I inclined, I could do a deeper analysis that takes survey respondents row by row and looks at the correlation between preference for one's own children and preference for others'.
Technological Unemployment
LudditeFallacy
Do you think the Luddite's Fallacy is an actual fallacy?
Yes: 443 (30.936%)
No: 989 (69.064%)
We can use this as an overall measure of worry about technological unemployment, which would seem to be high among the LW demographic.
UnemploymentYear
By what year do you think the majority of people in your country will have trouble finding employment for automation related reasons? If you think this is something that will never happen leave this question blank.
Mean: 2102.9713740458014
Median: 2050.0
Mode: 2050.0
Stdev: 1180.2342850727339
The question is flawed because you can't distinguish answers of "never happen" from people who just didn't see it. Still, it's an interesting question that would be fun to compare against the estimates for the Singularity.
EndOfWork
Do you think the "end of work" would be a good thing?
Yes: 1238 (81.287%)
No: 285 (18.713%)
Fairly overwhelming consensus, but with a significant minority of people who have a dissenting opinion.
EndOfWorkConcerns
If machines end all or almost all employment, what are your biggest worries? Pick two.
| Question | Count | Percent |
|---|---|---|
| People will just idle about in destructive ways | 513 | 16.71% |
| People need work to be fulfilled and if we eliminate work we'll all feel deep existential angst | 543 | 17.687% |
| The rich are going to take all the resources for themselves and leave the rest of us to starve or live in poverty | 1066 | 34.723% |
| The machines won't need us, and we'll starve to death or be otherwise liquidated | 416 | 13.55% |
The plurality of worries are about elites who refuse to share their wealth.
Existential Risk
XRiskType
Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?
Nuclear war: +4.800% 326 (20.6%)
Asteroid strike: -0.200% 64 (4.1%)
Unfriendly AI: +1.000% 271 (17.2%)
Nanotech / grey goo: -2.000% 18 (1.1%)
Pandemic (natural): +0.100% 120 (7.6%)
Pandemic (bioengineered): +1.900% 355 (22.5%)
Environmental collapse (including global warming): +1.500% 252 (16.0%)
Economic / political collapse: -1.400% 136 (8.6%)
Other: 35 (2.217%)
Significantly more people worried about Nuclear War than last year. Effect of new respondents, or geopolitical situation? Who knows.
Charity And Effective Altruism
Charitable Giving
Income
What is your approximate annual income in US dollars (non-Americans: convert at www.xe.com)? Obviously you don't need to answer this question if you don't want to. Please don't include commas or dollar signs.
Sum: 66054140.47384
Mean: 64569.052271593355
Median: 40000.0
Mode: 30000.0
Stdev: 107297.53606321265
IncomeCharityPortion
How much money, in number of dollars, have you donated to charity over the past year? (non-Americans: convert to dollars at http://www.xe.com/ ). Please don't include commas or dollar signs in your answer. For example, 4000
Sum: 2389900.6530000004
Mean: 2914.5129914634144
Median: 353.0
Mode: 100.0
Stdev: 9471.962766896671
XriskCharity
How much money have you donated to charities aiming to reduce existential risk (other than MIRI/CFAR) in the past year?
Sum: 169300.89
Mean: 1991.7751764705883
Median: 200.0
Mode: 100.0
Stdev: 9219.941506342007
CharityDonations
How much have you donated in US dollars to the following charities in the past year? (Non-americans: convert to dollars at http://www.xe.com/) Please don't include commas or dollar signs in your answer. Options starting with "any" aren't the name of a charity but a category of charity.
| Question | Sum | Mean | Median | Mode | Stdev |
|---|---|---|---|---|---|
| Against Malaria Foundation | 483935.027 | 1905.256 | 300.0 | None | 7216.020 |
| Schistosomiasis Control Initiative | 47908.0 | 840.491 | 200.0 | 1000.0 | 1618.785 |
| Deworm the World Initiative | 28820.0 | 565.098 | 150.0 | 500.0 | 1432.712 |
| GiveDirectly | 154410.177 | 1429.723 | 450.0 | 50.0 | 3472.082 |
| Any kind of animal rights charity | 83130.47 | 1093.821 | 154.235 | 500.0 | 2313.493 |
| Any kind of bug rights charity | 1083.0 | 270.75 | 157.5 | None | 353.396 |
| Machine Intelligence Research Institute | 141792.5 | 1417.925 | 100.0 | 100.0 | 5370.485 |
| Any charity combating nuclear existential risk | 491.0 | 81.833 | 75.0 | 100.0 | 68.060 |
| Any charity combating global warming | 13012.0 | 245.509 | 100.0 | 10.0 | 365.542 |
| Center For Applied Rationality | 127101.0 | 3177.525 | 150.0 | 100.0 | 12969.096 |
| Strategies for Engineered Negligible Senescence Research Foundation | 9429.0 | 554.647 | 100.0 | 20.0 | 1156.431 |
| Wikipedia | 12765.5 | 53.189 | 20.0 | 10.0 | 126.444 |
| Internet Archive | 2975.04 | 80.406 | 30.0 | 50.0 | 173.791 |
| Any campaign for political office | 38443.99 | 366.133 | 50.0 | 50.0 | 1374.305 |
| Other | 564890.46 | 1661.442 | 200.0 | 100.0 | 4670.805 |
This table is interesting given the recent debates about how much money certain causes are 'taking up' in Effective Altruism.
Effective Altruism
Vegetarian
Do you follow any dietary restrictions related to animal products?
Yes, I am vegan: 54 (3.4%)
Yes, I am vegetarian: 158 (10.0%)
Yes, I restrict meat some other way (pescetarian, flexitarian, try to only eat ethically sourced meat): 375 (23.7%)
No: 996 (62.9%)
EAKnowledge
Do you know what Effective Altruism is?
Yes: 1562 (89.3%)
No but I've heard of it: 114 (6.5%)
No: 74 (4.2%)
EAIdentity
Do you self-identify as an Effective Altruist?
Yes: 665 (39.233%)
No: 1030 (60.767%)
The distribution given by the 2014 survey results does not sum to one, so it's difficult to determine whether Effective Altruism's membership actually went up, but if we take the numbers at face value it experienced an 11.13% increase in membership.
EACommunity
Do you participate in the Effective Altruism community?
Yes: 314 (18.427%)
No: 1390 (81.573%)
Same issue as above; taking the numbers at face value, community participation went up by 5.727%.
EADonations
Has Effective Altruism caused you to make donations you otherwise wouldn't?
Yes: 666 (39.269%)
No: 1030 (60.731%)
Wowza!
Effective Altruist Anxiety
EAAnxiety
Have you ever had any kind of moral anxiety over Effective Altruism?
Yes: 501 (29.6%)
Yes but only because I worry about everything: 184 (10.9%)
No: 1008 (59.5%)
There's an ongoing debate in Effective Altruism about what kind of rhetorical strategy is best for getting people on board and whether Effective Altruism is causing people significant moral anxiety.
It certainly appears to be. But is moral anxiety effective? Let's look:
Sample Size: 244
Average amount of money donated by people anxious about EA who aren't EAs: 257.5409836065574
Sample Size: 679
Average amount of money donated by people who aren't anxious about EA who aren't EAs: 479.7501384388807
Sample Size: 249
Average amount of money donated by EAs anxious about EA: 1841.5292369477913
Sample Size: 314
Average amount of money donated by EAs not anxious about EA: 1837.8248407643312
It seems fairly conclusive that anxiety is not a good way to get people to donate more than they already are, but is it a good way to get people to become Effective Altruists?
Sample Size: 1685
P(Effective Altruist): 0.3940652818991098
P(EA Anxiety): 0.29554896142433235
P(Effective Altruist | EA Anxiety): 0.5
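The figures above imply the following counts (my reconstruction: 1685 respondents, 664 EAs, 498 reporting anxiety, 249 both), from which the conditional probability falls out directly:

```python
# Counts inferred from the reported probabilities; treat them as approximate.
total = 1685
ea = 664             # 0.394... * 1685
anxious = 498        # 0.295... * 1685
ea_and_anxious = 249

p_ea = ea / total
p_anxiety = anxious / total
p_ea_given_anxiety = ea_and_anxious / anxious

print(p_ea, p_anxiety, p_ea_given_anxiety)
```

Since P(EA | anxiety) = 0.5 is noticeably higher than the base rate P(EA) ≈ 0.39, anxiety and EA identification are positively associated in this sample, though this says nothing about causal direction.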
Maybe. There is of course an argument to be made that sufficient good done by causing people anxiety outweighs feeding into people's scrupulosity, but it can be discussed after I get through explaining it on the phone to wealthy PR-conscious donors and telling the local all-kill shelter where I want my shipment of dead kittens.
EAOpinion
What's your overall opinion of Effective Altruism?
Positive: 809 (47.6%)
Mostly Positive: 535 (31.5%)
No strong opinion: 258 (15.2%)
Mostly Negative: 75 (4.4%)
Negative: 24 (1.4%)
EA appears to be doing a pretty good job of getting people to like them.
Interesting Tables
| Affiliation | Income | Charity Contributions | % Income Donated To Charity | Total Survey Charity % | Sample Size |
|---|---|---|---|---|---|
| Anarchist | 1677900.0 | 72386.0 | 4.314% | 3.004% | 50 |
| Communist | 298700.0 | 19190.0 | 6.425% | 0.796% | 13 |
| Conservative | 1963000.04 | 62945.04 | 3.207% | 2.612% | 38 |
| Futarchist | 1497494.1099999999 | 166254.0 | 11.102% | 6.899% | 31 |
| Left-Libertarian | 9681635.613839999 | 416084.0 | 4.298% | 17.266% | 245 |
| Libertarian | 11698523.0 | 214101.0 | 1.83% | 8.885% | 190 |
| Moderate | 3225475.0 | 90518.0 | 2.806% | 3.756% | 67 |
| Neoreactionary | 1383976.0 | 30890.0 | 2.232% | 1.282% | 28 |
| Objectivist | 399000.0 | 1310.0 | 0.328% | 0.054% | 10 |
| Other | 3150618.0 | 85272.0 | 2.707% | 3.539% | 132 |
| Pragmatist | 5087007.609999999 | 266836.0 | 5.245% | 11.073% | 131 |
| Progressive | 8455500.440000001 | 368742.78 | 4.361% | 15.302% | 217 |
| Social Democrat | 8000266.54 | 218052.5 | 2.726% | 9.049% | 237 |
| Socialist | 2621693.66 | 78484.0 | 2.994% | 3.257% | 126 |
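For reference, the "% Income Donated To Charity" column is just each group's charity sum divided by its income sum; checking the Anarchist row with figures copied from the table:

```python
# Verify one row of the affiliation table: per-group donation rate.
income = 1677900.0   # Anarchist group income sum
charity = 72386.0    # Anarchist group charity sum

pct_of_income = 100 * charity / income
print(f"{pct_of_income:.3f}%")  # 4.314%, matching the table
```

The "Total Survey Charity %" column instead divides each group's charity sum by the charity total across all respondents, so the two columns answer different questions.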
| Community | Count | % In Community | Sample Size |
|---|---|---|---|
| LessWrong | 136 | 38.418% | 354 |
| LessWrong Meetups | 109 | 50.463% | 216 |
| LessWrong Facebook Group | 83 | 48.256% | 172 |
| LessWrong Slack | 22 | 39.286% | 56 |
| SlateStarCodex | 343 | 40.98% | 837 |
| Rationalist Tumblr | 175 | 49.716% | 352 |
| Rationalist Facebook | 89 | 58.94% | 151 |
| Rationalist Twitter | 24 | 40.0% | 60 |
| Effective Altruism Hub | 86 | 86.869% | 99 |
| Good Judgement(TM) Open | 23 | 74.194% | 31 |
| PredictionBook | 31 | 51.667% | 60 |
| Hacker News | 91 | 35.968% | 253 |
| #lesswrong on freenode | 19 | 24.675% | 77 |
| #slatestarcodex on freenode | 9 | 24.324% | 37 |
| #chapelperilous on freenode | 2 | 18.182% | 11 |
| /r/rational | 117 | 42.545% | 275 |
| /r/HPMOR | 110 | 47.414% | 232 |
| /r/SlateStarCodex | 93 | 37.959% | 245 |
| One or more private 'rationalist' groups | 91 | 47.15% | 193 |
| Affiliation | EA Income | EA Charity | Sample Size |
|---|---|---|---|
| Anarchist | 761000.0 | 57500.0 | 18 |
| Futarchist | 559850.0 | 114830.0 | 15 |
| Left-Libertarian | 5332856.0 | 361975.0 | 112 |
| Libertarian | 2725390.0 | 114732.0 | 53 |
| Moderate | 583247.0 | 56495.0 | 22 |
| Other | 1428978.0 | 69950.0 | 49 |
| Pragmatist | 1442211.0 | 43780.0 | 43 |
| Progressive | 4004097.0 | 304337.78 | 107 |
| Social Democrat | 3423487.45 | 149199.0 | 93 |
| Socialist | 678360.0 | 34751.0 | 41 |
Review and Thoughts on Current Version of CFAR Workshop
Outline: I will discuss my background and how I prepared for the workshop, and how I would prepare differently if I could do it again; my experience at the CFAR workshop, and what I would have done differently; my take-aways from the workshop, and what I am doing to integrate CFAR strategies into my life; and finally, my assessment of its benefits and what others who attend might expect to get out of it.
Acknowledgments: Thanks to fellow CFAR alumni and CFAR staff for feedback on earlier versions of this post
Introduction
Many aspiring rationalists have heard about the Center for Applied Rationality, an organization devoted to teaching applied rationality skills to help people improve their thinking, feeling, and behavior patterns. This nonprofit does so primarily through its intense workshops, and is funded by donations and revenue from its workshops. It fulfills its social mission through conducting rationality research and through giving discounted or free workshops to those people its staff judge as likely to help make the world a better place, mainly those associated with various Effective Altruist cause areas, especially existential risk.
To be fully transparent: even before attending the workshop, I already had a strong belief that CFAR is a great organization and have been a monthly donor to CFAR for years. So keep that in mind as you read my description of my experience (you can become a donor here).
Preparation
First, some background about myself, so you know where I’m coming from in attending the workshop. I’m a professor specializing in the intersection of history, psychology, behavioral economics, sociology, and cognitive neuroscience. I discovered the rationality movement several years ago through a combination of my research and attending a LessWrong meetup in Columbus, OH, and so come from a background of both academic and LW-style rationality. Since discovering the movement, I have become an activist in the movement as the President of Intentional Insights, a nonprofit devoted to popularizing rationality and effective altruism (see here for our EA work). So I came to the workshop with some training and knowledge of rationality, including some CFAR techniques.
To help myself prepare for the workshop, I reviewed existing posts about CFAR materials, with an eye toward being careful not to assume that the actual techniques match their actual descriptions in the posts.
I also delayed a number of tasks until after the workshop, tying up loose ends. In retrospect, I wish I had not left myself ongoing tasks to do during the workshop. As part of my leadership of InIn, I coordinate about 50ish volunteers, and I wish I had placed those responsibilities on someone else for the duration of the workshop.
Before the workshop, I worked intensely on finishing up some projects. In retrospect, it would have been better to get some rest and come to the workshop as fresh as possible.
There were some communication snafus with logistics details before the workshop. It all worked out in the end, but I would have told myself in retrospect to get the logistics hammered out in advance to not experience anxiety before the workshop about how to get there.
Experience
The classes were well put together, had interesting examples, and provided useful techniques. FYI, my experience in the workshop was that reading these techniques in advance was not harmful, but that the techniques in the CFAR classes were quite a bit better than the existing posts about them, so don’t assume you can get the same benefits from reading posts as attending the workshop. So while I was aware of the techniques, the ones in the classes definitely had optimized versions of them - maybe because of the “broken telephone” effect or maybe because CFAR optimized them from previous workshops, not sure. I was glad to learn that CFAR considers the workshop they gave us in May as satisfactory enough to scale up their workshops, while still improving their content over time.
Just as useful as the classes were the conversations held in between and after the official classes ended. Talking about them with fellow aspiring rationalists and seeing how they were thinking about applying these to their lives was helpful for sparking ideas about how to apply them to my life. The latter half of the CFAR workshop was especially great, as it focused on pairing off people and helping others figure out how to apply CFAR techniques to themselves and how to address various problems in their lives. It was especially helpful to have conversations with CFAR staff and trained volunteers, of whom there were plenty - probably about 20 volunteers/staff for the 50ish workshop attendees.
Another super-helpful aspect of the conversations was networking and community building. Now, this may have been more useful to some participants than others, so YMMV. As an activist in the movement, I talked to many folks at the CFAR workshop about promoting EA and rationality to a broad audience. I was happy to introduce some people to EA; my most positive conversation there involved encouraging someone to switch his x-risk efforts from nuclear disarmament to AI safety research as a means of addressing long/medium-term risk, and promoting rationality as a means of addressing short/medium-term risk. Others who were already familiar with EA were interested in ways of promoting it broadly, while some aspiring rationalists expressed enthusiasm over becoming rationality communicators.
Looking back at my experience, I wish I was more aware of the benefits of these conversations. I went to sleep early the first couple of nights, and I would have taken supplements to enable myself to stay awake and have conversations instead.
Take-Aways and Integration
The aspects of the workshop that I think will help me most are what CFAR staff called "5-second" strategies - brief tactics and techniques that can be executed in 5 seconds or less to address various problems. The things we learned at the workshop that I was already familiar with - Trigger Action Plans, Goal Factoring, Murphyjitsu, Pre-Hindsight - require some time to learn and practice, often with pen and paper as part of the work. However, with sufficient practice, one can develop brief techniques that mimic various aspects of the more thorough techniques, and apply them quickly to in-the-moment decision-making.
Now, this doesn’t mean that the longer techniques are not helpful. They are very important, but they are things I was already generally familiar with, and already practice. The 5-second versions were more of a revelation for me, and I anticipate will be more helpful for me as I did not know about them previously.
Now, CFAR does a very nice job of helping people integrate the techniques into daily life, since a common failure mode for attendees is going home and not practicing the techniques. So they hold 6 Google Hangouts with CFAR staff and all attendees who want to participate, offer 4 one-on-one sessions with CFAR-trained volunteers or staff, and also pair you with another attendee for post-workshop conversations. I plan to take advantage of all of these, although my pairing did not work out.
For integration of CFAR techniques into my life, I found the CFAR strategy of "Overlearning" especially helpful. Overlearning refers to trying to apply a single technique intensely for a while to all aspects of one's activities, so that it gets internalized thoroughly. I will first focus on overlearning Trigger Action Plans, following the advice of CFAR.
I also plan to teach CFAR techniques in my local rationality dojo, as teaching is a great way to learn, naturally.
Finally, I plan to integrate some CFAR techniques into Intentional Insights content, at least the more simple techniques that are a good fit for the broad audience with which InIn is communicating.
Benefits
I have a strong probabilistic belief that having attended the workshop will improve my capacity to be a person who achieves my goals for doing good in the world. I anticipate I will be able to figure out better whether the projects I am taking on are the best uses of my time and energy. I will be more capable of avoiding procrastination and other forms of akrasia. I believe I will be more capable of making better plans, and acting on them well. I will also be more in touch with my emotions and intuitions, and be able to trust them more, as I will have more alignment among different components of my mind.
Another benefit is meeting the many other people at CFAR who have similar mindsets. Here in Columbus, we have a flourishing rationality community, but it’s still relatively small. Getting to know 70ish people, attendees and staff/volunteers, passionate about rationality was a blast. It was especially great to see people who were involved in creating new rationality strategies, something that I am engaged in myself in addition to popularizing rationality - it’s really heartening to envision how the rationality movement is growing.
These benefits should resonate strongly with those who are aspiring rationalists, but they are really important for EA participants as well. I think one of the best things that EA movement members can do is study rationality, and it's something we promote to the EA movement as part of InIn's work. What we offer are articles and videos, but coming to a CFAR workshop is a much more intense and cohesive way of getting these benefits. Imagine all the good you can do for the world if you are better at planning, organizing, and enacting EA-related tasks. Rationality is what has helped me and other InIn participants make the major impact that we have been able to make, and there are a number of EA movement members who have rationality training and who report similar benefits. Remember, as an EA participant you can likely get a scholarship covering part or all of the regular $3900 workshop price, as I did myself, and over time you are likely to be able to save more lives as a result of attending, even if you have to pay some costs upfront.
Hope these thoughts prove helpful to you all, and please contact me at gleb@intentionalinsights.org if you want to chat with me about my experience.
[Link] The Much Forgotten and Ignored Need to Have Workable Solutions
I ran across this article: The Much Forgotten and Ignored Need to Have Workable Solutions, that might interest some, either for the Rationality or the Effective Altruism aspects.
For a very rough summary: academia (more specifically, the humanities) gives too much credit to describing problems (i.e. complaining) and not enough to thinking about good solutions, which is the difficult and important part.
Some quotes if you don't want to read the whole thing:
Of course the biggest assumption of all that is being shown to be inconsistent with actual behaviour is that of rationality – Richard Thaler’s Misbehaving and other behavioural research is showing that people are subject to various biases and often do not make rational decisions. This is especially scary for theoretical economists, whose entire universe pretty much depends on the rational representative household.
If their assumptions are rather strict and may not hold up in real-life, their call for a policy response is technically null and void. A good example is with auctions, where previously designers (economists) would rely heavily on the Revenue Equivalence Theorem in creating the rules of auctions. Yet, many of them forget that the assumptions of Revenue Equivalence aren’t always satisfied, for example the possibility of collusion, which can prove to significantly reduce the revenue of the seller.
The best paper on a time economists forgot about ECON 101 has to be this review of European 3G auctions. What was most clear for me from Klemperer’s work is that you can get all up in complex auction theory and mechanism design, but if you forget how very basic concepts in economics work in conjunction with that, you can get easily derailed. They basically put the cart before the horse – they forgot that they had to satisfy their own assumptions before applying their model to reality.
More questions: is the policy they suggest cumbersome, intangible and unable to be monitored for success? This is another pet peeve of mine – my blood boils when people say “We need to fix gender stereotypes! We need to create awareness! We need to change societal attitudes!” without suggesting how it should be done, how this monumental task will be measured for good performance and how they propose regulating all the sources of these things.
Also, how would they justify that spending? Have they thought about the parameters which would determine success or failure? What kind of campaign or agency are they suggesting to carry out these monumental tasks? What are the conditions for success?
Last is that sometimes when people chuck the words “Policy Implications” around, they often have no idea what a deep and complicated field policy design actually is. To be fair, I’m still learning about it and I don’t expect university students or even researchers not involved in related areas to have a full understanding of it.
However, it’s not like economists don’t have a basic understanding of incentives, principal-agent relationships, transaction-cost economics and externalities. Those four areas should be enough to at least attempt a more rigorous analysis of possible policies, rather than simply providing an offhand description of the policy based on a single relationship.
At the end of the day, there’s just a lot of arrogance among some researchers who like to imply that their research necessitates action – yet they haven’t put any meaningful or strategic thought whether the research truly necessitates action in the first place (especially in comparison to cost-equivalent policies in similar areas, or dealing with similar problems), whether the action will actually lead to the desired outcome (checking if assumptions are realistic/addressing relevant design issues) or whether there will be any undesirable externalities or further implications of the policy.
[...]
Maybe the worst thing about all of this is that when I was growing up, I always looked up to people who were aware of issues outside themselves, especially if the issues didn’t necessarily affect them. They seemed so cool and aware and intelligent. I’d watch these people with great admiration for their insight.
Now a lot of that is gone. The people about whom once I thought, wow, this person is so aware and intelligent, I now realize aren’t actually that intelligent. They’re just pretending to be. They’re just better at vocalizing some of the things that anyone can see and turning them into long spiels about what’s wrong with the world. They haven’t really thought about it.
(ironically (intentionally?), the post is mostly complaining about a problem, without offering a workable solution, but I still liked it)
LW survey: Effective Altruists and donations
Analysis of 2013-2014 LessWrong survey results on how much more self-identified EAers donate
If interventions changing population size are cheap, they may be the best option independent of your population ethics
In this post I'll explain why you might want to assist altruistic interventions that change the size of the world population, regardless of how valuable you think additional lives are. The argument relies on combining two population-changing interventions so that together they produce the effect of a non-population-changing intervention, but at a lower cost.
Suppose you can donate to the following 3 interventions:
- "Growth": increase one future person's income from $500/yr to $5,000/yr for $10,000
- "Plus": cause one more person to be born in a middle-income country (income ~$5,000/yr) for $6,000
- "Minus": cause one less person to be born in a poor country (income ~$500/yr) for $1,000
Two caveats about treating Plus+Minus as a cheaper substitute for Growth:
- Plus+Minus is more costly than Growth in reality (quite likely)
- Growth and Plus+Minus are actually not equivalent, since Growth actually helps a particular person (again, see my last post)
Some ways to implement Plus- or Minus-style interventions:
- Education about contraception
- Having children yourself (cost varies from person to person)
- Paying others to have children
- Subsidizing contraception
- Subsidizing surrogacy (there are replaceability issues here, but I couldn't find any estimates of supply/demand elasticity)
- Being a surrogate yourself (doesn't cost you any money, but can be unpleasant, so the cost varies from person to person)
Population ethics in practice
There are many different ideas about how utilitarians should value the number of future people. Unfortunately, it is difficult to take all of them into account when deciding among public policies, charities, etc. Arguments about principles like total utilitarianism, average utilitarianism, critical-level utilitarianism, etc. often come from a "global" perspective:
- Does the principle imply that we should have a very large population with a very low quality of life? (Repugnant Conclusion)
- If average utility is negative, does the principle imply that it's good to add additional people with slightly less negative utility? (Sadistic Conclusion)
- Is adding additional people valuable when the population is small, but less valuable when it is large? If so, how large does a population have to be to be considered "large"? ("diminishing marginal value" of people)
But 1% of the world population is 70 million people, and virtually no policy will have that large of an effect. So when applying population ethics to real decisions, I think it's best to act as if critical-level utilitarianism (CLU) is true, and frame disagreements as disagreements about the right value of the critical level u_0 (the utility level at which adding a person is morally neutral), and about which income level corresponds to it. That way, it's much easier to see the practical implications of your viewpoint, and people who disagree in principle may find that they agree in practice about what u_0 should be, and therefore about how to choose the best policy/charity/cause/etc. The main exception is existential risk prevention, where success will change the population by a very large amount.
PDF with detailed derivations (uses slightly different notation): https://drive.google.com/file/d/0B-zh2f7_qtukMFhNYkR4alRsSFk/edit?usp=sharing
On not diversifying charity
A common belief within the Effective Altruism movement is that you should not diversify charity donations when your donation is small compared to the size of the charity. This is counter-intuitive, and most people disagree with it. A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified has already been written, but it uses a simplistic model. Perhaps you're uncertain about which charity is best; perhaps charities are not continuous, let alone differentiable, and any donation is worthless unless it gives the charity enough money to finally afford another project; perhaps your utility function is nonlinear; and to top it all off, rather than accepting the standard idea of expected utility, perhaps you are risk-averse.
Standard Explanation:
If you are too lazy to follow the link, or you just want to see me rehash the same argument, here's a summary.
The utility of a donation is differentiable. That is to say, if donating one dollar gives you one utilon, donating another dollar will give you close to one utilon. Not exactly the same, but close. This means that, for small donations, it can be approximated as a linear function. In this case, the best way to donate is to find the charity that has the highest slope, and donate everything you can to it. Since the amount you donate is small compared to the size of the charity, a first-order approximation will be fairly accurate. The amount of good you do with that strategy is close to what you predicted it would do, which is more than you'd predict of any other strategy, which is close to what you'd predict for them, so even if this strategy is sub-optimal, it's at least very close.
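The "highest slope" argument can be sketched numerically. The concave utility curves below are invented purely for illustration; the only thing that matters is that charity A's marginal slope at its current budget is steeper than B's:

```python
import math

# Made-up diminishing-returns impact curves for two charities
def utilons_a(budget):
    return 100 * math.log1p(budget / 1e6)

def utilons_b(budget):
    return 80 * math.log1p(budget / 1e6)

budget_a = budget_b = 5_000_000.0   # existing funding for each charity
my_money = 100.0                    # my small donation

# Total extra good done for each way of splitting my $100
results = {}
for to_a in (0.0, 50.0, 100.0):
    to_b = my_money - to_a
    gain = (utilons_a(budget_a + to_a) - utilons_a(budget_a)
            + utilons_b(budget_b + to_b) - utilons_b(budget_b))
    results[to_a] = gain

best = max(results, key=results.get)
print(best)   # the corner allocation: everything to the steeper charity
```

Splitting scores between the two corner allocations; donating everything to the charity with the higher marginal slope is optimal, and the first-order approximation is nearly exact because $100 barely moves a multi-million-dollar budget.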
Corrections to Account for Reality:
Uncertainty:
Uncertainty is simple enough. Just replace utility with expected utility. Everything will still be continuous, and the reasoning works pretty much the same.
Nonlinear Utility Function:
If your utility function is nonlinear, this is fine as long as it's differentiable. Perhaps saving a million lives isn't a million times better than saving one, but saving the millionth life is about as good as the one after that, right? Maybe each additional person counts for a little less, but it's not as though the first million all matter the same and then you don't care about anyone after that.
In this case, the effect of the charity is differentiable with respect to the donation, and the utility is differentiable with respect to the effect of the charity, so the utility is differentiable with respect to the donation.
Risk-Aversion:
If you're risk-averse, it gets a little more complicated.
In this case, you don't use expected utility. You use something else, which I will call meta-utility. Perhaps it's expected utility minus the standard deviation of utility. Perhaps it's expected utility, but largely ignoring extreme tails. Formally, it is a function from a random variable representing all the possibilities of what could happen to the reals. Strictly speaking, you only need an ordering, but that's not good enough here, since it needs to be differentiable.
Differentiable is more confusing in this case. It depends on the metric you're using. The way we'll use it here is that having a sufficiently small probability of a given change, or a given probability of a sufficiently small change, counts as a small change. For example, if you only care about the median utility, this isn't differentiable. If I flip a coin, and you win a million dollars if it lands on heads, then you will count that as worth a million dollars if the coin is slightly weighted towards heads, and nothing if it's slightly weighted towards tails, no matter how close it is to being fair. But that's not realistic. You can't track probabilities that precisely. You might care less about the tails, so that only things in the 40%-60% range matter much, but you're going to pick something continuous. In fact, I think we can safely say that you're going to pick something differentiable. If I add a 0.1% chance of saving a life given some condition, it will make about the same difference as adding another 0.1% chance given the same condition. If you're risk-averse, you'd care more about a 0.1% chance of saving a life if it takes effect during the worst-case scenario than during the best case, but you'd still care about the same for a 0.1% chance of saving a life during the worst case as for upgrading it to saving two lives in that case.
Once you accept that it's continuous, the same reasoning follows as with expected utility. A continuous function of a continuous function is continuous, so the meta-utility of a donation with respect to the amount donated is continuous.
To make the reasoning more clear, here's an example:
Charity A saves one life per grand. Charity B saves 0.9 lives per grand. Charity A has ten million dollars, and Charity B has five million. One or more of these charities may be fraudulent, and not actually doing any good. You have $100, and you can decide where to donate it.
The naive view is to split the $100, since you don't want to risk spending it on something fraudulent. That makes sense if you care about how many lives you save, but not if you care about how many people die. They sound like they're the same thing, but they're not.
If you donate everything to Charity A, it has $10,000,100 and Charity B has $5,000,000. If you donate half and half, Charity A has $10,000,050 and Charity B has $5,000,050. It's a little more diversified. Not much more, but you're only donating $100. Maybe the diversification outweighs the good, maybe not. But if you decide that it is diversifying enough to matter more, why not donate everything to Charity B? That way, Charity A has $10,000,000, and Charity B has $5,000,100. If you were controlling all the money, you'd probably move a million or so from Charity A to Charity B, until it's well and truly diversified. Or maybe it's already pretty close to the ideal and you'd just move a few grand. You'd definitely move more than $100. There's no way it's that close to the optimum. But you only control the $100, so you just do as much as you can with that to make it more diversified, and send it all to Charity B. Maybe it turns out that Charity B is a fraud, but all is not lost, because other people donated ten million dollars to Charity A, and lots of lives were saved, just not by you.
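A rough way to see the "diversify the whole pot, not your $100" point: assume, purely for illustration, that each charity independently has a 10% chance of being a fraud that saves no lives, and compare the mean and spread of total lives saved under each allocation of your $100:

```python
import math

P_FRAUD = 0.10                                # hypothetical fraud probability
RATE = {"A": 1 / 1000, "B": 0.9 / 1000}       # lives saved per dollar
OTHERS = {"A": 10_000_000, "B": 5_000_000}    # everyone else's donations

def portfolio(my_a, my_b):
    """Mean and std of TOTAL lives saved by all money, not just mine."""
    totals = {"A": OTHERS["A"] + my_a, "B": OTHERS["B"] + my_b}
    mean = var = 0.0
    for name, dollars in totals.items():
        lives = dollars * RATE[name]          # lives saved if not a fraud
        mean += lives * (1 - P_FRAUD)
        var += lives ** 2 * P_FRAUD * (1 - P_FRAUD)
    return mean, math.sqrt(var)

for my_a, my_b in [(100, 0), (50, 50), (0, 100)]:
    mean, std = portfolio(my_a, my_b)
    print((my_a, my_b), round(mean, 2), round(std, 2))
```

Whatever you do with the $100, the portfolio's mean and standard deviation barely move. The all-to-B allocation is (very slightly) the most diversified and the all-to-A allocation has the (very slightly) highest mean; the 50/50 split is optimal for neither goal, which is the post's point.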
Discontinuity:
The final problem to look at is that the effects of donations aren't continuous. The place I've seen this come up the most is in discussions of vegetarianism. If you don't eat meat, it's not going to make enough difference to keep the stores from ordering another crate of meat, which means exactly the same number of animals are slaughtered.
Unless, of course, you were the straw that broke the camel's back, and you did keep a store from ordering a crate of meat, and you made a huge difference.
There are times where you might be able to figure that out beforehand. If you're deciding whether or not to vote, and you're not in a battleground state, you know you're not going to cast the deciding vote, because you have a fair idea of who will win and by how much. But you have no idea at what point a store will order another crate of meat, or when a charity will be able to send another crate of mosquito nets to Africa, or something like that. If you make a graph of the number of crates a charity sends by percentile, you'll get a step function, where there's a certain chance of sending 500 crates, a certain chance of sending 501, etc. You're just shifting the whole thing to the left by epsilon, so it's a little more likely each shipment will be made. What actually happens isn't continuous with respect to your donation, but you're uncertain, and taking what happens as a random variable, it is continuous.
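This smoothing-by-uncertainty point can be simulated. The crate cost and baseline below are hypothetical; the only point is that an unknown threshold turns a step function into a smooth expectation:

```python
import random

random.seed(0)
CRATE_COST = 1000      # hypothetical: dollars of funding per extra crate shipped
BASELINE = 500_000     # everyone else's donations

def crates(total, offset):
    """Crates shipped: a step function of total money, with an unknown offset."""
    return (total + offset) // CRATE_COST

def expected_crates(my_donation, trials=100_000):
    # The unknown offset makes the jump points of the step function uncertain,
    # so the EXPECTED number of crates varies smoothly with the donation.
    total = 0.0
    for _ in range(trials):
        offset = random.uniform(0, CRATE_COST)
        total += crates(BASELINE + my_donation, offset)
    return total / trials

print(expected_crates(0), expected_crates(100))
```

Your $100 raises the expected number of crates by about 100/1000 = 0.1: you almost certainly change nothing, but occasionally you trigger a whole extra crate, and the expectation is exactly as continuous as the argument needs.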
A few other notes:
Small Charities:
In the case of a sufficiently small charity or large donation, the argument is invalid. It's not that it takes more finesse like those other things I listed. The conclusion is false. If you're paying a good portion of the budget, and the marginal effects change significantly due to your donations, you should probably donate to more than one charity even if you're not risk-averse and your utility function is linear.
I would expect that the next best charity you manage to find would be worse by more than a few percent, so I really doubt it would be worth diversifying unless you personally are responsible for more than a third of the donations.
An example of this is keeping money for yourself. The thousandth dollar you spend on yourself has about a tenth of the effect the hundredth does, and the entire budget is donated by you. The only time you shouldn't diversify is if the marginal benefit of the last dollar you spend on yourself is still higher than what you could get by donating to charity.
Another example is avoiding animal products. Avoiding steak is much more cost-effective than avoiding milk, but once you've stopped eating meat, you're stuck with things like avoiding milk.
Timeless Decision Theory:
If other people are going to make similar decisions to you, your effective donation is larger, so the caveats about small charities apply. That being said, I don't think this is really much of an issue.
If everyone is choosing independently, even if most of their choices correlate, the end result will be that the charities get just enough funding that some people donate to some and others donate to others. If this happens, chances are that it would be worthwhile for a few people to actually split their donations, but it won't make a big difference. They might as well just donate it all to one.
I think this will only become a problem if you're just donating to the top charity on GiveWell, regardless of how closely they rated second place, or you're just donating based purely on theory, and you have no idea if that charity is capable of using more money.
What can total utilitarians learn from empirical estimates of the value of a statistical life?
This post was inspired by Carl Shulman's blog post from last month—if you have time, read that first, since this is basically a response to it. My goal here is to combine
- Empirical studies of how much people are willing to pay to reduce their risk of death, and
- The "total utilitarian" assumption that potential people are as important as existing people, and the value of an additional person is independent of the number of preexisting people
- An additional (quite strong!) assumption that the utility gain from being born and becoming an adult is the same as the utility loss from a premature adult death
Suppose everyone has identical preferences, and only two variables affect expected utility: their probability of survival s and their income y. Since von Neumann–Morgenstern utility functions are invariant under affine transformations, we can define the utility of being dead as 0 and still have one degree of freedom left (two utility functions are equivalent iff they are related by a positive linear transformation). Fixing a reference (minimum) income level y_0, we can always write the utility function as

U(s, y) = s (u_0 + u(y)),

where u is some function defined on y >= y_0 with u(y_0) = 0. This condition ensures that u_0 is the utility at the minimum income. For instance, if utility from income is logarithmic, we can let u(y) = ln(y / y_0). A logarithm with any other base can be turned into ln by a linear transformation, so the choice of base doesn't matter.

We can infer u_0 from empirical estimates of the value of a statistical life if we have a hypothesis for the form of u, so total utilitarians should pay a lot of attention to these estimates! If you're willing to pay dy for a small relative increase in your probability of survival, ds / s (as opposed to an absolute increase ds), then your value of life V is defined as

V = dy / (ds / s).

If your utility from income takes the same form as u and you're rational, then it's also true that

V = (u_0 + u(y)) / u'(y).

In other words, the value of life is the marginal rate of substitution between income and log survival probability. So

V = dy / d(ln s)

and

u_0 = V u'(y) - u(y).

In the case of u(y) = ln(y / y_0), we have

u_0 = V / y - ln(y / y_0).

$6 million is a reasonable estimate (although on the low side) for the value of a statistical life. V is in units of income, so the $6M estimate needs to be translated into an income stream. At an interest rate of 3% over 40 years, this will require payments of ~$257,582 per year. If the $6M estimate was for people making $50,000 a year, then V / y ≈ 5.15. With y_0 at $300 per year, this gives us u_0 ≈ 0.04. It's just a coincidence that u_0 is so close to 0: slightly different parameters will shift u_0 substantially away from that point. I biased all my parameter estimates (except for the interest rate, which I understand very poorly) so that u_0 would have a downward bias, so if my estimates are wrong u_0 is probably higher.
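The arithmetic in the last paragraph can be checked with a short sketch. It assumes logarithmic utility from income and continuous discounting, one convention that reproduces the post's ~$257,582/yr figure (other compounding conventions give slightly different numbers):

```python
import math

vsl = 6_000_000   # lump-sum value of a statistical life, $
r = 0.03          # interest rate
years = 40
y = 50_000        # income of the people the VSL estimate applies to, $/yr
y0 = 300          # reference (minimum) income, $/yr

# Convert the lump sum into a constant income stream V (continuous discounting)
annuity_factor = (1 - math.exp(-r * years)) / r
V = vsl / annuity_factor           # ~ $257,582/yr, matching the post

# With u(y) = ln(y / y0), the critical level falls out as u0 = V/y - ln(y/y0)
u0 = V / y - math.log(y / y0)
print(round(V), round(u0, 3))
```

With these (deliberately conservative) inputs, the implied critical level u0 comes out barely above zero, which is the post's "coincidence."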
I'm not going to draw any conclusions about what a total utilitarian should do, since there are many problems with this method of estimation:
- The value-of-statistical-life studies are from high-income countries, so it's questionable to extrapolate to very low incomes.
- Utility from income probably isn't logarithmic, since people exhibit relative risk aversion.
- The value of u_0 depends strongly on the interest rate.
- I assumed that somebody with $300 per year has the same life expectancy as someone with $50K per year. This isn't as big of a problem as it seems. If they live half as long, you can compare two $300/year people versus one $50K/year person and get a similar result.
'Effective Altruism' as utilitarian equivocation.
Summary: The term 'effective altruist' invites confusion between 'the right thing to do' and 'the thing that most efficiently promotes welfare.' I think this creeping utilitarianism is a bad thing, and should at least be made explicit. This is not to accuse anyone of deliberate deception.
Over the last year or so, the term 'Effective Altruist' has come into use. I self-identified as one on the LW survey, so I speak as a friend. However, I think there is a very big danger with the terminology.
The term 'Effective Altruist' was born out of the need for a label for those people who were willing to dedicate their lives to making the world a better place in rational ways, even if that meant doing counter-intuitive things, like working as an Alaskan truck driver. The previous term, 'really super awesome hardcore people', was indeed a little inelegant.
However, 'Effective Altruist' has a major problem: it refers to altruism, not ethics. Altruism may be a part of ethics (though the etymology of the term gives some concern), but it is not all there is to ethics. Value is complex. Helping people is good, but so is truth, and justice, and freedom, and beauty, and loyalty, and fairness, and honor, and fraternity, and tradition, and many other things.
A charity that very efficiently promoted beauty and justice, but only inefficiently produced happiness, would probably not be considered an EA organization. A while ago I suggested to [one of the leaders of the Center for Effective Altruism] the creation of a charity to promote promise-keeping. I didn't claim such a charity would be an optimal way of promoting happiness, and to them, this was sufficient to show 1) that it was not EA - and hence 2) inferior to EA things.
Such thinking involves either an equivocation or a concealed premise. If 'EA' is interpreted literally, so that 'the primary/driving goal is to help others', then something not being EA is insufficient for it to not be the best thing you could do - there is more to ethics and the good than altruism and promoting welfare. Failure to promote one dimension of the good doesn't mean something is not the optimal way of promoting their sum. On the other hand, if 'EA' is interpreted broadly, as being concerned with 'happiness, health, justice, fairness and/or other values', then merely failing to promote welfare/happiness does not mean a cause is not EA. Much EA discussion, like that on the popular facebook group, equivocates between these two meanings.*
...Unless one thought that helping people was all there was to ethics, in which case this is not equivocation. As virtually all of CEA's leaders are utilitarians, it is plausible that this was the concealed premise in their argument. In this case there is no equivocation, but a different logical fallacy, that of an omitted premise, has been committed. And we should be just as wary as in the case of equivocation.
Unfortunately, utilitarianism is false, or at least not obviously true. Something can be the morally best thing to do while not being EA. Just because some utilitarians have popularized a term which cleverly equivocates between "promotes welfare" and "is the best thing" does not mean we should be taken in. Every fashionable ideology likes to blur the lines between its goals and its methods (is Socialism about helping the working man or about state ownership of industry? is libertarianism about freedom or low taxes?) in order to make people who agree with the goals forget that there might be other means of achieving them.
There are two options: recognize 'EA' as referring to only a subset of morality, or recognize as 'EA' actions and organizations that are ethical through ways other than producing welfare/happiness.
* Yes, one might say that promoting X's honor thereby helped X, and thus there was no distinction. However, I think people who make this argument in theory are unlikely to observe it in practice - I doubt that there will be an EA organisation dedicated to pure retribution, even if it was both extremely cheap to promote and a part of ethics.
[Prize] Essay Contest: Cryonics and Effective Altruism
I'm starting a contest for the best essay describing why a rational person of a not particularly selfish nature might consider cryonics an exceptionally worthwhile place to allocate resources. There are three distinct questions relating to this, and you can pick any one of them to focus on, or answer all three.
Contest Summary:
- Essay Topic: Cryonics and Effective Altruism
- Answers at least one of the following questions:
- Why might a utilitarian seeking to do the most good consider contributing time and/or money towards cryonics (as opposed to other causes)?
- What is the most optimal way (or at least, some highly optimal, perhaps counterintuitive way) to contribute to cryonics?
- What reasons might a utilitarian have for actually signing up for cryonics services, as opposed to just making a charitable donation towards cryonics (or vice versa)?
- Length: 800-1200 words
- Target audience: Utilitarians, Consequentialists, Effective Altruists, etc.
- Prize: 1 BTC (around $350, at the moment)
- Deadline: Sunday 11/17/2013, at 8:00PM PST
To enter, post your essay as a comment in this thread. Feel free to edit your submission up until the deadline. If it is a repost of something old, a link to the original would be appreciated. I will judge the essays partly based on upvotes/downvotes, but also based on how well it meets the criteria and makes its points. Essays that do not directly answer any of the three questions will not be considered for the prize. If there are multiple entries that are too close to call, I will flip a coin to determine the winner.
Terminology clarification: I realise that for some individuals there is confusion about the term 'utilitarian' because historically it has been represented using very simple, humanly unrealistic utility functions such as pure hedonism. For the purposes of this contest, I mean to include anyone whose utility function is well defined and self-consistent -- it is not meant to imply a particular utility function. You may wish to clarify in your essay the kind of utilitarian you are describing.
Regarding the prize: If you win the contest and prefer to receive cash equivalent via paypal, this will be an option, although I consider bitcoin to be more convenient (and there is no guarantee how many dollars it will come out to due to the volatility of bitcoin).
Contest results
[Link] "A Long-run Perspective on Strategic Cause Selection and Philanthropy" by Nick Beckstead and Carl Shulman
A philanthropist who will remain anonymous recently asked us about what we would do if we didn’t face financial constraints. We gave a detailed answer that we thought we might as well share with others, who may also find our perspective interesting. We gave the answer largely in hope of creating some interest in our way of thinking about philanthropy and some of the causes that we find interesting for further investigation, and because we thought the answer would be fruitful for conversation.
[LINK] Anti-Diarrhea Kits and the Supply Chain
http://opinionator.blogs.nytimes.com/2013/07/03/making-medicine-as-easy-to-get-as-a-can-of-coke/
Diarrhea is a pervasive and deadly problem for children in poor countries. ORS (oral rehydration salts) are an effective safe cure, but clinics are sparsely distributed and frequently out of stock.
The time/cost of getting to a clinic is high, and the wait puts a sick child at further risk.
Small stores are much more pervasive.
So Simon and Jane Berry and local partners invented an ORS kit that would fit into Coke crates between the bottles.
However, while putting the kits into Coke crates is a cool idea which helped attract funding, they found it worked better to build relationships with distributors and shopkeepers-- the people who were bringing goods that last mile towards the consumer.
The Berrys are still at the stage of experimenting with prices (I just heard about the project from the BBC)-- the kits were originally free, but ideally they'd cost enough to be self-sustaining.
How Efficient is the Charitable Market?
When I talk about the poor distribution of funds in charity, people in the effective altruism movement sometimes say, "Didn't Holden Karnofsky show that charity is an efficient market in his post Broad Market Efficiency?"
My reply is "No. Holden never said, and doesn't believe, that charity is an efficient market."
What is an efficient market?
An efficient market is one in which "one cannot consistently achieve returns in excess of average market returns... given the information available at the time the investment is made." (Details here.)
Of course, market efficiency is a spectrum, not a yes/no question. As Holden writes, "The most efficient markets can be consistently beaten only by the most talented/dedicated players, while the least efficient [markets] can be beaten with fairly little in the way of talent and dedication."
Moreover, market efficiency is multi-dimensional. Any particular market may be efficient in some ways, and in some domains, while highly inefficient in other ways and other domains.
Q for GiveWell: What is GiveDirectly's mechanism of action?
I first wrote up the following post, then happened to run into Holden Karnofsky in person and asked him a much-shortened form of the question verbally. My attempt to recount Holden's verbal reply is also given further below. I was moderately impressed by Holden's response because I had not thought of it when listing out possible replies, but I don't understand yet why Holden's response should be true. Since GiveWell has recently posted about objections to GiveDirectly and replies, I decided to go ahead and post this now.
A question for GiveWell:
Your current #2 top-rated charity is GiveDirectly, which gives one-time gifts of $1000 over 9 months, directly to poor recipients in Kenya via M-PESA.
GiveWell tries for high standards of evidence of efficacy and cost-effectiveness. As I understand it, you don't just want the charity to be arguably cost-effective, you want a very high probability that the charity is cost-effective.
The main evidence I've seen cited for direct giving is that the recipients who received the $1000 are then substantially better off 9 months later compared to people who aren't.
While I can imagine arguments that could repair the obvious objection to this reasoning, I haven't yet seen how the resulting evidence about cost-effectiveness could rise to the epistemic standards one would expect of GiveWell's #2 evidence-based charity.
The obvious objection is as follows: Suppose the Kenyan government simply printed new shillings and handed out $1000 of such shillings to the same recipients targeted by GiveDirectly. Although the recipients would be better off than non-recipients, this might not reflect any improvement in net utility in Kenya because no new resources were created by printing the money.
There are of course obvious replies to this obvious objection:
(1) Because the shillings handed out by GiveDirectly are purchased on the foreign currency exchange market using U.S. dollars, and would otherwise have been spent in Kenya in other ways, we should not expect any inflation of the shilling, and should expect an increase in Kenyan consumption of foreign goods corresponding to the increased price of shillings implied by GiveDirectly adding their marginal demand to the auction and thereby raising the marginal price of all shillings sold. The primary mechanism of action by which GiveDirectly benefits Kenya is by raising the price of shillings in the foreign exchange market and making more hard currency available to sellers of shillings. So far as I can tell, this argument ought to generalize: Any argument that the Kenyan government could not accomplish most of the same good by printing shillings will mean that the primary mechanism of GiveDirectly's effectiveness must be the U.S. dollars being exchanged for the shillings on the foreign currency market. This in turn means that GiveDirectly could accomplish most of its good by buying the same shillings on the foreign currency market and burning them.
(Or to sharpen the total point of this article: The sum of the good accomplished by GiveDirectly should equal:
- The good accomplished by the Kenyan government printing shillings and distributing them to the same recipients;
- plus the good accomplished by GiveDirectly then purchasing shillings on the foreign exchange market using US dollars, and burning them.
Indeed, since these mechanisms of action seem mostly independent, we ought to be able to state a percentage of good accomplished which is allegedly attributed to each, summing to 1. E.g. maybe 80% of the good would be achieved by printing shillings and distributing them to the same recipients, and 20% would be achieved by purchasing shillings on the foreign exchange market and burning them. But then we have mostly the same questions as before about how to generate wealth by printing shillings.)
(2) Inequality in Kenya is such that redistributing the supply of shillings toward the very poor increases utility in Kenya. Thus the Kenyan government could accomplish as much good as GiveDirectly by printing an equivalent number of shillings and giving them to the same recipients. This would create inflation that is a loss to other Kenyans, some of them also very poor, but so much of the shilling supply is held by the rich that the net results are favorable. Printing shillings can create happiness because it shifts resources from making speedboats for the rich to making corrugated iron roofs for the poor.
(It would be nice if the Kenyan government just printed shillings for GiveDirectly to use, but this the Kenyan government will not realistically do. Effective altruists must live in the real world, and in the real world GiveDirectly will only accomplish its goals with the aid of effective altruists. One cannot live in the should-universe where Kenya's government is taking up the burden. Effective altruists should reason as if the Kenya government consists of plastic dolls who cannot be the locus of responsibility instead of them - that's heroic epistemology 101. Maybe there will eventually be returns on lobbying for Minimum Guaranteed Income in Kenya if the programs work, but that's for tomorrow, not right now.)
(3) Like the European Union, Kenya is not printing enough shillings under standard economic theory. (I have no idea if this is plausibly true for Kenya in particular.) If the government printed shillings and gave them to the same recipients, this would create real wealth in Kenya because the economy was operating below capacity and velocity of trade would pick up. The shillings purchased by GiveDirectly would otherwise have stayed in bank accounts rather than going to other Kenyans. Note that this contradicts the argument step in (1) where we said that the purchased shillings would otherwise have been spent elsewhere, so you should have questioned one argument step or the other.
(4) Village moneylenders and bosses can successfully extract most surplus generated within their villages by raising rents or demanding bribes. The only way that individuals can escape the grasp of moneylenders and rentiers is with a one-time gift that was not expected and which the moneylenders and bosses could not arrange to capture. The government could accomplish as much good as GiveDirectly by printing the same number of shillings and giving them to the same people in an unpredictable pattern. This would create some inflation but village moneylenders or bosses would ease off on people from whom they couldn't extract as much value, whereas the one-time gift recipients can purchase capital goods that will make them permanently better off in ways that don't allow the new value to be extracted by moneylenders or bosses.
If I recall correctly, GiveDirectly uses the example of a family using some of the gift money to purchase a corrugated iron roof. From my perspective the obvious objection is that they could just be purchasing a corrugated iron roof that would've gone to someone else and raising the prices of roofs. (1) says that Kenya has more foreign exchange on hand and can import, not one more corrugated iron roof, but a variety of other foreign goods; (2) says that the resources used in the corrugated iron roof would otherwise have been used to make a speedboat; (3) says that a new trade takes place in which somebody makes a corrugated iron roof that wouldn't have been manufactured otherwise; and (4) says that the village moneylenders usually adjust their interest rates so as to prevent anyone from saving up enough money to buy a corrugated iron roof.
The trouble is that all of these mechanisms of action seem much harder to measure and be sure of, than the measurable outcomes for gift recipients vs. non-recipients.
To reiterate, the sum of the good accomplished by GiveDirectly should equal the good accomplished by the Kenyan government printing shillings and distributing them to the same recipients, plus the good accomplished by GiveDirectly purchasing shillings on the foreign exchange market using US dollars and then burning them. It seems to me to be difficult to arrive at a state of strong evidence about either of the two terms in this sum, with respect to any mechanism of action I've thought of so far.
With respect to the second term in this sum: GiveDirectly buying shillings on the foreign exchange market and burning them might create wealth, but it's hard to see how you would measure this over the relevant amounts, and no such evidence was cited in the recommendation of GiveDirectly as the #2 charity.
With respect to the first term in this sum: Under the Bayesian definition of evidence, strong evidence is evidence we are unlikely to see when the theory is false. Even in the absence of any mechanism whereby printing nominal shillings creates happiness or wealth, we would still expect to find that the wealth and happiness of gift recipients exceeded the wealth of non-recipients. So measuring that the gift recipients are wealthier and happier is not strong or even medium evidence that printing nominal shillings creates wealth, unless I'm missing something here. Our posterior that printing shillings and giving them to certain people would create net wealth in any given quantity, should roughly equal our prior, after updating on the stated experimental evidence.
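The Bayesian point can be made concrete with a toy odds calculation. The numbers below are purely illustrative, not drawn from GiveWell's data: if "recipients ended up wealthier than non-recipients" is nearly as likely when printing shillings creates no net wealth as when it does, the likelihood ratio is close to 1 and the posterior barely moves from the prior.

```python
# Toy Bayesian update: does observing "gift recipients are wealthier than
# non-recipients" shift our belief that printing shillings creates net wealth?
# All probabilities here are made-up illustrative values.

def posterior(prior, p_obs_if_true, p_obs_if_false):
    """Posterior P(hypothesis | observation) via Bayes' rule."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

prior = 0.5            # undecided about the net-wealth hypothesis
p_obs_if_true = 0.95   # recipients look better off if the hypothesis is true
p_obs_if_false = 0.90  # ...but they look better off even if it's false,
                       # since they were handed purchasing power either way

post = posterior(prior, p_obs_if_true, p_obs_if_false)
print(round(post, 3))  # 0.514 - likelihood ratio near 1, so posterior ~ prior
```

The observation is expected under both hypotheses, so it carries almost no evidential weight about the mechanism, which is the complaint being made above.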
When I posed a shortened form of this question to Holden Karnofsky, he replied (roughly, I am trying to rephrase from memory):
It seems to me that this is a perverse decomposition of the benefit accomplished. There's no inflation in the shilling because you're buying them, and since this is true, decomposing the benefit into an operation that does inflationary damage as a side effect, and then another operation that makes up for the inflation, is perverse. It's like criticizing the Against Malaria Foundation based on a hypothetical which involves the mosquito nets being made from the flesh of babies and then adding another effect which saves the lives of other babies. Since this is a perverse sum involving a strange extra side effect, it's okay that we can't get good estimates involving either of the terms in it.
Please keep in mind that this is Holden's off-the-cuff, non-written in-person response as rephrased by Eliezer Yudkowsky from imperfect memory.
With that said, I've thought about (what I think was) Holden's answer and I feel like I'm still missing something. I agree that if U.S. dollars were being sent directly to Kenyan recipients and used only to purchase foreign goods, so that foreign goods were being directly sent from the U.S. to Kenyan recipients, then improvement in measured outcome for recipients compared to non-recipients would be an appropriate metric, and that the decomposition would be perverse. But if the received money, in the form of Kenyan shillings, is being used primarily to purchase Kenyan goods, and causing those goods to be shipped to one villager rather than another while also possibly increasing velocity of trade, remedying inequality, and enabling completely different actors to buy some amount of foreign goods, then I honestly don't understand why this scenario should have the same causal mechanisms as the scenario where foreign goods are being shipped in from outside the country. And then I honestly don't understand why measured improvements for one Kenyan over another should be a good proxy for aggregate welfare change to the country.
I may be missing something that an economist would find obvious or I may have misunderstood Holden's reply. But to me, my sum seems like an obvious causal decomposition of the effects in Kenya, neither of whose terms can be estimated well. I don't understand why I should expect the uncertainty in these two estimates to cancel out when they are added; I don't understand what background causal model yields this conclusion.
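One way to see why the two uncertainties should not be expected to cancel: for independent estimates, variances add when the terms are summed. A minimal Monte Carlo sketch, with made-up error scales chosen purely to illustrate the statistics:

```python
import random

random.seed(0)

N = 100_000
# Hypothetical error distributions for the two terms of the decomposition:
# (1) good from printing-and-distributing shillings, (2) good from buying
# shillings with dollars and burning them. Scales are illustrative only.
term1 = [random.gauss(0.0, 1.0) for _ in range(N)]
term2 = [random.gauss(0.0, 1.0) for _ in range(N)]
total = [a + b for a, b in zip(term1, term2)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Var(A + B) = Var(A) + Var(B) for independent A and B: the uncertainty
# in the sum is larger than in either term, not smaller.
print(variance(term1), variance(term2), variance(total))
```

Cancellation would require the estimation errors in the two terms to be strongly negatively correlated, and no background causal model offered so far implies that.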
To be clear, I personally would guess that the U.S. would be net better off, if the Federal Reserve directly sent everyone in the U.S. with income under $20K/year a one-time $6,000 check with the money phasing out at a 10% rate up to $80K/year. This is because, in order of importance:
- I buy the analogous market monetarist argument (3) that the U.S. is printing too little money.
- I buy the analogous argument (2) about inequality.
- (However, I also somewhat suspect that some analogous form of (4) is going on with poor people somehow systematically having all but a certain amount of value extracted from them, which is in general how a modern country can have only 2% instead of 95% of the population being farmers, and yet there are still people living hand-to-mouth. I would worry that a predictable, universal one-time gift of $6K would not defeat this phenomenon, and that the gift money will just be extracted again somehow. In the case of Minimum Guaranteed Income, I would worry that the labor share of income will drop proportionally to small amounts of MGI as wages are just bid down by people who can live on less. Or something. This would be a much longer discussion and the ideas are much less simple than the above two notions, probably also less important. I'm just mentioning it again because of my long-term puzzlement with the question "Why are there still poor people after agricultural productivity rose by a factor of 100?")
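The phase-out arithmetic in the proposal above works out exactly: a $6,000 check reduced by 10 cents per dollar of income above $20K reaches zero at $80K. A quick sketch (the bracket endpoints come from the text; the function itself is just my illustration):

```python
def transfer(income):
    """One-time check: $6,000 for income under $20K, phased out at a
    10% rate on income above $20K, reaching $0 at $80K."""
    if income <= 20_000:
        return 6_000
    return max(0, 6_000 - 0.10 * (income - 20_000))

print(transfer(15_000))   # 6000
print(transfer(50_000))   # 3000.0
print(transfer(80_000))   # 0
```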
What I wouldn't say is that my belief in the above is as strong as my belief in, say, the intelligence explosion. I'd guess that the printing operation would do more good than harm, but it's not what I would call a strong evidence-based conclusion. If we're going to be okay with that standard of argument generally, then the top charity under that standard of reasoning, generally and evenhandedly applied, ought to work out to some charity that does science and technology research. (X-risk minimization might seem substantially 'weirder' than that, but the best science-funding charities should be only equally weird.) And I wouldn't measure the excess of happiness of gift-recipients compared to non-recipients in a pilot program, and call this a good estimate of the net good if a Minimum Guaranteed Income were universally adopted.
So to reiterate, my question to Givewell is not "Why do you think GiveDirectly might maybe end up doing some good anyway?" but "Does GiveDirectly rise to the standards required for your #2 evidence-based charity?"
SENS and Givewell: Conversation between Holden Karnofsky and Aubrey de Grey
Givewell’s Holden Karnofsky, who has previously posted his thoughts on Givewell supporting SI/MIRI, recently discussed the potential for Givewell to begin evaluating biomedical charities in Givewell’s Yahoo Group. Someone suggested (as I have through less direct means) that they take a hard look at SENS Research Foundation, and then Aubrey de Grey appeared and began an interesting discussion with Holden.
The thread begins with Holden’s long initial post about Givewell’s stance on investigating and recommending biomedical charities, which is definitely worth the read for greater insight. The rest of the conversation is aggregated below for anyone else who can’t stomach Yahoo Groups’ interface.
Overall, Holden seems to agree with the goal of SENS and is interested in the details, but the conversation seems to have ended in October 2012 with Holden stating that he was waiting for Dario Amodei’s thoughts on SENS.
Holden,
First, I think that this is an excellent document. I checked for a number of things that I had heard about (Breakout Labs, John Ioannidis, Cochrane Collaboration) and they're all there in your document.

The one thing that's not explicitly mentioned: longevity and life extension research. At least prima facie, this seems like something that should be more important than individual disease research, and it seems like a classic "Valley of Death" case (pun unintended, but noted) -- T1 stage to use your terminology. I think the SENS website http://www.sens.org would be a good starting point for one of the (to me promising) approaches to life extension. I recall from past conversations that you were aware of SENS, so this is not new to you, but I think that longevity should be included as part of any discussion of biomedical research and given separate consideration given that it has a much lower status than research into specific conditions such as cancer, dementia, etc. You may ultimately conclude that not enough can be done in this area, but I think it should be part of your preliminary stuff. [btw, the United States has a National Institute of Aging, but it's much lower-status than most of the other grantmakers mentioned here].
Vipul
Hi Vipul,
Thanks for the thoughts. I had a followup conversation with Dario about this topic a few days ago. I think the question of "could one fund translational research to treat/prevent aging?" provides an interesting illustration of some of the tricky dynamics here for a funder:
- It's possible that if there were a great deal more attention given to treating/preventing aging, we would have some promising treatments. So in a broad sense it's possible that aging is underinvested in.
- A lot of the best basic biology research isn't clearly pointing toward one treatment/condition or another; it's about understanding the fundamentals of how organisms operate. So having an interest in treating aging, as opposed to cancer, might not have a major impact on which projects one funds, if one's main goal is to fund outstanding basic biology research.
- Perhaps because of the lack of emphasis on treating aging (or perhaps because it's simply too difficult of a problem), there don't seem to be promising findings in the "Valley of Death" relevant to aging; the few promising leads have been explored.
- So even if, in a broad sense, there is too little attention given to this problem, knowing this doesn't necessarily yield a clear direction for a relatively small-scale funder of biomedical research.

Best,
Holden
Hi everyone,
My attention was brought to this thread, by virtue of the fact that it was my work that gave rise to SENS Foundation, and I'm looking forward to getting more involved here; I've held the Effective Altruism movement in high regard for some time. However, given my newbie status here I want to start by apologising in advance for any oversight of previously-discussed issues etc. I'm naturally delighted both at Holden's post and at Vipul's reply (which I should stress that I did not plant! - I do not know Vipul at all, though I look forward to changing that). I would like to mention just a few key points for discussion:
- Holden, I want to compliment you on your appreciation of how academia really works. Everything you say about that is spot on. The aversion to "high risk high gain" work that has arisen and become so endemic in the system is the most important point here, in terms of why parallel funding routes are needed.
- I'm slightly confused that a lot of Holden's remarks are focused on the private sector (i.e. startups), since my understanding was that GiveWell is about philanthropy; but I realise that there is not all that clear a boundary between the two (and I note the mention of Breakout Labs, with which I have close links and which sits astride that divide more than arguably anyone). The "valley of death" in pre-competitive translational research is a rather different one than that encountered by startups, but the principle is the same, and research to postpone aging certainly encounters it.
- Something that I presume factors highly among GiveWell's criteria is the extent to which a cause may be undervalued by the bulk of major philanthropists, such that an infusion of additional funds would make more of a difference than in an area that is already being well funded. To me this seems to mirror the logic of focusing on the shortcomings (gaps) in NIH's funding (and that of traditional-model foundations). Holden notes that "Anyone we consider for funding ought to be able to explain why they're better at allocating the funds than the NIH" and I agree wholeheartedly, but my inference is that he thinks that some orgs may indeed be able to explain that. I certainly think that SENS Foundation can.
- Coming to aging: research to postpone aging has the unique problem of quite indescribable irrationality on the part of most of the general public, policy-makers and even biologists with regard to its desirability. Biogerontologists have been talking to brick walls for decades in their effort to get the rest of the world to appreciate that aging is what causes age-related ill-health, and thus that treatments for aging are merely preventative geriatrics. The concept persists, despite biogerontologists' best efforts, that aging is "natural" and should be left alone, whereas the diseases that it brings about are awful and should be fought. This is made even more bizarre by the fact that the status of age-related diseases as aspects of the later stages of aging absolutely, unequivocally implies that efforts to attack those diseases directly are doomed to fail. As such, this is a (unique? certainly very rare) case where a philanthropic contribution can make a particularly big difference simply because most philanthropists don't see the case for it. It underpins why having an interest in treating aging, as opposed to cancer, absolutely has a major impact on which projects one funds. It's also a case for (if I understand the term correctly) meta-research.
- A lot of the chatter about treating aging revolves around longevity, but it shouldn't. I'm all in favour of longevity, don't get me wrong, but it's not what gets me up in the morning: what does is health. I want people to be truly youthful, however long ago they were born: simple as that. The benefits of longevity per se to humanity may also be substantial, in the form of greater wisdom etc, but that would necessarily come about only very gradually (we won't have any 1000-year-olds for at least 900 years whatever happens!), so it doesn't figure strongly in my calculations.
- When forced to acknowledge that the idea of aging being a high-priority target for medicine is an inescapable consequence of things they already believe (notably that health is good and ageism is bad), many people retreat to the standpoint that it's never going to be possible so it's OK to be irrational about whether it's desirable. The feasibility of postponing age-related ill-health by X years with medicine available Y years from now is, of course, a matter of speculation on which experts disagree, just as with any other pioneering technology. I know that Holden and others have expressed caution (at best) concerning the accuracy of any kind of calculation of probabilities of particular outcomes in the distant (or even not-so-distant) future, and I share that view. However, an approach that may appeal more is to estimate how much humanitarian benefit a given amount of progress would deliver, and then to ask how unlikely that scenario needs to be to make it not worth pursuing. My claim is that the benefits of hastening the defeat of aging by even a few years (which is the minimum that I claim SENS Foundation is in a position to do, given adequate funding) would be so astronomical that the required chance of success to make such an effort worthwhile would be tiny - too tiny for it to be reasonable to argue that such funding would be inadvisable. But of course that is precisely what I would want GiveWell to opine on.
- In the event that GiveWell (or anyone else) were to decide and declare that the defeat of aging is indeed a cause that philanthropists should support, there then arises the question of which organisation(s) should be supported in the best interests of that mission. We at SENS Foundation have worked diligently to rise as quickly as possible in the legitimacy stakes by all standard measures, but we are still young and there remains more to do. If I were to offer an argument to fund us rather than any other entity, it would largely come down to the fact that no other organisation has even a serious plan for defeating aging, let alone a track record of implementing such a plan's early stages.
- A significant chunk of what we do is of a kind that I think comes under "meta-research". A prominent example is a project we're funding at Denver University to extend the well-respected forecasting system "International Futures" so that it can analyse scenarios incorporating dramatically postponed aging.
I greatly welcome any feedback.
Cheers, Aubrey
Hi Aubrey,
Thanks for the thoughts.
The NIH appears to have a division focused on research relevant to this topic: http://www.nia.nih.gov/research/dab . Its budget appears to be ~$175 million (per year). The National Institute on Aging, which houses this division, has a budget of about $1 billion per year, including a separate ~$400 million for neuroscience (which may also be relevant) as well as $115 million for intramural research. Figures are from http://www.nia.nih.gov/about/budget/2012/fiscal-year-2013-budget. The Institute states that its mandate includes translational research (http://www.nia.nih.gov/research/faq/does-nia-support-translational-research). How would you distinguish your work from this work?
(For the moment I'm putting aside the question I raised in my previous response to Vipul on this topic, regarding whether it's best to approach biology funding from the perspective of "trying to treat/cure a particular condition" or "trying to understand fundamental questions in biology whose applications are difficult to predict.")
Best,
Holden
Hi Holden - many thanks.
First: yes, there are really three somewhat separate questions for someone trying to evaluate whether to support SENS Foundation:
1) Is the medical control of aging a hugely valuable mission?
2) Assuming "yes" to (1), is it best achieved by basic research or translational research?
3) Assuming translational, is SENS Foundation the organisation that uses money most effectively in pursuit of that mission?
I had rather expected that you would take some convincing on item (1), and much of what I wrote last time was focused on that. Since it isn't the focus of your question to me, I'm now going to assume until further notice that there is no dissent on that.
So, to answer your question: actually you're not putting aside the basic-vs-translational question as much as you may think you are. The word "translational" is flavour of the month in government funding circles these days (not only in the USA), so it's not surprising that the NIA has a public statement of the kind you pointed to. However, notice that the link they give "for more information" is to a page listing ALL "Funding Opportunity Announcements". There is no page specifically for translational ones, and the reason there isn't is that the amount of work that the NIA actually funds that could really be called translational is tiny. In other words, the page you found is actually just blatant spin. The neuroscience slice you mention is an anomaly arising from the way NIA was founded (the natural place for that money is clearly NINDS): the fact that it's NIA money does not, in practice, translate into its being spent on work to prevent neurodegeneration by treating its cause (aging). Instead, just like NINDS money, it's spent on attacking neurodegeneration directly, as if such diseases could be eliminated from the body just like an infection: the same old mistake that afflicts, and dooms, the whole of geriatric medicine.
So, the first answer to your question is that SENS Foundation really DOES focus on translational research, with an explicit goal of postponing age-related ill-health. But there's also another big difference: we can attack this problem relatively free of the other priorities that afflict mainstream funding (whether from NIH or from traditional foundations). Most importantly, though we do and will continue to publish our interim results in the peer-reviewed literature, we are much less constrained by "publish or perish" tyranny than typical academics are. This allows us to proceed by constructing and implementing a rational "project plan" (namely SENS) to get to the intended goal (the defeat of aging), whereas what little translational work is funded by NIA or others is guided overwhelmingly by the imperative to get some kind of positive result as quickly as possible, even when it's understood that those results are not remotely likely to "scale", i.e. to translate into eventual medical treatments that significantly delay aging. A great example of this is the NIA's Interventions Testing Program (ITP) to test the mouse longevity effects of various small molecules. The ITP only exists at all (and in a far smaller form than originally intended) as a result of several years of persistence by the then head of the NIA's biology division (Huber Warner), and it focuses entirely on delivery of simple drugs starting rather early in life, with the result that no information emerges that's relevant to treating people who are already in middle age or older. (This is despite the fact that by far the most high-profile result that the ITP has delivered so far, the benefits of rapamycin, actually WAS a late-onset study: it wasn't meant to be, but technical issues delayed the experiment.) In a nutshell, there is a huge bias against high-risk high-gain work.
The third thing that distinguishes SENS Foundation's approach is that we can transcend the "balkanisation" (silo mentality) that dominates mainstream academic funding. When one submits a grant application to NIA, it is evaluated by gerontologists, just as when one submits to NCI it is evaluated by oncologists, etc. What's wrong with this is that it biases the system immensely against cross-disciplinary proposals. SENS is a plan that brings together a large body of knowledge from gerontology but also a huge amount of expertise that was developed for other reasons entirely - to treat acute disease/injury, or in some cases for purposes that were not biomedical at all (notably environmental decontamination). It doesn't matter how robust the objective scientific and technological argument is for work of that sort: it will never compete (especially in today's very tight funding environment) with more single-topic proposals all of whose details can be understood by reviewers from a particular single field.
The final thing to mention, and this actually also answers your question to Vipul about basic versus translational research, is that SENS is a plan that has stood the test of time. I've been propounding it since 2000, well before SENS Foundation existed, and it used to come in for a lot of criticism (initially more in the form of off-the-record ridicule, and latterly, at my behest, in print), but in every single case that criticism was found to stem from ignorance on the part of the detractor, either of what I proposed or of published experimental work on which the proposal was based. That's why I'm now regularly asked to organise entire sessions at mainstream gerontology conferences, whereas as little as five years ago I would never even be invited to speak. It's also why the Research Advisory Board of SENS Foundation consists of such prestigious scientists. This is a very strong argument, in my view, for believing that now is the time to sink a proper amount of money into translational gerontology (though certainly not to cease doing basic biogerontology too). It's well known that basic scientists are often not the most far-sighted when it comes to seeing how to apply their discoveries (attitudes in 1900 to the feasibility of powered flight being the canonical example). It is therefore a source of concern that almost all the experts who have the ear of funders in this field are basic scientists, whose instinct is to carry on finding things out and to deprioritise the tedious business of applying that knowledge. SENS has achieved a gratifying level of legitimacy in gerontology, but it is still foreign to most card-carrying gerontologists, and as such it remains essentially unfundable via mainstream mechanisms. Hence the need to create a philanthropy-driven entity, SENS Foundation, to get this work done.
Let me know if this helps, or if you have further questions.
Cheers, Aubrey
Hi Aubrey,
Thanks again for engaging so thoughtfully.
I agree that a new technology/treatment that could delay or reverse aging (or aspects of it) would be enormously valuable. Regarding the rest of your argument, this is a good example of the challenges I've been discussing in understanding biomedical research.
You state that you have a high-expected-value plan that the academic world can't recognize the value of because of shortcomings such as "balkanisation" and risk aversion. I believe it may be true that the academic world has such problems to a degree; however, I also believe that there are a lot of extremely talented people in academia and that they often (though not necessarily always) find ways to move forward on promising work. Without more subject-matter expertise (or the advice of someone with such expertise), I can't easily assess the technical merits of your argument or potential counterarguments. Hopefully we'll have a better system for doing so at some point in the future.
I'll be very interested to see Dario's thoughts on the matter if he responds. I'd cite Dario as an example of an academic who ultimately wants to do work of the greatest humanitarian value possible, regardless of whether it is prestigious work. And as my summary of our conversation shows, he acknowledges that the world of biomedical research may have certain suboptimal incentives, but didn't seem to think that these issues are leaving specific, visible outstanding research programs on the table the way that your email implies.
Best,
Holden
Excellent. I too am keen to see Dario's comments. Dario also has the advantage of being based just a few miles from SENS Foundation's research centre, so we can definitely get together f2f soon if he wants.
Cheers, Aubrey