Over the coming weeks, I intend to write up a history of the different parts of the effective altruist movement and their interrelations. It’s natural to start with the part that I know best: the history of my own involvement with effective altruism.
Interest in altruism rooted in literature
My interest in altruism traces to early childhood. Unbeknownst to me, my verbal comprehension ability was unusually high relative to my other cognitive abilities, and for this reason, I gravitated strongly toward reading. Starting from the age of six, I spent hours a day reading fiction. I found many of the stories that I read to be emotionally compelling, and identified with the characters.
My interest in altruism is largely literary in origin — I perceive the sweep of history to be a story, and I want things to go well for the characters and the story to have a happy ending. I was influenced both by portrayals of sympathetic characters living in poverty and need, and by stories of the triumph of the human spirit, and I wanted to help the downtrodden and contribute to the formation of peak positive human experiences.
I sometimes wonder whether there are other people with altruistic tendencies that are literary in origin, and whether they would be good candidate members of the effective altruist movement. There is some history of artists having altruistic goals. The great painter Vincent van Gogh moved to an impoverished coal-mining district to preach and minister to the sick. The great mathematician Alexander Grothendieck gave shelter to the homeless.
An analytical bent, and utilitarianism
When I was young, I had vague and dreamy hopes about how I might make the world a better place. As I grew older, I found myself more focused on careful reasoning and rationality.
In high school, I met Dario Amodei, who introduced me to utilitarianism. The ethical framework immediately resonated with me. For me, it corresponded to valuing the well-being of all characters in the story — a manifestation of universal love.
This was the birth of my interest in maximizing aggregated global welfare. With limited resources, maximizing aggregated global welfare amounts to maximizing cost-effectiveness, so this can be thought of as the origin of my interest in effective altruism.
I believe that I would have developed an interest in global welfare, and in effective altruism, of my own accord, without encountering any members of the effective altruist movement. But for reasons that I describe below, if not for meeting these people, I don’t think that my interests would have been actionable.
Epistemic paralysis
My analytical bent had a downside.
Issues pertaining to the human world are very complex, and there aren’t clear-cut objective answers to the question of how best to make the world a better place. On a given issue, there are many arguments for a given position, and many counterarguments to the arguments, and many counterarguments to the counterarguments, and so on.
Contemplating these arguments led me into a state of epistemic learned helplessness. I became convinced that it wasn't possible to rationally develop confidence in views concerning how to make the world a better place.
Enter GiveWell
In 2007, my college friend Brian Tomasik pointed me to GiveWell. At the time, GiveWell had just launched, and there wasn’t very much on the website, so I soon forgot about it.
In 2009, my high school friend Dario, who had introduced me to utilitarianism, pointed me to GiveWell again. By this point, there was much more information available on the GiveWell website.
I began following GiveWell closely. I was very impressed by the fact that co-founders Holden and Elie seemed to be making sense of the ambiguous world of effective philanthropy. I hadn’t thought that it was possible to reason so well about the human world. This made effective altruism more credible in my eyes, and inspired me. If I hadn’t encountered GiveWell, I might not have gotten involved with the effective philanthropy movement at all, although I might have become involved through interactions on Less Wrong, and I might have gone on to do socially valuable work in math education.
I became progressively more impressed by GiveWell over time, and wanted to become involved. In 2011, I did volunteer work for GiveWell, and in 2012, I began working at GiveWell as a research analyst.
While working at GiveWell, I learned a great deal about how to think about philanthropy, and about epistemology more generally. A crucial development in my thinking was the gradual realization that:
- Cost-effectiveness estimates that are supported by a single line of evidence are unreliable.
- Interventions that initially appear highly cost-effective relative to other interventions generally turn out, upon investigation, to be much worse (in a relative sense) than one would have guessed.
- One can become much more confident in one’s assessment of the value of a philanthropic intervention by examining it from many different angles.
I wrote about this realization in my post Robustness of Cost-Effectiveness Estimates and Philanthropy.
This shift gradually percolated through my thinking, and I realized that my entire epistemological framework had been seriously flawed: I had been relying too much on a small number of relatively strong arguments rather than on a large number of independent weak arguments.
Many people had tried to explain this to me in the past, but I was unable to understand what they were driving at, and it was only through my work at GiveWell and my interactions with my coworkers that I was finally able to understand. The benefits of this realization have spanned many aspects of my life, and have substantially increased my altruistic human capital.
If GiveWell hadn’t existed, it’s very possible that I wouldn’t have learned these things. If Dario hadn’t pointed me to GiveWell, I’m sure that I would have encountered GiveWell eventually, but it may have been too late for it to be possible for me to work there, and so I may not have had the associated learning opportunities.
My involvement with GiveWell also facilitated my meeting Vipul Naik, the founder of Open Borders. We’ve had many fruitful interactions related to maximizing global welfare, and if I hadn’t met him through GiveWell, it might have been years before we crossed paths.
The significance of Less Wrong
Several people pointed me to Overcoming Bias and Less Wrong starting in 2008, but at the time the posts didn’t draw me in; reciprocity laws in algebraic number theory held more fascination for me. In early 2010, Brian Tomasik pointed me to some of Yvain’s articles on Less Wrong. With the background context of following GiveWell, Yvain’s posts on utilitarianism really resonated with me, so I started reading Less Wrong.
Through Less Wrong, I met many impressive people who are seriously interested in effective altruism. Among them are:
- Nick Beckstead — A new postdoc at Oxford University (specifically, the Future of Humanity Institute) who wrote the thesis On the Overwhelming Importance of Shaping the Far Future.
- Paul Christiano and Jacob Steinhardt — Graduate students in theory of computing and machine learning at Berkeley and Stanford (respectively). Jacob is a Hertz Fellow, and Paul coauthored a 48-page paper titled Quantum Money from Hidden Subspaces as an undergraduate. Paul has thought a great deal about rational altruism, and Jacob is a member of The Vannevar Group, which aims to accelerate scientific progress to create the greatest amount of social good.
- Qiaochu Yuan — A math graduate student at Berkeley who has the sixth highest karma score on MathOverflow. I’ve been enjoying learning math from Qiaochu.
- Carl Shulman and Dan Keys — I’ve found them very intellectually stimulating and have learned a great deal from them.
- Luke Muehlhauser — The executive director of MIRI.
- Others — Including Julia Galef, Anna Salamon, Katja Grace, and Louie Helm.
They’ve helped me retain my motivation to do the most good, and have aided me in thinking about effective altruism. They constitute a substantial chunk of the most impressive people I know in my age group.
It’s genuinely unclear whether I would have gotten to know these people if Eliezer hadn’t started Less Wrong.
Closing summary
My innate inclinations got me interested in effective altruism, but they probably wouldn’t have sufficed for my interest to be actionable. Beyond my innate inclinations, the things that stand out most in my mind as having been crucial are:
- Dario introducing me to GiveWell
- Working at GiveWell
- Meeting Vipul through GiveWell
- Eliezer starting Less Wrong
- Meeting peers through Less Wrong
Working at GiveWell substantially increased my altruistic human capital. I’ve learned a great deal from the GiveWell staff, from Vipul, and from the members of the Less Wrong community listed above. We’ve had fruitful collaborations, and they’ve helped me retain my motivation to do the most good.
The personal growth benefits that I derived from working at GiveWell are unusual, if only because GiveWell’s staff is small. The networking benefits from Less Wrong are shared by many others.
Note: I formerly worked as a research analyst at GiveWell. All views here are my own.
This post is cross-posted at www.effective-altruism.com.
Thanks, Jonah. I think skepticism about the dominance of the far future is actually quite compelling, so I'm not certain that focusing on the far future dominates (I think it likely does on balance, but by much less than I naively thought).
The strongest argument is just that believing we are in a position to influence astronomical numbers of minds runs contrary to Copernican intuitions that we should be typical observers. Isn't it a massive coincidence that we happen to be among a small group of creatures that can most powerfully affect our future light cone? Robin Hanson's resolution of Pascal's mugging relied on this idea.
The simulation-argument proposal is one specific way to hash out this Copernican intuition. The sim arg is quite robust and doesn't depend on the self-sampling assumption the way the doomsday argument does. We have reasonable a priori reasons for thinking there should be lots of sims -- not quite as strong as the arguments for thinking we should be able to influence the far future, but not vastly weaker.
Let's look at some sample numbers. We'll work in units of "number of humans alive in 2014," so that the current population of Earth is 1. Let's say the far future contains N humans (or human-ish sentient creatures), and a fraction f of those are sims that think they're on Earth around 2014. The sim arg suggests that Nf >> 1, i.e., we're probably in one of those sims. The probability we're not in such a sim is 1/(Nf+1), which we can approximate as 1/(Nf). Now, maybe future people have a higher intensity of experience i relative to that of present-day people. Also, it's much easier to affect the near future than the far future, so let e represent the amount of extra "entropy" that our actions face if they target the far future. For example, e = 10^-6 says there's a factor-of-a-million discount for how likely our actions are to actually make the difference we intend for the far future vs. if we had acted to affect the near term. This entropy can come from uncertainty about what the far future will look like, failures of goal preservation, or intrusion of black swans.
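As a quick sanity check on that approximation, here is a minimal Python sketch; the values of N and f are hypothetical placeholders of mine, and only the 1/(Nf+1) formula comes from the setup above.

```python
# Sanity check of P(not a 2014-style sim) = 1/(N*f + 1), which is approx. 1/(N*f) when N*f >> 1.
# N and f are hypothetical placeholder values, not claims about the actual future.
N = 10**20    # far-future population, in units of Earth's 2014 population
f = 10**-4    # fraction of far-future people who are sims that think it's 2014

p_not_sim_exact = 1 / (N * f + 1)
p_not_sim_approx = 1 / (N * f)
print(p_not_sim_exact, p_not_sim_approx)   # both ~1e-16: under these numbers, we're almost surely in a sim
```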
Now let's consider two cases -- one assuming no correlations among actors (CDT) and one assuming full correlations (TDT-ish).
CDT case:
Near-term helping pays off roughly the same whether or not we're in a sim, so take its value to be of order 1. Far-future helping only pays off if we're not in a sim, which has probability about 1/(Nf); in that case it reaches ~N people with intensity i, discounted by the entropy factor e. So the ratio of long-term to short-term helping is roughly (1/(Nf)) * Nie = ie/f, and it's not obvious that ie/f > 1. For instance, if f = 10^-4, i = 10^2, and e = 10^-6, the ratio would equal 1. Hence it wouldn't be clear that targeting the far future is better than targeting the near term.
TDT-ish case:
If all our copies act in a correlated way, then the ~Nf sim copies each accomplish ~1 unit of near-term helping, for a total of ~Nf, while only the non-sim copy can affect the actual far future, for a value of ~Nie. The ratio of long-term helping to short-term helping is thus Nie/(Nf) = ie/f, exactly the same as before. Hence, the uncertainty about whether the near or far future dominates persists.
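To make the two cases concrete, here is a hedged numeric sketch using the sample numbers from the CDT case (f = 10^-4, i = 10^2, e = 10^-6); the value of N is again a hypothetical placeholder, included only to show that it cancels out of both ratios.

```python
# Ratio of far-future-targeted to near-term-targeted helping under both cases above.
# Parameter values are illustrative placeholders, not estimates.
N = 10**20     # far-future population (units of 2014's population); cancels out below
f = 10**-4     # fraction of far-future people who are 2014-style sims
i = 10**2      # relative intensity of far-future experience
e = 10**-6     # "entropy" discount on actions aimed at the far future

cdt_ratio = (1 / (N * f)) * (N * i * e)   # P(not sim) * far-future value, vs ~1 unit of near-term value
tdt_ratio = (N * i * e) / (N * f)         # real copy's far-future value vs ~Nf of correlated near-term value
print(cdt_ratio, tdt_ratio)               # both equal i*e/f = 1.0 here: the far future doesn't obviously dominate
```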
I've tried these calculations with a few other tweaks, and something close to ie/f continues to pop out.
Now, this point is again of the "one relatively strong argument" variety, so I'm not claiming this particular elaboration is definitive. But it illustrates the types of ways that far-future-dominance arguments could be neglecting certain factors.
Note also that even if you think ie/f >> 1, it's still less than the 10^30 or whatever factor a naive far-future-dominance perspective might assume. Also, to be clear, I'm ignoring flow-through effects of short-term helping on the far future and just talking about the intrinsic value of the direct targets of our actions.
In the past, when I expressed worries about the difficulties associated with far-future meme-spreading, which you favor as an alternative to extinction-risk reduction, you said you t…