Over the coming weeks, I intend to write up a history of the different parts of the effective altruist movement and their interrelations. It’s natural to start with the part that I know best: the history of my own involvement with effective altruism.
Interest in altruism rooted in literature
My interest in altruism traces to early childhood. Unbeknownst to me, my verbal comprehension ability was unusually high relative to my other cognitive abilities, and for this reason, I gravitated strongly toward reading. Starting from the age of six, I spent hours a day reading fiction. I found many of the stories that I read to be emotionally compelling, and identified with the characters.
My interest in altruism is largely literary in origin: I perceive the sweep of history as a story, and I want things to go well for the characters and the story to have a happy ending. I was influenced both by portrayals of sympathetic, poor characters in need, and by stories of the triumph of the human spirit, and I wanted to help the downtrodden and contribute to the formation of peak positive human experiences.
I sometimes wonder whether there are other people with altruistic tendencies that are literary in origin, and whether they would be good candidate members of the effective altruist movement. There is some history of artists having altruistic goals. The great painter Vincent van Gogh moved to an impoverished coal-mining district to preach and minister to the sick. The great mathematician Alexander Grothendieck gave shelter to the homeless.
An analytical bent, and utilitarianism
When I was young, I had vague and dreamy hopes about how I might make the world a better place. As I grew older, I found myself more focused on careful reasoning and rationality.
In high school, I met Dario Amodei, who introduced me to utilitarianism. The ethical framework immediately resonated with me. For me, it corresponded to valuing the well-being of all characters in the story — a manifestation of universal love.
This was the birth of my interest in maximizing aggregate global welfare. Since maximizing aggregate global welfare corresponds to maximizing the cost-effectiveness of one's efforts, this can be thought of as the origin of my interest in effective altruism.
I believe that I would have developed an interest in global welfare, and in effective altruism, of my own accord, without encountering any members of the effective altruist movement. But for reasons that I describe below, if not for meeting these people, I don’t think that my interests would have been actionable.
Epistemic paralysis
My analytical bent had a downside.
Issues pertaining to the human world are very complex, and there aren’t clear-cut objective answers to the question of how best to make the world a better place. On a given issue, there are many arguments for a given position, and many counterarguments to the arguments, and many counterarguments to the counterarguments, and so on.
Contemplating these resulted in my falling into a state of epistemic learned helplessness: I became convinced that it was not possible to rationally develop confidence in views concerning how to make the world a better place.
Enter GiveWell
In 2007, my college friend Brian Tomasik pointed me to GiveWell. At the time, GiveWell had just launched, and there wasn’t very much on the website, so I soon forgot about it.
In 2009, my high school friend Dario, who had introduced me to utilitarianism, pointed me to GiveWell again. By this point, there was much more information available on the GiveWell website.
I began following GiveWell closely. I was very impressed by the fact that co-founders Holden and Elie seemed to be making sense of the ambiguous world of effective philanthropy. I hadn’t thought that it was possible to reason so well about the human world. This made effective altruism more credible in my eyes, and inspired me. If I hadn’t encountered GiveWell, I might not have gotten involved with the effective philanthropy movement at all, although I might have become involved through interactions on Less Wrong, and I might have gone on to do socially valuable work in math education.
I became progressively more impressed by GiveWell over time, and wanted to become involved. In 2011, I did volunteer work for GiveWell, and in 2012, I began working at GiveWell as a research analyst.
While working at GiveWell, I learned a great deal about how to think about philanthropy, and about epistemology more generally. A crucial development in my thinking was the gradual realization that:
- Cost-effectiveness estimates that are supported by a single line of evidence are unreliable.
- Investigation generally reveals that interventions which initially appear highly cost-effective relative to other interventions are much less so (in a relative sense) than one would guess at the outset.
- One can become much more confident in one’s assessment of the value of a philanthropic intervention by examining it from many different angles.
I wrote about this realization in my post Robustness of Cost-Effectiveness Estimates and Philanthropy.
This shift in my thinking gradually percolated, and I realized that my entire epistemological framework had been seriously flawed, because I was relying too much on a small number of relatively strong arguments rather than a large number of independent weak arguments.
Many people had tried to explain this to me in the past, but I was unable to understand what they were driving at, and it was only through my work at GiveWell and my interactions with my coworkers that I was finally able to understand. The benefits of this realization have spanned many aspects of my life, and have substantially increased my altruistic human capital.
If GiveWell hadn’t existed, it’s very possible that I wouldn’t have learned these things. If Dario hadn’t pointed me to GiveWell, I’m sure that I would have encountered GiveWell eventually, but it may have been too late for it to be possible for me to work there, and so I may not have had the associated learning opportunities.
My involvement with GiveWell also facilitated my meeting Vipul Naik, the founder of Open Borders. We’ve had many fruitful interactions related to maximizing global welfare, and if I hadn’t met him through GiveWell, it may have been years before we met.
The significance of Less Wrong
Several people pointed me to Overcoming Bias and Less Wrong starting in 2008, but at the time the posts didn’t draw me in relative to the fascination of reciprocity laws in algebraic number theory. In early 2010, Brian Tomasik pointed me to some of Yvain’s articles on Less Wrong. Against the background of my following GiveWell, Yvain’s posts on utilitarianism really resonated with me, so I started reading Less Wrong.
I met many impressive people who are seriously interested in effective altruism through Less Wrong. Among these are:
- Nick Beckstead — A new postdoc at Oxford University (specifically, the Future of Humanity Institute) who wrote the thesis On the Overwhelming Importance of Shaping the Far Future.
- Paul Christiano and Jacob Steinhardt — Graduate students in theory of computing and machine learning at Berkeley and Stanford (respectively). Jacob is a Hertz Fellow, and Paul coauthored a 48-page paper titled Quantum Money from Hidden Subspaces as an undergraduate. Paul has thought a great deal about rational altruism, and Jacob is a member of The Vannevar Group, which aims to accelerate scientific progress to create the greatest amount of social good.
- Qiaochu Yuan — A math graduate student at Berkeley who has the sixth highest karma score on MathOverflow. I’ve been enjoying learning math from Qiaochu.
- Carl Shulman and Dan Keys — I’ve found them very intellectually stimulating and have learned a great deal from them.
- Luke Muehlhauser — The executive director of MIRI.
- Others — Including Julia Galef, Anna Salamon, Katja Grace, and Louie Helm.
They’ve helped me retain my motivation to do the most good, and have aided me in thinking about effective altruism. They constitute a substantial chunk of the most impressive people who I know in my age group.
It’s genuinely unclear whether I would have gotten to know these people if Eliezer hadn’t started Less Wrong.
Closing summary
My innate inclinations got me interested in effective altruism, but they probably wouldn’t have sufficed for my interest to be actionable. Beyond my innate inclinations, the things that stand out most in my mind as having been crucial are:
- Dario introducing me to GiveWell
- Working at GiveWell
- Meeting Vipul through GiveWell
- Eliezer starting Less Wrong
- Meeting peers through Less Wrong
Working at GiveWell substantially increased my altruistic human capital. I’ve learned a great deal from the GiveWell staff, from Vipul, and from the members of the Less Wrong community listed above. We’ve had fruitful collaborations, and they’ve helped me retain my motivation to do the most good.
The personal growth benefits that I derived from working at GiveWell are unusual, if only because GiveWell’s staff is small. The networking benefits from Less Wrong are shared by many others.
Note: I formerly worked as a research analyst at GiveWell. All views here are my own.
This post is cross-posted at www.effective-altruism.com.
Hmmm. It's a fair argument, but I'm not sure how well it would work out in practice.
To clarify, I'm not saying that the sim couldn't be run like that. My claim is, rather, that if we are in a sim being run with varying levels of accuracy as suggested, then we should be able to detect it.
Consider, for the moment, a hill. That hill consists of a very large number of electrons, protons and neutrons. Assume for the moment that the hill is not the focus of a scientific experiment. Then, it may be that the hill is being simulated in some computationally cheaper manner than simulating every individual particle.
There are two options. The first is that the computationally cheaper method is, in every single possible way, indistinguishable from simulating every individual particle. In this case, there is no reason to use the more computationally expensive method when a scientist tries to run an experiment which includes the hill; all hills can use the computationally cheaper method.
The alternative is that there is some way, however slight or subtle, in which the behaviour of the approximated atoms in the hill differs from the behaviour of those same atoms when under scientific investigation. If this is the case, then the scientific laws deduced from experiments will, in some subtle way, fail to match the behaviour of hills that are not under investigation. There must then be a detectable difference; in effect, under certain circumstances hills are following a different set of physical laws, and sooner or later someone is going to notice that. (Note that this can be avoided, to some degree, by saving the sim at regular intervals: if someone notices the difference between the approximation and a hill made of properly simulated atoms, then the simulation is reloaded from a save just before that difference appeared and the approximation is updated to hide that detail. This can't be done forever - after a few iterations, the approximation's computational complexity will begin to approach that of the atomic hill in any case, plus you've now wasted a lot of cycles running sims that had no purpose other than refining the approximation - but it could stave off discovery for a period, at least.)
Having said that, though, another thought has occurred to me. There's no guarantee (if we are in a sim) that the laws of physics are the same in our universe as they are in baseline; we may, in fact, have laws of physics specifically designed to be easier to compute. Consider, for example, the uncertainty principle. Now, I'm no quantum physicist, but as I understand it, the more precisely a particle's position can be determined, the less precisely its momentum can be known - and, at the same time, the more precisely its momentum is known, the less precisely its position can be found. Now, in terms of a simulation, the uncertainty principle means that the computer running the simulation need not keep track of the position and momentum of every particle at full precision. It may, instead, keep track of some single combined value (a real quantum physicist might be able to guess at what that value is, and how position and/or momentum can be derived from it). And given the number of atoms in the observable universe, the data storage saved by this is massive (and suggests that Baseline's storage space, while immense, is not infinite).
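For reference, the quantitative tradeoff gestured at here is the standard Heisenberg uncertainty relation (textbook physics, nothing specific to the simulation argument):

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

where \(\Delta x\) and \(\Delta p\) are the standard deviations of position and momentum and \(\hbar\) is the reduced Planck constant. The storage-saving idea in the comment amounts to speculating that the simulator need not track both quantities to full precision, only enough state to respect this bound.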
Of course, like any good simplification, the Uncertainty Principle is applied everywhere, whether a scientist is looking at the data or not.
What is and isn't simulated to a high degree of detail can be determined dynamically. If people decide they want to investigate a hill, some system watching the sim can notice that and send a signal that the sim needs to make the hill observations correspond with quantum/etc. physics. This shouldn't be hard to do. For instance, if the theory predicts observation X +/- Y, you can generate some random numbers centered around X with std. dev. Y. Or you can shift them somewhat if the theory is wrong, and widen them to account for model uncertainty.
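A minimal sketch of that scheme, assuming the sim just wants measurements statistically consistent with a theory's prediction X +/- Y (the function name and the optional model-uncertainty knob are my own illustration, not anything the comment specifies):

```python
import random

def simulated_observation(predicted_mean, predicted_sd, model_error_sd=0.0):
    """Return a fake measurement consistent with the theory's prediction.

    predicted_mean, predicted_sd: the theory's X and Y for this observation.
    model_error_sd: hypothetical extra spread representing model uncertainty.
    """
    # Combine the theory's spread with any model uncertainty in quadrature.
    total_sd = (predicted_sd ** 2 + model_error_sd ** 2) ** 0.5
    return random.gauss(predicted_mean, total_sd)

# Example: the theory predicts 9.81 +/- 0.05 for some measured quantity.
samples = [simulated_observation(9.81, 0.05) for _ in range(10_000)]
mean = sum(samples) / len(samples)
```

The scientists' statistics then come out exactly as the theory predicts, at a cost of one random draw per observation rather than a particle-level computation.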