
Another Critique of Effective Altruism

Post author: jsteinhardt 05 January 2014 09:51AM
Cross-posted from my blog. It is almost certainly a bad idea to let this post be your first exposure to the effective altruist movement. You should at the very least read these two posts first.


Recently Ben Kuhn wrote a critique of effective altruism. I'm glad to see such self-examination taking place, but I'm also concerned that the essay did not attack some of the most serious issues I see in the effective altruist movement, so I've decided to write my own critique. Due to time constraints, this critique is short and incomplete. I've tried to bring up arguments that would make people feel uncomfortable and defensive; hopefully I've succeeded.

 

Briefly, here are some of the major issues I have with the effective altruism movement as it currently stands:

  • Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities.

  • Over-confident claims coupled with insufficient background research.

  • Over-reliance on a small set of tools for assessing opportunities, which leads many to underestimate the value of things such as “flow-through” effects.

The common theme here is a subtle underlying message that simple, shallow analyses can allow one to make high-impact career and giving choices, and divest one of the need to dig further. I doubt that anyone explicitly believes this, but I do believe that this theme comes out implicitly both in arguments people make and in actions people take.

 

Lest this essay give a mistaken impression to the casual reader, I should note that there are many exemplary effective altruists who I feel are mostly immune to the issues above; for instance, the GiveWell blog does a very good job of warning against the first and third points above, and I would recommend that anyone who isn't already subscribed to it do so (and there are other examples that I'm failing to mention). But for the purposes of this essay, I will ignore this fact except for the current caveat.

 

Over-focus on "tried and true" options


It seems to me that the effective altruist movement over-focuses on “tried and true” options, both in giving opportunities and in career paths. Perhaps the biggest example of this is the prevalence of “earning to give”. While this is certainly an admirable option, it should be considered as a baseline to improve upon, not a definitive answer.

 

The biggest issue with the “earning to give” path is that careers in finance and software (the two most common avenues for this) are incredibly straightforward and secure. The two things that finance and software have in common are that there is a well-defined application process similar to the one for undergraduate admissions, and that given reasonable job performance one will continue to receive promotions and raises (this probably entails working hard, but the end result is still rarely in doubt). One also gets a constant stream of extrinsic positive reinforcement from the money one earns. Why do I call these things an “issue”? Because I think these attributes encourage people to pursue these paths without looking for less obvious, less certain, but ultimately better ones. One in six Yale graduates goes into finance or consulting, seemingly due to the simplicity of applying and the easy supply of extrinsic motivation. My intuition is that this ratio is higher than an optimal society would have, even if such people commonly gave generously (and it is certainly much higher than the fraction of people who enter college planning to pursue such paths).


Contrast this with, for instance, working at a start-up. Most start-ups are low-impact, but it is undeniable that at least some have been extraordinarily high-impact, so this seems like an area that effective altruists should be considering strongly. Why aren't there more of us at 23&me, or Coursera, or Quora, or Stripe? I think it is because these opportunities are less obvious and take more work to find, because once you start working it often isn't clear whether what you're doing will have a positive impact, and because your future job security is massively uncertain. There are few sources of extrinsic motivation in such a career: perhaps more so at one of the companies mentioned above, which are reasonably established and have customers, but what about the four-person start-up teams working in a warehouse somewhere? Some of them will go on to do great things, but right now their lives must be full of anxiety and uncertainty.

 

I don't mean to fetishize start-ups. They are just one well-known example of a potentially high-value career path that, to me, seems underexplored within the EA movement. I would argue (perhaps self-servingly) that academia is another example of such a path, with similar psychological obstacles: every 5 years or so you have the opportunity to get kicked out (e.g. applying for faculty jobs, and being up for tenure), you need to relocate regularly, few people will read your work and even fewer will praise it, and it won't be clear whether it had a positive impact until many years down the road. And beyond the “obvious” alternatives of start-ups and academia, what of the paths that haven't been created yet? GiveWell was revolutionary when it came about. Who will be the next GiveWell? And by this I don't mean the next charity evaluator, but the next set of people who fundamentally alter how we view altruism.

 

Over-confident claims coupled with insufficient background research


The history of effective altruism is littered with over-confident claims, many of which have later turned out to be false. In 2009, Peter Singer claimed that you could save a life for $200 (and many others repeated his claim). While the number was already questionable at the time, by 2011 we discovered that the number was completely off. Now new numbers were thrown around: from numbers still in the hundreds of dollars (GWWC's estimate for SCI, which was later shown to be flawed) up to $1600 (GiveWell's estimate for AMF, which GiveWell itself expected to go up, and which indeed did go up). These numbers were often cited without caveats, as were other claims, such as the claim that the effectiveness of charities can vary by a factor of 1,000. How many people citing these numbers understood the process that generated them, or the high degree of uncertainty surrounding them, or the inaccuracy of past estimates? How many would have pointed out that saying that charities vary by a factor of 1,000 in effectiveness is by itself not very helpful, and is more a statement about how bad the bottom end is than how good the top end is?
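To make that last point concrete, here is a toy illustration with invented figures (these are not real charity estimates): a headline 1,000x spread can be driven almost entirely by the bottom end.

```python
# Toy illustration (all figures invented): a 1,000x spread between the
# best and worst charity says little about the gaps near the top.
costs_per_life = {        # hypothetical dollars per life saved
    "best":   2_000,
    "good":   5_000,
    "median": 200_000,
    "worst":  2_000_000,
}

spread = costs_per_life["worst"] / costs_per_life["best"]
top_gap = costs_per_life["good"] / costs_per_life["best"]

print(spread)   # 1000.0 -- the headline ratio, dominated by the bottom end
print(top_gap)  # 2.5 -- the ratio that matters when choosing among good options
```

Under these made-up numbers, the "factor of 1,000" is true but mostly tells you the worst charities are terrible; the decision-relevant gap among plausible top picks is far smaller.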

 

More problematic than the careless bandying of numbers is the tendency toward not doing strong background research. A common pattern I see is: an effective altruist makes a bold claim, then when pressed on it offers a heuristic justification together with the claim that “estimation is the best we have”. This sort of argument acts as a conversation-stopper (and can also be quite annoying, which may be part of what drives some people away from effective altruism). In many of these cases, there are relatively easy opportunities to do background reading to further educate oneself about the claim being made. It can appear to an outside observer as though people are opting for the fun, easy activity (speculation) rather than the harder and more worthwhile activity (research). Again, I'm not claiming that this is people's explicit thought process, but it does seem to be what ends up happening.

 

Why haven't more EAs signed up for a course on global security, or tried to understand how DARPA funds projects, or learned about third-world health? I've heard claims that this would be too time-consuming relative to the value it provides, but this seems like a poor excuse if we want to be taken seriously as a movement (or even just want to reach consistently accurate conclusions about the world).

 

Over-reliance on a small set of tools


Effective altruists tend to have a lot of interest in quantitative estimates. We want to know what the best thing to do is, and we want a numerical value. This causes us to rely on scientific studies, economic reports, and Fermi estimates. It can cause us to underweight things like the competence of a particular organization, the strength of the people involved, and other “intangibles” (which are often not actually intangible but simply difficult to assign a number to). It can also cause us to over-focus on money as a unit of altruism, while oftentimes “it isn't about the money”: it's about doing the groundwork that no one is doing, or finding the opportunity that no one has found yet.

 

Quantitative estimates often also tend to ignore flow-through effects: effects which are an indirect, rather than direct, result of an action (such as decreased disease in the third world contributing in the long run to increased global security). These effects are difficult to quantify but human and cultural intuition can do a reasonable job of taking them into account. As such, I often worry that effective altruists may actually be less effective than “normal” altruists. (One can point to all sorts of examples of farcical charities to claim that regular altruism sucks, but this misses the point that there are also amazing organizations out there, such as the Simons Foundation or HHMI, which are doing enormous amounts of good despite not subscribing to the EA philosophy.)

 

What's particularly worrisome is that even if we were less effective than normal altruists, we would probably still end up looking better by our own standards, which explicitly fail to account for the ways in which normal altruists might outperform us (see above). This is a problem with any paradigm, but the fact that the effective altruist community is small and insular and relies heavily on its paradigm makes us far more susceptible to it.

Comments (107)

Comment author: Vaniver 05 January 2014 03:55:39PM 12 points [-]

I would argue (perhaps self-servingly) that academia is another example of such a path

Academia is, in my mind, the textbook example of people doing something because it's familiar, not because they've searched for it and it's the right choice. Most of the academics I know will freely state that it only makes sense to go into academia for fame, not for money, and so it's not clear to me what you think the EA benefit is. (Convincing students to become EA? Funding student organizations seems like a better way to do that.)

Comment author: jsteinhardt 05 January 2014 06:25:16PM 3 points [-]

Most of the academics I know will freely state that it only makes sense to go into academia for fame, not for money, and so it's not clear to me what you think the EA benefit is.

The goal is to get direct impact by doing high-impact research. One of the key points here is that donating money is just one particularly straightforward way to do good!

Academia is, in my mind, the textbook example of people doing something because it's familiar, not because they've searched for it and it's the right choice.

I have certainly seen this before, although I think it's less prevalent (but by no means absent) near the top.

Comment author: peter_hurford 05 January 2014 08:36:50PM 2 points [-]

The goal is to get direct impact by doing high-impact research.

One concern is that high-impact research is hard to come by. But it's definitely a possibility (and one that 80K has acknowledged in many places)! What kind of research are you looking into?

Comment author: tog 05 January 2014 09:25:21PM 1 point [-]

The goal is to get direct impact by doing high-impact research.

Can you expand on that a little, to spell out some concrete high-impact research opportunities that you think some EAs should be focusing their careers on?

Comment author: jsteinhardt 06 January 2014 08:19:33AM 1 point [-]

If you're asking about me personally, I work on artificial intelligence; an (out of date) research statement can be found here. My research also sometimes branches out into nearby fields such as statistics, program analysis, and theoretical computer science.

More generally, if I found myself switching fields then some major contenders would be bioinstrumentation, neuroscience, materials science, synthetic biology, and political science. Of course, the particular choice of problem within a field is in many ways more important than the field itself, so in some sense this list is just cataloguing my biases.

Comment author: Drayin 05 January 2014 07:25:53PM 9 points [-]

"Why haven't more EAs signed up for a course on global security, or tried to understand how DARPA funds projects, or learned about third-world health? I've heard claims that this would be too time-consuming relative to the value it provides, but this seems like a poor excuse if we want to be taken seriously as a movement (or even just want to reach consistently accurate conclusions about the world)."

This one worries me quite a bit. The vast majority of EAs (including myself) have not spent very much time learning about the large players in third-world poverty (e.g. the WHO or the UN). In fact, you can be an "expert" in EA content and know virtually nothing about the rest of the non-profit/charity sector.

Comment author: peter_hurford 05 January 2014 02:59:52PM 18 points [-]

I'm glad to see more of this criticism as I think it's important for reflection and moving things forward. However, I'm not really sure who you're critiquing or why. My response would be that your critique (a) appears to misrepresent what the "EA mainstream" is, (b) ignores comparative advantage, or (c) says things I just outright disagree with.

~

The EA Mainstream

Perhaps the biggest example of this is the prevalence of “earning to give”. While this is certainly an admirable option, it should be considered as a baseline to improve upon, not a definitive answer.

I imagine we know different people, even within the effective altruist community. So I'll believe you if you say you know a decent number of people who think "earning to give" is the best option rather than just a baseline.

However, 80,000 Hours, the career advice organization that basically started earning to give, has itself written an article called "Why Earning to Give is Often Not the Best Option" and says, "A common misconception is that 80,000 Hours thinks Earning to Give is typically the way to have the most impact. We’ve never said that in any of our materials."

Additionally, the earning-to-give people I know (including myself) all agree with the baseline argument but believe earning to give is either best for them relative to other opportunities (e.g., via comparative advantage arguments) and/or best overall even after considering these arguments (e.g., because they are skeptical of EA organizations).

~

Contrast this with, for instance, working at a start-up. Most start-ups are low-impact, but it is undeniable that at least some have been extraordinarily high-impact, so this seems like an area that effective altruists should be considering strongly. Why aren't there more of us at 23&me, or Coursera, or Quora, or Stripe?

I'm not quite sure what you mean by this:

If you're asking "why don't more people work in start-ups?", I don't think EAs are avoiding start-ups in any noticeable way. I'll be working at one, I know several EAs who are working at them, and it doesn't seem to be all that different from software engineering / web development at non-startups, except as would be predicted by non-start-ups providing even better hiring opportunities.

If you're asking "why don't more people start start-ups themselves?", I think you already answered your own question with regard to people being unwilling to take on high personal risk. 80,000 Hours advises people to do start-ups in essays like "Should More Altruists Consider Entrepreneurship?" and "Salary or Startup? How Do-Gooders Can Gain More From Risky Careers". Also, I can think of a few EAs who have started their own start-ups on these considerations. So perhaps people are irrationally risk-averse -- that is a valid critique -- but I don't think it's unique to the EA movement, or that we can do much about it.

If you're asking "why don't more people go into start-ups because these start-ups are doing high impact things themselves and therefore are good opportunities to have direct impact?", then I think you've hit on a valid critique that many people don't take seriously enough. I've heard some EAs mention it, but it is outside the EA mainstream.

~

We want to know what the best thing to do is, and we want a numerical value. This causes us to rely on scientific studies, economic reports, and Fermi estimates. It can cause us to underweight things like the competence of a particular organization, the strength of the people involved, and other “intangibles” (which are often not actually intangible but simply difficult to assign a number to).

I think the EA mainstream would agree with you on this one as well -- GiveWell, for example, has explicitly distanced itself from numerical calculations (albeit recently), and several EAs have called into question the usefulness of cost-effectiveness estimates, a charge that was largely led by GiveWell.

~

Comparative Advantage

And beyond the “obvious” alternatives of start-ups and academia, what of the paths that haven't been created yet? GiveWell was revolutionary when it came about. Who will be the next GiveWell? And by this I don't mean the next charity evaluator, but the next set of people who fundamentally alter how we view altruism.

I definitely agree that fundamentally altering how people view altruism would be very high-impact (if shifted in a beneficial way, of course). But I don't think everyone has the time, skills, or willingness to do this -- or that they even should. I think this ignores the benefits of specialization and trade.

Likewise, instead of EAs taking classes on global security themselves, many defer to GiveWell and expect GiveWell to perform higher-quality research on these giving opportunities. After all, if you have broad trust in GiveWell, it's hard to beat several full-time savvy analysts with your spare time. GiveWell has a comparative advantage here.

~

It also can cause us to over-focus on money as a unit of altruism, while often-times “it isn't about the money”: it's about doing the groundwork that no one is doing, or finding the opportunity that no one has found yet.

Right. But not everyone has the time or talents to do this groundwork. So it seems best if we set up some orgs to do this kind of groundwork (e.g., CEA, MIRI, etc.) and give money to them to let them specialize in these kinds of breakthroughs. And then the people who have the free time can start projects like Effective Fundraising or .impact.

If you're already raising a family and working a full-time job and donating 10%, I think in many cases it's not worth quitting your job or using your free time to look for more opportunities. We don't need absolutely everyone doing this search -- there's comparative advantage considerations here too.

~

Outright Disagreement

How many would have pointed out that saying that charities vary by a factor of 1,000 in effectiveness is by itself not very helpful, and is more a statement about how bad the bottom end is than how good the top end is?

I think this has been very helpful from a PR point of view. And even if you think flow-through effects even things out more so that charities only differ by 10x or 100x (which I currently don't), that's still significant.

And whether that's condemnation of the bad end or praise for the top end depends on your perspective and standards for what makes an org good or bad. At least, the slope of the curve suggests that a lot of the difference is coming from the best organizations being a lot better than the merely good ones as opposed to the very bad ones being exceptionally bad (i.e., the curve is skewed toward the top, not toward the bottom).

~

Quantitative estimates often also tend to ignore flow-through effects: [...] These effects are difficult to quantify but human and cultural intuition can do a reasonable job of taking them into account.

But can it? How do you know? I think you should take your own "research over speculation" advice here. I don't think we understand flow through effects well enough yet to know if they can be reliably intuited.

~

Outright Agreement

an effective altruist makes a bold claim, then when pressed on it offers a heuristic justification together with the claim that “estimation is the best we have”. [...] It can appear to an outside observer as though people are opting for the fun, easy activity (speculation) rather than the harder and more worthwhile activity (research).

I agree this is an unfortunate problem.

~

Conclusion

Lest this essay give a mistaken impression to the casual reader, I should note that there are many exemplary effective altruists who I feel are mostly immune to the issues above

This is where I get to the question of who your intended audience is. It seems like the EA mainstream either agrees with many of your critiques already (and therefore you're just trying to convince EAs to adopt the mainstream) or you're placing too much burden on EAs to ignore comparative advantage and have everyone become an EA trailblazer.

Comment author: CarlShulman 05 January 2014 06:19:14PM *  26 points [-]

GiveWell, for example, has explicitly distanced themselves from numerical calculations (albeit recently) and several EAs have called into question the usefulness of cost-effectiveness estimates, a charge that was largely lead by GiveWell.

I'll speak up on this one. I am a booster of more such estimates, detailed enough to make assumptions and reasoning explicit. Quantifying one's assumptions lets others challenge the pieces individually and make progress, whereas with a wishy-washy "list of considerations pro and con" there is a lot of wiggle room about their strengths. Sometimes doing this forces one to think through an argument more deeply, only to discover big holes, or that the key pieces also come up in the context of other problems.

In prediction tournaments, training people to use formal probabilities has been helpful for their accuracy.

Also I second the bit about comparative advantage: CEA recently hired Owen Cotton-Barratt to do cause prioritization/flow-through effects related work. GiveWell Labs is heavily focused on it. Nick Beckstead and others at the FHI also do some work on the topic.

It seems like the EA mainstream either agrees with many of your critiques already (and therefore you're just trying to convince EAs to adopt the mainstream)

I think that on some of these questions there is also real variation in opinion that should not simply be summarized as a clear "mainstream" position.

Comment author: JonahSinick 06 January 2014 07:43:21AM 2 points [-]

The question to my mind is whether the value of attempting to make such estimates is sufficiently great so that time spent on them is more cost-effective than just trying to do something directly.

Can you give recent EA related examples of exercises in making quantitative estimates that you've found useful?

To be clear, I don't necessarily disagree with you (it depends on the details of your views on this point). I agree that laying out a list of pros and cons without quantifying things suffers from vagueness of the type you describe. But I strain to think of success stories.

Comment author: peter_hurford 06 January 2014 02:31:29AM 2 points [-]

I'll speak up on this one. I am a booster of more such estimates, detailed enough to make assumptions and reasoning explicit.

I generally agree. But I think there's a large difference between "here's a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers" and "this is how much it costs to save a life". Another problem is that one thing I don't think people take into account when comparing figures (e.g., comparing veg ads to GiveWell) is the difference in epistemic strength behind each number, and that could be a concern.

~

I think that on some of these questions there is also real variation in opinion that should not simply be summarized as a clear "mainstream" position.

I don't know how much variation there is. I don't claim to know a representative sample of EAs. But I do think there's not much variation among EA orgs on the issues I've described as mainstream.

Which positions are you thinking of?

Comment author: CarlShulman 06 January 2014 05:44:43AM *  8 points [-]

But I think there's a large difference between "here's a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers" and "this is how much it costs to save a life".

You still have to answer questions like:

  • "I can get employer matching for charity A, but not B, is the expected effectiveness of B at least twice as great as that for A, so that I should donate to B?"
  • "I have an absolute advantage in field X, but I think that field Y is at least somewhat more important: which field should I enter?"
  • "By lobbying this organization to increase funds to C, I will reduce support for D: is it worth it?"

Those choices imply judgments about expected value. Being evasive and vague doesn't eliminate the need to make such choices, which tacitly quantify the relative value of options.

Being vague can conceal one's ignorance and avoid sticking one's neck out far enough to be cut off, and it can help guard against being misquoted and PR damage, but you should still ultimately be more-or-less assigning cardinal scores in light of the many choices that tacitly rely on them.

It's still important to be clear on how noisy different inputs to one's judgments are, to give confidence intervals and track records to put one's analysis in context rather than just an expected value, but I would say the basic point stands, that we need to make cardinal comparisons and being vague doesn't help.
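The employer-matching question above can be sketched as a tiny expected-value comparison. All numbers here are hypothetical, chosen only to illustrate the threshold the question asks about; they are not real charity estimates.

```python
# Sketch of the employer-matching question, with hypothetical numbers.

def lives_saved(donation, match_multiplier, cost_per_life):
    """Expected lives saved by a donation, given an employer match
    (2.0 means 1:1 matching) and an estimated cost per life saved."""
    return donation * match_multiplier / cost_per_life

donation = 1000.0
a = lives_saved(donation, 2.0, 4000.0)  # charity A: matched, $4,000/life (made up)
b = lives_saved(donation, 1.0, 1600.0)  # charity B: unmatched, $1,600/life (made up)

print(a)  # 0.5
print(b)  # 0.625
# B wins here only because its estimated effectiveness is more than twice A's,
# which is exactly the threshold the matching question asks about.
```

The point is not the particular numbers but that declining to quantify does not make the comparison go away: choosing A or B commits you to an implicit ratio either way.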

Comment author: peter_hurford 06 January 2014 03:14:05PM *  1 point [-]

Like I said to Ishaan:

I agree magnitude is important, for more than just a PR perspective. But it's possible to compare magnitudes without using figures like "$3400.47". I think people go a lot less funny in the head when thinking about "approximately ten times better".

Though I think I agree with [you] that producing figures like "$3400.47" is important for calibration, I don't think our goal should be to equate the lowest estimated figure with the highest impact cause or even automatically assume that a lower estimated figure is a better cause (not that [you] would say that, of course).

Comment author: Ishaan 06 January 2014 04:08:51AM *  2 points [-]

I think there's a large difference between "here's a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers" and "this is how much it costs to save a life"

Note: I do want to know how much it costs to save a life (or QALY or some other easy metric of good). I'd rather have a ballpark conservative estimate than nothing to go off of.

Back when AMF was recommended, I considered the sentence: "we estimate the cost per child life saved through an AMF LLIN distribution at about $3,400.47" to be one of the most useful in the report, because it gave an idea of an approximate upper bound on the magnitude of good to be done and was easy to understand. Sure, it might not be nuanced - but there's a lot to be said for a simple measure of magnitude that helps people make decisions without large amounts of thinking.

When considering altruism (in the future -- I don't earn yet), I wouldn't simply have a charity budget which goes to the most effective cause; I'd also be weighing the benefit to the most effective cause against the benefit to myself.

That is to say, if I find out that saving lives (or some other easy metric of good) is cheaper than I thought, that would encourage me to devote a greater proportion of my income to said cause. The cheaper the cost of good, the more urgent it becomes to me that the good is done.

So it's not enough to simply compare charities in a relative sense to find the best. I think the magnitude of good per cost for the most efficient charity, in an absolute sense, is also pretty important for individual donors making decisions about whether to allocate resources to altruism or to themselves.

Comment author: peter_hurford 06 January 2014 03:12:33PM 0 points [-]

That is to say, if I find out that saving lives (or some other easy metric of good) is cheaper than I thought, that would encourage me to devote a greater proportion of income to said cause. The cheaper the cost of good, the more urgent it becomes to me that the good is done.

This sort of makes sense to me, but it also doesn't. My view is that even if causes were way worse than I currently think they are, they'd still be much more important from a utilitarian perspective than spending on myself. Therefore, I do just construct a charity budget out of all the money I'm willing to give up. I can get the sense of feeling like it is even more urgent that you give up resources, but it already was tremendously urgent in the first place...

But, hey, as long as you're doing altruistic stuff, I'm not going to begrudge you much!

~

So it's not enough to simply compare charities in a relative sense to find the best. I think the magnitude of good per cost for the most efficient charity, in an absolute sense, is also pretty important for individual donors making decisions about whether to allocate resources to altruism or to themselves.

I agree magnitude is important, for more than just a PR perspective. But it's possible to compare magnitudes without using figures like "$3400.47". I think people go a lot less funny in the head when thinking about "approximately ten times better".

Though I think I agree with Carl Shulman that producing figures like "$3400.47" is important for calibration, I don't think our goal should be to equate the lowest estimated figure with the highest impact cause or even automatically assume that a lower estimated figure is a better cause (not that Shulman would say that, of course).

Comment author: Ishaan 06 January 2014 06:30:20PM *  1 point [-]

I'm still a student and am only planning how I might spend money when I have it (it seems like a good idea to have a plan for this sort of thing). Thus far I've been looking at both effective altruism and financial independence (mostly frugality + low-risk investment) blogs as possible options. It's quite possible that once money is actually in my hands and I'm actually in the position of making the trade-off, I'll see the appeal of the "charity budget" method... or I might discover that my preferences are less or more selfish than I originally thought, etc.

Right now, though... suppose the rate was $5 a life. If I was going to go out and buy a $10 sandwich instead of feeding myself via cheaper means for $5, I'd be weighing that sandwich against one human life. I would be a lot more frugal and devote a greater portion of my income to charity if reality were like that. I'd be relatively horrified by frivolous spending.

On the other extreme, if it cost a billion dollars to save a single life, I could spend all my days being frugal and giving to charity and probably wouldn't significantly help even one person. I'd fulfill more of my preferences by just enjoying myself and not worrying about altruism beyond the interpersonal level.

More realistically, if it costs $2,000 to save a life, buying a sandwich at the opportunity cost of saving <1% of a life... it's still sort of selfish to choose the sandwich, but I'm simply not so good a person that I wouldn't sometimes trade 1/100th of a stranger's life for a small bit of luxury. But I'd certainly think about getting, say, a smaller house if it meant I could save an additional 1-2 people a year.

Of course, the "charity budget" model is simple and makes sense on a practical level when the good / dollar rate remains relatively constant - as I suppose it generally does. But I wouldn't actually know how large to make my charity budget, unless I had a sense of how much good I could potentially do.
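The arithmetic in this comment can be sketched directly. This is a minimal illustration, and the cost-per-life figures are the comment's hypotheticals, not real charity estimates:

```python
# Sketch of the trade-off above: how much of a life is forgone by spending
# money on consumption instead of donating it, at various hypothetical
# cost-per-life rates (the $5 / $2,000 / $1 billion figures come from the
# comment; none are real charity estimates).

def lives_forgone(spending: float, cost_per_life: float) -> float:
    """Fraction of a life left unsaved by spending `spending` dollars
    on something else instead of donating it."""
    return spending / cost_per_life

# The $10 sandwich vs. a $5 cheaper meal: $5 of forgone donation.
for cost_per_life in (5, 2_000, 1_000_000_000):
    frac = lives_forgone(5, cost_per_life)
    print(f"at ${cost_per_life:,}/life, the sandwich costs {frac:.9f} lives")
```

At $5 per life the sandwich costs a whole life; at $2,000 it costs a quarter of a percent of one; at a billion dollars it is negligible, which is the comment's point about how the sensible size of a "charity budget" depends on the going rate.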

Comment author: peter_hurford 06 January 2014 11:23:44PM 0 points [-]

I'm also a student about to graduate and have looked a lot at both EA and financial independence. I think you're thinking about things correctly.

Comment author: eli_sennesh 06 January 2014 06:39:43PM 2 points [-]

If you're asking "why don't more people go into start-ups because these start-ups are doing high impact things themselves and therefore are good opportunities to have direct impact?", then I think you've hit on a valid critique that many people don't take seriously enough. I've heard some EAs mention it, but it is outside the EA mainstream.

Especially because most start-ups don't have a direct impact on anything altruistic. Yeah, there are some really cool start-ups out there that can change the world. There are also start-ups with solid business plans that won't change the world. And then there are the majority (in our times of cheap VC money) that won't change the world and often don't even have a solid business plan.

Comment author: peter_hurford 06 January 2014 11:15:14PM 0 points [-]

Obviously it depends on the startup. But I think people undervalue the impact of, say, creating software that significantly boosts productivity.

Comment author: Larks 06 January 2014 02:38:15AM 5 points [-]

careers in finance and software (the two most common avenues for this) are incredibly straight-forward and secure.

What are you talking about? Investment Banking, at least, has a huge attrition rate. Careers in IB are short and brutal.

Comment author: jpaulson 08 January 2014 08:18:55AM *  4 points [-]

(This comment is on career stuff, which is tangential to your main points)

I recently had to pick a computer science job, and spent a long time agonizing over what would have the highest impact (among other criteria). I'm not convinced startups or academia have a higher expected value than large companies. I would like to be convinced otherwise.

(Software) Startups:

1) Most startups fail. It's easy to underestimate this because you only hear the success stories.

2) Many startups are not solving "important" problems. They are solving relatively minor problems for relatively rich people, because that's where the money is. Snapchat, Twitter, Facebook, Instagram are examples.

3) Serious problems are complicated, and usually require more resources than a startup can bring to bear.

4) Financially: If you aren't a founder, your share of the company is negligible.

(Computer Science) Academia:

1) My understanding is that there are dozens of applications for each tenure-track opening. So your chance of success is low, and your marginal advantage over the next-best applicant is probably low.

2) I trust markets more than grant committees for distributing money.

3) It seems easier to get sidetracked into non-useful work in academia.

Comment author: tog 06 January 2014 02:41:46PM *  4 points [-]

Thanks for the interesting critique. I agree with you that EAs often make over-confident claims without solid evidence, although I don't think it's a huge issue that people sometimes understate how much it costs to save a life, as even the most pessimistic realistic estimates of this cost don't undermine the case for donating significant sums to cost-effective charities.

Am I right in understanding that you think that too many EAs are pursuing earning to give careers in finance and technology, whereas you think they'd have greater impact if they worked in start-ups? If so, could you provide some more explanation of why you think this? It seems plausible to me that earning to give is one of the highest-impact career options for many EAs, given the enormous amount of good that donations to the most effective charities can do.

Finally, you say you "worry that effective altruists may actually be less effective than “normal” altruists". That's a pretty striking claim! Can you expand on it a little? In particular, could you give a typical example of 'normal' altruism, and explain why you think it might be more effective than pursuing an earning to give career and donating large sums to a charity like SCI?

Comment author: jsteinhardt 07 January 2014 05:08:43AM 4 points [-]

In particular, could you give a typical example of 'normal' altruism

I gave the Simons Foundation as an example in my essay. Among other things, they fund the arXiv, which already seems to me to be an extremely valuable contribution. Granted, Simons made huge amounts of money as a quant, but as far as I know he isn't explicitly an EA, and he certainly wasn't "earning to give" in the conventional sense of just giving to top GiveWell charities.

Comment author: tog 07 January 2014 07:09:24AM 1 point [-]

I gave the Simons Foundation as an example in my essay. Among other things, they fund the arXiv, which already seems to me to be an extremely valuable contribution.

Thanks - I agree that it's not prima facie absurd that that donation did more good than an equivalent amount of money to AMF would have done. However it seems significantly better than a typical example of normal altruism, which I'd think of as being something like a donation to a moderately effective domestic poverty charity.

Comment author: jsteinhardt 07 January 2014 07:20:43AM 4 points [-]

I don't think it's fair to compare to a "typical example" of normal altruism, because most people who donate do not put much serious thought into what they're going to do with their money. I think the fair comparison would be to altruists who are non-EAs but put comparable amounts of thought into what they do. At that point it's not clear to me that EAs are doing better (especially if we look at the reference class of people who are willing to devote their entire career to a cause).

Of course, I agree that it would be good if as a society we shifted the cultural default for how to donate to be something more effective (e.g. "known to be effective charity" instead of "random reasonable-looking domestic charity"). This is one good thing that I see the EA movement accomplishing and hope that it will continue to accomplish.

Comment author: tog 07 January 2014 11:32:19AM 1 point [-]

OK, I see where you're coming from, and you have a good point (though you might want to consider adjusting the phrasing of the claim in your original post, which as I said came across as very strong).

Comment author: James_Miller 05 January 2014 11:22:13PM 4 points [-]

Effective Altruism (and critiques of it) need to think at the margin. If I give $X to an organization doing good, chances are this won't displace someone else from giving the organization money. In contrast, if I get a job at such an organization, I have probably displaced someone else from taking that job. This kind of marginal analysis greatly strengthens the value of the “earning to give” path of effective altruism.

Comment author: V_V 05 January 2014 11:39:23PM *  -3 points [-]

This kind of marginal analysis justifies behaviors such as not voting (your vote isn't likely to decide the election) or buying drugs from criminal organizations that murder thousands of people per year (your purchases aren't going to significantly affect their size and operations).

EDIT:

"Act only according to that maxim by which you can at the same time will that it should become a universal law."
-- Immanuel Kant

Comment author: gjm 06 January 2014 12:03:20AM 4 points [-]

... justifies behaviours such as ...

Your argument is missing a step, namely the one where you show that those things really are very bad even though this sort of analysis suggests that they do little harm.

[categorical imperative]

It is possible that James doesn't agree with Kant. But if he does, I suggest that he can clearly respond along these lines: "The maxim by which I propose acting is that of acting to maximize expected utility. If everyone does this then perhaps 1000 people will buy drugs from a criminal organization and hence enable it to commit a few more murders -- but they will only do that if the good each of them is able to do by buying those drugs outweighs the (incremental) harm caused by their contribution to the criminal organization, in which case collectively they will do enough good to outweigh those extra murders. I am happy to live in a society that makes such tradeoffs."

But perhaps you are imagining a version of James that would endorse buying the drugs even if that does no good at all, merely on the grounds that the harm done is small. I agree that this (straw?) James couldn't respond along those lines, but I don't see any grounds for thinking the real James takes that view. He hasn't argued that getting a job for an organization that does good would do relatively little good, so there's no value in it; he's argued that getting such a job would do less good than earning a lot of money and giving much of it away.

Comment author: V_V 06 January 2014 12:28:11AM -2 points [-]

Your argument is missing a step, namely the one where you show that those things really are very bad

Criminal organizations murdering thousands of people are not something very bad?

even though this sort of analysis suggests that they do little harm.

It is a reductio ad absurdum.

But perhaps you are imagining a version of James that would endorse buying the drugs even if that does no good at all, merely on the grounds that the harm done is small.

Buying drugs (well, let's say marijuana) supposedly does good to the rational consumers who like them. Like buying child pornography or visiting child prostitutes in third world countries does good to pedophiles, and so on.
All these people could use marginal analysis to argue that whatever harm they are doing is negligible and doesn't outweigh their gains.

If you agree with them then clearly you have much different moral principles that I have.

Comment author: James_Miller 06 January 2014 12:38:25AM *  2 points [-]

A friend comes to you and says "I really like marijuana but recognize that my using it harms people because of the nasty drug trade. I am considering either (1) not using marijuana, or (2) using marijuana but giving $50,000 a year more than I normally would to charity. I will give to GiveWell's top charity. The second option would give me a happier life. I trust your judgement. Which of these two options is morally better? "

Comment author: V_V 06 January 2014 01:17:08AM *  0 points [-]

Uh? The proper analogy is that your friend says "I'm considering working at a minimum wage blue collar job and not donating anything or working as a drug gangster and giving $50,000 a year to GiveWell's top charity. The second option would give me a happier life. I trust your judgement. Which of these two options is morally better? "

Comment author: James_Miller 06 January 2014 01:42:10AM 2 points [-]

Alright, I pick the drug gangster path, taking into account the fact that his being a drug gangster probably displaces someone else from selling to his customers and so the marginal harm of this career choice isn't all that high.

Comment author: V_V 06 January 2014 01:44:24AM 2 points [-]

Ok, we clearly have irreconcilably different values.

Comment author: gjm 06 January 2014 01:40:38PM 1 point [-]

(I see you've been downvoted. It wasn't by me. I very seldom both downvote someone and bother to argue with them.)

Criminal organizations murdering thousands of people are not something very bad?

No, that isn't what I said nor what I meant. The thing that might or might not be very bad is doing business with such a criminal organization, not the existence or the activities of the organization (which uncontroversially are almost certainly very bad things).

All these people could use marginal analysis to argue that whatever harm they are doing is negligible and doesn't outweigh their gains.

Could they? I mean, obviously anyone can argue anything, but what's relevant here is whether they could actually demonstrate that their benefit outweighs the marginal harm done. For that to be true in the case of a paedophile visiting a child prostitute, for instance, the relevant question would be: Has the paedophile's extra pleasure exceeded the child's extra suffering?

For this to be a successful instance of your argument, you need to show two things: (1) that the paedophile's extra pleasure really does outweigh the child's extra suffering, and then (2) that despite that what s/he does is a bad thing. It seems to me that #1 is going to be extremely difficult, to say the least. (Which is why almost everyone is opposed to the prostitution of children.) And if #1 is wrong then #2 doesn't arise. (And the easier question of whether what the paedophile does is a bad thing simpliciter is irrelevant to our argument here, because if #1 is wrong then this isn't an instance that an argument like James's could justify.)

Choosing the sexual abuse of children as the instance to work on here, by the way, is probably an effective rhetorical move in many places because it makes it difficult for someone to disagree with you without looking like an advocate for sexually abusing children. On LW, however, the audience is sufficiently clear-thinking that I am not worried that many people will jump to that wrong conclusion, which means you just get to look like someone who's trying to pull a sleazy rhetorical move. Which is probably why you're getting downvoted. A more productive (and, on LW, probably more effective) approach is to avoid such hot-button topics rather than embracing them -- or (if they're genuinely essential to the argument you're making) to distinguish clearly between asking "why doesn't your argument justify X?" and insinuating that your discussion partner actually does approve of X.

Comment author: V_V 06 January 2014 05:41:49PM 2 points [-]

(I see you've been downvoted. It wasn't by me. I very seldom both downvote someone and bother to argue with them.)

I don't care about votes, anyway.

No, that isn't what I said nor what I meant. The thing that might or might not be very bad is doing business with such a criminal organization, not the existence or the activities of the organization (which uncontroversially are almost certainly very bad things).

These organizations can only exist as long as there are people doing business with them.

Could they? I mean, obviously anyone can argue anything, but what's relevant here is whether they could actually demonstrate that their benefit outweighs the marginal harm done. For that to be true in the case of a paedophile visiting a child prostitute, for instance, the relevant question would be: Has the paedophile's extra pleasure exceeded the child's extra suffering?

Well, the paedophile could argue that the child hooker in the streets of Bangkok is going to remain a hooker whether he visits him/her or not. After all, he is only displacing another customer, who, as far as he knows, could treat the child prostitute worse than he would. Even if there is no other customer on that particular day, the life of the child prostitute isn't going to become noticeably different on the margin.
Does this make visiting the child prostitute morally justifiable?

On LW, however, the audience is sufficiently clear-thinking that I am not worried that many people will jump to that wrong conclusion, which means you just get to look like someone who's trying to pull a sleazy rhetorical move. Which is probably why you're getting downvoted.

I think you have an over-optimistic opinion of the audience here. People just tend to up-vote things that confirm their beliefs and down-vote things that challenge them.

A more productive (and, on LW, probably more effective) approach is to avoid such hot-button topics rather than embracing them -- or (if they're genuinely essential to the argument you're making) to distinguish clearly between asking "why doesn't your argument justify X?" and insinuating that your discussion partner actually does approve of X.

I didn't insinuate that people who are making "marginal ethics" arguments here are paedophiles who visit child prostitutes. I made a reductio ad absurdum argument to show that marginal ethics can lead to absurd ethical positions, at least in the opinion of those who believe that visiting child prostitutes is immoral.

Comment author: James_Miller 06 January 2014 12:27:35AM *  0 points [-]

"The maxim by which I propose acting is that of acting to maximize expected utility. If everyone does this then perhaps 1000 people will buy drugs from a criminal organization and hence enable it to commit a few more murders -- but they will only do that if the good each of them is able to do by buying those drugs outweighs the (incremental) harm caused by their contribution to the criminal organization, in which case collectively they will do enough good to outweigh those extra murders. I am happy to live in a society that makes such tradeoffs."

YES

"he's argued that getting such a job would do less good than earning a lot of money and giving much of it away."

YES the value being the difference between if the organization hired you compared to their next best alternative.

Comment author: James_Miller 06 January 2014 12:24:08AM 2 points [-]

Voting is probably irrational unless you enjoy it. My vote won't matter unless the election would otherwise be a tie, which probably implies that one candidate isn't much, much worse than another. But your drug conclusion doesn't follow from marginal analysis, because my giving, say, $1000 to the Mexican Mafia might increase the murder rate by enough to make my actions immoral.

By the Kant quote, I'm obliged to grow food, because if no one grew food billions would die. The Kant quote violates consequentialism, although since Kant is a famous philosopher and my objection is obvious, I suspect he would have a good counter-reply.

Comment author: [deleted] 06 January 2014 05:32:34PM *  6 points [-]

Earlier discussions on "is voting rational?".

http://lesswrong.com/lw/fao/voting_is_like_donating_thousands_of_dollars_to/

Summary: People often say that voting is irrational, because the probability of affecting the outcome is so small. But the outcome itself is extremely large when you consider its impact on other people. I estimate that for most people, voting is worth a charitable donation of somewhere between $100 and $1.5 million. For me, the value came out to around $56,000.

Moreover, in swing states the value is much higher, so taking a 10% chance at convincing a friend in a swing state to vote similarly to you is probably worth thousands of expected donation dollars, too.

I find this much more compelling than the typical attempts to justify voting purely in terms of signal value or the resulting sense of pride in fulfilling a civic duty. And voting for selfish reasons is still almost completely worthless, in terms of direct effect. If you're on the way to the polls only to vote for the party that will benefit you the most, you're better off using that time to earn $5 mowing someone's lawn. But if you're even a little altruistic... vote away!

http://lesswrong.com/lw/faq/does_my_vote_matter/

Does My Vote Matter?

Yes, if the election is close. You'll never get to know that your vote was decisive, but one vote can substantially change the odds on Election Day nonetheless. Even if the election is a foregone conclusion (or if you don't care about the major candidates), the same reasoning applies to third parties: there are thresholds that really matter to them, and if they reach those now they have a significantly better chance in the next election. And finally, local elections matter in the long run just as state or national elections do. So, in most cases, voting is rational if you care about the outcome.

http://www.nber.org/papers/w15220.pdf

Gelman, Silver and Edlin estimated that the average American voter has a 1 in 60 million chance of deciding the election.

http://lesswrong.com/lw/faq/does_my_vote_matter/7s5t

One's vote matters not because in rare circumstances it might be decisive in selecting a winner. One's vote matters because by voting you reaffirm the collective intentionality that voting is how we settle our differences. All states exist only through the consent of their people. By voting you are asserting your consent to the process and its results. Democracy is strengthened through the participation of the members of society. If people fail to participate, society itself suffers.

http://lesswrong.com/lw/vi/todays_inspirational_tale/

I should also mention that voting is a Newcomblike problem. As I don't believe rational agents should defect in the 100-fold iterated prisoner's dilemma, I don't buy the idea that rational agents don't vote.

But a vote for a losing candidate is not "thrown away"; it sends a message to mainstream candidates that you vote, but they have to work harder to appeal to your interest group to get your vote. Readers in non-swing states especially should consider what message they're sending with their vote before voting for any candidate, in any election, that they don't actually like.
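The expected-value argument summarized in these excerpts is a short multiplication. As a minimal sketch: the decisiveness probability is the Gelman/Silver/Edlin figure cited above (1 in 60 million), while the population size and the per-person dollar value of the better candidate winning are assumed, purely illustrative numbers:

```python
# Sketch of the altruistic expected-value-of-voting calculation.
# P_DECISIVE is the Gelman/Silver/Edlin 1-in-60-million figure; the other
# two numbers are illustrative assumptions, not taken from the sources.
P_DECISIVE = 1 / 60_000_000      # chance one vote swings the election
POPULATION = 300_000_000         # people affected by the outcome
BENEFIT_PER_PERSON = 50.0        # assumed $ value per person of the better outcome

expected_value = P_DECISIVE * POPULATION * BENEFIT_PER_PERSON
print(f"expected altruistic value of one vote: ${expected_value:,.2f}")
```

With these assumptions the expected value comes to about $250: selfishly negligible, but non-trivial once everyone affected is counted, which is the structure of the argument in the linked posts (their own parameter choices yield figures from $100 up to $1.5 million).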

Comment author: Creutzer 06 January 2014 10:58:20AM *  1 point [-]

Surprisingly, they don't, at least as far as I know. I haven't ever heard of anybody giving, or even trying to give, a proper definition of a maxim, in particular of the level at which it is to be stated (that is underspecified, if not to say unspecified, which makes the whole categorical imperative extremely vulnerable to rationalizations), and of the way that the description of the hypothetical situation in which the maxim is universalised is to be computed. My suspicion, though I haven't done any research to confirm it, is that this is because philosophers who like Kantian ethics don't like formal logic and have no clue about causal models and counterfactuals.

Comment author: hyporational 06 January 2014 09:18:13AM 0 points [-]

While your vote won't matter, what about convincing many people that their votes don't matter?

Comment author: James_Miller 06 January 2014 05:10:03PM 0 points [-]

Spending money on advertising to influence an election can be rational.

Comment author: Strange7 05 January 2014 02:49:01PM 4 points [-]

Naive efficient-market analysis suggests that if finance and computer programming are predictable and lucrative careers, there should be some less stable career option which is even more lucrative on average. For someone who's genuinely earning to give, and planning to keep only a pittance for their own survival regardless, that variability shouldn't matter.

Comment author: James_Miller 05 January 2014 11:35:17PM 2 points [-]

No, because only a tiny percentage of the population has a high enough IQ and work ethic to succeed at the elite level of these jobs. To do well at a top investment bank you need to be willing to work (in your early 20s at least) 80+ hours a week at a job that's often stressful and boring, and where you have little choice over what you do, despite the fact that you have the ability to get a job that's much more interesting, involves half the work, and yet yields an upper-middle-class lifestyle.

Comment author: jsteinhardt 05 January 2014 07:12:04PM 1 point [-]

I agree with this but my point was more to go for high impact directly by producing a socially valuable product.

Comment author: eli_sennesh 06 January 2014 06:41:43PM 0 points [-]

In which case you should warn people against investment banking.

(No, seriously, this is not just ideological whinging. If you're going for impact rather than money, you don't want to be an investment banker, you want to be in the actual companies being funded by investment bankers.)

Comment author: private_messaging 05 January 2014 02:52:04PM *  1 point [-]

How about computer programming start-ups? (Though in most cases a specific one is predictably non-lucrative.)

Comment author: lukeprog 05 January 2014 05:47:45PM 8 points [-]

The history of effective altruism is littered with over-confident claims, many of which have later turned out to be false. In 2009, Peter Singer claimed that you could save a life for $200 (and many others repeated his claim). While the number was already questionable at the time, by 2011 we discovered that the number was completely off. Now new numbers were thrown around: from numbers still in the hundreds of dollars (GWWC's estimate for SCI, which was later shown to be flawed) up to $1600 (GiveWell's estimate for AMF, which GiveWell itself expected to go up, and which indeed did go up).

Another good example is GiveWell's 2009 estimate that "Because [our] estimate makes so many conservative assumptions, we feel it is overall reasonable to expect [Village Reach's] future activities to result in lives saved for under $1000 each."

Comment author: timtyler 09 January 2014 03:01:20AM 5 points [-]

"8 lives saved per dollar donated to the Machine Intelligence Research Institute. — Anna Salamon"

Comment author: lukeprog 09 January 2014 03:55:40AM 3 points [-]

Pulling this number out of the video and presenting it by itself, as Kruel does, leaves out important context, such as Anna's statement "Don't trust this calculation too much. [There are] many simplifications and estimated figures. But [then] if the issue might be high stakes, recalculate more carefully." (E.g. after purchasing more information.)

However, Anna next says:

I've talked about [this estimate] with a lot of people and the bargain seems robust. Maybe you go for a soft takeoff scenario, [then the estimate] comes out maybe an order of magnitude lower. But it still comes out [as] unprecedentedly much goodness that you can purchase for a little bit of money or time.

And that is something I definitely disagree with. I don't think the estimate is anywhere near that robust.

Comment author: AnnaSalamon 09 January 2014 07:28:07PM 5 points [-]

I agree with Luke's comment; compared to my views in 2009, the issue now seems more complicated to me; my estimate of impact from donation re: AI risk is lower (though still high); and I would not say that a particular calculation is robust.

Comment author: satt 11 January 2014 12:16:49PM 2 points [-]

my estimate of impact from donation re: AI risk is lower (though still high)

Out of curiosity, what's your current estimate? I recognize it'll be rough, but even e.g. "more likely than not between $1 and $50 per life saved" would be interesting.

Comment author: V_V 09 January 2014 04:18:32PM 3 points [-]

And that is something I definitely disagree with. I don't think the estimate is anywhere near that robust.

Is this MIRI official position? Because, AFAIK that estimate was never retracted.

Anyway, the problem doesn't seem to be so much the exact numbers as the process: what she did was essentially a travesty of a Fermi estimate, where she pulled numbers out of thin air and multiplied them together to get a self-serving result.

This person is "Executive Director and Cofounder" of CFAR. Is this what they teach for $1,000 a day? How to fool yourself by performing a mental ritual with made up numbers?
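The fragility being criticized here — that multiplying several shaky point estimates compounds their errors — can be illustrated with a quick simulation. All factor values and the error spread below are invented for illustration and have nothing to do with the actual talk:

```python
# Illustration of why products of uncertain point estimates are fragile:
# if each of four factors is only known to within a factor of 3, the
# product can easily be off by more than an order of magnitude.
import math
import random

random.seed(0)

def fermi_sample(factors, spread=3.0):
    """Multiply the factors, each perturbed by an independent
    log-uniform error between 1/spread and spread."""
    result = 1.0
    for x in factors:
        result *= x * math.exp(random.uniform(-math.log(spread), math.log(spread)))
    return result

# Four made-up factors whose point-estimate product is 800.
samples = sorted(fermi_sample([8.0, 100.0, 0.5, 2.0]) for _ in range(10_000))
low, high = samples[500], samples[-500]   # ~5th and ~95th percentiles
print(f"point estimate: 800; 90% of simulated outcomes lie in [{low:.0f}, {high:.0f}]")
```

The 90% interval spans well over an order of magnitude even though no individual factor is off by more than 3x, which is one way to make precise the objection that such estimates should not be treated as robust.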

Comment author: lukeprog 09 January 2014 05:47:39PM *  4 points [-]

Is this MIRI official position? Because, AFAIK that estimate was never retracted.

I don't know what Anna's current view is. (Edit: Anna has now given it.)

In general, there aren't such things as "MIRI official positions," there are just individual persons' opinions at a given time. Asking for MIRI's official position on a research question is like asking for CSAIL's official opinion on AGI timelines. If there are "MIRI official positions," I guess they'd be board-approved policies like our whistleblower policy or something.

Comment author: V_V 10 January 2014 01:56:41PM 0 points [-]

Thanks for the answer

Comment author: David_Gerard 12 June 2014 06:11:01PM *  -1 points [-]

You are ignoring that the slide being projected as she was saying it emphasises the point - it was being treated as an important point to make.

"It's out of context!" is a weaselly argument, and one that, having watched the video and read the transcript, I really just don't find credible. It's not at all at odds with the context. The context is fully available. Anna made that claim, she emphasised it as a point worth noting beforehand in the slide deck, she apparently meant it at the time. You're attempting to discredit Kruel in general by ad hominem, and doing so in a manner that is simply not robust.

Comment author: paper-machine 12 June 2014 06:48:24PM *  -2 points [-]

I see nowhere the claim that Kruel pretended to quote from that video.

That's clearly a rough estimate of the value of a positive singularity, and MIRI only studies one pathway to it. MIRI donations are not fungible with donations to a positive singularity, which needs to be true for Kruel's misquote to be even roughly equivalent to what Salamon actually said.

Even if we grant that unstated premise, there's her disclaimer that the estimate (of the value of a positive singularity) is important to be written down explicitly (Principle 1 @ 7:15) even if it is inaccurate and cannot be trusted (Principle 2 directly afterward).

Kruel has proven himself to be an unreliable narrator wherever MIRI is concerned; saying people should be extremely skeptical of his claims is not pulling an ad hominem.

Comment author: David_Gerard 13 June 2014 07:26:17AM 0 points [-]

I see nowhere the claim that Kruel pretended to quote from that video.

12:31. "You can divide it up, per half day of time, something like 800 lives. Per $100 of funding, also something like 800 lives." There's a slide up at that moment making the same claim. It wasn't a casual aside, it was a point that was part of the talk.

Kruel has proven himself to be an unreliable narrator wherever MIRI is concerned;

He wasn't in this case, and you haven't shown it in any other case. Do you have a list to hand?

Comment author: paper-machine 13 June 2014 02:30:11PM *  -3 points [-]

Please respond to the second paragraph of my previous comment, which explains why this doesn't mean what Kruel claims it means. Also note that I am not claiming it was not an important point in her talk.

Kruel has proven himself to be an unreliable narrator wherever MIRI is concerned;

He wasn't in this case, and you haven't shown it in any other case. Do you have a list to hand?

You claim he wasn't. I find three serious misrepresentations. 1) The original estimate was not about MIRI funding; 2) The original estimate was heavily disclaimed excepting a statement about "robustness"; 3) Salamon retracted it, including the robustness claim.

As for XiXi's history of acting in bad faith, you should be more than familiar with it. But if you insist, here is his characterization of his criticism:

That said, on several occasions I failed to adopt the above principles and have often mocked MIRI/LW when it would have been better to engage in more serious criticism. But I did not fail completely. See for example my primer on AI risks or the interviews that I conducted with various experts about those risks. I cannot say that MIRI/LW has been trying to rephrase the arguments of their critics in the same way that I did, or went ahead and asked experts to review their claims.

(emphasis added). Note that this comment was posted three weeks before his post on the Salamon misquote.

Comment author: jsteinhardt 11 January 2014 06:29:32AM 0 points [-]

I don't think you should form your opinion of Anna from this video. It gave me an initially very unfavorable impression that I updated away from after a few in-person conversations.

(If you read the other things I write you'll know that I'm nowhere close to a MIRI fanatic so hopefully the testimonial carries some weight.)

Comment author: lukeprog 05 January 2014 05:54:36PM 7 points [-]

It seems to me that the effective altruist movement over-focuses on “tried and true” options, both in giving opportunities and in career paths. Perhaps the biggest example of this is the prevalence of “earning to give”.

I would have guessed that the biggest example is the focus on poverty reduction / global health initiatives that GiveWell and GWWC have traditionally focused nearly all their attention on. E.g. even though Holden has since the beginning suspected that the highest-EV altruistic causes are outside global health, this point isn't mentioned on GiveWell's historical "top charities" pages (2012, 2011, 2010, 2009, 2008), which emphasize the important focus on "tried and true" charitable interventions.

Comment author: owencb 05 January 2014 02:17:15PM 7 points [-]

One in six Yale graduates go into finance and consulting, seemingly due to the simplicity of applying and the easy supply of extrinsic motivation. My intuition is that this ratio is higher than an optimal society would have, even if such people commonly gave generously.

Because those one-in-six don't all give generously, we can't conclude whether it's right at the margins for graduates to go into earning to give, even if we grant the assumption about the ratio in an optimal society.

I agree that it's worth looking at a wider spread of career possibilities, but this isn't the argument to use to get there.

Comment author: eli_sennesh 06 January 2014 06:36:57PM *  -1 points [-]

I think the stronger point against finance and consulting is that they are very hedonically suboptimal. Or, in plain language: their motivation ratio of intrinsic to extrinsic is so damn low that they burn people out really, really quickly. Even if you're going to donate every penny to $YOUR_FAVORITE_CAUSE, wrecking your health and your psyche within less than a decade to do so is unsustainable and miserable.

Strong evidence: burnout rates in finance and consulting are high, very few students enter university looking to enter those careers, and very few students from sub-elite institutions ever enter them at all (indicating that if you're not being deliberately tempted with a fat salary and bonus check, there's just very little reason to go to finance).

Comment author: private_messaging 06 January 2014 01:53:41PM *  4 points [-]

What is particularly worrisome to me is that the positive effects of interventions such as improvements in education are much harder to quantitatively estimate.

Say, an individual can make the choice to be a judge in the US, or to be a banker and donate a lot of money to charity. The straightforward calculation does not take into account the importance of good people among judges; without such people the US would probably be in no position to send aid (and would need monetary aid itself).

Comment author: eli_sennesh 06 January 2014 06:45:53PM *  -2 points [-]

Back of the envelope calculation:

Let's say the country spends X*$10k/year to jail a prisoner. This is tax money. There's also Y*$10k/year, the lost economic value that would have been generated by the prisoner being able to actually do something with their life.

Now let's say that a judge who puts himself in the position to keep people out of jail can "save" N people from jail each year.

Then the altruistic impact of becoming a judge who keeps people out of jail is N*(X+Y)*$10k/year for each such judge. That's easily going to be on par with the amount of money a six-figure-earning computer programmer can donate to charity each year, and you haven't even donated part of your salary yet.
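The back-of-envelope numbers above are easy to put into a tiny script (a sketch only; X, Y, and N are the comment's illustrative placeholders, and the inputs below are hypothetical, not real figures):

```python
def judge_impact(x, y, n):
    """Toy version of the comment's estimate.

    x: annual cost of jailing one prisoner, in units of $10k (tax money)
    y: annual economic value the prisoner could otherwise produce, in units of $10k
    n: people the judge keeps out of jail per year
    Returns the estimated altruistic impact in dollars per year: N*(X+Y)*$10k.
    """
    return n * (x + y) * 10_000

# Hypothetical inputs: $30k/year incarceration cost, $40k/year lost
# output, 2 people kept out of jail per year.
print(judge_impact(3, 4, 2))  # -> 140000 dollars per year
```

On these made-up inputs the figure is already comparable to what a six-figure earner can donate annually, which is the comment's point.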

Comment author: V_V 09 January 2014 03:37:38PM 5 points [-]

Then the altruistic impact of becoming a judge who keeps people out of jail is N*(X+Y)*$10k/year for each such judge.

And the impact of abolishing jails altogether is 10k/year *-... oh wait!

Comment author: eli_sennesh 10 January 2014 03:43:25PM -2 points [-]

You are assuming there is a net utility gain to society from all imprisonments. I think, given what we know about, say, the War on Drugs, that this is obviously false.

Comment author: V_V 10 January 2014 04:59:58PM 4 points [-]

You are assuming there is a net utility gain to society from all imprisonments.

No, I'm just mocking your grossly naive calculation which assumes that keeping people in jail has only costs and disregards the obvious benefits which are the whole point of having jails in the first place.

Comment author: eli_sennesh 11 January 2014 10:57:55AM -2 points [-]

Except that I was making an "exists such that" point, not a "forall" point.

Comment author: gjm 11 January 2014 01:37:31PM -1 points [-]

I think you could have made that clearer when making the point in the first place.

So, anyway, the suggestion is that it might be very high-impact to be a judge who keeps people who shouldn't be in jail but would be jailed by many other judges out of jail. It might ... but I wonder how many such cases a typical judge actually encounters (I think casualties of the War On Drugs don't make up that large a fraction of the prison population) and how much power they have to keep those people out of jail (aren't there mandatory sentences in many cases?). Do you have the relevant information?

The point is that if N is, say, 0.2 then our hypothetical 6-figure earner could easily be giving more than that in charitable donations.

One other really important point. Altruistic impact is not measured in dollars but in utility. Giving N*(X+Y)*$10k to the government may do much less good than giving it to an effective charity.

Comment author: Lumifer 10 January 2014 03:45:26PM 2 points [-]

You are assuming there is a net utility gain to society from all imprisonments

Do you mean "from all" or do you mean "from each"?

Comment author: eli_sennesh 10 January 2014 03:51:16PM -1 points [-]

I mean that he appears to believe society gains net utility from keeping each individual prisoner in jail, versus releasing them, but also keeping all the prisoners in jail as a group, versus releasing them all. I'm choosing not to distinguish between the admittedly separate effects of one person being released versus an entire organized/self-organized group being released together.

I'm just saying that I think there are many obvious cases in which we could release prisoners at a net gain to society.

Comment author: pianoforte611 06 January 2014 02:24:55AM *  3 points [-]

This still feels like a "we need fifty Stalins" critique.

For me the biggest problems with the effective altruism movement are:

1: Most people aren't utilitarians.

2: Maximizing QALYs isn't even the correct course of action under utilitarianism - it's short-sighted and silly. Which is worse under utilitarianism: Louis Pasteur dying in his childhood, or 100,000 children in a third-world country dying? I would argue that the death of Louis Pasteur is the far greater tragedy, since his contributions to human knowledge have saved a lot more than 100,000 lives and have advanced society in other ways. But a QALY approach does not capture this. That's an extreme example, obviously, but my issue is that all lives are not equal. People in developed countries matter way more than people in developing countries in terms of advancing technology and society in general.

Comment author: zedzed 06 January 2014 04:04:04AM 6 points [-]
  1. How fungible is Louis Pasteur? If he had died as a child, someone else would have done the same work, just perhaps a little later. How many lives would have been lost as a result of this delay? I don't have a hard answer to this, but I have trouble putting the estimate as high as 100k.

  2. How predictable is Louis Pasteur? Looking at his Wikipedia article, if we look at him as a child, we don't predict that he makes the contributions he does. Let's say there's a 0.1% chance of that happening. On the other hand, suppose there's a child dying in the third world whom we could bring to the first world for the same cost as saving Louis Pasteur from dying in his childhood, and who has a 1% chance of making the same contributions. Clearly, losing the latter child is, on average, a greater tragedy than losing Louis Pasteur.

It's reasonable to invest heavily in fewer people who can therefore make Pasteur-like contributions, rather than lightly in more people who won't. Unless I'm mistaken, this is essentially what CFAR is doing. However, bell curves tell us that there are more extraordinary people in developing countries who could matter way more than people in developed countries, but only if we get them into developed countries where we can tap their potential. For every child in America who, given a standard education, has a 1% chance of making Pasteur-like contributions, there are three in India, and, if we can identify them cheaply, it's much more cost-effective to move their chances of success from epsilon to 0.01 than to move the developed child's chances from 0.01 to 0.02.
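Point 2 above is a plain expected-value comparison; as a sketch with the comment's illustrative probabilities (guesses, not measured data):

```python
# Illustrative probabilities from the comment above, not measured data.
p_pasteur_as_child = 0.001        # 0.1%: young Pasteur goes on to make his contributions
p_bright_developing_child = 0.01  # 1%: the hypothetical other child does

# Expected Pasteur-level contributions lost if each child dies
# (value of one such contribution normalized to 1).
loss_pasteur = p_pasteur_as_child * 1
loss_other = p_bright_developing_child * 1

print(round(loss_other / loss_pasteur))  # -> 10: ten times the expected loss
```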

Comment author: pianoforte611 06 January 2014 04:42:43AM *  -2 points [-]
  1. Truly rare talent is not fungible. Without Grigori Perelman, mathematicians would have struggled for a very long time to crack the Poincaré Conjecture, even though all of the required tools were already there. The same is true in physics and possibly chemistry (Linus Pauling comes to mind). Maybe biology, but I'm not sure. It's possible that Pasteur was fungible, but there is another issue I didn't mention: the effect of losing great minds isn't linear. Losing 100 scientists is more than 100 times as bad as losing one (more on this later).

"However, bell curves tell us that there's more extraordinary people in the developing countries who could matter way more than people in developed countries, but only if we get them into developed countries where we can tap their potential"

First of all, countries differ in their average IQ, so the math does not work that way. Also, extraordinary students are already able to become scientists in developed countries if they want - universities take students from all over the world. Finally, this isn't what the effective altruist movement is focused on. A QALY-based approach would not have us identify the brightest children in developing countries and bring them to developed countries. A full scholarship costs, what, $100,000 minimum? Clearly a QALY-based approach would demand that we instead use that money to save several hundred lives.

Edit: I suspect that I may have come across as suggesting that we divert effort from EA into saving potential Louis Pasteurs. That was not my intention: I was using an extreme example to show why QALYs (or pretty much any metric that amounts to "save as many lives as possible") are a poor metric - when you save the lives of a group of people, you have to consider what those people are going to do and how they are going to change society.

I don't know what the most worthwhile thing to do is: I'm not that arrogant. But I don't think that public health interventions in very poor countries are the most worthwhile things.

Comment author: CarlShulman 06 January 2014 05:54:29AM *  1 point [-]

Maximizing DALY's isn't even the correct course of action under utilitarianism

That's an understatement! DALYs are defined as intrinsically bad: one DALY is the loss of one year of healthy life relative to a reference lifespan, or equivalent morbidity. QALYs are the good ones that you want to increase.

Comment author: pianoforte611 06 January 2014 01:37:36PM 1 point [-]

Edited

Comment author: jsteinhardt 06 January 2014 08:24:40AM 1 point [-]

I thought it was clear from the "Over-reliance on a small set of tools" section that I am strongly against relying on DALYs or similar metrics. Although I disagree with the framing of the solution being to weight different people differently. I'd prefer to move beyond the "maximize weighted sum of happiness" framing entirely (although still retain it as one of many reasoning tools).

Comment author: pianoforte611 06 January 2014 01:38:52PM 0 points [-]

Good point sorry, what criteria would you use?

Comment author: BarbaraB 05 January 2014 07:30:28PM 1 point [-]

It is interesting, what people inside EA find troubling, compared to people outside. (I do not identify myself as EA).

For me, the most repellent things are mentioned here: http://lesswrong.com/lw/j8n/a_critique_of_effective_altruism/#poor-psychological-understanding

In other words, self sacrifice is expected from me to the extent, that my life would suck. No, thanks.

Specifically, the issues about children: 1. I want to have them. 2. Apart from my psychological need - do the damned EA know what they are doing? Is it really that helpful that the Western middle class should have even lower population growth than it has now? Some people predict that Europe, as my (our?) children will know it, will be Islamic. I hope I will not offend the Muslim rationalists; I know there are some on this site. Anyway, the culture currently associated with Islam does not seem truth-seeking-friendly to me. It will certainly fix itself later, like Christianity fixed itself from its bigotry stage in about 600 years. But do we really want to withdraw from the population battle entirely? (OK, the word battle probably does not attract you altruist folks, but I do not know another way to say it.) I can imagine the counterarguments: that spreading memes inside families is not that efficient, that children often rebel, and that memes can be spread outside the family. Well, good luck turning a significant portion of Muslim immigrants into rationalists! The USA is different from the EU, but I guess withdrawing the middle class from the child-bearing pool there is also no victory.

I mean, I do not force anybody to have children if they do not want to. It is a lot of work and resources. But to guilt anybody into not having them? There are way too many people in that category who are too lazy to have them. Why add another incentive by making childlessness virtuous?

Comment author: John_Maxwell_IV 06 January 2014 05:50:08PM *  10 points [-]

Prominent EA Julia Wise and her husband have decided to have kids. IMO, a good way to think about EA is that everyone makes their own trade-off between their own quality of life and the quality of life of others. You can also think of helping people in terms of scoring points.

Comment author: benkuhn 06 January 2014 06:15:31PM *  2 points [-]

FYI, her last name is just Wise.

Comment author: John_Maxwell_IV 07 January 2014 07:19:23AM 0 points [-]

Ack! Sorry!

Comment author: BarbaraB 06 January 2014 07:01:17PM 0 points [-]

Prominent EA Julia Wiseman and her husband have decided to have kids.

I am glad to hear it, because they were the most annoying example for me before. Good for her / them.

Comment author: owencb 05 January 2014 11:10:00PM 6 points [-]

I think some individual EAs may very reasonably decide that it's not worth it for them personally to have children. But part of the strength of the movement, that attracts people, is the message that people can achieve a lot without great personal sacrifice. I wonder where that message got lost.

Comment author: BarbaraB 06 January 2014 12:04:40AM *  6 points [-]

But part of the strength of the (EA) movement is the message that people can achieve a lot without great personal sacrifice. I wonder where that message got lost.

Hm, are you asking why I did not notice that message, or why it, objectively, got lost? I will answer the first part: why I never noticed such a message.

  1. Shortly after I learned that EA exists, my CFAR friend, who is dating an EA, told me about a disagreement she had with her boyfriend. He did not want her to go to her best friend's wedding, because the travel expenses and time spent could be used better. (Although he later admitted it was half a joke on his side.) She also told me that he periodically scolds her that her temperament is too prone to happiness, which makes her understand suffering less, which gives her less incentive to work on preventing it. That was not a joke.

  2. I had some shocks at the EA Facebook group. (OK, Ben Kuhn complains that the EA fb group is stupid these days.) I was told that supporting a less-than-optimal charity is immoral. I translate that into examples: supporting any Slovakian charity is immoral, because the money is better spent on AMF. Supporting this baby is probably even more immoral than supporting my favourite Slovakian charity. If the example involved my baby, it was immoral to have the baby in the first place. I swiftly left the Facebook group, but after my departure I saw a follow-up discussion saying what a pity it was that we made Barbara leave; these truths should not be revealed to newcomers.

  3. I spent 2 or 3 nights reading EA material online to determine whether these interactions were outliers, and came to the conclusion that they were not. Ben Kuhn does not convince me otherwise in his article. The article also confirms that the pressure to have no children is felt by some. You may decide to have them, but the sentiment is: aye, I am a sinner!

The peer pressure you get from these folks is overwhelmingly guilt-inducing. I perceive the movement as self-destructive and unsustainable.

Comment author: hyporational 06 January 2014 10:00:43AM *  5 points [-]

There is an identifiable homogeneous movement? I'll gladly adopt the good ideas and apply them as it suits me, and forget about the movement if it consists of self-handicapping, pathologically literal people. Don't throw the baby out with the bathwater.

Comment author: BarbaraB 06 January 2014 10:33:28AM 3 points [-]

Peer pressure is a strong thing. I do not want the peer pressure of self-handicapping, pathologically literal people on me, but they are the EA mainstream as far as I can tell. Therefore I want to keep my distance from EA as "folks to hang out with". I, for instance, hope that the LW meetup groups in Bratislava and Vienna will remain sensible places for me to go.

Comment author: eli_sennesh 06 January 2014 06:52:06PM 2 points [-]

Yes, there is an identifiable homogeneous movement. Those people are the reason I don't actually tell most of my IRL friends that I have a LessWrong account. These kinds of people are the ones defining the reputation of LessWrong, rationalism, effective altruism, MIRI, CFAR, and every associated whatever.

Comment author: hyporational 06 January 2014 09:31:54AM 0 points [-]

For me, EA is one reason among many for not having children. I doubt it would convince me alone, but I might say it did if I wanted to appear virtuous.

Comment author: Viliam_Bur 05 January 2014 08:05:43PM 7 points [-]

Seems like EA could be yet another destructive Western superstimulus, although unlike many others it generates a lot of utility in other parts of the world. Still, maybe avoiding self-destruction could be wiser in the long run -- you know, just in case the Friendly AI fails to materialize for another 20 years or more.

Comment author: John_Maxwell_IV 06 January 2014 06:07:24PM 0 points [-]

Some people also take regular exercise too far. Does that make it a bad thing? If you're actually being self-destructive, then that's clearly not optimal even from an extreme altruist perspective because you will be most effective and productive when you are happy, healthy, and motivated. Consider how well many top companies like Google treat their employees.

Comment author: [deleted] 06 January 2014 06:17:51PM *  1 point [-]

This seems to be a common misconception about effective altruism.

I once told a person about effective altruism and this person said to me:

Have you reduced your diet to rice, beans, spinach, water, and maybe a multi-vitamin? That would provide all the essential nutrients you need to survive, and at the same time free up some of the money you were "wasting" on frivolous luxury eating beef, chicken, cheese, etc. Have you moved into the smallest, cheapest housing available? Did you sell your car and now rely solely on walking or biking to get around so you can donate all that money to the people who need it to survive?

This is a bit puzzling to me, because it's quite clear that this is not an optimal lifestyle, especially for effective altruists, since you want to continue giving for as long as possible. The toll on your mental health and the risk of burnout, which could cost you all of your future donations, are not worth the small amount of money you would save.

Comment author: pianoforte611 06 January 2014 03:19:21AM *  1 point [-]

"Is it really that helpful, that western middle class should have even lower population growth, than there is now"

Like seriously. I don't think effective altruism will do too much damage to the already abysmal fertility rate of the Western world, but it sure isn't helping! I'm disturbed that many highly intelligent individuals have declined to have children in favor of maximizing QALYs.

Comment author: eli_sennesh 06 January 2014 06:56:14PM 0 points [-]

I do have to say, I've never understood the European/Western liberal (in the broad classical sense, not the "social democratic sense", but more strongly among social democrats) impulse to devalue one's own culture and values so very much that one would rather go extinct in the process of helping others than survive in any form.

Yes, our planet cannot currently sustain a population of 9 billion (projected population peak in 2050) living at Western standards of income/consumption. Population reduction and/or (inshallah!) space colonization are necessary for humanity to live sustainably. This does not mean that we should segregate the species by belief into "Those who believe in sustainability", who then go extinct from non-breeding, and "Those who believe in having as many babies as possible", who then suffer an overpopulation crisis right quick.

Sustainability yes, voluntary extinction no.

Comment author: EHeller 06 January 2014 07:06:41PM *  3 points [-]

I do have to say, I've never understood the European/Western liberal (in the broad classical sense, not the "social democratic sense", but more strongly among social democrats) impulse to devalue one's own culture and values so very much...

I think you are confusing correlation with causation. I don't think the sustainability movement is largely responsible for declining birth rates; rather, Western culture values many other things OVER child rearing, and more advanced civilization requires delaying childbirth until later. Most of the adult couples I know who are childless aren't childless for ethical reasons, but for things like careers, etc. This isn't a devaluing of the culture; it's an expression of it.

Hence, France managed to bring back its declining birth rate by making it easier to have kids, so that the trade-off between (for instance) career and family is lessened. I'd be happy to see other first-world countries address the problem in similar ways.

Comment author: eli_sennesh 06 January 2014 11:16:00PM 1 point [-]

That's usually my first explanation, actually. You're probably right and I just got misdirected.

Comment author: V_V 06 January 2014 11:21:55PM 2 points [-]

Yes, our planet cannot currently sustain a population of 9 billion (projected population peak in 2050) living at Western standards of income/consumption. Population reduction and/or (inshallah!) space colonization are necessary for humanity to live sustainably.

Do you expect space colonization before 2050?
Anyway, historically colonization didn't significantly reduce homeland population size.

Comment author: eli_sennesh 07 January 2014 03:19:14PM 0 points [-]

Do you expect space colonization before 2050?

Extremely difficult to forecast, since we're already in political turmoil in many parts of the world. I can't really say what sorts of governments will be in power by 2050.

Comment author: V_V 07 January 2014 04:41:26PM 0 points [-]

I don't think it's a matter of politics. We don't have the technology for space colonization, neither now nor in the foreseeable future (~100 years).

Comment author: Nornagest 07 January 2014 05:18:20PM *  1 point [-]

We may be able to create stable colonies off-planet, and we almost certainly will be able to in 100 years, barring total nuclear war or self-replicating paperclips eating the planet or something. What we don't have the technology to do is to move a significant fraction of Earth's population off-planet -- that would cost in the high trillions of dollars even at cargo launch rates to LEO, and human-rated launches to any of the places we might actually want to colonize are much more expensive. Economies of scale could improve this, but not enough.

Space elevators or one of their relatives might make this more attractive in a "not burning all of Earth's available hydrocarbons" sense, but the energy balance is still pretty daunting.

Comment author: V_V 07 January 2014 05:25:59PM *  0 points [-]

We may be able to create stable colonies off-planet, and we almost certainly will be able to in 100 years

Earth-dependent outposts, e.g. an ISS on Mars, possibly yes, at great financial expense and risk for those who would live there. Self-sustaining colonies, no.

Comment author: pianoforte611 06 January 2014 07:38:57PM *  1 point [-]

Jayman provides a pretty interesting story for why Western liberalism might be the way it is.

http://jaymans.wordpress.com/2012/06/01/liberalism-hbd-population-and-solutions-for-the-future/

Comment author: Pablo_Stafforini 11 February 2014 12:37:15AM *  0 points [-]

The history of effective altruism is littered with over-confident claims, many of which have later turned out to be false. In 2009, Peter Singer claimed that you could save a life for $200 (and many others repeated his claim).

I think this sentence misrepresents Peter Singer's position. Here's a relevant excerpt from The Life You Can Save (pp. 85-87, 103). As you can see, Singer actually criticizes many organizations for providing excessively optimistic estimates, and doesn't himself endorse the $200 per-life-saved figure.

For saving lives on a large scale, it is difficult to beat some of the campaigns initiated by the World Health Organization (WHO) […]

The WHO campaigns have saved lives and prevented blindness. But how efficiently have they used their resources—that is, how much have they cost per life saved? Until we can get closer to answering this question, it’s going to be hard to decide how to use our money most effectively. Organizations often put out figures suggesting that lives can be saved for very small amounts of money. WHO, for example, estimates that many of the 3 million people who die annually from diarrhea or its complications can be saved by an extraordinarily simple recipe for oral rehydration therapy: a large pinch of salt and a fistful of sugar dissolved in a jug of clean water. This lifesaving remedy can be assembled for a few cents, if only people know about it. UNICEF estimates that the hundreds of thousands of children who still die of measles each year could be saved by a vaccine costing less than $1 a dose. And Nothing But Nets, an organization conceived by American sportswriter Rick Reilly and supported by the National Basketball Association, provides anti-mosquito bed nets to protect children in Africa from malaria, which kills a million children a year. In its literature, Nothing But Nets mentions that a $10 net can save a life: “If you give $100 to Nothing But Nets, you’ve saved ten lives.”

If we could accept these figures, GiveWell’s job wouldn’t be so hard. All it would have to do to know which organization can save lives in Africa at the lowest cost would be to pick the lowest figure. But while these low figures are undoubtedly an important part of the charities’ efforts to attract donors, they are, unfortunately, not an accurate measure of the true cost of saving a life.

Take bed nets as an example. They will, if used properly, prevent people from being bitten by mosquitoes while they sleep, and therefore will reduce the risk of malaria. But not every net saves a life: Most children who receive a net would have survived without it. Jeffrey Sachs, attempting to measure the effect of nets more accurately, took this into account, and estimated that for every one hundred nets delivered, one child’s life will be saved every year (Sachs estimated that on average a net lasts five years). If that is correct, then at $10 per net delivered, $1000 will save one child a year for five years, so the cost is $200 per life saved (this doesn’t consider the prevention of dozens of debilitating but nonfatal cases). But even if we assume that these figures are correct, there is a gap in them—they give us the cost of delivering a bed net, and we know how many bed nets “in use” will save a life, but we don’t know how many of the bed nets that are delivered are actually used. And so the $200 figure is not fully reliable, and that makes it hard to measure whether providing bed nets is a better or worse use of our donations than other lifesaving measures. […]

It’s difficult to calculate how much it costs to save or transform the life of someone who is extremely poor. We need to put more resources into evaluating the effectiveness of various programs. Nevertheless, we have seen that much of the work done by charities is highly cost-effective, and we can reasonably believe that the cost of saving a life through one of these charities is somewhere between $200 and $2,000.
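For reference, the Sachs arithmetic Singer works through in the excerpt reduces to a one-line calculation:

```python
# Figures quoted in the excerpt above (Sachs's estimates).
cost_per_net = 10             # dollars per net, delivered
nets_per_life_per_year = 100  # one life saved per year per 100 nets in use
net_lifetime_years = 5        # average lifespan of a net

# 100 nets cost $1000 and save one life per year for five years,
# i.e. five lives per $1000.
cost_per_life = cost_per_net * nets_per_life_per_year / net_lifetime_years
print(cost_per_life)  # -> 200.0 dollars per life saved
```

As Singer notes, this figure inherits every uncertainty in its inputs, most notably the fraction of delivered nets actually used.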

Comment author: Lumifer 11 February 2014 01:14:35AM 2 points [-]

this sentence misrepresents Peter Singer's position

I don't know about that. In The Singer Solution to World Poverty he certainly sounds as if he is endorsing the $200/life number.

Comment author: Pablo_Stafforini 11 February 2014 11:01:17AM *  1 point [-]

I agree, but that is in a newspaper article written in 1999, not in the source alluded to in the original post ("In 2009, Peter Singer claimed that you could save a life for $200... [t]he number was already questionable at the time."). When Singer takes a closer look at the estimates, as he does in The Life You Can Save, he reaches a more conservative and nuanced conclusion.

Comment author: MichaelVassar 12 January 2014 05:21:47PM 0 points [-]

Another reasonable concern has to do with informational flow-through lines. When novel investigation demonstrates that previous claims or perspectives were in error, do we have good ways to change the group consensus?