wdmacaskill

Argh! Original post didn't go through (probably my fault), so this will be shorter than it should be:

First point:

I know very little about CEA, and a brief check of their website leaves me a little unclear on why Luke recommends them, aside from the fact that they apparently work closely with FHI.

CEA = Giving What We Can, 80,000 Hours, and a bit of other stuff

Reason -> donations to CEA predictably increase the size and strength of the EA community, a good proportion of whom take long-run considerations very seriously and will donate to / work for FHI/MIRI, or otherwise pursue careers with the aim of extinction risk mitigation. It's plausible that $1 to CEA generates significantly more than $1's worth of x-risk-value [note: I'm a trustee and founder of CEA].

Second point:

Don't forget CSER. My view is that they are even higher-impact than MIRI or FHI (though I'd defer to Sean_o_h if he disagreed). Reason: marginal donations will be used to fund program management + grantwriting, which would turn ~$70k into a significant chance of ~$1-$10mn, and launch what I think might become one of the most important research institutions in the world. They have all the background (high profile people on the board; an already written previous grant proposal that very narrowly missed out on being successful). High leverage!
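To put a rough number on that leverage claim, here is a minimal back-of-the-envelope sketch; the success probability and the use of the midpoint of the $1-$10mn range are purely illustrative assumptions of mine, not figures from CSER.

```python
# Back-of-the-envelope sketch of the leverage claim above. The success
# probability and the use of the grant range's midpoint are illustrative guesses.

donation = 70_000                                # ~$70k of marginal funding
grant_low, grant_high = 1_000_000, 10_000_000    # ~$1-$10mn grant mentioned above
p_success = 0.2                                  # hypothetical chance the grant comes through

expected_grant = p_success * (grant_low + grant_high) / 2
print(f"expected grant funding: ${expected_grant:,.0f}")
print(f"leverage per donated dollar: {expected_grant / donation:.1f}x")
```

On these made-up numbers the expected leverage is roughly 15x; the exact figure obviously depends entirely on the probability one assigns.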

CEA and CFAR don't do anything, to my knowledge, that would increase these odds, except in exceedingly indirect ways.

People from CEA, in collaboration with FHI, have been meeting with people in the UK government, and are producing policy briefs on unprecedented risks from new technologies, including AI (the first brief will go on the FHI website in the near future). These meetings arose as a result of GWWC media attention. CEA's most recent hire, Owen Cotton-Barratt, will be helping with this work.

your account of effective altruism seems rather different from Will's: "Maybe you want to do other things effectively, but then it's not effective altruism". This sort of mixed messaging is exactly what I was objecting to.

I think you've revised the post since you initially wrote it? If so, you might want to highlight that in the italics at the start, as otherwise it makes some of the comments look weirdly off-base. In particular, I took the initial post to aim at the conclusion:

  1. EA is utilitarianism in disguise, which I think is demonstrably false.

But now the post reads more like the main conclusion is:

  1. EA is vague on a crucial issue (whether the effective pursuit of non-welfarist goods counts as effective altruism), which is a much more reasonable thing to say.

I think the simple answer is that "effective altruism" is a vague term. I gave you what I thought was the best way of making it precise. Weeatquince and Luke Muehlhauser wanted to make it precise in a different way. We could have a debate about which is the more useful precisification, but I don't think here is the right place for that.

On either way of making the term precise, though, EA is clearly not trying to be the whole of morality, or to give any one very specific conception of morality. It doesn't make a claim about side-constraints; it doesn't make a claim about whether doing good is supererogatory or obligatory; it doesn't make a claim about the nature of welfare. EA is a broad tent, and deliberately so: very many different ethical perspectives will agree, for example, that it's important to find out which charities do the most to improve the welfare of those living in extreme poverty (as measured by QALYs etc), and then to encourage people to give to those charities. If so, then we've got an important activity that people of very many different ethical backgrounds can get behind - which is great!

Hi,

Thanks for this post. The relationship between EA and well-known moral theories is something I've wanted to blog about in the past.

So here are a few points:

1. EA does not equal utilitarianism.

Utilitarianism makes many claims that EA does not make:

EA does not make a claim about whether it's obligatory or merely supererogatory to spend one's resources helping others; utilitarianism claims that it is obligatory.

EA does not make a claim about whether there are side-constraints - certain things that it is impermissible to do, even if it were for the greater good. Utilitarianism claims that it's always obligatory to act for the greater good.

EA does not claim that there are no other things besides welfare that are of value; utilitarianism does claim this.

EA does not make a precise claim about what promoting welfare consists in (for example, whether it's more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.

Also, note that some eminent EAs are not even consequentialist leaning, let alone utilitarian: e.g. Thomas Pogge (political philosopher) and Andreas Mogensen (Assistant Director of Giving What We Can) explicitly endorse a rights-based theory of morality; Alex Foster (epic London EtG-er) and Catriona MacKay (head of the GWWC London chapter) are both Christian (and presumably not consequentialist, though I haven't asked).

2. Rather, EA is something that almost every plausible moral theory is in favour of.

Almost every plausible moral theory holds that promoting the welfare of others in an effective way is a good thing to do. Some moral theories hold that promoting the welfare of others is merely supererogatory, and others think that there are other values at stake. But EA is explicitly pro promoting welfare; it's not against other things, and it doesn't claim that we're obligated to be altruistic, merely that it's a good thing to do.

3. Is EA explicitly welfarist?

The term 'altruism' suggests that it is. And I think that's fine. Helping others is what EAs do. Maybe you want to do other things effectively, but then it's not effective altruism - it's "effective justice", "effective environmental preservation", or something. Note, though, that you may well think that there are non-welfarist values - indeed, I would think that you would be mistaken not to act as if there were, on moral uncertainty grounds alone - but still be part of the effective altruism movement because you think that, in practice, welfare improvement is the most important thing to focus on.

So, to answer your dilemma:

EA is not trying to be the whole of morality.

It might be the whole of morality, if being EA is the only thing that is required of one. But it's not part of the EA package that EA is the whole of morality. Rather, it represents one aspect of morality - an aspect that is very important for those living in affluent countries, and who have tremendous power to help others. The idea that we in rich countries should be trying to work out how to help others as effectively as possible, and then actually going ahead and doing it, is an important part of almost every plausible moral theory.

I explicitly address this in the second paragraph of the "The history of GiveWell’s estimates for lives saved per dollar" section of my post as well as the "Donating to AMF has benefits beyond saving lives" section of my post.

Not really. You do mention the flow-on benefits. But you don't analyse whether your estimate of "good done per dollar" has increased or decreased. And that's the relevant thing to analyse. If you argued "cost per life saved has had greater regression to my prior than I'd expected; and for that reason I expect my estimates of good done per dollar to regress really substantially" (an argument I think you would endorse), I'd accept that argument, though I'd worry about how much it generalises to cause-areas other than global poverty. (E.g. I expect there to be much less of an 'efficient market' for activities where there are fewer agents with the same goals/values, like benefiting non-human animals, or making sure the far future turns out well.) Optimism bias still holds, of course.

You say that "cost-effectiveness estimates skew so negatively." I was just pointing out that for me that hasn't been the case (for good done per $), because long-run benefits strike me as swamping short-term benefits, a factor that I didn't initially incorporate into my model of doing good. And, though I agree with the conclusion that you want as many different angles as possible (etc), focusing on cost per life saved rather than good done per dollar might lead you to miss important lessons (e.g. "make sure that you've identified all crucial normative and empirical considerations"). I doubt that you personally have missed those lessons. But they aren't in your post. And that's fine, of course, you can't cover everything in one blog post. But it's important for the reader not to overgeneralise.

I agree with this. I don't think that my post suggests otherwise.

I wasn't suggesting it does.

Good post, Jonah. You say that: "effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact". What do you mean by "qualitative analysis"? As I understand it, your points are: i) the amount by which you should regress to your prior is much greater than you had previously thought, so ii) you should favour robustness of evidence more than you had previously. But that doesn't favour qualitative over quantitative evidence. It favours more robust evidence of lower but good cost-effectiveness over less robust evidence of higher cost-effectiveness. The nature of the evidence could be either qualitative or quantitative, and the things you mention in "implications" are generally quantitative.

In terms of "good done per dollar" - for me that figure is still far greater than I began with (and I take it that that's the question that EAs are concerned with, rather than "lives saved per dollar"). This is because, in my initial analysis - and in what I'd presume are most people's initial analyses - benefits to the long-term future weren't taken into account, or weren't thought to be morally relevant. But those (expected) benefits strike me, and strike most people I've spoken with who agree with the moral relevance of them, to be far greater than the short-term benefits to the person whose life is saved. So, in terms of my expectations about how much good I can do in the world, I'm able to exceed those by a far greater amount than I'd previously thought likely. And that holds true whether it costs $2000 or $20000 to save a life. I'm not mentioning that either to criticise or support your post, but just to highlight that the lesson to take from past updates on evidence can look quite different depending on whether you're talking about "good done per dollar" or "lives saved per dollar", and the former is what we ultimately care about.

Final point: Something you don't mention is that, when you find out that your evidence is crappier than you'd thought, two general lessons are to pursue things with high option value and to pay to gain new evidence (though I acknowledge that this depends crucially on how much new evidence you think you'll be able to get). Building a movement of people who are aiming to do the most good with their marginal resources, and who are trying to work out how best to do that, strikes me as a good way to achieve both of these things.

Thanks for mentioning this - I discuss Nozick's view in my paper, so I'm going to edit my comment to mention this. A few differences:

As crazy88 says, Nozick doesn't think that the issue is a normative uncertainty issue - his proposal is another first-order decision theory, like CDT and EDT. I argue against that account in my paper. Second, and more importantly, Nozick just says "hey, our intuitions in Newcomb-cases are stakes-sensitive" and moves on. He doesn't argue, as I do, that we can explain the problematic cases in the literature by appeal to decision-theoretic uncertainty. Nor does he use decision-theoretic uncertainty to respond to arguments in favour of EDT. Nor does he respond to regress worries, and so on.

Don't worry, that's not an uncomfortable question. UDT and MDT are quite different. UDT is a first-order decision theory. MDT is a way of extending decision theories - so that you take into account uncertainty about which decision theory to use. (So, one can have meta causal decision theory, meta evidential decision theory, and (probably, though I haven't worked through it) meta updateless decision theory.)

UDT, as I understand it (and note I'm not at all fluent in UDT or TDT) always one-boxes; whereas if you take decision-theoretic uncertainty into account you should sometimes one-box and sometimes two-box, depending on the relative value of the contents of the two boxes. Also, UDT gets what most decision-theorists consider the wrong answer in the smoking lesion case, whereas the account I defend, meta causal decision theory, doesn't (or, at least, doesn't, depending on one's credences in first-order decision theories).

To illustrate, consider the case:

High-Stakes Predictor II (HSP-II)

Box C is opaque; Box D, transparent. If the Predictor predicts that you choose Box C only, then he puts one wish into Box C, and also a stick of gum. With that wish, you save the lives of 1 million terminally ill children. If he predicts that you choose both Box C and Box D, then he puts nothing into Box C. Box D — transparent to you — contains an identical wish, also with the power to save the lives of 1 million children, so if one had both wishes one would save 2 million children in total. However, Box D contains no gum. One has two options only: choose Box C only, or both Box C and Box D.

In this case, intuitively, should you one-box, or two-box? My view is clear: if someone one-boxes in the above case, they have made the wrong decision. And it seems to me that this is best explained by appeal to decision-theoretic uncertainty.

Other questions: Bostrom's parliamentary model is different. Between EDT and CDT, the intertheoretic comparisons of value are easy, so there's no need to use the parliamentary analogy - one can just straightforwardly take an expectation over decision theories.
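To illustrate what I mean by taking an expectation over decision theories, here's a rough sketch for HSP-II; the credences, the assumption that the Predictor is perfectly reliable, and the CDT agent's credence that Box C is full are all illustrative assumptions rather than part of the case.

```python
# A minimal sketch of meta-level expected value in HSP-II. Both theories value
# outcomes on the same scale (millions of lives saved, plus a tiny value for
# the gum), so intertheoretic comparison is easy. The numbers are illustrative.

GUM = 1e-6      # value of the stick of gum, negligible next to a million lives
Q_FULL = 0.5    # a CDT agent's credence that the Predictor put the wish in Box C

# Value of each option according to each theory, assuming a perfect Predictor:
#   EDT: your choice is decisive evidence about what is in Box C.
#   CDT: the prediction is causally fixed, so taking Box D always adds its wish.
value = {
    "one-box": {"EDT": 1 + GUM, "CDT": Q_FULL * (1 + GUM)},
    "two-box": {"EDT": 1,       "CDT": Q_FULL * (1 + GUM) + 1},
}

credence = {"EDT": 0.5, "CDT": 0.5}   # credence in each first-order decision theory

for option, v in value.items():
    meta_ev = sum(credence[theory] * v[theory] for theory in credence)
    print(option, round(meta_ev, 6))
```

EDT favours one-boxing only by the value of the gum, whereas CDT favours two-boxing by a million lives; so two-boxing wins at the meta level unless one's credence in EDT is essentially 1, which matches the verdict that one-boxing in HSP-II is the wrong decision.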

Pascal's Mugging (aka the "Fanaticism" worry). This is a general issue for attempts to take normative uncertainty into account in one's decision-making, and not something I discuss in my paper. But if you're concerned about Pascal's mugging and, say, think that a bounded decision theory is the best way to respond to the problem, then at the meta level you should also have a bounded decision theory (and at the meta-meta level, and so on).

(part 3; final part)

Second: The GWWC Pledge. You say:

“The GWWC site, for example, claims that from 291 members there will be £72.68M pledged. This equates to £250K / person over the course of their life. Claiming that this level of pledging will occur requires either unreasonable rates of donation or multi-decade payment schedules. If, in line with GWWC's projections, around 50% of people will maintain their donations, then assuming a linear drop off the expected pledge from a full time member is around £375K. Over a lifetime, this is essentially £10K / year. It seems implausible that expected mean annual earnings for GWWC members is of order £100K.”

Again, there are quite a few mistakes:

First, in comments you twice say that “£112.8M” has been pledged rather than “$112.8M”. I know that’s just a typo but it’s an important one.

Second, you say that the GWWC site claims that "there will be £72.68M pledged" (future tense). It doesn't; it says "$112.8mn pledged" (past tense). It's a pretty important difference – the pledging is something that has happened, not something that will happen. This might partly explain the confusion discussed in point 4, below.

Third, and more substantively, you don't consider the idea, raised in other comments, that some donors might be donating considerably more than 10%, or that some donors might be donating considerably more than the mean. Both are true of GWWC pledgers.

Fourth, you seem to wilfully misunderstand the verb ‘to pledge’. I regularly make the following statement: “I have pledged to give everything I earn above £20 000 p.a. [PPP and inflation-adjusted to Oxford 2009]”. Am I lying when I say that? Using synonyms, I could have said “I promise to give…”, “I commit to give…” or “I sincerely intend to give…”. None of these entail “I am certain that I will donate everything above £20 000 p.a.”. Using my belief that I will earn on average over £42 000 p.a. [PPP and inflation-adjusted to Oxford 2009] over the course of my life, and that I will work until I’m 68, I can infer that I’ve pledged to give over £1 000 000 over the course of my life, which is also something I say. Am I lying when I say that? (Also note that if only 73 people made the same pledge as me, then we would have jointly pledged the current GWWC amount).
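As a quick arithmetic check of that inference (the number of remaining working years is an assumption here, since I haven't stated my current age):

```python
# Rough check of the lifetime-pledge inference above. The working years figure
# is an assumption; the earnings and baseline are the stated averages.

average_earnings = 42_000    # stated average expected earnings (PPP, Oxford 2009 £)
baseline = 20_000            # Further Pledge baseline: everything above this is given
working_years = 46           # assumed years of work remaining until age 68

lifetime_pledge = (average_earnings - baseline) * working_years
print(f"lifetime pledge: £{lifetime_pledge:,}")    # just over £1,000,000
print(f"73 x £1,000,000 = £{73 * 1_000_000:,}")    # roughly the £72.68M quoted above
```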

Fifth, I don’t know why you took us to use the $100mn pledged figure as an estimate of our impact. In fact you had evidence to the contrary. In a blog post that you cite I said: “As of last March, we’d invested $170 000’s worth of volunteer time into Giving What We Can, and had moved $1.7 million to GiveWell or GWWC top-recommended development charities, and raised a further $68 million in pledged donations. Taking into account the facts that some proportion of this would have been given anyway, there will be some member attrition, and not all donations will go to the very best charities (and using data for all these factors when possible), we estimate that we had raised $8 in realised donations and $130 in future donations for every $1’s worth of volunteer time invested in Giving What We Can.” (emphasis added).

Finally, I think that the GWWC pledge is misleading only if it's taken to be a measure of our impact. But we don't advertise it as that. We could try to make it some other number. We could adjust the number downwards, in order to take into account: how much would have been given anyway; member attrition; a discount rate. Or we could adjust the number upwards, in order to take into account: overgiving; real growth of salaries; and inflation. It could also be adjusted downwards to take into account that not all donations are to GiveWell or GWWC recommended charities, or (perhaps) upwards to take into account the idea that we will have better evidence about the best giving opportunities in a few years' time, and thereby be able to donate to charities better than AMF, SCI or DtW. But any number we gave based on these adjustments would be more misleading and arbitrary than the literal amount pledged. It would also be more confusing for the large majority of our website viewers who haven't thought about things like counterfactual giving or whether the discount rate should be positive or negative over the next few years; they're used to the social norm of advertising pledges as stated. Until you, no one who does understand issues such as counterfactual giving and discount rates has taken the amount pledged figure as an impact assessment.

In comments there was some uncertainty about how we come up with the total pledged figure. What we do is as follows. Each member, when they return their pledge form, states a) what percentage they commit to (or, if taking the Further Pledge, the baseline income above which they give everything); b) their birthdate; c) their expected average earnings per annum. Assuming a (conservative) standard retirement age, that allows us to calculate their expected donations. In some cases, members understandably don’t want to reveal their expected earnings. What we used to do, in such cases, is to use the mean earnings of all the other members who have given their incomes. However, when, recently, one member joined with very large expected earnings (pursuing earning to give), we raised the question whether this method suffers from sample bias, because people who expect to earn a lot will be more likely to report. I’m not sure that’s true: I could imagine that people who earn more often don’t want to flaunt that fact. However, wanting to be conservative, we decided instead to use the mean earnings of the country in which the member works.
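As a sketch, the calculation looks roughly like the following; the retirement age and the example members are placeholders for illustration, not our actual parameters or member data.

```python
# Simplified sketch of how the total pledged figure is computed from members'
# pledge forms, as described above. Retirement age and the example members are
# placeholders; the real calculation uses members' actual data.

from datetime import date

RETIREMENT_AGE = 65   # placeholder for the (conservative) standard retirement age

def expected_pledge(birth_year, pledge_fraction=None, further_pledge_baseline=None,
                    expected_earnings=None, country_mean_earnings=None):
    """Expected lifetime donations for one member.

    Uses the member's stated expected average earnings; falls back to the mean
    earnings of the country in which the member works if none were stated.
    """
    earnings = expected_earnings if expected_earnings is not None else country_mean_earnings
    years_remaining = max(RETIREMENT_AGE - (date.today().year - birth_year), 0)
    if further_pledge_baseline is not None:
        # Further Pledge: give everything earned above the stated baseline income.
        annual_donation = max(earnings - further_pledge_baseline, 0)
    else:
        # Standard pledge: give a fixed percentage of income.
        annual_donation = pledge_fraction * earnings
    return annual_donation * years_remaining

# Hypothetical members, for illustration only:
total_pledged = (
    expected_pledge(birth_year=1985, pledge_fraction=0.10, expected_earnings=40_000)
    + expected_pledge(birth_year=1987, further_pledge_baseline=20_000, expected_earnings=42_000)
    + expected_pledge(birth_year=1990, pledge_fraction=0.10, country_mean_earnings=30_000)
)
print(f"total pledged (illustrative): £{total_pledged:,.0f}")
```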

Bottom Line for Readers

If you’re interested in the question of whether 80,000 Hours and Giving What We Can have acted optimally or will act optimally in the future, the answer is simple: certainly not. We inevitably do some things worse than we could have done, and we value your input on concrete suggestions about how our organisations can improve.

If you’re interested in the question of whether $1 invested in 80,000 Hours or Giving What We Can produces more than $1’s worth of value for the best causes, read here, here, here and here and, most of all, contact me for the calculations and, if you’d like, our latest business plan, at will dot crouch at 80000hours.org. So far, I haven’t seen any convincing arguments to the conclusion that we fail to have a ROI greater than 1; however, it’s something I’d love additional input on, as the outside view makes me wary about believing that I work for the best charity I know of.
