All of wdmacaskill's Comments + Replies

Argh! Original post didn't go through (probably my fault), so this will be shorter than it should be:

First point:

I know very little about CEA, and a brief check of their website leaves me a little unclear on why Luke recommends them, aside from the fact that they apparently work closely with FHI.

CEA = Giving What We Can, 80,000 Hours, and a bit of other stuff

Reason -> donations to CEA predictably increase the size and strength of the EA community, a good proportion of whom take long-run considerations very seriously and will donate to / work for FHI... (read more)

0John_Maxwell
I looked but didn't see any donation info for CSER. Are they soliciting donations?
3Sean_o_h
On point 1: I can confirm that members of CEA have done quite a lot of awareness-spreading about existential risks and long-run considerations, as well as bringing FHI, MIRI and other organisations to the attention of potential donors who have concerns in this area. I generally agree with Will's point, and I think it's very plausible that CEA's work will result in more philanthropic funding coming FHI's way in the future.

On point 2: I also agree. I need to have some discussion with the founders to confirm some points on strategy going forward as soon as the Christmas period's over, but it's likely that additional funds could play a big role in CSER's progress in gaining larger funding streams. I'll be posting on this shortly.

CEA and CFAR don't do anything, to my knowledge, that would increase these odds, except in exceedingly indirect ways.

People from CEA, in collaboration with FHI, have been meeting with people in the UK government, and are producing policy briefs on unprecedented risks from new technologies, including AI (the first brief will go on the FHI website in the near future). These meetings arose as a result of GWWC media attention. CEA's most recent hire, Owen Cotton-Barratt, will be helping with this work.

your account of effective altruism seems rather different from Will's: "Maybe you want to do other things effectively, but then it's not effective altruism". This sort of mixed messaging is exactly what I was objecting to.

I think you've revised the post since you initially wrote it? If so, you might want to highlight that in the italics at the start, as otherwise it makes some of the comments look weirdly off-base. In particular, I took the initial post to aim at the conclusion:

  1. EA is utilitarianism in disguise which I think is demonstra
... (read more)
1Dias
I haven't revised the post subsequent to anyone commenting. I did make a ninja edit to clear up some formatting immediately after submitting.

I think the simple answer is that "effective altruism" is a vague term. I gave you what I thought was the best way of making it precise. Weeatquince and Luke Muehlhauser wanted to make it precise in a different way. We could have a debate about which is the more useful precisification, but I don't think that here is the right place for that.

On either way of making the term precise, though, EA is clearly not trying to be the whole of morality, or to give any one very specific conception of morality. It doesn't make a claim about side-constraints; i... (read more)

Hi,

Thanks for this post. The relationship between EA and well-known moral theories is something I've wanted to blog about in the past.

So here are a few points:

1. EA does not equal utilitarianism.

Utilitarianism makes many claims that EA does not make:

EA does not take a stand on whether it's obligatory or merely supererogatory to spend one's resources helping others; utilitarianism claims that it is obligatory.

EA does not make a claim about whether there are side-constraints - certain things that it is impermissible to do, even if it were for the greater good. Utili... (read more)

0lmm
That's rather a double standard there. Any specific form of EA does make a precise claim about what should be maximized.
4Dias
Thanks for the response. I agree with most of the territory covered, of course, but my objection here is to the framing, not the philosophy. So why does the website explicitly list fairness, justice and trying to do as much good as possible as EA goals in themselves? And why does user:weeatquince (whose identity we both know but I will not 'out' on a public forum) think that "actions and organizations that are ethical through ways other than producing welfare/happiness, as long as they apply rationality to doing good" are EA?

I explicitly address this in the second paragraph of the "The history of GiveWell’s estimates for lives saved per dollar" section of my post as well as the "Donating to AMF has benefits beyond saving lives" section of my post.

Not really. You do mention the flow-on benefits. But you don't analyse whether your estimate of "good done per dollar" has increased or decreased. And that's the relevant thing to analyse. If you argued "cost per life saved has had greater regression to your prior than you'd expected; and for that... (read more)

0JonahS
Ok. Do you have any suggestions for how I could modify my post to make it more clear in these respects?

Good post, Jonah. You say that: "effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact". What do you mean by "qualitative analysis"? As I understand it, your points are: i) The amount by which you should regress to your prior is much greater than you had previously thought, so ii) you should favour robustness of evidence more than you had previously. But that doesn't favour qualitative vs non-qualitative evidence. It favours... (read more)

3JonahS
* Assessing the quality of the people behind a project is qualitative rather than quantitative.
* Room for more funding is in principle quantitative, but my experience has been that in practice, room for more funding analysis ends up being more qualitative, as you have to make judgments about things such as who would otherwise have funded the project, which hinge heavily on knowledge of the philanthropic landscape in respects that aren't easily quantified.
* Gauging historical precedent requires many judgment calls, and so can't be quantified.
* Deciding what giving opportunities one can learn the most from can't be quantified.

I explicitly address this in the second paragraph of the "The history of GiveWell’s estimates for lives saved per dollar" section of my post as well as the "Donating to AMF has benefits beyond saving lives" section of my post.

I agree with this. I don't think that my post suggests otherwise.

Thanks for mentioning this - I discuss Nozick's view in my paper, so I'm going to edit my comment to mention this. A few differences:

As crazy88 says, Nozick doesn't think that the issue is a normative uncertainty issue - his proposal is another first-order decision theory, like CDT and EDT. I argue against that account in my paper. Second, and more importantly, Nozick just says "hey, our intuitions in Newcomb-cases are stakes-sensitive" and moves on. He doesn't argue, as I do, that we can explain the problematic cases in the literature by appeal ... (read more)

Don't worry, that's not an uncomfortable question. UDT and MDT are quite different. UDT is a first-order decision theory. MDT is a way of extending decision theories - so that you take into account uncertainty about which decision theory to use. (So, one can have meta causal decision theory, meta evidential decision theory, and (probably, though I haven't worked through it) meta updateless decision theory.)
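One natural way to make the "meta" move concrete (a rough sketch, not necessarily the exact formulation in the paper, and assuming the rival theories' value assignments can be put on a common scale):

$$
\mathrm{MEV}(a) \;=\; \sum_i \Pr(D_i)\, V_{D_i}(a),
$$

where $\Pr(D_i)$ is one's credence in decision theory $D_i$ (CDT, EDT, ...) and $V_{D_i}(a)$ is the value $D_i$ assigns to act $a$; one then picks the act with the highest $\mathrm{MEV}$. On this picture stakes matter: in a Newcomb case, even a small credence in EDT can swamp CDT's verdict if the one-boxing payoff is large enough.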

UDT, as I understand it (and note I'm not at all fluent in UDT or TDT) always one-boxes; whereas if you take decision-theoretic uncertainty into accoun... (read more)

0PikachuSpecial
Can't we just assume that whatever we do was predicted correctly? The problem does assume an 'almost certain' predictor. Shouldn't that make two-boxing the worst move?

UDT is totally supposed to smoke on the smoking lesion problem. That's kinda the whole point of TDT, UDT, and all the other theories in the family.

It seems to me that your high-stakes predictor case is adequately explained by residual uncertainty about the scenario setup and whether Omega actually predicts you perfectly, which will yield two-boxing by TDT in this case as well. Literal, absolute epistemic certainty will lead to one-boxing, but this is a degree of certainty so great that we find it difficult to stipulate even in our imaginations.

I ought to... (read more)

(part 3; final part)

Second: The GWWC Pledge. You say:

“The GWWC site, for example, claims that from 291 members there will be £72.68M pledged. This equates to £250K / person over the course of their life. Claiming that this level of pledging will occur requires either unreasonable rates of donation or multi-decade payment schedules. If, in line with GWWC's projections, around 50% of people will maintain their donations, then assuming a linear drop off the expected pledge from a full time member is around £375K. Over a lifetime, this is essentially £10K / ye... (read more)

Thanks for writing this. I found it illuminating.

In the future, I'd suggest posting multipart comments like this as replies to one another, so it's easier to read them in order.

(part 2) The most important mistakes in the post

Bizarre Failures to Acquire Relevant Evidence

As lukeprog noted, you did not run this post by anyone within CEA who had sufficient knowledge to correct you on some of the matters given above. Lukeprog describes this as ‘common courtesy’. But, more than that, it’s a violation of a good epistemic principle that one should gain easily accessible relevant information before making a point publicly.

The most egregious violation of this principle is that, though you say you focus on the idea that donating to CEA has... (read more)

2efenj
The $43bn figure (the amount the World Bank (WB) lent in 2011) can be found on the WB website here, the factor of 17000 comes (I think) from dividing $43 bn by the expected annual donations from the pledges ($43 bn / ($112 mn in pledges / 45 years of work) ~ 17000). However, obviously, as you state, doubling the effectiveness of WB activities will not have the same impact as bringing CEA up to the size of the WB, unless one (unrealistically) assumes that the GWWC recommended charities are only twice as effective as the average WB intervention (though ideally one should take into account the diminishing marginal returns of GWWC and 80k).
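For concreteness, the arithmetic behind that factor, using only the figures quoted above:

$$
\frac{\$43\ \text{bn}}{\$112\ \text{mn}\,/\,45\ \text{years}} \;=\; \frac{4.3\times 10^{10}}{\approx 2.5\times 10^{6}\ \text{per year}} \;\approx\; 1.7\times 10^{4}.
$$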
5jefftk
Eliezer's HN comment: http://news.ycombinator.com/item?id=4726651

(part 1) Summary

Thanks once again, Jonathan, for taking the time to write publicly about CEA, and to make some suggestions about ways in which CEA might be falling short. In what follows I’ll write a candid response to your post, which I hope you’ll take as a sign of respect — this is LW and I know that honesty in this community is valued far more than sugarcoating. Ultimately, we’re all aiming here to proportion our beliefs to our evidence, and beating around the bush doesn’t help with that aim.

In your post you raise some important issues — often issues ... (read more)

Hi Jonathan,

First off, thanks for putting so much time into writing this extensive list of questions and doubts you have about CEA. Unlike for-profit activities, we don't have immediate feedback effects telling us when we're doing well and when we're doing badly, so criticism is an important countermeasure to make sure we do things as well as possible. We therefore really welcome people taking a critical eye to our activities.

As the person who wrote the original CEA material here on LessWrong, and the person who you mention above, I feel I should be the on... (read more)

7Joanna Morningstar
Hi Will,

I'm glad to hear that a general response is being collated; if there are things where CEA can improve it would seem like a good idea to do them, and if I'm wrong I would like to know that. Turning to the listed points:

* I went into that conversation with a number of questions I sought answers to, and either asked them or saw the data coming up from other questions. I knew your time was valuable and mostly targeted at other people there.
* Adam explicitly signed off on my comment to Luke. He saw the draft post, commented on it, recommended it be put here and received the original string of emails in the context of being a friend, and person I knew would have a closer perspective on the day to day running of CEA than myself.
* £1700 came from Jacob (Trefethen), in conversation shortly after you were in Cambridge, and purporting to be from internal numbers. I had asked whether CEA has an internal price at which new pledges would be bought, on the basis that one should exist, and it would be important for valuing a full-time Cambridge position.
* ~4K is 1/3 of the Oxford undergrad population, which was the figure I had heard quoted in the discussion in Cambridge.
* GWWC lists 8 people as a sample of past-and-present researchers, a research manager and a research director. I estimated that half of the former set would have moved on, and thus that 6 people were at least engaged in part time research for GWWC.

I am concerned both about utility-maximisation and the ROI. It seems easier to fix efficiency problems whilst institutions are still small, or create alternate more efficient institutions if need be; ideally groups akin to CEA's projects are going to move budgets of O(10^9 / year), and I want to see that used as effectively as possible. In terms of ROI, I don't put large weight in the estimated returns absent a calculation or substantial trust in the instrumental rationality of the organisation making the claims. To take the canonical example,

At the moment the best thing to do would be to link to each of the organisations' websites individually.

It's a good point. So far it hasn't been an issue. But if there was someone who we thought was worth the money, and for some good reason simply wouldn't work for less than a certain amount, then we'd pay a higher amount - we don't have a policy that we aren't able to pay any more than £18k.

My response was too long to be a comment so I've posted it here. Thanks all!

Can I clarify: I think you meant "CEA" rather than "EAA" in your first question?

0Giles
Yes, thanks - fixed.

Hi - answer to this will be posted along with the responses to other questions on Giles' discussion page. If you e-mail me (will [dot] crouch [at] givingwhatwecan.org) then I can send you the calculations.

2Peter Wildeford
I look forward to it. Email sent!

It's a good question! I was going to respond, but I think that, rather than answering questions on this thread, I'll just let people keep asking questions, and then I'll respond to them all at once - hopefully that'll make the thread more readable for other users.

Here is the CEA website - but it's just a stub linking to the others.

And no. To my knowledge, we haven't contacted her. From the website, it seems like our approaches are quite different, though the terms we use are similar.

3Viliam_Bur
The important part is whether the other charities link back to CEA, or at least acknowledge its existence and cooperate with it. As an example of what may be wrong, look at this website: "www.ombudsmaninternational.com". It has a "Donate" button and links to many important organizations. But it's (almost certainly) a scam.

These are all good questions! Interestingly, they are all relevant to the empirical aspect of a research grant proposal I'm writing. Anyway, our research team is shared between 80,000 Hours and GWWC. They would certainly be interested in addressing all these questions (I think it would officially come under GWWC). I know that those at GiveWell are very interested in at least some of the above questions as well; hopefully they'll write on them soon.

Feel free to post the questions just now, Giles, in case that there are others that people want to add.

4Giles
Done

Thanks for this, this is a common response to earning to give. However, we already have a number of success stories: people who have started their EtG jobs and are loving them.

It's rare that someone has their heart set on a particular career, such as charity work, and then completely changes their plans and begins EtG. Rather, much more common is that someone is thinking "I really want to do [lucrative career X], but I should do something more ethical" or that they think "I'm undecided between lucrative career X, and other careers Y and Z; all l... (read more)

katydee100

I'm not surprised that people are doing this now, but I will be surprised if most of them are still doing it in five years, much less in the actual long term.

That being said, if the organization can maintain recruitment of new people, a lot of good will still be done even under this assumption.

Thanks for this. Asking people "how much would you have pledged?" is of course only a semi-reliable method of ascertaining how much someone actually would have pledged. Some people - like yourself - might neglect the fact that they would have been convinced by the same arguments from other sources; others might be overoptimistic about how their future self would live up to their youthful ideals. We try to be as conservative as reasonable with our assumptions in this area: we take the data and then err on the side of caution. We assumed that 54... (read more)

0anholt
Excellent. That sounds pretty reasonable, and that's pretty impressive leveraging given those assumptions.
4Strange7
Perhaps it would also be useful to work backwards? That is, figure out exactly how conservative the assumptions need to be to put the value of a donation below the break-even point.

That's right. If there's a lot of concern, we can write up what we already know, and look into it further - we're very happy to respond to demand. This would naturally go under EAA research.

1Giles
There are some related concerns that need to be factored into the multipliers for extending lifespans and reducing poverty, but which don't fall naturally under EAA's research:

* Impact of extra population/animal population/consumption on environmental and other resources
* Effect of extending a life or reducing poverty on global economic growth
* Positive impact of increased economic growth
* Negative impact of increased economic growth - existential risk and possibly other considerations?
* How much of the weights in the Disability-Adjusted Life Year calculation come from valuing quality of life factors for their own sake, and how much is a fudge factor associated with reduced expected income/employability/social involvement associated with disability or disease? Toby Ord makes sort of this point here

Do you know which organisation's remit these kinds of question would fall into? Do any of these questions already receive mainstream attention (and if so are they likely to miss something important out of their calculations?)

Thanks benthamite, I think everything you said above was accurate.

It would be good to have more analysis of this.

Is saving someone from malaria really the most cost-effective way to speed technological progress per dollar?

The answer is that I don't know. Perhaps it's better to fund technology directly. But the benefit:cost ratio tends to be incredibly high for the best developing world interventions. So the best developing world health interventions would at least be contenders. In the discussion above, though, preventing malaria doesn't need to be the most cost-effective way of speeding up technological progress. The point was only that that benefit outweighs the harm done by increasing the amount of farming.

On (a). The argument for this is based on the first half of Bostrom's Astronomical Waste. In saving someone's life (or some other good economic investment), you move technological progress forward by a tiny amount. The benefit you produce is the difference you make at the end of civilisation, when there's much more at stake than there is now.

It's almost certainly more like -10,000N

I'd be cautious about making claims like this. We're dealing with tricky issues, so I wouldn't claim to be almost certain about anything in this area. The numbers I used in th

... (read more)
MTGandP100

I think that calculation makes sense and the -36 number looks about right. I had actually done a similar calculation a while ago and came up with a similar number. I suppose my guess of -10,000 was too hasty.

It may actually be a good deal higher than 36 depending on how much suffering fish and shellfish go through. This is harder to say because I don't understand the conditions in fish farms nearly as well as chicken farms.

Interesting. The deeper reason why I reject average utilitarianism is that it makes the value of lives non-separable.

"Separability" of value just means being able to evaluate something without having to look at anything else. I think that, whether or not it's a good thing to bring a new person into existence depends only on facts about that person (assuming they don't have any causal effects on other people): the amount of their happiness or suffering. So, in deciding whether to bring a new person into existence, it shouldn't be relevant what h... (read more)

0drnickbone
When discussing such questions, we need to be careful to distinguish the following:

1. Is a world containing population B better than a world containing population A?
2. If a world with population A already existed, would it be moral to turn it into a world with population B?
3. If Omega offered me a choice between a world with population A and a world with population B, and I had to choose one of them, knowing that I'd live somewhere in the world, but not who I'd be, would I choose population B?

I am inclined to give different answers to these questions. Similarly for Parfit's repugnant conclusion; the exact phrasing of the question could lead to different answers.

Another issue is background populations, which turn out to matter enormously for average utilitarianism. Suppose the world already contains a very large number of people with average utility 10 (off in distant galaxies, say) and call this population C. Then the combination of B+C has lower average utility than A+C, and gets a clear negative answer on all the questions, so matching your intuition. I suspect that this is the situation we're actually in: a large, maybe infinite, population elsewhere that we can't do anything about, and whose average utility is unknown. In that case, it is unclear whether average utilitarianism tells us to increase or decrease the Earth's population, and we can't make a judgement one way or another.
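As a quick numerical check of the background-population point, take an illustrative size for C (say $10^{15}$ people at average utility 10) together with the A and B populations above:

$$
\bar{u}(A{+}C) = \frac{10\cdot 10^{15} - 100}{10^{15}+1} \approx 10,
\qquad
\bar{u}(B{+}C) = \frac{10\cdot 10^{15} - 99.9\cdot 10^{11}}{10^{15}+10^{11}} \approx 9.99,
$$

so with a large enough background population, average utilitarianism does prefer A+C to B+C.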
4TorqueDrifter
Suppose your moral intuitions cause you to evaluate worlds based on your prospects as a potential human - as in, in pop A you will get utility -10, in pop B you get an expected (1/m)(-n) + ((m-1)/m)(-9.9). These intuitions could correspond to a straightforward "maximize expected util of 'being someone in this world'", or something like "suppose all consciousness is experienced by a single entity from multiple perspectives, completing all lives and then cycling back again from the beginning, maximize this being's utility". Such perspectives would give the "non-intuitive" result in these sorts of thought experiments.
1A1987dM
Once you make such an unrealistic assumption, the conclusions won't necessarily be realistic. (If you assume water has no viscosity, you can conclude that it exerts no drag on stuff moving in it.) In particular, ISTM that as long as my basic physiological needs are met, my utility almost exclusively depends on interacting with other people, playing with toys invented by other people, reading stuff written by other people, listening to music by other people, etc.

By the way, thanks for the comments! Seeing as the post is getting positive feedback, I'm going to promote it to the main blog.

In order to get exceptional value for money you need to (correctly) believe that you are smarter than the big donors - otherwise they'd already have funded whatever you're planning on funding to the point where the returns diminish to the same level as everything else.

That's if you think that the big funders are rational and have similar goals as you. I think assuming they are rational is pretty close to the truth (though I'm not sure: charity doesn't have the same feedback mechanisms as business, because if you get punished you don't get punish... (read more)

0Strange7
The best way to look good to, say, exceptionally smart people and distant-future historians, is to act in almost exactly the way a genuinely good person would act.
3beoShaffer
Agree with the part before the dash, have a subtle but important correction to the second part. While the explicit desire to look good certainly can play a role, I think it is as or more common for giving to have a different proximate cause, but to still approximate efficient signaling (rather than efficient helping) because the underlying intuitions evolved for signaling purposes.

I wouldn't want to commit to an answer right now, but the Hansonian Hypothesis does make the right prediction in this case. If I'm directly helping, it's very clear that I have altruistic motives. But if I'm doing something much more indirect, then my motives become less clear. (E.g. if I go into finance in order to donate, I no longer look so different from people who go into finance in order to make money for themselves). So you could take the absence of meta-charity as evidence in favour of the Hansonian Hypothesis.

Hey,

80k members give to a variety of causes. When we surveyed, 34% were intending to give to x-risk, and it seems fairly common for people who start thinking about effective altruism to ultimately conclude that x-risk mitigation is one of the most important cause areas, if not the most important. As for how this pans out with additional members, we'll have to wait and see. But I'd expect $1 to 80k to generate significantly more than $1's worth of value even for existential risk mitigation alone. It certainly has done so far.

We did a little bit of impact-assessment for 80k (again, wit... (read more)

4John_Maxwell
Is saving someone from malaria really the most cost-effective way to speed technological progress per dollar? Seems like you might well be better off loaning money on kiva.org or some completely different thing. (Edit: Jonah Sinick points me to 1, 2, 3, 4 regarding microfinance.) Some thoughts from Robin Hanson on how speeding technological progress may affect existential risks: http://www.overcomingbias.com/2009/12/tiptoe-or-dash-to-future.html. I'd really like to see more analysis of this.
1MTGandP
This is purely speculative. You have not presented any evidence that (a) the compounding effects of donating money to alleviate poverty outweigh the direct effects, or that (b) this does not create enough animal suffering to outweigh the benefits. And it still ignores the fact that animal welfare charities are orders of magnitude more efficient than human charities. It's almost certainly more like -10,000N. One can determine this number by looking at the suffering caused by eating different animal products as well as the number of animals eaten in a lifetime (~21000).

Thanks for that. I guess that means I'm not a rationalist! I try my best to practice (1). But I only contingently practice (2). Even if I didn't care one jot about increasing happiness and decreasing suffering in the world, I think I still ought to increase happiness and decrease suffering. I.e. I do what I do not because it's what I happen to value, but because I think it's objectively valuable (and if you value something else, like promoting suffering, then I think you're mistaken!) That is, I'm a moral realist. Whereas the definition given in E... (read more)

0Kindly
This seems to be similar to Eliezer's beliefs. Relevant quote from Harry Potter and the Methods of Rationality:
7thomblake
Not at all. (Eliezer is a sort of moral realist). It would be weird if you said "I'm a moral realist, but I don't value things that I know are objectively valuable". It doesn't really matter whether you're a moral realist or not - instrumental rationality is about achieving your goals, whether they're good goals or not. Just like math lets you crunch numbers, whether they're real statistics or made up. But believing you shouldn't make up statistics doesn't therefore mean you don't do math.

Haha! I don't think I'm worthy of squeeing, but thank you all the same.

In terms of the philosophy, I think that average utilitarianism is hopeless as a theory of population ethics. Consider the following case:

Population A: 1 person exists, with a life full of horrific suffering. Her utility is -100.

Population B: 100 billion people exist, each with lives full of horrific suffering. Each of their utility levels is -99.9

Average utilitarianism says that Population B is better than Population A. That definitely seems wrong to me: bringing into existence people whose lives aren't worth living just can't be a good thing.
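To make the comparison explicit (a worked check using the stipulated utilities, with 100 billion $= 10^{11}$):

$$
\text{Average: } \bar{u}(A) = -100 \;<\; \bar{u}(B) = -99.9;
\qquad
\text{Total: } u(A) = -100 \;\gg\; u(B) = 10^{11}\times(-99.9) \approx -10^{13}.
$$

So average utilitarianism ranks B above A, while total utilitarianism ranks A far above B.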

0A1987dM
That's not obvious to me. IMO, the reason why in the real world “bringing into existence people whose lives aren't worth living just can't be a good thing” is that they consume resources that other people could use instead; but if in the hypothetical you fix the utility of each person by hand, that doesn't apply to the hypothetical. I haven't thought about these things that much, but my current position is that average utilitarianism is not actually absurd -- the absurd results of thought experiments are due to the fact that those thought experiments ignore the fact that people interact with each other.

Thanks! Yes, I'm good friends with Nick and Toby. My view on their model is as follows. Sometimes intertheoretic value comparisons are possible: that is, we can make sense of the idea that the difference in value (or wrongness) between two options A and B on one moral theory is greater, lesser, or equal to the difference in value (or wrongness) between two options C and D on another moral theory. So, for example, you might think that killing one person in order to save a slightly less happy person is much more wrong according to a rights-based moral view... (read more)

Hi All,

I'm Will Crouch. Apart from one other, this is my first comment on LW. However, I know and respect many people within the LW community.

I'm a DPhil student in moral philosophy at Oxford, though I'm currently visiting Princeton. I work on moral uncertainty: on whether one can apply expected utility theory in cases where one is uncertain about what is of value, or what one ought to do. It's difficult to do so, but I argue that you can.

I got to know people in the LW community because I co-founded two organisations, Giving What We Can and 80,000 Hou... (read more)

3beoShaffer
Hi Will, I think most LWers would agree that "anyone who tries to practice rationality as defined on Less Wrong" is a passable description of what we mean by 'rationalist'.
6Nisan
I'm glad you're here! Do you have any comments on Nick Bostrom and Toby Ord's idea for a "parliamentary model" of moral uncertainty?
9MixedNuts
Pretense that this comment has a purpose other than squeeing at you like a 12-year-old fangirl: what arguments make you prefer total utilitarianism to average?

Hi all,

It's Will here. Thanks for the comments. I've responded to a couple of themes in the discussion below over at the 80,000 hours blog, which you can check out if you'd like. I'm interested to see the results of this poll!