CEA and CFAR don't do anything, to my knowledge, that would increase these odds, except in exceedingly indirect ways.
People from CEA, in collaboration with FHI, have been meeting with people in the UK government, and are producing policy briefs on unprecedented risks from new technologies, including AI (the first brief will go on the FHI website in the near future). These meetings arose as a result of GWWC media attention. CEA's most recent hire, Owen Cotton-Barratt, will be helping with this work.
your account of effective altruism seems rather different from Will's: "Maybe you want to do other things effectively, but then it's not effective altruism". This sort of mixed messaging is exactly what I was objecting to.
I think you've revised the post since you initially wrote it? If so, you might want to highlight that in the italics at the start, as otherwise it makes some of the comments look weirdly off-base. In particular, I took the initial post to aim at the conclusion:
I think the simple answer is that "effective altruism" is a vague term. I gave you what I thought was the best way of making it precise. Weeatquince and Luke Muelhauser wanted to make it precise in a different way. We could have a debate about which is the more useful precisification, but I don't think that here is the right place for that.
On either way of making the term precise, though, EA is clearly not trying to be the whole of morality, or to give any one very specific conception of morality. It doesn't make a claim about side-constraints; i...
Hi,
Thanks for this post. The relationship between EA and well-known moral theories is something I've wanted to blog about in the past.
So here are a few points:
1. EA does not equal utilitarianism.
Utilitarianism makes many claims that EA does not make:
EA makes no claim about whether it's obligatory or merely supererogatory to spend one's resources helping others; utilitarianism claims that it is obligatory.
EA does not make a claim about whether there are side-constraints - certain things that it is impermissible to do, even if it were for the greater good. Utili...
I explicitly address this in the second paragraph of the "The history of GiveWell’s estimates for lives saved per dollar" section of my post as well as the "Donating to AMF has benefits beyond saving lives" section of my post.
Not really. You do mention the flow-on benefits. But you don't analyse whether your estimate of "good done per dollar" has increased or decreased. And that's the relevant thing to analyse. If you argued "cost per life saved has had greater regression to your prior than you'd expected; and for that...
Good post, Jonah. You say that: "effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact". What do you mean by "qualitative analysis"? As I understand it, your points are: i) The amount by which you should regress to your prior is much greater than you had previously thought, so ii) you should favour robustness of evidence more than you had previously. But that doesn't favour qualitative over non-qualitative evidence. It favours...
Thanks for mentioning this - I discuss Nozick's view in my paper, so I'm going to edit my comment to mention this. A few differences:
As crazy88 says, Nozick doesn't think that the issue is a normative uncertainty issue - his proposal is another first-order decision theory, like CDT and EDT. I argue against that account in my paper. Second, and more importantly, Nozick just says "hey, our intuitions in Newcomb-cases are stakes-sensitive" and moves on. He doesn't argue, as I do, that we can explain the problematic cases in the literature by appeal ...
Don't worry, that's not an uncomfortable question. UDT and MDT are quite different. UDT is a first-order decision theory. MDT is a way of extending decision theories - so that you take into account uncertainty about which decision theory to use. (So, one can have meta causal decision theory, meta evidential decision theory, and (probably, though I haven't worked through it) meta updateless decision theory.)
UDT, as I understand it (and note I'm not at all fluent in UDT or TDT) always one-boxes; whereas if you take decision-theoretic uncertainty into accoun...
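To make the meta-level move concrete, here's a minimal sketch of credence-weighted choice across decision theories. All payoff numbers and credences below are made up purely for illustration; this is my gloss on the idea, not a canonical statement of MDT:

```python
# Minimal sketch of the meta-level move: weight each act's value under
# each first-order decision theory by your credence in that theory,
# then pick the act with the highest credence-weighted value.
# (Illustrative numbers only.)

values = {
    # What each theory says each act is worth in a Newcomb-style case.
    "CDT": {"one-box": 1_000, "two-box": 2_000},      # CDT: two-boxing dominates
    "EDT": {"one-box": 1_000_000, "two-box": 2_000},  # EDT: one-boxing wins big
}
credences = {"CDT": 0.9, "EDT": 0.1}  # mostly confident in CDT...

def meta_value(act):
    """Credence-weighted value of an act across decision theories."""
    return sum(credences[t] * values[t][act] for t in credences)

for act in ("one-box", "two-box"):
    print(act, meta_value(act))
# one-box: 0.9 * 1,000 + 0.1 * 1,000,000 = 100,900
# two-box: 0.9 * 2,000 + 0.1 * 2,000    =   2,000
# ...yet one-boxing wins: a small credence in EDT dominates when the
# stakes under EDT are lopsided enough. This is the stakes-sensitivity.
```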
UDT is totally supposed to smoke on the smoking lesion problem. That's kinda the whole point of TDT, UDT, and all the other theories in the family.
It seems to me that your high-stakes predictor case is adequately explained by residual uncertainty about the scenario setup and whether Omega actually predicts you perfectly, which will yield two-boxing by TDT in this case as well. Literal, absolute epistemic certainty will lead to one-boxing, but this is a degree of certainty so great that we find it difficult to stipulate even in our imaginations.
I ought to...
(part 3; final part)
Second: The GWWC Pledge. You say:
“The GWWC site, for example, claims that from 291 members there will be £72.68M pledged. This equates to £250K / person over the course of their life. Claiming that this level of pledging will occur requires either unreasonable rates of donation or multi-decade payment schedules. If, in line with GWWC's projections, around 50% of people will maintain their donations, then assuming a linear drop off the expected pledge from a full time member is around £375K. Over a lifetime, this is essentially £10K / ye...
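For what it's worth, the headline division in the quoted passage does check out; here's the trivial check (figures from the quote; I haven't tried to reproduce the £375K figure, since the quoted drop-off model isn't fully specified above):

```python
# Sanity check of the quoted figures.
total_pledged = 72.68e6   # £72.68M, from the GWWC site as quoted
members = 291
print(round(total_pledged / members))  # 249759, i.e. the quoted "£250K / person"
```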
Thanks for writing this. I found it illuminating.
In the future, I'd suggest posting multipart comments like this as replies to one another, so it's easier to read them in order.
(part 2) The most important mistakes in the post
Bizarre Failures to Acquire Relevant Evidence

As lukeprog noted, you did not run this post by anyone within CEA who had sufficient knowledge to correct you on some of the matters given above. Lukeprog describes this as ‘common courtesy’. But, more than that, it’s a violation of a good epistemic principle that one should gain easily accessible relevant information before making a point publicly.
The most egregious violation of this principle is that, though you say you focus on the idea that donating to CEA has...
(part 1) Summary

Thanks once again, Jonathan, for taking the time to write publicly about CEA, and to make some suggestions about ways in which CEA might be falling short. In what follows I’ll write a candid response to your post, which I hope you’ll take as a sign of respect — this is LW and I know that honesty in this community is valued far more than sugarcoating. Ultimately, we’re all aiming here to proportion our beliefs to our evidence, and beating around the bush doesn’t help with that aim.
In your post you raise some important issues — often issues ...
Hi Jonathan,
First off, thanks for putting so much time into writing this extensive list of questions and doubts you have about CEA. Unlike in for-profit work, we don't have immediate feedback telling us when we're doing well and when we're doing badly, so criticism is an important countermeasure to make sure we do things as well as possible. We therefore really welcome people taking a critical eye to our activities.
As the person who wrote the original CEA material here on LessWrong, and the person who you mention above, I feel I should be the on...
At the moment the best thing to do would be to link to each of the organisations' websites individually.
It's a good point. So far it hasn't been an issue. But if there were someone we thought was worth the money, who for some good reason simply wouldn't work for less than a certain amount, then we'd pay a higher amount - we don't have a policy that caps pay at £18k.
Can I clarify: I think you meant "CEA" rather than "EAA" in your first question?
Hi - the answer to this will be posted along with the responses to other questions on Giles' discussion page. If you e-mail me (will [dot] crouch [at] givingwhatwecan.org) then I can send you the calculations.
It's a good question! I was going to respond, but I think that, rather than answering questions on this thread, I'll just let people keep asking questions, and then I'll respond to them all at once - hopefully that'll make things easier to read for other users.
Here is the CEA website - but it's just a stub linking to the others.
And no. To my knowledge, we haven't contacted her. From the website, it seems like our approaches are quite different, though the terms we use are similar.
These are all good questions! Interestingly, they are all relevant to the empirical aspect of a research grant proposal I'm writing. Anyway, our research team is shared between 80,000 Hours and GWWC. They would certainly be interested in addressing all these questions (I think it would officially come under GWWC). I know that those at GiveWell are very interested in at least some of the above questions as well; hopefully they'll write on them soon.
Feel free to post the questions just now, Giles, in case there are others that people want to add.
Thanks for this; it's a common response to earning to give. However, we already have a number of success stories: people who have started their EtG jobs and are loving them.
It's rare that someone has their heart set on a particular career, such as charity work, and then completely changes their plans and begins EtG. Rather, much more common is that someone is thinking "I really want to do [lucrative career X], but I should do something more ethical" or that they think "I'm undecided between lucrative career X, and other careers Y and Z; all l...
I'm not surprised that people are doing this now, but I will be surprised if most of them are still doing it in five years, much less in the actual long term.
That being said, if the organization can maintain recruitment of new people, a lot of good will still be done even under this assumption.
Thanks for this. Asking people "how much would you have pledged?" is of course only a semi-reliable method of ascertaining how much someone actually would have pledged. Some people - like yourself - might neglect the fact that they would have been convinced by the same arguments from other sources; others might be overoptimistic about how their future self would live up to their youthful ideals. We try to be as conservative as is reasonable with our assumptions in this area: we take the data and then err on the side of caution. We assumed that 54...
That's right. If there's a lot of concern, we can write up what we already know, and look into it further - we're very happy to respond to demand. This would naturally go under EAA research.
Thanks benthamite, I think everything you said above was accurate.
It would be good to have more analysis of this.
Is saving someone from malaria really the most cost-effective way to speed technological progress per dollar?
The answer is that I don't know. Perhaps it's better to fund technology directly. But the benefit:cost ratio tends to be incredibly high for the best developing world interventions. So the best developing world health interventions would at least be contenders. In the discussion above, though, preventing malaria doesn't need to be the most cost-effective way of speeding up technological progress. The point was only that that benefit outweighs the harm done by increasing the amount of farming.
On (a). The argument for this is based on the first half of Bostrom's Astronomical Waste. In saving someone's life (or some other good economic investment), you move technological progress forward by a tiny amount. The benefit you produce is the difference you make at the end of civilisation, when there's much more at stake than there is now.
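A toy formalisation of that point (my own gloss, not Bostrom's model): suppose value accrues at rate $v(t)$ until an exogenous end time $T$, and an intervention shifts the whole progress curve earlier by a small amount $\delta$. The gain is then

$$\int_0^T v(t+\delta)\,dt - \int_0^T v(t)\,dt = \int_T^{T+\delta} v(s)\,ds - \int_0^\delta v(s)\,ds \approx \delta\,\bigl(v(T) - v(0)\bigr),$$

so the benefit of a tiny speed-up scales with how much is at stake at the end of civilisation, not with the value created now.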
...It's almost certainly more like -10,000N

I'd be cautious about making claims like this. We're dealing with tricky issues, so I wouldn't claim to be almost certain about anything in this area. The numbers I used in th...
I think that calculation makes sense and the -36 number looks about right. I had actually done a similar calculation a while ago and came up with a similar number. I suppose my guess of -10,000 was too hasty.
It may actually be a good deal higher than 36 depending on how much suffering fish and shellfish go through. This is harder to say because I don't understand the conditions in fish farms nearly as well as chicken farms.
Interesting. The deeper reason why I reject average utilitarianism is that it makes the value of lives non-separable.

"Separability" of value just means being able to evaluate something without having to look at anything else. I think that whether or not it's a good thing to bring a new person into existence depends only on facts about that person (assuming they don't have any causal effects on other people): the amount of their happiness or suffering. So, in deciding whether to bring a new person into existence, it shouldn't be relevant what h...
By the way, thanks for the comments! Seeing as the post is getting positive feedback, I'm going to promote it to the main blog.
In order to get exceptional value for money you need to (correctly) believe that you are smarter than the big donors - otherwise they'd already have funded whatever you're planning on funding to the point where the returns diminish to the same level as everything else.
That's if you think that the big funders are rational and have similar goals as you. I think assuming they are rational is pretty close to the truth (though I'm not sure: charity doesn't have the same feedback mechanisms as business, because if you get punished you don't get punish...
I wouldn't want to commit to an answer right now, but the Hansonian Hypothesis does make the right prediction in this case. If I'm directly helping, it's very clear that I have altruistic motives. But if I'm doing something much more indirect, then my motives become less clear. (E.g. if I go into finance in order to donate, I no longer look so different from people who go into finance in order to make money for themselves). So you could take the absence of meta-charity as evidence in favour of the Hansonian Hypothesis.
That's the hope! See below.
Hey,
80k members give to a variety of causes. When we surveyed, 34% were intending to give to x-risk, and it seems fairly common for people who start thinking about effective altruism to ultimately conclude that x-risk mitigation is one of the most important cause areas, if not the most important. As for how this pans out with additional members, we'll have to wait and see. But I'd expect $1 to 80k to generate significantly more than $1's worth of value even for existential risk mitigation alone. It certainly has done so far.
We did a little bit of impact-assessment for 80k (again, wit...
Thanks for that. I guess that means I'm not a rationalist! I try my best to practice (1). But I only contingently practice (2). Even if I didn't care one jot about increasing happiness and decreasing suffering in the world, I think I still ought to increase happiness and decrease suffering. I.e. I do what I do not because it's what I happen to value, but because I think it's objectively valuable (and if you value something else, like promoting suffering, then I think you're mistaken!) That is, I'm a moral realist. Whereas the definition given in E...
Haha! I don't think I'm worthy of squeeing, but thank you all the same.
In terms of the philosophy, I think that average utilitarianism is hopeless as a theory of population ethics. Consider the following case:
Population A: 1 person exists, with a life full of horrific suffering. Her utility is -100.
Population B: 100 billion people exist, each with lives full of horrific suffering. Each of their utility levels is -99.9
Average utilitarianism says that Population B is better than Population A. That definitely seems wrong to me: bringing into existence people whose lives aren't worth living just can't be a good thing.
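To make the comparison explicit (a trivial check, but it shows where the averaging does the work; the code is just my illustration):

```python
# Average utilitarianism ranks populations by mean utility;
# a total view ranks them by summed utility.
n_a, u_a = 1, -100.0                 # Population A
n_b, u_b = 100_000_000_000, -99.9    # Population B

# Everyone in each population has the same utility, so the mean is
# just that utility.
print(u_b > u_a)                     # True: mean utility ranks B above A
print(n_b * u_b < n_a * u_a)         # True: total utility ranks B far below A
```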
Thanks! Yes, I'm good friends with Nick and Toby. My view on their model is as follows. Sometimes intertheoretic value comparisons are possible: that is, we can make sense of the idea that the difference in value (or wrongness) between two options A and B on one moral theory is greater, lesser, or equal to the difference in value (or wrongness) between two options C and D on another moral theory. So, for example, you might think that killing one person in order to save a slightly less happy person is much more wrong according to a rights-based moral view...
Hi All,
I'm Will Crouch. Apart from one other, this is my first comment on LW. However, I know and respect many people within the LW community.
I'm a DPhil student in moral philosophy at Oxford, though I'm currently visiting Princeton. I work on moral uncertainty: on whether one can apply expected utility theory in cases where one is uncertain about what is of value, or what one ought to do. It's difficult to do so, but I argue that you can.
I got to know people in the LW community because I co-founded two organisations, Giving What We Can and 80,000 Hou...
Hi all,
It's Will here. Thanks for the comments. I've responded to a couple of themes in the discussion below over at the 80,000 hours blog, which you can check out if you'd like. I'm interested to see the results of this poll!
Argh! Original post didn't go through (probably my fault), so this will be shorter than it should be:
First point:
CEA = Giving What We Can, 80,000 Hours, and a bit of other stuff
Reason -> donations to CEA predictably increase the size and strength of the EA community, a good proportion of whom take long-run considerations very seriously and will donate to / work for FHI...