wdmacaskill

Ah, by the "software feedback loop" I mean: "At the point in time at which AI has automated AI R&D, does a doubling of cognitive effort result in more than a doubling of output? If yes, there's a software feedback loop - you get (for a time, at least) accelerating rates of algorithmic efficiency progress, rather than just a one-off gain from automation."

I see now why you could understand "RSI" to mean "AI improves itself at all over time". But even so, the claim would still hold - even if (implausibly) AI gets no smarter than human-level, you'd still get accelerated tech development, because the quantity of AI research effort would increase at a growth rate much faster than the quantity of human research effort.
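To make the condition concrete, here's a toy simulation (my own illustrative sketch, not a model anyone in this thread has proposed): capability level `s` determines research effort, and a returns parameter `r` captures whether doubling cognitive effort more than doubles output.

```python
# Toy model of the "software feedback loop" condition.
# Assumption (illustrative only): once AI R&D is automated, research
# effort is proportional to the current software/capability level s,
# and progress per step is s ** r, where r is the returns parameter.
# r > 1: doubling effort more than doubles output -> accelerating growth.
# r = 1: growth is merely exponential at a constant rate (no loop).

def simulate(r, steps=50, dt=0.01):
    s = 1.0
    rates = []
    for _ in range(steps):
        rate = s ** r        # research output at current capability
        s += rate * dt       # output feeds back into capability
        rates.append(rate)
    return rates

accelerating = simulate(r=1.5)   # returns above 1: feedback loop
constant_returns = simulate(r=1.0)

# With r > 1, the growth *rate* itself keeps rising over time;
# with r = 1, the rate grows only at the fixed exponential pace.
```

The qualitative point is just that the sign of `r - 1` determines whether automation gives accelerating progress or a one-off gain.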

There's definitely a new trend towards custom-website essays. Forethought is a website for lots of research content, though (like Epoch), not just PrepIE.

And I don't think it's because of people getting more productive because of reasoning models - AI was helpful for PrepIE but more like 10-20% productivity boost than 100% boost, and I don't think AI was used much for SA, either.

Thanks - appreciate that! It comes up a little differently for me, but still an issue - we've asked the devs to fix. 

Argh! Original post didn't go through (probably my fault), so this will be shorter than it should be:

First point:

I know very little about CEA, and a brief check of their website leaves me a little unclear on why Luke recommends them, aside from the fact that they apparently work closely with FHI.

CEA = Giving What We Can, 80,000 Hours, and a bit of other stuff

Reason: donations to CEA predictably increase the size and strength of the EA community, a good proportion of whom take long-run considerations very seriously and will donate to / work for FHI/MIRI, or otherwise pursue careers with the aim of extinction risk mitigation. It's plausible that $1 to CEA generates significantly more than $1's worth of x-risk-value [note: I'm a trustee and founder of CEA].

Second point:

Don't forget CSER. My view is that they are even higher-impact than MIRI or FHI (though I'd defer to Sean_o_h if he disagreed). Reason: marginal donations will be used to fund program management + grantwriting, which would turn ~$70k into a significant chance of ~$1-$10mn, and launch what I think might become one of the most important research institutions in the world. They have all the background (high profile people on the board; an already written previous grant proposal that very narrowly missed out on being successful). High leverage!

CEA and CFAR don't do anything, to my knowledge, that would increase these odds, except in exceedingly indirect ways.

People from CEA, in collaboration with FHI, have been meeting with people in the UK government, and are producing policy briefs on unprecedented risks from new technologies, including AI (the first brief will go on the FHI website in the near future). These meetings arose as a result of GWWC media attention. CEA's most recent hire, Owen Cotton-Barratt, will be helping with this work.

your account of effective altruism seems rather different from Will's: "Maybe you want to do other things effectively, but then it's not effective altruism". This sort of mixed messaging is exactly what I was objecting to.

I think you've revised the post since you initially wrote it? If so, you might want to highlight that in the italics at the start, as otherwise it makes some of the comments look weirdly off-base. In particular, I took the initial post to aim at the conclusion:

  1. EA is utilitarianism in disguise, which I think is demonstrably false.

But now the post reads more like the main conclusion is:

  1. EA is vague on a crucial issue: whether the effective pursuit of non-welfarist goods counts as effective altruism. That is a much more reasonable thing to say.

I think the simple answer is that "effective altruism" is a vague term. I gave you what I thought was the best way of making it precise. Weeatquince and Luke Muelhauser wanted to make it precise in a different way. We could have a debate about which is the more useful precisification, but I don't think that here is the right place for that.

On either way of making the term precise, though, EA is clearly not trying to be the whole of morality, or to give any one very specific conception of morality. It doesn't make a claim about side-constraints; it doesn't make a claim about whether doing good is supererogatory or obligatory; it doesn't make a claim about the nature of welfare. EA is a broad tent, and deliberately so: very many different ethical perspectives will agree, for example, that it's important to find out which charities do the most to improve the welfare of those living in extreme poverty (as measured by QALYs etc.), and then to encourage people to give to those charities. If so, then we've got an important activity that people of very many different ethical backgrounds can get behind - which is great!

Hi,

Thanks for this post. The relationship between EA and well-known moral theories is something I've wanted to blog about in the past.

So here are a few points:

1. EA does not equal utilitarianism.

Utilitarianism makes many claims that EA does not make:

EA does not claim whether it's obligatory or merely supererogatory to spend one's resources helping others; utilitarianism claims that it is obligatory.

EA does not make a claim about whether there are side-constraints - certain things that it is impermissible to do, even if it were for the greater good. Utilitarianism claims that it's always obligatory to act for the greater good.

EA does not claim that there are no other things besides welfare that are of value; utilitarianism does claim this.

EA does not make a precise claim about what promoting welfare consists in (for example, whether it's more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.

Also, note that some eminent EAs are not even consequentialist leaning, let alone utilitarian: e.g. Thomas Pogge (political philosopher) and Andreas Mogensen (Assistant Director of Giving What We Can) explicitly endorse a rights-based theory of morality; Alex Foster (epic London EtG-er) and Catriona MacKay (head of the GWWC London chapter) are both Christian (and presumably not consequentialist, though I haven't asked).

2. Rather, EA is something that almost every plausible moral theory is in favour of.

Almost every plausible moral theory holds that promoting the welfare of others in an effective way is a good thing to do. Some moral theories hold that promoting the welfare of others is merely supererogatory, and others think that there are other values at stake. But EA is explicitly pro promoting welfare; it's not anti other things, and it doesn't claim that we're obligated to be altruistic, merely that it's a good thing to do.

3. Is EA explicitly welfarist?

The term 'altruism' suggests that it is. And I think that's fine. Helping others is what EAs do. Maybe you want to do other things effectively, but then it's not effective altruism - it's "effective justice", "effective environmental preservation", or something. Note, though, that you may well think that there are non-welfarist values - indeed, I would think that you would be mistaken not to act as if there were, on moral uncertainty grounds alone - but still be part of the effective altruism movement because you think that, in practice, welfare improvement is the most important thing to focus on.

So, to answer your dilemma:

EA is not trying to be the whole of morality.

It might be the whole of morality, if being EA is the only thing that is required of one. But it's not part of the EA package that EA is the whole of morality. Rather, it represents one aspect of morality - an aspect that is very important for those living in affluent countries, and who have tremendous power to help others. The idea that we in rich countries should be trying to work out how to help others as effectively as possible, and then actually going ahead and doing it, is an important part of almost every plausible moral theory.

I explicitly address this in the second paragraph of the "The history of GiveWell’s estimates for lives saved per dollar" section of my post as well as the "Donating to AMF has benefits beyond saving lives" section of my post.

Not really. You do mention the flow-on benefits. But you don't analyse whether your estimate of "good done per dollar" has increased or decreased. And that's the relevant thing to analyse. If you argued "cost per life saved has had greater regression to your prior than you'd expected; and for that reason I expect my estimates of good done per dollar to regress really substantially" (an argument I think you would endorse), I'd accept that argument, though I'd worry about how much it generalises to cause-areas other than global poverty. (e.g. I expect there to be much less of an 'efficient market' for activities where there are fewer agents with the same goals/values, like benefiting non-human animals, or making sure the far future turns out well). Optimism bias still holds, of course.
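For readers unfamiliar with "regression to your prior", here is a standard normal-normal Bayesian update (my own sketch, with made-up numbers; not Jonah's or GiveWell's actual model): the noisier the evidence, the further a headline cost-effectiveness estimate shrinks back toward the prior.

```python
# Illustrative sketch of regression to a prior (normal-normal update).
# The posterior mean is a precision-weighted average of the prior mean
# and the new estimate; weaker (higher-variance) evidence gets less weight.

def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    w = prior_var / (prior_var + estimate_var)  # weight on the estimate
    return prior_mean + w * (estimate - prior_mean)

# Hypothetical numbers: prior is ~1 unit of good per dollar (variance 1);
# a study claims this charity does 10 units per dollar.
robust_study = posterior_mean(1.0, 1.0, 10.0, 0.5)   # low-noise evidence: 7.0
weak_study = posterior_mean(1.0, 1.0, 10.0, 10.0)    # high-noise evidence: ~1.8

# The weak study's headline figure regresses far more toward the prior.
```

The same mechanics apply whether the quantity being estimated is "cost per life saved" or "good done per dollar"; the point in the surrounding comments is about which quantity to run the update on.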

You say that "cost-effectiveness estimates skew so negatively." I was just pointing out that for me that hasn't been the case (for good done per $), because long-run benefits strike me as swamping short-term benefits, a factor that I didn't initially incorporate into my model of doing good. And, though I agree with the conclusion that you want as many different angles as possible (etc), focusing on cost per life saved rather than good done per dollar might lead you to miss important lessons (e.g. "make sure that you've identified all crucial normative and empirical considerations"). I doubt that you personally have missed those lessons. But they aren't in your post. And that's fine, of course, you can't cover everything in one blog post. But it's important for the reader not to overgeneralise.

I agree with this. I don't think that my post suggests otherwise.

I wasn't suggesting it does.

Good post, Jonah. You say that: "effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact". What do you mean by "qualitative analysis"? As I understand it, your points are: i) The amount by which you should regress to your prior is much greater than you had previously thought, so ii) you should favour robustness of evidence more than you had previously. But that doesn't favour qualitative over quantitative evidence. It favours more robust evidence of lower but good cost-effectiveness over less robust evidence of higher cost-effectiveness. The nature of the evidence could be either qualitative or quantitative, and the things you mention in "implications" are generally quantitative.

In terms of "good done per dollar" - for me that figure is still far greater than I began with (and I take it that that's the question that EAs are concerned with, rather than "lives saved per dollar"). This is because, in my initial analysis - and in what I'd presume are most people's initial analyses - benefits to the long-term future weren't taken into account, or weren't thought to be morally relevant. But those (expected) benefits strike me, and most people I've spoken with who accept their moral relevance, as far greater than the short-term benefits to the person whose life is saved. So, in terms of my expectations about how much good I can do in the world, I'm able to exceed those by a far greater amount than I'd previously thought likely. And that holds true whether it costs $2,000 or $20,000 to save a life. I'm not mentioning that either to criticise or support your post, but just to highlight that the lesson to take from past updates on evidence can look quite different depending on whether you're talking about "good done per dollar" or "lives saved per dollar", and the former is what we ultimately care about.

Final point: Something you don't mention is that, when you find out that your evidence is crappier than you'd thought, two general lessons are to pursue things with high option value and to pay to gain new evidence (though I acknowledge that this depends crucially on how much new evidence you think you'll be able to get). Building a movement of people who are aiming to do the most good with their marginal resources, and who are trying to work out how best to do that, strikes me as a good way to achieve both of these things.
