In this thread, I would like to invite people to summarise their attitude to Effective Altruism, summarise their justification for that attitude, and identify the framework or perspective they're using.
Initially I prepared an article for a discussion post (it got rather long), and I realised it was written from a starkly utilitarian value system with capitalistic economic assumptions. I'm interested in exploring the possibility that I'm unjustly mindkilling EA.
I've posted my write-up as a comment to this thread so it doesn't get more air time than anyone else's summary, and so we can all benefit equally from the contrasting views.
I encourage anyone who participates to write up their summary and identify their perspective BEFORE they read the others, so that the contrast can be most plain.
I confess that I get the impression that the real purpose of the thread is Clarity's own comment, but here FWIW are my own opinions.
My underlying assumptions are consequentialist (approximately preference-utilitarian) as to ethics, and rationalist/empiricist as to epistemology.
"Effective altruism" can mean at least two things.
I very strongly approve of effective altruism in the first, broad, sense. I dare say narrow-sense EA is not the best possible version of broad-sense EA, but it may be the best approximation readily available.
I don't think strong approval of broad-sense EA is in need of much justification; if one is anything resembling a utilitarian (I am, as it happens, something resembling a utilitarian) then it's almost a tautology.
Should we weight people equally for EA purposes? (I.e., should we reject claims that "charity begins at home", that we should actually just take care of ourselves and to hell with everyone else, that it's morally right not to care about people far away and very different from ourselves, etc.?) To some extent this is a question about first principles and hence largely unanswerable. But I think we should expect our moral intuitions to be more heavily weighted against more-distant, more-different people than we would want on reflection, because those intuitions are partly a product of evolution, and in the not-very-distant past our ability to help distant, very different people was drastically less.
Should we focus on interventions that target very poor people, people in very poor countries, etc.? Given the answer to the previous question, I think we should expect the best interventions to be there. A sketch of the reasoning follows this paragraph.

Crude model explaining why: any given person has a bunch of problems and will, roughly, solve them in order of benefit/cost, stopping when they run out of resources. Money is not the only resource, but by definition it interconverts with a wide variety of resources, so we should expect the people with least money to have the worst problems. The governments of the places where they live will make some effort to address some of those problems, again roughly in order of benefit/cost, and again stopping when they run out of resources. So we should expect people in the poorest countries to have their problems helped least by governments too.

Likely weaknesses of the model: some problems need resources not readily exchanged for money (so consider also highly "non-monetary" problems like depression, totally untreatable diseases, unrequited love), but note that by definition these are hard to address by giving money. Some people have a bad idea of how to help themselves (so consider whether ill-informed or cognitively weak people offer better opportunities for "paternalistic" charity than one would expect just on the basis of their wealth), but helping them effectively may be difficult, and paternalism is kinda icky. Some governments are very ineffective (by accident or design) at using their resources to help their neediest citizens (so consider dysfunctional countries as well as poor ones), but note that helping people effectively is probably harder where governments are broken or malicious.
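Here is a minimal sketch of that crude model, with entirely invented numbers, just to make the mechanism concrete: everyone faces the same menu of problems, solves them greedily by benefit/cost until their budget runs out, and we look at the best problem each person had to leave unsolved.

```python
# Toy version of the crude model above. All numbers are invented.

def best_unsolved_ratio(budget, problems):
    """Greedily solve problems in descending benefit/cost order; return
    the benefit/cost of the first problem that can't be afforded
    (None if everything gets solved). Greedy and crude: it ignores that
    a later, cheaper problem might still fit -- fine for a toy model."""
    for benefit, cost in sorted(problems, key=lambda p: p[0] / p[1], reverse=True):
        if cost <= budget:
            budget -= cost            # problem solved out of own pocket
        else:
            return benefit / cost     # best remaining opportunity for a donor

# The same menu of (benefit, cost) problems, faced with very different budgets.
problems = [(100, 5), (80, 10), (50, 20), (30, 40), (10, 50)]
for name, budget in [("rich", 1000), ("middling", 40), ("very poor", 6)]:
    print(name, best_unsolved_ratio(budget, problems))
# rich      -> None (everything already solved)
# middling  -> 0.75
# very poor -> 8.0   (the poorest person's best unsolved problem has
#                     by far the highest benefit per donated dollar)
```

The point of the toy is only that, under these assumptions, a marginal donated dollar buys the most where budgets are smallest.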
What about non-human animals? Dunno. Difficult question that I'm not going to try to resolve here.
What about existential risk? The difficulty here is that naive calculations tend to suggest we should drop everything else and reduce existential risk (where this is taken in a maybe-unusual sense that includes the "risk" that the human race endures for millions of years, but never engages in large-scale colonization of other planets or massive-scale uploading or other scenarios that produce colossal numbers of people), but this has a distinctly Pascal's-mugging flavour to it. Personally, I'm happy to discount future things and maybe even very distant things a little, and a little exponential discounting "tames" the argument that existential risk is overwhelmingly important.
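To make the "taming" explicit (my own back-of-envelope arithmetic, not anything from the x-risk literature): with exponential discounting at rate δ per year, even an unboundedly long future has finite total weight, and value realised a million years out is multiplied by a factor small enough to crush astronomical headcounts.

```latex
% Total weight of a perpetual stream of value v per year, discounted at rate \delta:
\int_0^\infty v \, e^{-\delta t} \, dt = \frac{v}{\delta}
\quad (\text{finite for any } \delta > 0)

% Even a tiny rate, \delta = 10^{-4}/\text{yr}, applied a million years out:
e^{-10^{-4} \times 10^{6}} = e^{-100} \approx 3.7 \times 10^{-44}

% so 10^{40} future people at t = 10^6\,\text{yr} weigh less than a
% hundredth of one present person, and the mugging loses its force.
```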
Should we focus on causes for which clear quantifiable benefits can be demonstrated? I'd love to answer yes; that would make things so much easier. But the error terms in the quantification always seem to be enormous, and I don't see any good reason to assume that the actually-best causes are all ones whose benefits are readily quantified and demonstrated.
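A quick illustration of why enormous error terms undermine rankings, using two hypothetical interventions with invented numbers and arbitrarily chosen lognormal noise:

```python
# Intervention A: modest, well-measured effect. Intervention B: bigger
# point estimate but a huge error term. All numbers are invented.
import random

random.seed(0)

def draw(point_estimate, sigma):
    # Multiplicative lognormal noise: typical error factor of ~e^sigma.
    return point_estimate * random.lognormvariate(0, sigma)

trials = 100_000
a_wins = sum(draw(10, 0.2) > draw(30, 2.0) for _ in range(trials))
print(f"A beats B in {a_wins / trials:.0%} of draws")
# Prints roughly 29%: despite B's point estimate being 3x higher, the
# ranking flips in more than a quarter of draws. With error terms this
# size, "clearly quantified" is a long way from "clearly best".
```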
Should we focus on small charities with low-hanging fruit? They're surely the easiest to quantify the benefits of, and it's likely that there are good opportunities there (and I believe GiveWell has found some). I am not altogether convinced that these are actually the best opportunities, but finding and evaluating others seems like a really hard problem. (Candidates include: larger charities whose economies of scale or greater credibility might make them more effective; political lobbying to try to redirect the huge resources of major governments; investment in for-profit enterprises that might bring big benefits to poor places.)
Does GiveWell give good advice? Conditional on my answers above, I'd say it does about as well as I can see any plausible way of doing given the resources available to them, and I don't know of anyone else doing better.
Is "earning to give" better than working directly on doing good? I would expect this to vary a lot. If you are able to earn a good salary, wouldn't be much more effective at a charitable organization than other people they can afford to hire, and don't have exceptional skills directly applicable to doing good for the neediest people, then earning to give seems like a very good bet. If (e.g.) investment in carefully-chosen for-profit enterprises is actually a better way of doing good, that might be even better (though you should then consider whether you should give them $X rather than investing in them and getting $X less in expected return than you would for whatever investments you'd have made purely selfishly).