In this thread, I would like to invite people to summarize their attitude toward Effective Altruism, summarize the justification for that attitude, and identify the framework or perspective they're using.
I initially prepared an article for a discussion post (which got rather long) and realized it was written from a starkly utilitarian value system with capitalistic economic assumptions. I'm interested in exploring the possibility that I'm unjustly mindkilling EA.
I've posted my write-up as a comment to this thread so it doesn't get more airtime than anyone else's summary, and so everyone can benefit equally from the contrasting views.
I encourage anyone who participates to write up their summary and identify their perspective BEFORE they read the others, so that the contrast can be most plain.
Effective Altruism is a well-intentioned but flawed philosophy. This is a critique of typical EA approaches, but it might not apply to all EAs, or to alternative EA approaches.
Edit: In a follow up comment, I clarify that this critique is primarily directed at GiveWell and Peter Singer's styles of EA, which are the dominant EA approaches, but are not universal.
There is no good philosophical reason to hold EA's axiomatic style of utilitarianism. EA seems to value all lives equally, but this is implausible from psychology (which values relatives and friends more), and also implausible from non-naive consequentialism, which values people based on their contributions, not just their needs.
Even if you agree with EA's utilitarianism, it is unclear that EA actually optimizes for it over a longer time horizon. EA focuses on maximizing lives saved in the present, but it has never been shown that this approach is optimal for human welfare over the long run. The existential-risk strand of EA handles this better, but its focus is too far off.
If EA is true, then moral philosophy is a solved problem. I don't think moral philosophy works that way. Values are much harder than EA gives them credit for. Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.
EA has an opportunity cost, and its confidence is crowding out better ideas. What would those better altruistic interventions be? I don't know, but I feel like we can do better.
EAs have a weak understanding of geopolitics and demographics. The current state of the world is that Western Civilization, the goose that laid the golden egg, is declining. If Western Civilization is indeed in trouble, and we are facing near- or medium-term catastrophic risks like social collapse, turning into Brazil, or war with Russia or China, then the highest-value opportunities for altruism will be at home. Unless you expect a hard-takeoff AI scenario or technological miracles in the near term, we should be very worried about geopolitics, demographics, and civilization in the medium and long term.
If Western Civilization collapses, or is overtaken by China, that will not be a good future for human welfare. Averting this possibility is far higher-impact than anything else EAs are currently doing. If the West is secure and abundant, then maybe EAs have the right idea in redistributing wealth out of the West. But if the West is precarious and fragile, then redistribution makes less sense, and addressing the risks in the West seems more important.
EAs do not understand demographics, or are not taking them seriously if they do. The West is currently faltering in fertility and undergoing population replacement by people from regions with higher crime and corruption. Meanwhile, altruism itself varies between populations with clannishness and inbreeding. We are heading toward a future that is demographically more clannish and less altruistic.
Some EAs are open-borders advocates, but open borders is a ridiculously dangerous experiment for the West. They have not satisfactorily accounted for the crime and corruption that immigrants may bring. Additionally, under democracy, immigrants can vote and change the culture. Open-borders advocates hope that institutions will survive, but they have provided no good argument that Western institutions will survive rapid demographic change. Institutions might seem fine and then collapse rapidly and non-linearly. If Western Civilization collapses into ethnic turmoil or Soviet-style sclerosis, humans everywhere will suffer.
Some EAs are skeptical of parenthood because it diverts money from charity, believing that EAs are easier to convert than to create. In some cases, EAs who want children justify parenthood as an unprincipled exception. This whole conversation is ridiculous and exemplifies EAs' flawed moral philosophy and understanding of humans. Altruistic parents are likely to have altruistic children due to the heritability of behavioral traits. If altruistic people fail to reproduce, they will take their altruistic genes to the grave with them, like the Shakers. If altruism itself is a casualty of changing demographics, then human welfare will suffer in the future. (If you doubt this can happen, check out the earlier two links, and good luck getting Eastern Europeans or Middle Easterners interested in EA.)
I don't think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments; see the interest of many EAs in open borders and animal rights. I do not see much understanding within EA of what altruism is and how it can become pathological. Pathological altruism is when people become practically addicted to the feeling of doing good, which sometimes leads them to act with negative consequences. The book reviewed there shows some of the difficulty of disentangling moral psychology from moral philosophy.
Some people have strong intuitions toward altruism or animal rights, but it is another thing entirely to say that the arguments for those positions are philosophically strong. People who are biologically predisposed toward altruism will be motivated to find philosophical arguments that justify what they already want to do, and I don't think EAs have corrected for this bias. If EAs' arguments are flawed, then their adoption of them must be explained by moral intuitions or signaling desires. Since EA provides great opportunities to signal altruism, intelligence, and discernment, there is a gigantic temptation for some personalities to get into EA and exaggerate the quality of its arguments, or to adopt its axioms even though other axioms are possible. Even though EAs employ reason and philosophy, unlike typical pathological altruists, moral philosophy is subjective, and the choice of a particular moral theory seems highly related to personality.
The other psychological bias of EAs comes from getting nerd-sniped: defining problems narrowly, or picking problems that are easier to solve and charities that are possible to evaluate. They seem to take for granted that they should give away some of their money to charity, so the only remaining question is which evaluable charity is best. In an inconvenient world for an altruist, the high-value opportunities are unknown or unknowable, throwing your money at what seems best might have a negligible or negative effect, and keeping your money in your piggy bank until more obvious opportunities emerge might make the most sense.
EA isn't all bad. It's probably better than typical ineffective charities, so if you absolutely must give to a charity, effective charities are probably better. EAs have the right idea in trying to evaluate charities, and many EA arguments are strong within the bounds of utilitarianism, or within the confines of a particular problem. But EAs have a hard road toward justification, because their philosophy advocates spending money on strong moral claims, and being wrong about important facts about the world will throw their results off entirely.
My criticisms here don't apply to all EAs or all possible EA approaches, just to the median EA arguments and interventions I've seen. It is conceivable that EA will become more persuasive to a larger group of people in the future, once it has greater knowledge about the world and incorporates that knowledge into its philosophy. An alternative approach to EA would focus on preserving Western Civilization and avoiding medium-term political and demographic catastrophes. But nobody is sufficiently knowledgeable at this point to know how to spend money toward that goal.
Interesting that the solutions you're jumping to are about defending the 'West' and beating the South/East, rather than working with the South/East to make sure the best of both is shared?