Clarity comments on Effective Altruism from XYZ perspective - Less Wrong

4 Post author: Clarity 08 July 2015 04:34AM




Comment author: Journeyman 10 July 2015 02:20:18AM *  7 points [-]

Effective Altruism is a well-intentioned but flawed philosophy. This is a critique of typical EA approaches, but it might not apply to all EAs, or to alternative EA approaches.

Edit: In a follow up comment, I clarify that this critique is primarily directed at GiveWell and Peter Singer's styles of EA, which are the dominant EA approaches, but are not universal.

  • There is no good philosophical reason to hold EA's axiomatic style of utilitarianism. EA seems to value lives equally, but this is implausible from psychology (which values relatives and friends more), and also implausible from non-naive consequentialism, which values people based on their contributions, not just their needs.

  • Even if you agree with EA's utilitarianism, it is unclear that EA is actually effective at optimizing for it over a longer time horizon. EA focuses on maximizing lives saved in the present, but it has never been shown that this approach is optimal for human welfare over the long run. The existential risk strand of EA gets this better, but its focus is too far off in the future.

  • If EA is true, then moral philosophy is a solved problem. I don't think moral philosophy works that way. Values are much harder than EA gives credit for. Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.

  • EA has an opportunity cost, and its confidence is crowding out better ideas. What would those better altruistic interventions be? I don't know, but I feel like we can do better.

  • EAs have a weak understanding of geopolitics and demographics. The current state of the world is that Western Civilization, the goose that laid the golden egg, is declining. If indeed Western Civilization is in trouble, and we are facing near or medium-term catastrophic risks like social collapse, turning into Brazil, or war with Russia or China, then the highest-value opportunities for altruism will be at home. Unless you think we have a hard-takeoff AI scenario or technological miracles in the near-term, we should be very worried about geopolitics, demographics, and civilization in the medium-term and long-term.

  • If Western Civilization collapses, or is overtaken by China, then that will not be a good future for human welfare. Averting this possibility is far more high-impact than anything else that EAs are currently doing. If the West is secure and abundant, then maybe EAs have the right idea by redistributing wealth out of the West. But if the West is precarious and fragile, then redistribution makes less sense, and addressing the risks in the West seems more important.

  • EAs do not understand demographics, or are not taking them seriously if they do. The West is currently faltering in fertility and undergoing population replacement by people from areas with higher crime and corruption. Meanwhile, altruism itself varies between populations based on clannishness and inbreeding. We are heading towards a future that is demographically more clannish and less altruistic.

  • Some EAs are open borders advocates, but open borders is a ridiculously dangerous experiment for the West. They have not satisfactorily accounted for the crime and corruption that immigrants may bring. Additionally, under democracy, immigrants can vote and change the culture. Open border advocates hope that institutions will survive, but they have provided no good arguments that Western institutions will survive rapid demographic change. Institutions might seem fine and then rapidly collapse in a non-linear way. If Western Civilization collapses into ethnic turmoil or Soviet sclerosis, then humans everywhere will suffer.

  • Some EAs have a skeptical attitude towards parenthood, because it takes away money from charity, and believe that EAs are easier to convert than create. In some cases, EAs who want to become parents justify parenthood as an unprincipled exception. This whole conversation is ridiculous and exemplifies EAs’ flawed moral philosophy and understanding of humans. Altruistic parents are likely to have altruistic children due to the heritability of behavioral traits. If altruistic people fail to breed, then they will take their altruistic genes to the grave with them, like the Shakers. If altruism itself is a casualty of changing demographics, then human welfare will suffer in the future. (If you doubt this can happen, then check out the earlier two links, and good luck getting Eastern Europeans or Middle-Easterners interested in EA.)

  • I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments; see the interest of many EAs in open borders and animal rights. I do not see much understanding within EA of what altruism is and how it can become pathological. Pathological altruism is where people become practically addicted to a feeling of doing good, which sometimes leads them to act with negative consequences. A quote from the book in that review, which shows some of the difficulty of disentangling moral psychology from moral philosophy:

Despite the fact that a moral conviction feels like a deliberate rational conclusion to a particular line of reasoning, it is neither a conscious choice nor a thought process. Certainty and similar states of ‘knowing that we know’ arise out of primary brain mechanisms that, like love or anger, function independently of rationality or reason. . . .

What feels like a conscious life-affirming moral choice—my life will have meaning if I help others—will be greatly influenced by the strength of an unconscious and involuntary mental sensation that tells me that this decision is “correct.” It will be this same feeling that will tell you the “rightness” of giving food to starving children in Somalia, doing every medical test imaginable on a clearly terminal patient, or bombing an Israeli school bus. It helps to see this feeling of knowing as analogous to other bodily sensations over which we have no direct control.

It seems that some people have strong intuitions towards altruism or animal rights, but it’s another thing entirely to say that those arguments are philosophically strong. It seems that people who are biologically predisposed towards altruism will be motivated to find philosophical arguments that justify what they already want to do. I don’t think EAs have corrected for this bias. If EAs’ arguments are flawed, then their adoption of them must be explained by their moral intuitions or signaling desires. Since EA provides great opportunities to signal altruism, intelligence, and discernment, it seems that there would be a gigantic temptation for some personalities to get into EA and exaggerate the quality of its arguments, or adopt its axioms even though other axioms are possible. Even though EAs employ reason and philosophy unlike typical pathological altruists, moral philosophy is subjective, and choice of particular moral theories seems highly related to personality.

The other psychological bias of EAs comes from getting nerd-sniped: narrowly defining problems, or picking problems that are easier to solve and charities that are possible to evaluate. They seem to take for granted that they should give away some of their money to charity, so the only question left is which evaluable charity is best. In a world inconvenient for an altruist, the high-value opportunities are unknown or unknowable, throwing your money at what seems best might have a negligible or negative effect, and keeping your money in your piggy bank until more obvious opportunities emerge might make the most sense.

EA isn’t all bad. It’s probably better than typical ineffective charities, so if you absolutely must give to a charity, then effective charities are probably better. EAs have the right idea in trying to evaluate charities. Many EA arguments are strong within the bounds of utilitarianism, or within the confines of a particular problem. But EAs have a hard road toward justification, because their philosophy advocates spending money on strong moral claims, and being wrong about important facts about the world will throw off their results.

My criticisms here don't apply to all EAs or all possible EA approaches, just the median EA arguments and interventions I've seen. It is conceivable that in the future EA will become more persuasive to a larger group of people once it has greater knowledge about the world and incorporates that knowledge into its philosophy. A neoreactionary approach to EA would focus on preserving Western Civilization and avoiding medium-term political/demographic catastrophes. But nobody is sufficiently knowledgeable at this point to know how we could spend money towards this goal.

Comment author: Clarity 10 July 2015 03:15:14AM *  4 points [-]

If anyone's skimming through these comments, it's worthwhile noting that most of my original ideas as seen in my top-level comment have been thoroughly refuted.

tl;dr - My perspective is echoed on Marginal Revolution:

‘Of course, there are systematic problems with charitable giving. Most importantly, the feedback mechanism is never going to work as well when people are buying something to be consumed by others (as Milton Friedman explains)’ –

Those criticisms that remain, along with many stronger points of contention, are explained far more eloquently and independently in Journeyman's critique here.

Anyhow, I don't like the movement's branding, which is essentially its core feature; the community would probably reorganise around a new brand anyway. Altruism is fictional, hypothetical; it doesn't exist.

It has been observed, however, that the very act of eating (especially, when there are others starving in the world) is such an act of self-interested discrimination. Ethical egoists such as Rand who readily acknowledge the (conditional) value of others to an individual, and who readily endorse empathy for others, have argued the exact reverse from Rachels, that it is altruism which discriminates: "If the sensation of eating a cake is a value, then why is it an immoral indulgence in your stomach, but a moral goal for you to achieve in the stomach of others?"

It is therefore altruism which is an arbitrary position, according to Rand.

  • W. Pedia.
Comment author: Randaly 12 July 2015 11:38:06AM 1 point [-]

Thanks, this helped me!