Davidmanheim comments on A critique of effective altruism - Less Wrong

Post author: benkuhn 02 December 2013 04:53PM | 64 points




Comment author: CarlShulman 01 December 2013 11:29:34PM 61 points

Disclaimer: I like and support the EA movement.

I agree with Vaniver that it would be good to give more time to arguments that the EA movement is going to do large net harm. You touch on this a bit with the discussion of Communism and moral disagreement within the movement, but one could go further. Some speculative ways in which the EA movement could have bad consequences:

  • The EA movement, driven by short-term QALYs, pulls effort away from shaping science and policy in rich countries (which have long-term impacts) and toward briefly alleviating problems for poor humans and animals
  • AMF-style interventions increase population growth and lower average world income and education, which leads to fumbling of long-run trajectories or existential risk
  • The EA movement screws up population ethics and the valuation of different minds in such a way that it doesn't just fail to find good interventions, but pursues actively terrible ones (e.g. making things much worse by trading off human and ant conditions wrongly)
  • Even if the movement mostly does not turn towards promoting bad things, it turns out to be easier to screw things up than to help, and foolish proponents of conflicting sub-ideologies collectively make things worse for everyone, prisoner's-dilemma style; you see this in animal activists enthused about increasing poverty to reduce meat consumption, or poverty activists happy to create huge deadweight GDP losses as long as resources are transferred to the poor
  • Something like explicit hedonistic utilitarianism becomes an official ideology somewhere, in the style of Communist states (even though the members don't really embrace it in full on every matter, they nominally endorse it as universal and call their contrary sentiments weakness of will); the doctrine implies that all sentient beings should be killed and replaced by some kind of simulated orgasm-neurons and efficient caretaker robots (or that much potential value should otherwise be sacrificed in the name of a cramped conception of value), and society is pushed in this direction by a tragedy of the commons; also, see Robin Hanson
  • Misallocating a huge mass of idealists' human capital toward donations for easily measurable things and away from more effective work elsewhere sabotages more effective do-gooding, for a net worsening of the world
  • The EA movement gets into politics and can't clearly evaluate various policies with huge upside and downside potential because of ideological blinders, and winds up with a massive net downside
  • The EA movement finds extremely important issues, and then turns the public off from them with its fanaticism, warts, or fumbling, so that it would have been better to have left those issues to other institutions
Comment author: Davidmanheim 03 December 2013 05:44:07PM 3 points

Many of these issues seem related to Arrow's impossibility theorem: if groups have genuinely different values and we optimize for one set rather than another, ants get tiny apartments and people starve, or we destroy the world economy because we discount too much, etc.
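A minimal sketch of the failure mode Arrow's theorem generalizes, with illustrative ballots rather than anything from this thread: given only ordinal rankings, pairwise majority voting can produce a cycle, so there is no coherent group ordering left to optimize for.

```python
# Condorcet cycle: majority aggregation of ordinal rankings is intransitive.
ballots = [
    ("A", "B", "C"),  # voter 1: A over B over C
    ("B", "C", "A"),  # voter 2: B over C over A
    ("C", "A", "B"),  # voter 3: C over A over B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank option x above option y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

# A beats B, B beats C, and C beats A: the group "preference" is a cycle.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
```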

To clarify: I think LessWrong treats most issues as simple because we know little about them; we want to just fix them. As an example, poverty remains unsolved for good reasons: it's hard to balance incentives and growth and to deal with heterogeneity; there are absolute limits on current wealth and the ability to move it around; and nations and individuals have competing priorities. It's not unsolved because people are too stupid to give money to "feed the poor" charities. We underestimate the rest of the world because we're really good at one thing and think everyone is stupid for not being good at it - and even if we're right, we're not good at (understanding) many other things, and some of those things matter for fixing these problems.

Comment author: homunq 21 December 2013 07:23:09PM 4 points

Note: Arrow's Impossibility Theorem is not actually a serious philosophical hurdle for a utilitarian (though related issues such as the Gibbard-Satterthwaite theorem may be). That is to say: it is absolutely trivial to create a social utility function which meets all of Arrow's "impossible" criteria, if you simply allow cardinal instead of just ordinal utility. (Arrow's theorem is based on a restriction to ordinal cases.)
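A minimal sketch of homunq's point, assuming voters report cardinal utilities on a common 0-1 scale (the function name and numbers here are illustrative, not from the thread): simply summing utilities yields a social ranking that meets the usual readings of Arrow's criteria, because each option's total depends only on that option's own utilities.

```python
def social_ranking(utilities):
    """Rank options by total cardinal utility across voters.

    Each option's total depends only on that option's own utilities,
    so adding or removing other options never reorders the rest
    (independence of irrelevant alternatives); unanimous preferences
    are respected (Pareto); and no single voter fixes the outcome
    (non-dictatorship).
    """
    totals = {option: sum(us) for option, us in utilities.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Illustrative cardinal utilities for three voters over three options.
utilities = {
    "A": [0.9, 0.1, 0.5],
    "B": [0.4, 0.8, 0.6],
    "C": [0.2, 0.7, 0.3],
}
print(social_ranking(utilities))  # ['B', 'A', 'C']
```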

Comment author: Davidmanheim 13 January 2014 05:32:12PM 0 points

Thank you for the clarification. Despite this, cardinal utility is difficult because it assumes we can weigh different people's preferences equally, or by definably different amounts; that is, it requires interpersonal comparability of utilities.

Unless there is a commodity that can adequately represent preferences (like money) and a fair redistribution mechanism, we still have problems maximizing overall welfare.

Comment author: homunq 16 March 2014 12:43:09PM 1 point

No argument here. It's hard to build a good social welfare function in theory (i.e., even if you can assume away information limitations), and harder in practice (with people actively manipulating it; see the sketch below). My point was that it is a mistake to think that Arrow showed it was impossible.
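A toy illustration of the "harder in practice" half, with made-up numbers: under summed cardinal scores, a voter who exaggerates to the ends of the scale can flip the outcome, which is the strategic pressure that Gibbard-Satterthwaite-style results formalize.

```python
def winner(ballots):
    """Return the option with the highest total reported score."""
    totals = {}
    for ballot in ballots:
        for option, score in ballot.items():
            totals[option] = totals.get(option, 0.0) + score
    return max(totals, key=totals.get)

honest = [
    {"A": 0.6, "B": 0.5},  # voter 1 mildly prefers A
    {"A": 0.4, "B": 0.9},  # voter 2 strongly prefers B
]
print(winner(honest))  # B wins (A: 1.0, B: 1.4)

# Voter 1 keeps the same ordinal preference but exaggerates the scores,
# flipping the result in their favor.
strategic = [
    {"A": 1.0, "B": 0.0},
    {"A": 0.4, "B": 0.9},
]
print(winner(strategic))  # A wins (A: 1.4, B: 0.9)
```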

(Also: I appreciate the "thank you", but it would feel more sincere if it came with an upvote.)

Comment author: Davidmanheim 10 February 2015 06:04:00AM 0 points

I had upvoted you. Also, I used Arrow as shorthand for that class of theorems, since they all show that a class of group decision problems is unsolvable - mostly because I can never remember how to spell Satterthewaite.