benkuhn comments on A critique of effective altruism - LessWrong

64 Post author: benkuhn 02 December 2013 04:53PM




Comment author: benkuhn 02 December 2013 03:28:48AM *  1 point

That deflates that criticism. As for the object-level social dynamics problem: I think that people will not actually care about those problems unless they are incentivised to do so, and it's not clear to me that that is possible.

Is epistemology the real failing here? This may just be the communism analogy talking, but I'm not seeing how the incentive structure of EA is lined up with actually getting things done rather than pretending to get things done. Do you have a good model of the incentive structure of EA?

I don't think EA has to worry about incentive structure in the same way that communism does, because EA doesn't want to take over countries (well, if it does, that's a different issue). Fundamentally we rely on people deciding to do EA on their own, and thus having at least some sort of motivation (or, like, coherent extrapolated motivation) to actually try. (Unless you're arguing that EA is primarily people who are doing it entirely for the social feedback from people and not at all out of a desire to actually implement utilitarianism. This may be true; if it is, it's a separate problem from incentives.)

The problem is more that this motivation gets co-opted by social-reward-seeking systems, and we aren't aware of it when that happens. One way to fix this is to fix incentives, it's true, but another way is to fix the underlying problem of responding to social incentives when you intended to actually implement utilitarianism. Since EA started precisely to fix the latter problem (e.g. people responding to social incentives by donating to the Charity for Rare Diseases in Cute Puppies), I think that route is likely to be the better solution, and to involve fewer epicycles (of the form where we have to consciously fix incentives again whenever we discover other problems).

I'm also not entirely sure this makes sense, though, because as I mentioned, social dynamics isn't a comparative advantage of mine :P

(Responding to the meta-point separately because yay threading.)

Comment author: CarlShulman 02 December 2013 04:37:56AM *  13 points

I don't think EA has to worry about incentive structure in the same way that communism does, because EA doesn't want to take over countries (well, if it does, that's a different issue)

GiveWell is moving into politics and advocacy; 80,000 Hours has people going into politics; and GWWC principals like Toby Ord do a lot of advocacy with governments and international organizations, and have looked at aid advocacy groups.

Comment author: Strange7 14 December 2013 09:50:12AM 1 point

In a more general sense, telling some large, ideologically cohesive group of people to take as much of their money as they can stand to part with and throw it all at some project, and expecting them to obey, seems like an intrinsically political act.

Comment author: Vaniver 02 December 2013 04:01:26AM 8 points

Unless you're arguing that EA is primarily people who are doing it entirely for the social feedback from people and not at all out of a desire to actually implement utilitarianism. This may be true; if it is, it's a separate problem from incentives.

I think that the EA system will be both more robust and more effective if it is designed with the assumption that the people in it do not share the system's utility function, but that win-win trades are possible between the system and the people inside it.

Comment author: MichaelVassar 04 December 2013 05:15:20PM 4 points

I think that attempting effectiveness points towards a strong attractor of taking over countries.

Comment author: ColonelMustard 02 December 2013 04:57:57AM *  9 points

EA doesn't want to take over countries

"Take over countries" is such an ugly phrase. I prefer "country optimisation".

Comment author: atucker 02 December 2013 06:22:04AM *  2 points

Social feedback is an incentive, and the bigger the community gets the more social feedback is possible.

Insofar as utilitarianism is weird, negative social feedback is a major reason to avoid acting on it, so early EAs must have been very strongly motivated to implement utilitarianism in order to overcome that feedback. As the community gets bigger, EA becomes less weird and there is more positive support, so acting on it is less of a social feedback hit.

This is partially good, because it makes it easier to "get into" trying to implement utilitarianism, but it's also bad because it means that newer EAs need to care about utilitarianism relatively less.

Saying that incentives don't matter as long as you remove social-approval-seeking seems to ignore the question of why the remaining incentives would push people towards actually trying.

It's also unclear what's left of the incentives holding the community together after you remove the social ones. Yes, talking to each other probably does make it easier to implement utilitarian goals, but at the same time it seems that the accomplishment of utilitarian goals is not in itself a sufficiently powerful incentive; otherwise there wouldn't be effectiveness problems to begin with. If it were, then EAs would just be incentivized to effectively pursue utilitarian goals.