gjm comments on Open thread, Jun. 13 - Jun. 19, 2016 - Less Wrong

2 Post author: MrMind 13 June 2016 06:57AM




Comment author: Daniel_Burfoot 13 June 2016 11:46:44PM 9 points

I see in the "Recent on Rationality Blogs" panel an article entitled "Why EA is new and obvious". I'll take that as a prompt to list my three philosophical complaints about EA:

  • I believe in causality as a basic moral concept. My ethical system absolutely requires me to avoid hurting people, but is much less adamant about helping people. While some people claim to be indifferent to this distinction, in practice people's revealed moral preferences suggest that they agree with me (certainly the legal system agrees with me).
  • I also believe in locality as an ontologically primitive moral issue. I am more morally obligated to my mother than to a random stranger in Africa. Finer gradations are harder to tease out, but I still feel more obligation to a fellow American than to a citizen of another country, ceteris paribus.
  • I do not believe a good ethical system should rely on moral exhortation, at least not to the extent that EA does. Such systems will never succeed in solving the free-rider problem. The best strategy to produce ethical behavior is simply to appeal to self-interest, by offering people membership in a community that confers certain benefits, if the person is willing to follow certain rules.
Comment author: gjm 14 June 2016 01:25:18PM -2 points

The best strategy to produce ethical behavior is simply to appeal to self-interest

This is only true of ethical behaviours that can be produced by appealing to self-interest. That might not be all of them. I don't see how you can claim to know that the best strategies are all in this category without actually doing the relevant cost-benefit calculations.

Comment author: Daniel_Burfoot 14 June 2016 03:20:53PM 1 point

the relevant cost-benefit calculations.

My claim is based on historical analysis. Historically, the ideas that benefit humanity the most in the long term are things like capitalism, science, the criminal justice system, and (to a lesser extent) democracy. These ideas are all based on aligning individual self-interest with the interests of the society as a whole.

Moral exhortation, it must be noted, also has a hideous dark side, in that it delineates an ingroup/outgroup distinction between those who accept the exhortation and those who reject it, and that distinction is commonly used to justify violence and genocide. Judaism, Christianity and Islam are all based on moral exhortation, and all were used in history to justify atrocities against the infidel outgroup. The same is true of communism. Hitler spent a lot of time on his version of moral exhortation. The French revolutionaries had an inspiring creed of "liberty, equality and fraternity", and then used that creed to justify astonishing bloodshed, first within France and then throughout Europe.

Comment author: gjm 15 June 2016 03:02:39PM -2 points

I find your list of historical examples less than perfectly convincing. The single biggest success story there is probably science, but (as ChristianKl has also pointed out) science is not at all "based on aligning individual self-interest with the interests of the society as a whole"; if you asked a hundred practising scientists and a hundred eminent philosophers of science to list twenty things each that science is "based on" I doubt anything like that would appear in any of the lists.

(Nor, for that matter, is science based on pursuing the interests of others at the cost of one's own self-interest. What you wrote is orthogonal to the truth rather than opposite.)

I do agree that when self-interest can be made to lead to good things for everyone it's very nice, and I don't dispute your characterization of capitalism, criminal justice, and democracy as falling nicely in line with that. But it's a big leap from "there are some big examples where aligning people's self-interest with the common good worked out well" to "a good moral system should never appeal to anything other than self-interest".

Yes, moral exhortation has sometimes been used to get people to commit atrocities, but atrocities have been motivated by self-interest from time to time too. (And ... isn't your main argument against moral exhortation that it's ineffective? If it turns out to be a more effective way to get people to commit atrocities than appealing to self-interest is, doesn't that undermine that main argument?)

Comment author: bogus 15 June 2016 08:29:41PM 2 points

The distrust of individual scholars found in science is in fact an example of aligning individual incentives, by making success and prestige dependent on genuine truth-seeking.

But it's a big leap from "there are some big examples where aligning people's self-interest with the common good worked out well" to "a good moral system should never appeal to anything other than self-interest".

The claim is not so much that moral appeals should never be used, but that they should only be used when strictly necessary, once incentives have been aligned to the greatest possible extent. Promoting efficient giving is an excellent example, but moral appeals are of course also relevant on the very small scale. Effective altruists are in fact very good at using self-interest as a lever for positive social change whenever possible - this is the underlying rationale for the 'earning to give' idea, as well as for the attention paid to extreme poverty in undeveloped countries.

Comment author: ChristianKl 16 June 2016 02:47:05PM 0 points

The distrust of individual scholars found in science is in fact an example of aligning individual incentives, by making success and prestige dependent on genuine truth-seeking.

Scientists generally do trust scientific papers not to lie about the results they report.

Even an organisation like the FDA frequently gives companies the presumption of correct data reporting, as the Ranbaxy case demonstrated.

Comment author: Daniel_Burfoot 16 June 2016 02:15:13AM 0 points

"I think I've been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I've underestimated it. And never a year passes but I get some surprise that pushes my limit a little farther." (Charlie Munger)

Comment author: ChristianKl 16 June 2016 02:40:16PM 0 points

His favorite example is Federal Express. Of course, in a business like Federal Express, self-interest incentives are the biggest driver of performance.

That doesn't mean that they are the biggest driver in a project like Wikipedia.

Comment author: ChristianKl 14 June 2016 03:44:14PM 1 point

Historically, the ideas that benefit humanity the most in the long term are things like capitalism, science, the criminal justice system, and (to a lesser extent) democracy. These ideas are all based on aligning individual self-interest with the interests of the society as a whole.

What does science have to do with self-interest? Making one's claims in a way that others can falsify them isn't normally in people's self-interest.

Science appeals to the sacred value of truth to keep people from publishing fabricated data. If it didn't, and people faked data whenever doing so was in their self-interest, the scientific system wouldn't get anywhere.