Viliam_Bur comments on Open thread, Jun. 13 - Jun. 19, 2016 - Less Wrong

2 Post author: MrMind 13 June 2016 06:57AM




Comment author: Daniel_Burfoot 13 June 2016 11:46:44PM 9 points

I see in the "Recent on Rationality Blogs" panel an article entitled "Why EA is new and obvious". I'll take that as a prompt to list my three philosophical complaints about EA:

  • I believe in causality as a basic moral concept. My ethical system absolutely requires me to avoid hurting people, but is much less adamant about helping people. While some people claim to be indifferent to this distinction, in practice people's revealed moral preferences suggest that they agree with me (certainly the legal system agrees with me).
  • I also believe in locality as an ontologically primitive moral issue. I am more morally obligated to my mother than to a random stranger in Africa. Finer gradations are harder to tease out, but I still feel more obligation to a fellow American than to a citizen of another country, ceteris paribus.
  • I do not believe a good ethical system should rely on moral exhortation, at least not to the extent that EA does. Such systems will never solve the free-rider problem. The best strategy for producing ethical behavior is simply to appeal to self-interest: offer people membership in a community that confers certain benefits, provided they are willing to follow certain rules.
Comment author: Viliam_Bur 14 June 2016 10:14:01PM 1 point

If we look at this issue from the angle of "ethics is a memetic system evolved through cultural group selection", then I guess it makes sense that (1) systems promoting helping your own cultural group would have an advantage over systems promoting helping everyone to the same degree, and (2) systems that let people reach an "ethical enough" state reasonably fast would have an advantage over systems where no one can realistically become "ethical enough".

The problem appears when someone tries to extrapolate that concept.

I am not sure how to answer the question "should we extrapolate our ethical concepts?", because that "should" is itself within the domain of ethics, and the question is precisely about whether that "should" should also be extrapolated.