gjm comments on Pluralistic Moral Reductionism - Less Wrong

Post author: lukeprog 01 June 2011 12:59AM




Comment author: lukstafi 02 June 2011 12:42:37PM 0 points

What: discussion of the "social contract" aspect of ethics, for example the right not to have one's options (sets of actions) constrained beyond some threshold X, the question of what that threshold should be, and e.g. the property that actions which infringe the right-X of others are forbidden-X.

Why should there be more of that on LW: (1) it is an equally practical and important aspect of ethics as the self-help aspect; (1) it seems to be simpler than determining optimal well-being conditions.

Comment author: gjm 02 June 2011 01:18:44PM 0 points

Your first #1 doesn't seem to me to be a good justification for having more of it on LW. Lots of things are practical and important but don't belong on LW.

Your second #1 seems to me wrong; deciding what's actually right and wrong is very much not "simpler than determining optimal well-being conditions", for the following reasons.

(a) It's debatable whether the question is even meaningful (since many people here are moral nonrealists or relativists of one sort or another).

(b) There is no obvious way to reach agreement on what actually determines what's right and what's wrong. Net preference satisfaction? The will of a god? Obeying some set of ethical principles somehow built into the structure of the universe? Or what?

(c) Most of the theories held by moral realists about what actually matters make it extraordinarily difficult to determine, in hard cases, whether a given thing is right or wrong. Utilitarianism requires you to sum (or average, or something) the utilities of perhaps infinitely many beings, over a perhaps infinite extent of time and space. The theory Luke calls "desirism" requires you to work out the consequences of having many agents adopt any possible set of preferences. Intuitionist theories and divine-command theories make the details of what's right and wrong entirely inaccessible. Etc.

Now, perhaps in fact you have some specific meta-ethical theory in mind such that, if that theory is true, then the ethical calculations become manageable. In that case, you might want to say what that meta-ethical theory is and why you think it makes the calculations manageable :-).