Manfred comments on Convergence Theories of Meta-Ethics - Less Wrong

7 Post author: Perplexed 07 February 2011 09:53PM


Comment author: Manfred 12 February 2011 04:08:13AM 0 points

Or suppose you asked me to defend my claim, and I submitted mathematical proofs that rational agents cannot reach Pareto-optimal bargains unless payoffs, consequences, and actions are common knowledge among all participants in the bargain. These proofs are every bit as unchanging as '3 × 3 = 9', but are they also just as irrelevant?
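The claim about common knowledge and Pareto optimality can be illustrated with a toy simulation. This is a hedged sketch, not the proofs referenced above: it assumes a simple bilateral-trade setting in which a buyer's value and a seller's cost are private draws, and compares full-information bargaining against a fixed posted price (one crude stand-in for bargaining without common knowledge). The function name `trades` and the posted price of 0.5 are illustrative choices.

```python
import random

random.seed(0)

def trades(n, common_knowledge):
    """Count completed vs. mutually beneficial trades in a toy bargain.

    Buyer value v and seller cost c are drawn uniformly from [0, 1].
    - With common knowledge, the pair trades whenever v > c, capturing
      every mutually beneficial trade (Pareto optimal).
    - Without it, both sides fall back on a posted price p = 0.5 and trade
      only when that price is individually rational for each, so some
      beneficial trades (e.g. v = 0.4, c = 0.1) are missed.
    """
    completed = beneficial = 0
    for _ in range(n):
        v, c = random.random(), random.random()
        if v > c:
            beneficial += 1
        if common_knowledge:
            if v > c:
                completed += 1
        else:
            p = 0.5
            if v >= p >= c:
                completed += 1
    return completed, beneficial

full = trades(100_000, common_knowledge=True)      # completes every beneficial trade
partial = trades(100_000, common_knowledge=False)  # strictly fewer completions
```

In the full-information run the two counts coincide; in the posted-price run a substantial fraction of mutually beneficial trades fail, which is the inefficiency the cited proofs formalize.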

Well, they're relevant if you make a claim that morality should be certain things - but since that's awfully close to a moral claim, I'd say the argument is self-defeating. In fact, that sort of argument might be generalizable to show that this morality is unsupportable - not contradicted, but merely unsupported.

Comment author: Perplexed 12 February 2011 04:22:44AM 0 points

Hmmm. My understanding is that this is a meta-ethical claim; it answers the question of what morality is. Moral claims would answer questions like "What action, if any, does morality require of me?" in some given situation.

Your phrasing of 'what morality is' as 'what morality should be' strikes me as simply playing with words.

Comment author: Manfred 12 February 2011 03:57:45PM 0 points

If we ignore the object "morality" and just look at basic actions, your proposal about what morality is labels some actions as right and others as wrong (or good and bad, or moral and immoral). It's really by that standard that I call it a "moral claim," in a similar class to "it's immoral to kick puppies."

Comment author: Perplexed 12 February 2011 04:07:10PM 0 points

I guess I don't agree that my example claim says anything directly about which actions are moral and immoral. What it does is to suggest an algorithm for finding out. And the first step is to find out some empirical facts - for example, "What are puppies and how do people feel about them? If I kick puppies, will there be negative consequences in how other people treat me?"

ETA: Wikipedia seems to back me up on this distinction between metaethics and normative ethics:

A meta-ethical theory, unlike a normative ethical theory, does not attempt to evaluate specific choices as being better, worse, good, bad, or evil; although it may have profound implications as to the validity and meaning of normative ethical claims

Comment author: Manfred 12 February 2011 11:38:59PM 0 points

But your algorithm is evaluable - I guess I don't see the difference between "the no-kicking-puppies morality is correct" and "don't kick puppies."

Comment author: Perplexed 12 February 2011 11:53:21PM 0 points

I guess I don't see the difference between "the no-kicking-puppies morality is correct" and "don't kick puppies."

I don't see much difference either. But the algorithm I proposed says neither of those two things.

It says "If you want to know whether kicking puppies is moral, here is how to find out." The algorithm is the same for Americans, Laotians, BabyEaters, FAIs, uFAIs, and presumably Neanderthals from before the dog was domesticated from the wolf. The algorithm instructs the user to consider an idealized version of the society in which he is embedded.

Please consider the possibility that some executions of that algorithm might yield different results than did the execution which you performed, using your own society.
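The "same algorithm, different executions" point can be made concrete with a minimal sketch. Everything here is an illustrative assumption, not Perplexed's actual proposal: the function `is_moral`, the idea of representing an idealized society as a mapping from actions to social responses, and the example societies are all hypothetical stand-ins for the structure of the argument.

```python
def is_moral(action, society):
    """Hypothetical sketch: the evaluation procedure is fixed, but its
    verdict depends on empirical facts about the (idealized) society it
    is run against.

    `society` maps actions to that society's idealized response; an
    action counts as moral here unless the society sanctions it.
    """
    reaction = society.get(action, "indifferent")
    return reaction != "sanction"

# Two executions of the identical algorithm, differing only in input:
humans = {"kick_puppies": "sanction"}
babyeaters = {"kick_puppies": "indifferent"}
```

Run against `humans`, the algorithm condemns puppy-kicking; run against `babyeaters`, it does not. The algorithm itself makes no moral claim; each verdict comes from the empirical input.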

Comment author: Manfred 13 February 2011 12:02:21AM 0 points

Well, but then it's "kicking puppies is immoral if X." A conditional doesn't seem to change the fact that something is a moral claim. Hmm... or would it in some situations? I can't think of any. Oh, you could just rephrase it as "kicking puppies when X is immoral," which is more clearly a moral claim.

Comment author: wedrifid 13 February 2011 02:42:23AM 0 points

A conditional doesn't seem to change the fact that something is a moral claim. Hmm... or would it in some situations? I can't think of any.

The only exception is when something after the "if" directly or indirectly supplies the moral unit. Then it could be a mere logical claim - but most people will be unable to distinguish that from a moral claim anyway. The decision to apply an unambiguous, fully specified logical deduction based on a moral value is usually considered a moral judgement itself.

Comment author: Perplexed 13 February 2011 12:14:06AM 0 points

Apparently you and I interpret the quoted Wikipedia passage differently, and I don't see how to resolve it.

Nor, now that I think about it, do I see a reason why either of us should care. Why are we engaged in arguing about definitions? I am bowing out.