Vladimir_Nesov comments on Convergence Theories of Meta-Ethics - Less Wrong
Self-improvement (change) of any given explicit consideration, indeed overall decision problem, is possible, but it won't be a change to the mysterious notion of "morality" that normatively guides all of your decisions, for whatever it's good for.
So, if I am understanding you, you think that you and I are guided by some mysterious internal 'notion' of morality, a 'notion' which is incapable of changing. Some questions.
Does the value of (3 X 3) change when you change a calculator? Did it become 9 when the calculator was built, or before? And so on; the analogy breaks down for the same reason.
Ah! So this mysterious notion is (like '3 X 3 = 9') something "analytic a priori". Ok, suppose I made the following claim:
Now, further suppose that you disagree with my claim. On what grounds would you disagree? If you say "No, that is not morality!", what evidence or argument could you offer other than your own moral intuitions and those of the rest of mankind? I ask because those moral intuitions do not have the same analytic a priori character as '3 X 3 = 9'. And they can change.
Or suppose you asked me to defend my claim, and I submit mathematical proofs that rational agents cannot reach Pareto optimal bargains unless payoffs, consequences, and actions are common knowledge among every participant in the bargain. These proofs are every bit as unchanging as '3 X 3 = 9', but are they also just as irrelevant?
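The Pareto-optimality claim above can be made concrete. The following is a minimal illustrative sketch (not the proofs referenced, which are not given in the thread): a helper that finds the Pareto-optimal outcomes of a finite bargaining problem, where each outcome is a hypothetical payoff tuple with one entry per player.

```python
def pareto_optimal(outcomes):
    """Return the outcomes not Pareto-dominated by any other outcome."""
    def dominates(a, b):
        # a dominates b if a is at least as good for every player
        # and strictly better for at least one.
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [o for o in outcomes if not any(dominates(p, o) for p in outcomes)]

# Example: four candidate bargains between two players.
bargains = [(2, 2), (3, 1), (1, 3), (1, 1)]
print(pareto_optimal(bargains))  # (1, 1) drops out: it is dominated by (2, 2)
```

Note that computing this frontier already presupposes the common knowledge the proofs are said to require: every player must know the full payoff table.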
It doesn't seem to capture the social-signalling side of morality. Morality, in part, is a way for humans to show what goodie-two-shoes they are to other humans - who might be prospective mates, collaborators, or allies. That involves less self-interest - and more signalling unselfishness.
It doesn't seem to capture the "manipulation" side of morality very well either. Moral systems are frequently applied to get others to stop doing what you don't want them to do - by punishing, shaming, embarrassing, etc.
So, my assessment would be: incomplete hypothesis.
I don't see how this is responsive. You realize, don't you, that this discussion is proceeding under Nesov's stipulation that moral truth is a priori (like '3 X 3 = 9'). We are operating here under a stance of moral realism and ethical non-naturalism.
If your concept of morality doesn't fit into this framework, this is not the place for you to step in.
I thought you were talking about human morality. Checking back, that does appear to have been the context of the discussion.
Science has studied that topic, we have more to go on than intuition. An example of morality-as-signalling: Signaling Goodness: Social Rules and Public Choice.
Your idealisation makes signalling seem pointless - since everybody knows everything about the other players. Indeed, I don't really see the point of your model. You are not attempting to model very much of the biology involved. You asked for criticism - and that is an obvious one. Another criticism is that you present a model - but it isn't clear what it is for.
I was not.
Check again. Carefully.
I did not. I asked a question about Nesov's metaethical position, using that toy theory of ethics as an example. I asked what kinds of grounds might be used to reject the toy theory. (The grounds you suggest don't fit (IMHO) the metaethical stance Nesov had already committed to.)
Was I really so unclear? Please read the wikipedia entry on metaethics and reread the thread before responding, if you wish to respond.
Oh, and when I think back on the number of times you have inserted a comment about signaling into a discussion that seemed to be about something else entirely, I conclude that you really, really want to have a discussion with somebody, anybody on that topic. May I suggest that you produce a top-level posting explaining your ideas.
Well, they're relevant if you make a claim that morality should be certain things - but since that's awfully close to a moral claim, I'd say the argument is self-defeating. In fact, that sort of argument might be generalizable to show that this morality is unsupportable - not contradicted, but merely unsupported.
Hmmm. My understanding is that this is a meta-ethical claim; it answers the question of what morality is. Moral claims would answer questions like "What action, if any, does morality require of me?" in some given situation.
Your phrasing of 'what morality is' as 'what morality should be' strikes me as simply playing with words.
If we ignore the object "morality" and just look at basic actions, your proposal about what morality is labels some actions as right and others as wrong (or good and bad, or moral and immoral). It's really by that standard that I call it a "moral claim," in a similar class to "it's immoral to kick puppies."
I guess I don't agree that my example claim says anything directly about which actions are moral and immoral. What it does is to suggest an algorithm for finding out. And the first step is to find out some empirical facts - for example, "What are puppies and how do people feel about them? If I kick puppies, will there be negative consequences in how other people treat me?"
ETA: Wikipedia seems to back me up on this distinction between metaethics and normative ethics:
But your algorithm is evaluable - I guess I don't see the difference between "the no-kicking-puppies morality is correct" and "don't kick puppies."
I don't see much difference either. But the algorithm I proposed says neither of those two things.
It says "If you want to know whether kicking puppies is moral, here is how to find out." The algorithm is the same for Americans, Laotians, BabyEaters, FAIs, uFAIs, and presumably Neanderthals before wolves were domesticated into dogs. The algorithm instructs the user to consider an idealized version of the society in which he is embedded.
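The procedure just described can be caricatured in code. Everything here is hypothetical scaffolding: the "idealized society" is modeled as a simple lookup of how its members would react to an action, and the verdict is just whether the net reaction is non-negative.

```python
def is_moral(action, idealized_reactions):
    """Same procedure for any society: weigh the idealized members' reactions."""
    # Step 1: gather the empirical facts - how would people treat the agent?
    reactions = idealized_reactions.get(action, [])
    # Step 2: the verdict depends only on those consequences.
    return sum(reactions) >= 0

# Different "societies" can yield different verdicts for the same action.
society_a = {"kick_puppy": [-3, -2, -1]}
society_b = {"kick_puppy": [+1, +2]}   # e.g. a BabyEater-like society
print(is_moral("kick_puppy", society_a))  # False
print(is_moral("kick_puppy", society_b))  # True
```

The algorithm is indeed the same for every user; what varies is the empirical input, which is the point of the caution that follows.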
Please consider the possibility that some executions of that algorithm might yield different results than did the execution which you performed, using your own society.