Vladimir_Nesov comments on Strong moral realism, meta-ethics and pseudo-questions. - Less Wrong

18 [deleted] 31 January 2010 08:20PM




Comment author: komponisto 31 January 2010 09:12:59PM 11 points

I think there's an ambiguity between "realism" in the sense of "these statements I'm making are answers to a well-formed question and have a truth value" and "morality is a transcendent ineffable stuff floating out there which compels all agents to obey and could make murder right by having a different state".

Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It's what people are automatically going to think you're talking about if you go around shouting "Yes Virginia, there are moral facts after all!"

Meanwhile, the general public has a term for the view that you and I share: they call it "moral relativism".

I don't recall exactly, and I haven't yet bothered to look it up, but I believe that when you first introduced your metaethics, there were people (myself among them, I think) who objected, not to your actual meta-ethical views, but to the way you vigorously denied that you were a "relativist"; and you misunderstood them/us as objecting to your theory itself (I think you may even have thrown in an accusation of not comprehending the logical subtleties of Löb's Theorem).

What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans. Thus, it is automatically subject to the "chauvinism" objection with respect to e.g. Babyeaters: we prefer one thing, they prefer another -- why should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer". But people find that answer unpalatable -- and one reason they might is because it would seem to imply that different human cultures should similarly run right over each other if they don't think they share the same values. Now, we may not like the term "relativism", but it seems to me that this "chauvinism" objection is one that you (and I) need to take at least somewhat seriously.

Comment author: Vladimir_Nesov 01 February 2010 09:02:31AM 0 points

What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans.

Unfortunately, it's not that easy. An agent, taken by itself, doesn't determine a preference. It probably does so to a large extent, but not entirely. There is no subject matter of "preference" in general. "Human preference" is already a specific question that someone has to state; it doesn't magically appear from a given "human". A "human" might only help (I hope) to pinpoint the question precisely, if you start in the general ballpark of what you'd want to ask.

I suspect that "vague statement of human preference"+"human" is enough to get a precise question of "human preference", and that the method of using the agent's algorithm is general enough for e.g. "vague statement of human preference"+"babyeater" to get a precise question of "babyeater preference". But that's not a given, and the method isn't even expected to "work" for more alien agents, who are compelled by completely different kinds of questions (not that you'd have a way of recognizing such an "error").

The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself. What humans are is not the info that compels you to define human preference in a particular way, although what humans are may be used as a tool in the definition of human preference, simply because you can pull the right levers and point to the chunks of info that go into the definition you choose.

[W]hy should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer"

That's not a justification. They may turn out to be doing something right, where you were mistaken, and then you'll be compelled to correct yourself.

Comment author: komponisto 01 February 2010 11:17:00AM 0 points

The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself.

Yes.