Vladimir_Nesov comments on Strong moral realism, meta-ethics and pseudo-questions. - Less Wrong

18 [deleted] 31 January 2010 08:20PM


Comment author: Vladimir_Nesov 01 February 2010 09:02:31AM

What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans.

Unfortunately, it's not that easy. An agent, taken by itself, doesn't determine preference; it probably does so to a large extent, but not entirely. There is no subject matter of "preference" in general. "Human preference" is already a specific question that someone has to pose, one that doesn't magically appear from a given "human". A "human" might only help (I hope) to pinpoint the question precisely, provided you start in the general ballpark of what you'd want to ask.

I suspect that "Vague statement of human preference" + "human" is enough to yield a precise question of "human preference", and that the method of using the agent's algorithm is general enough that, e.g., "Vague statement of human preference" + "babyeater" yields a precise question of "babyeater preference". But that's not a given, and the method isn't even expected to "work" for more alien agents, who are compelled by completely different kinds of questions (not that you'd have any way of recognizing such an "error").

The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself. What humans are is not the info that compels you to define human preference in a particular way, although it may be used as a tool in that definition, simply because you can pull the right levers and point to the chunks of info that go into the definition you choose.

[W]hy should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer"

That's not a justification. They may turn out to do something right where you were mistaken, and then you'll be compelled to correct yourself.

Comment author: komponisto 01 February 2010 11:17:00AM

The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself.

Yes.