Here's the thing: given that there are pretty irreconcilable differences between terminal values, the question is what attitude you ought to take toward your own values. Are you going to say "Well, this is just my opinion," or are you going to say "My opponents are EVIL"? Are you going to treat your own values with an attitude of confidence, or with an attitude of timidity and provisionality? I think it's in your interest to approach your values with confidence; if you don't, you will always be unsatisfied, and you will be defeated by people with competing values who are more confident.
There's a scene in The West Wing where a character urges the Democrats to be more aggressive in confronting the Republicans. The line ends, "We're both right. We're both wrong. Let's have two parties, huh? What do you say?" The point is, if you let the other guys claim the moral high ground and the categorical imperative, the other guys will win. Now, you might be wrong and they might be right; but do you really want to live in a world where you never get what you want and your opponents always do? A world where what happens is always your least favorite outcome? Because that is what will happen if you try to be the most willing to compromise and the most willing to concede your own flaws. The person left out of the discussion will be you.
I'm not sure we're even talking about related things anymore... your post (which uses quite a bit of Dark Arts, by the way) is all about instrumental signaling and how best to lead other people to act the way you'd prefer. I'm talking about Harris's claim to have outlined the process for an objective, universal study of morality. I am not really interested in whether his writings are or are not an efficient way to make others adopt his same morals; I am interested in the intellectual consistency of his positions, which I currently find rather lacking.
Sam Harris has a new book, The Moral Landscape, in which he makes a very simple argument, at least when you express it in the terms we tend to use on LW: he says that a reasonable definition of moral behavior can (theoretically) be derived from our utility functions. Essentially, he's promoting the idea of coherent extrapolated volition, but without all the talk of strong AI.
He also argues that, while there are all sorts of tricky corner cases where we disagree about what we want, those are less common than they seem. Human utility functions are actually pretty similar; the disagreements seem bigger because we think about them more. When France passes laws against wearing a burqa in public, it's news. When people form an orderly line at the grocery store, nobody notices how neatly our goals and behavior have aligned. No newspaper will publish headlines about how many people are enjoying the pleasant weather. We take it for granted that human utility functions mostly agree with each other.
What surprises me, though, is how much flak Sam Harris has drawn for just saying this. There are people who say that there cannot, in principle, be any right answer to moral questions. There are heavily religious people who say that there's only one right answer to moral questions, and it's all laid out in their holy book of choice. What I haven't yet heard are any well-reasoned objections that address what Harris is actually saying.
So, what do you think? Here are some links so you can see what the author himself says about it:
"The Science of Good and Evil": An article arguing briefly for the book's main thesis.
Frequently asked questions: Definitely helps clarify some things.
TED talk about his book: I think he devotes most of this talk to telling us what he's not claiming.