If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
[Disclaimer: My ethics and metaethics are not necessarily the same as those of Bound_up; in fact I think they are not. More below.]
I think this argument, in order to work, needs some further premise to the effect that a decision only counts as "definitive" if it is universal, i.e. if in some suitable sense everyone would/should arrive at the same decision; and then the second step ("Morality tells you what you should do") needs to say explicitly that morality does this universally.
In that case, the argument works -- but, I think, it works in a rather uninteresting way because the real work is being done by defining "morality" to be universal. It comes down to this: If we define "morality" to be universal, then no account of morality that doesn't make it universal will do. Which is true enough, but doesn't really tell us anything we didn't already know.
I think I largely agree with what I take to be one of your main objections to Eliezer's "metaethics sequence". I think Eliezer's is a nonrealist theory masquerading as a realist theory. He sketches, or at least suggests the existence of, some set of moral values broadly shared by humanity -- so far, so good, though as you say there are a lot of details to be filled in and it may or may not actually be possible to do that. He then says "let us call this Morality, and let us define terms like should and good in terms of these values" -- which is OK in so far as anyone can define any words however they like, I guess. And then he says "and this solves a key problem of metaethics, namely how we can see human values as non-arbitrary even though they look arbitrary: human values are non-arbitrary because they are what words like should and right and bad are about" -- which is mere sophistry, because if you were worried before about human values being arbitrary then you should be equally worried after his definitional move about the definitions of terms like should being arbitrary.
But I don't think (as, IIUC, Eliezer and Bound_up also don't think) we need to be terribly worried about that. Supposing -- and it's a big supposition -- that we are able to identify some reasonably coherent set of values as "human moral values" via CEV or anything else, I don't think the arbitrariness of this set of values is any reason why we shouldn't care about it, strive to live accordingly, program our superpowerful superintelligent godlike AIs to use it, etc. Yes, it's "just a label", but it's a label distinguished by being (in some sense that depends on just where we get this set of values from) what we and the rest of the human race care about.
OK, but it would have been helpful to argue the point.
AFAICT, it is only necessary for the decision to be the same across a certain reference class, not universal.