
TheAncientGeek comments on Open thread, Jul. 25 - Jul. 31, 2016 - Less Wrong Discussion

Post author: MrMind 25 July 2016 07:07AM




Comment author: gjm 18 August 2016 12:29:54PM -1 points

[Disclaimer: My ethics and metaethics are not necessarily the same as those of Bound_up; in fact I think they are not. More below.]

Human values can conflict. Morality [...] tells you what you should do. A ragbag of conflicting values cannot be used to make a definitive decision. Therefore morality is not a ragbag of conflicting values.

I think this argument, in order to work, needs some further premise to the effect that a decision only counts as "definitive" if it is universal, if in some suitable sense everyone would/should arrive at the same decision; and then the second step ("Morality tells you what you should do") needs to say explicitly that morality does this universally.

In that case, the argument works -- but, I think, it works in a rather uninteresting way because the real work is being done by defining "morality" to be universal. It comes down to this: If we define "morality" to be universal, then no account of morality that doesn't make it universal will do. Which is true enough, but doesn't really tell us anything we didn't already know.

I think I largely agree with what I take to be one of your main objections to Eliezer's "metaethics sequence". I think Eliezer's is a nonrealist theory masquerading as a realist theory. He sketches, or at least suggests the existence of, some set of moral values broadly shared by humanity -- so far, so good, though as you say there are a lot of details to be filled in and it may or may not actually be possible to do that. He then says "let us call this Morality, and let us define terms like should and good in terms of these values" -- which is OK in so far as anyone can define any words however they like, I guess. And then he says "and this solves a key problem of metaethics, namely how we can see human values as non-arbitrary even though they look arbitrary: human values are non-arbitrary because they are what words like should and right and bad are about" -- which is mere sophistry, because if you were worried before about human values being arbitrary then you should be equally worried after his definitional move about the definitions of terms like should being arbitrary.

But I don't think (as, IIUC, Eliezer and Bound_up also don't think) we need to be terribly worried about that. Supposing -- and it's a big supposition -- that we are able to identify some reasonably coherent set of values as "human moral values" via CEV or anything else, I don't think the arbitrariness of this set of values is any reason why we shouldn't care about it, strive to live accordingly, program our superpowerful superintelligent godlike AIs to use it, etc. Yes, it's "just a label", but it's a label distinguished by being (in some sense that depends on just where we get this set of values from) what we and the rest of the human race care about.

Comment author: TheAncientGeek 26 August 2016 09:29:13AM 0 points

I think this argument, in order to work, needs some further premise to the effect that a decision only counts as "definitive" if it is universal,

OK, but it would have been helpful to argue the point.

if in some suitable sense everyone would/should arrive at the same decision; and then the second step ("Morality tells you what you should do") needs to say explicitly that morality does this universally.

AFAICT, it is only necessary to have the same decision across a certain reference class, not universally.

In that case, the argument works -- but, I think, it works in a rather uninteresting way because the real work is being done by defining "morality" to be universal. It comes down to this: If we define "morality" to be universal, then no account of morality that doesn't make it universal will do. Which is true enough, but doesn't really tell us anything we didn't already know.

Who is defining morality to be universal? I don't think it is me. I think my argument works in a fairly general sense. If morality is a ragbag of values, then in the general case it is going to contain contradictions, and that will stop you from making any kind of decision based on it.