gjm comments on Open thread, Jul. 25 - Jul. 31, 2016 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (133)
[Disclaimer: My ethics and metaethics are not necessarily the same as those of Bound_up; in fact I think they are not. More below.]
I think this argument, in order to work, needs some further premise to the effect that a decision only counts as "definitive" if it is universal, if in some suitable sense everyone would/should arrive at the same decision; and then the second step ("Morality tells you what you should do") needs to say explicitly that morality does this universally.
In that case, the argument works -- but, I think, it works in a rather uninteresting way because the real work is being done by defining "morality" to be universal. It comes down to this: If we define "morality" to be universal, then no account of morality that doesn't make it universal will do. Which is true enough, but doesn't really tell us anything we didn't already know.
I think I largely agree with what I take to be one of your main objections to Eliezer's "metaethics sequence". I think Eliezer's is a nonrealist theory masquerading as a realist theory. He sketches, or at least suggests the existence of, some set of moral values broadly shared by humanity -- so far, so good, though as you say there are a lot of details to be filled in and it may or may not actually be possible to do that. He then says "let us call this Morality, and let us define terms like should and good in terms of these values" -- which is OK in so far as anyone can define any words however they like, I guess. And then he says "and this solves a key problem of metaethics, namely how we can see human values as non-arbitrary even though they look arbitrary: human values are non-arbitrary because they are what words like should and right and bad are about" -- which is mere sophistry, because if you were worried before about human values being arbitrary then you should be equally worried after his definitional move about the definitions of terms like should being arbitrary.
But I don't think (as, IIUC, Eliezer and Bound_up also don't think) we need to be terribly worried about that. Supposing -- and it's a big supposition -- that we are able to identify some reasonably coherent set of values as "human moral values" via CEV or anything else, I don't think the arbitrariness of this set of values is any reason why we shouldn't care about it, strive to live accordingly, program our superpowerful superintelligent godlike AIs to use it, etc. Yes, it's "just a label", but it's a label distinguished by being (in some sense that depends on just where we get this set of values from) what we and the rest of the human race care about.
Ok, but it would have been helpful to have argued the point.
AFAICT, it is only necessary to have the same decision across a certain reference class, not universally.
Who is defining morality to be universal? I don't think it is me. I think my argument works in a fairly general sense. If morality is a ragbag of values, then in the general case it is going to contain contradictions, and that will stop you making any kind of decision based on it.
I disagree with this objection to Eliezer's ethics because I think the distinction between "realist" and "nonrealist" theories is a confusion that needs to be done away with. The question is not whether morality (or anything else) is "something real," but whether or not moral claims are actually true or false. Because that is all the reality that actually matters: tables and chairs are real, as far as I am concerned, because "there is a table in this room" is actually true. (This is also relevant to our previous discussion about consciousness.)
And in Eliezer's theory, some moral claims are actually true, and some are actually false. So I agree with him that his theory is realist.
I do disagree with his theory, however, insofar as it implies that "what we care about" is essentially arbitrary, even if it is what it is.
That (whether moral claims are actually true or false) is exactly how I distinguish moral realism from moral nonrealism, and I think this is a standard way to understand the terms.
But any nonrealist theory can be made into one in which moral claims have truth values by redefining the key words; my suggestion is that Eliezer's theory is of this kind. It is nearer to a straightforwardly nonrealist theory -- which it becomes if, e.g., you replace his use of terms like "good" with terms that are explicit about what value system they refer to ("good according to human values") -- than to typical more ambitious realist theories, which claim that moral judgements are true or false according to some sort of moral authority that goes beyond any particular person's or group's or system's values.
I agree that the typical realist theory implies more objectivity than is present in Eliezer's theory. But in the same way, the typical non-realist theory implies less objectivity than is present there. E.g. someone who says that "this action is good" just means "I want to do this action" has less objectivity, because it will vary from person to person, which is not the case in Eliezer's theory.
I think we are largely agreed as to facts and disagree only on whether it's better to call Eliezer's theory, which is intermediate between many realist theories and many non-realist theories, "realist" or "non-realist".
I'm not sure, though, that someone who says that "this is good" = "I want to do this" is really a typical non-realist. My notion of a typical non-realist -- typical, I mean, among people who've actually thought seriously about this stuff -- is somewhat nearer to Eliezer's position than that.
Anyway, the reason why I class Eliezer's position as non-realist is that the distinction between Eliezer's position and that of many (other?) non-realists is purely terminological -- he agrees that there are all these various value systems, and that if ours seems special to us that's because it's ours rather than because of some agent-independent feature of the universe that picks ours out in preference to others, but he wants to use words like "good" to refer to one particular value system -- whereas the distinction between his position and that of most (other?) realists goes beyond terminology: they say that the value system they regard as real is actually built into the fabric of reality in some way that goes beyond the mere fact that it's our (or their) value system.
You may weight these differences differently.
I think he wants a system which works like realism, in that there are definite answers to ethical questions ("fixed", "frozen"), but without spookiness.
Yudkowsky's theory entails the same problem as relativism: if morality is whatever people value, and if what people happen to value is intuitively immoral -- slavery, torture, whatever -- then there's no fixed standard of morality. The label "moral" has been placed on a moving target. (Standard relativism usually has this problem synchronically, i.e. different communities are said to have different but equally valid moralities at the same time, but it makes little difference if you are asserting that the global community has different but equally valid moralities at different times.)
You can avoid the problems of relativism by setting up an external standard, and there are many theories of that type, but they tend to have the problem that the external standard is not naturalistic: God's commands, the Form of the Good, and so on. I think Yudkowsky wants a theory that is non-arbitrary and also naturalistic. I don't think he arrives at a single theory that does both. If the Moral Equation is just a label for human intuition, then it suffers from all the vagaries of labeling values as moral, like the original theory. If the Moral Equation is something ideal and abstract, why can't aliens partake?
I agree.