Without commenting on whether this presentation matches the original metaethics sequence (with which I disagree), this summary argument seems both unsupported and unfalsifiable.
Would this be an accurate summary of what you think the meta-ethics sequence says? I feel that you captured the important bits, but I also feel that we disagree on some aspects:
V(Elves, _) = Christmas spirit
V(Pebblesorters, _) = primality
V(Humans, _) = morality
If V(Humans, Alice) ≠ V(Humans, _), that doesn't make morality subjective, it is rather i...
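To make the two-place reading above concrete, here is a minimal sketch in Python (purely illustrative on my part; the table entries, return strings, and the name Alice are made-up placeholders, not anything from the sequence) of treating "morality" as V with its first argument fixed to Humans:

```python
from functools import partial

# Purely illustrative sketch of the two-place reading of V.
# The table entries and return strings are made-up placeholders.
def V(species, individual=None):
    """Return the value-concept V picks out for a species (or one of its members)."""
    base = {
        "Elves": "Christmas spirit",
        "Pebblesorters": "primality",
        "Humans": "morality",
    }[species]
    if individual is None:
        return base
    # V(Humans, Alice) may differ in detail from V(Humans, _),
    # but it is still the Humans row of V, not a different subject matter.
    return f"{base}, as computed by {individual}"

# Fixing the first argument turns the two-place V into the one-place terms above:
morality = partial(V, "Humans")          # V(Humans, _)
christmas_spirit = partial(V, "Elves")   # V(Elves, _)

print(morality())          # -> morality
print(morality("Alice"))   # -> morality, as computed by Alice
```

On that reading, a difference between morality() and morality("Alice") is variation within one fixed function, which is the sense in which the summary says it doesn't make morality subjective.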
Unpacking "should" as " morally obligated to" is potentially helpful, so inasmuch as you can give separate accounts of "moral" and "obligatory".
The elves are not moral. Not just because I, and humans like me happen to disagree with them, no, certainly not. The elves aren’t even trying to be moral. They don’t even claim to be moral. They don’t care about morality. They care about “The Christmas Spirit,” which is about eggnog and stuff
Morality binds and blinds. People derive moral claims from emotional and intuitive notions. It can feel good and moral to do amoral things. Objective morality has to be tied to evidence about what human wellbeing really is, not to moral intuitions that are adaptations to the benefit of one's ingroup, or to post hoc thought experiments about knowledge.
Unpacking "should" as " morally obligated to" is potentially helpful, so inasmuch as you can give separate accounts of "moral" and "obligatory".
That doesn't generalise to the point that non-humans have no morality. You have made things too easy on yourself by having the elves concede that the Christmas spirit isn't morality. You need to put forward some criteria for morality and show that the Christmas Spirit doesn't fulfil them. (One of the odd things about the Yudkowskian theory is that he doesn't feel the need to show that human values are the best match to some pretheoretic notion of morality; he instead jumps straight to the conclusion.)
The hard case would be some dwarves, say, who have a behavioural code different from our own, and who haven't conceded that they are amoral. Maybe they have a custom whereby any dwarf who hits a rich seam of ore has to raise a cry to let other dwarves have a share, and any dwarf who doesn't do this is criticised and shunned. If their code of conduct passes the duck test (it is regarded as obligatory, involves praise and blame, and so on), why isn't that a moral system?
If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point... morality means what you should care about, not what you happen to do.
Morality needs to be motivating, and rubber-stamping your existing values as moral achieves that, but being motivating is not sufficient. A theory of morality also needs to be able to answer the Open Question objection, meaning, in this case, the objection that it is not obvious that you should value something just because you do.
That is arguing from the point that morality is a label for whatever humans care about, not toward it.
There are many ways of refuting relativism, and most don't involve the claim that humans are uniquely moral.
It is human value, or it is fixed: choose one. Humans have valued many different things. One of the problems with the rubber-stamping approach is that things the audience will see as immoral, such as slavery and the subjugation of women, have been part of human value.
If that is true, then you need to stop saying that morality is human values and start saying that morality is human values at time T, and justify the selection of time, and so on. And even then, you won't support your other claims, because what you need to prove is that morality is unique, that only one thing can fulfil the role.
If it is possible for human values to diverge from morality, then something else must define morality, because human values can't diverge from human values. So you are not using a stipulative definition here, although you are when you argue that elves can't be moral. Here, you and Yudkowsky have noticed that your theory entails the same problem as relativism: if morality is whatever people value, and if what people happen to value is intuitively immoral (slavery, torture, whatever), then there's no fixed standard of morality. The label "moral" has been placed on a moving target. (Standard relativism usually has this problem synchronically, i.e. different communities are said to have different but equally valid moralities at the same time, but it makes little difference if you are asserting that the global community has different but equally valid moralities at different times.)
There is, from many perspectives, but given that human values can differ, you get no definite answer by defining morality as human value. You can avoid the problems of relativism by setting up an external standard, and there are many theories of that type, but they tend to have the problem that the external standard is not naturalistic: God's commands, the Form of the Good, and so on. I think Yudkowsky wants a theory that is non-arbitrary and also naturalistic. I don't think he arrives at a single theory that does both. If the Moral Equation is just a label for human intuition, then it suffers from all the vagaries of labelling values as moral, i.e. of the original theory.
Why doesn't that constitute an admission that you don't actually have a theory of morality?
On the assumption that all human value gets thrown into the equation, it certainly would be complex. But not everyone has that problem, since people have criteria for some things being moral and others not being, which simplify the equation and allow you to answer the questions you were struggling with above. You know, you don't have to pursue assumptions to their illogical conclusions.
On the face of it, it's contradictory. There may be something else that smooths out the contradictions, such as the Moral Equation, but that needs justification of its own.
Is that a fact? It's eminently naturalistic, but the flip side to that is that it is, therefore, empirically refutable. If an individual's Morality Equation is just how their moral intuition works, then the evidence indicates that intuitions can vary enough to start a war or two. So the Morality Equation appears not to be conveniently the same in everybody.
What does it mean to do it wrong, if the moral equation is just a label for black-box intuitive reasoning? If you had an external standard, as utilitarians and others do, then you could determine whose use of intuition is right according to it. But in the absence of an external standard, you could have a situation where both parties intuit differently, and both swear they are taking all factors into account. Given such a stalemate, how do you tell who is right? It would be convenient if the only variations in the output of the Morality Equation were caused by variations in the input, but you cannot assume something is true just because it would be convenient.
If the Moral Equation is something ideal and abstract, why can't aliens partake? That model of ethics is just what's needed to explain how you can have multiple varieties of object-level morality that actually all are morality: different values fed into the same equation produce different results, so object-level morality varies although the underlying principle is the same.
The Open Question argument is theoretically flawed because it relies too much on definitions (see this website's articles on how definitions don't work that way, more specifically http://lesswrong.com/lw/7tz/concepts_dont_work_that_way/).
The truth is that humans have an inherent instinct towards seeing "Good" as an objective thing that corresponds to no reality. This includes an instinct towards doing what, thanks to both instinct and culture, humans see as "good".
But although I am not a total supporter of Yudkowsky's moral theory, he...