I think there's an ambiguity between "realism" in the sense of "these statements I'm making about 'what's right' are answers to a well-formed question and have a truth value" and "the subject matter of moral discourse is a transcendent ineffable stuff floating out there which compels all agents to obey and which could make murder right by having a different state". Thinking that moral statements have a truth value is cognitivism, which sounds much less ambiguous to me, and that's why I prefer to talk about moral cognitivism r...
I think there's an ambiguity between "realism" in the sense of "these statements I'm making are answers to a well-formed question and have a truth value" and "morality is a transcendent ineffable stuff floating out there which compels all agents to obey and could make murder right by having a different state".
Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It's what people are automatically going to think you're talking about if you go around shouting "Yes Virginia, there are moral facts after all!"
Meanwhile, the general public has a term for the view that you and I share: they call it "moral relativism".
I don't recall exactly, and I haven't yet bothered to look it up, but I believe when you first introduced your metaethics, there were people (myself among them, I think) who objected not to your actual metaethical views, but to the way that you vigorously denied that you were a "relativist"; and you misunderstood them/us as objecting to your theory itself (I think you maybe even threw in an accusation of not comprehending ...
Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view.
No, it's not. The naive, common-sense human view is that sneaking into Jane's tent while she's not there and stealing her water-gourd is "wrong". People don't end up talking about transcendent ineffable stuff until they have pursued bad philosophy for a considerable length of time. And the conclusion - that you can make murder right without changing the murder itself but by changing a sort of ineffable stuff that makes the murder wrong - is one that, once the implications are put baldly, squarely disagrees with naive moralism. It is an attempt to rescue a naive misunderstanding of the subject matter of mind and ontology, at the expense of naive morality.
What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans
I agree that this constitutes relativism, and deny that I am a relativist.
why should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer".
See above. The correct answer is "Because c...
I agree that this constitutes relativism, and deny that I am a relativist.
It looks to me like the opposing position is not based on disagreement with this point but rather outright failure to understand what is being said.
I have the same feeling, from the other direction.
I feel like I completely understand the error you're warning against in No License To Be Human; if I'm making a mistake, it's not that one. I totally get that "right", as you use it, is a rigid designator; if you changed humans, that wouldn't change what's right. Fine. The fact remains, however, that "right" is a highly specific, information-theoretically complex computation. You have to look in a specific, narrow region of computation-space to find it. This is what makes you vulnerable to the chauvinism charge; there are lots of other computations that you didn't decide to single out and call "right", and the question is: why not? What makes this one so special? The answer is that you looked at human brains, as they happen to be constituted, and said, "This is a nice thing we've got going here; let's preserve it."
Yes, of course that doesn't constitute a general license t...
surely everyone should admit that people can be mistaken, on occasion, about what they themselves think.
This is far from uncontroversial in the general population.
The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like "If we wanted to eat babies, then that would be the right thing to do."
The moral relativist who says that doesn't really disagree with you. The moral relativist considers a different property of algorithms to be the one that determines whether an algorithm is a morality, but this is largely a matter of definition.
For the relativist, an algorithm is a morality when it is a logic that compels an agent (in the limit of reflection, etc.). For you, an algorithm is a morality when it is the logic that in fact compels human agents (in the limit of reflection, etc.). That is why your view is a kind of relativism. You just say "morality" where other relativists would say "the morality that humans in fact have".
You also seem more optimistic than most relativists that all non-mutant humans implement very nearly the same compulsive logic. But other relativists admit that this is a real possibility, and they wouldn't take it to mean that they were wrong to be relativists.
If there is an advantage to the relativists' use of "morality", it is that their use doesn't prejudge the question of whether all humans implement the same compulsive logic.
Well, you did make a claim about what is the right translation when speaking to babyeaters:
we and they are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating
But there has to be some standard by which you prefer the explanation "we mistranslated the term 'morality'" to "we disagree about morality", right? What is that? Presumably, one could make your argument about any two languages, not just ones with a species gap:
"We and Spaniards are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to protect freedoms, not restrict them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the Spaniards would agree with us...
I think it would do us all a lot of good (and it would be a lot clearer) to use the word 'morality' to mean all the implications that follow from all terminal values, much as we use the word 'mathematics' to mean all the theorems that follow from all axioms. This would force us to specify which kind of morality we're talking about.
For example, it would be meaningless to ask if I should steal from the rich. It would only be meaningful to ask if I me-should steal from the rich (i.e. if it follows from my terminal values), or if I you-should steal from the rich (i.e. if it follows from your terminal values), or if I us-should steal from the rich (i.e. if it follows from the terminal values we share), or if I Americans-should steal from the rich (i.e. if it follows from the terminal values that Americans share), etc.
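A minimal sketch of what this parameterization might look like, in Python, with made-up value sets and a toy entailment check standing in for real moral reasoning:

    # Toy illustration of value-indexed "should": every judgment is relative
    # to an explicit set of terminal values; there is no free-floating "should".
    # The value sets and the entailment test below are purely hypothetical.

    class Action:
        def __init__(self, name, promotes, frustrates):
            self.name, self.promotes, self.frustrates = name, promotes, frustrates

    def should(terminal_values, action):
        """Return True iff the action 'follows from' the given terminal values."""
        # Stand-in for real moral reasoning: endorse an action when it serves
        # at least one of the given values and conflicts with none of them.
        return (any(v in action.promotes for v in terminal_values)
                and not any(v in action.frustrates for v in terminal_values))

    steal_from_rich = Action("steal from the rich",
                             promotes={"equality"},
                             frustrates={"property rights"})

    my_values = {"equality", "happiness"}
    your_values = {"property rights", "happiness"}
    our_values = my_values & your_values           # the terminal values we share

    print(should(my_values, steal_from_rich))      # "me-should": True
    print(should(your_values, steal_from_rich))    # "you-should": False
    print(should(our_values, steal_from_rich))     # "us-should": False

Nothing hangs on the toy criteria; the only point is that "should" takes a set of terminal values as an explicit argument, the way a theorem is always relative to a set of axioms.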
I know I'm not explaining anything you don't already know, Eliezer; my point is that your use of the words 'morality' and 'should' has been confusing quite a few people. Or perhaps it would be more accurate to say that your use of those words has failed to extricate certain people from their pre-existing confusion.
But then morality does not have as its subject matter "Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."
Instead, it has primarily as its subject matter a list of ways to transform the universe into paperclips, cheesecake, needles, orgasmium, and only finally, a long way down the list, into eudaimonium.
I think this is not the subject matter that most people are talking about when they talk about morality. We should have a different name for this new subject, like "decision theory".
How about something like this: There's a certain set of semi-abstract criteria that we call 'morality'. And we happen to be the sorts of beings that (for various reasons) care about this morality stuff as opposed to caring about something else. Should we care about morality? Well, what is meant by "should"? It sure seems like that's a term we use simply to point to the same morality criteria/computation. In other words, "should we care about morality" seems to translate to "is it moral to care about morality", or "apply the morality function to 'care about morality' and check the output".
It would seem also that the answer is yes, it is moral to care about morality.
Some other creatures might somewhere care about something other than morality. That's not a disagreement about any facts or theory or anything, it's simply that we care about morality and they may care about something like "maximize paperclip production" or whatever.
But, of course, morality is better than paper-clip-ality. (And, of course, when we say "better", we mean "in terms of those criteria we care about"... ie, morality again.)
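In toy form, under the (purely hypothetical) assumption that "morality" and "paperclipality" can each be treated as a single evaluable criterion:

    # "Should we care about morality?" is read here as: apply the morality
    # criterion itself to "care about morality" and check the output.
    # Both criteria below are hypothetical stand-ins, not real theories.

    MORALLY_APPROVED = {"save babies", "tell the truth", "care about morality"}

    def morality(action):
        """Stand-in for the criteria we in fact care about."""
        return action in MORALLY_APPROVED

    def paperclipality(action):
        """Stand-in for a criterion some other agent might care about."""
        return action == "maximize paperclip production"

    print(morality("care about morality"))        # True: it is moral to care about morality
    print(paperclipality("care about morality"))  # False: but this is not a factual dispute

The disagreement with the paperclipper shows up only as two different functions being applied, not as two different answers to the same question.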
It's not q...
This would all be a lot clearer if, in these sorts of discussions, we avoided using the dangling "should".
In other words, don't just say that "X should do Y", say that "X should do Y, in order for some specifiable condition to be fulfilled". That condition could be their preferences, your preferences, CEV's preferences if you believe in such a thing, or whatever. Oh yeah, and "...in order to be moral" is ambiguous and thus doesn't count.
Metaethical realism/objectivism makes the prediction that, under some conditions, agents will converge on ethical beliefs. The post by [deleted] seems to be arguing that realism doesn't have any object-level consequences. Which is half true. Absent a method of arriving at object-level truth, it doesn't. With one, it does.
"Should" has many meanings. Which moral system I believe in is a meta-level question, not an object-level one, and it probably implies an epistemic-should or rational-should rather than a moral-should.
Likewise, not all normative judgement is morality. What you should do to maximise personal pleasure, or make money, or "win" in some way, is generally not what you morally-should do.
If morality is encapsulated by a formal system, then by Gödel's first incompleteness theorem there will exist statements -- moral statements -- that the system can neither prove nor refute. Can such a system reject either moral relativism or moral absolutism without contradicting itself?
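For reference, the first incompleteness theorem says, roughly: for any consistent, recursively axiomatizable theory $T$ that interprets basic arithmetic, there is a sentence $G_T$ such that $T \nvdash G_T$ and $T \nvdash \neg G_T$; that is, the system can neither prove nor refute $G_T$, though on the standard reading $G_T$ is true.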
On Wei_Dai's complexity of values post, Toby Ord writes:
The kind of moral realist position that applies Occam's razor to moral beliefs is a lot more extreme than most philosophers in the cited survey would sign up to, methinks. One such position that I used to have some degree of belief in is:
Strong Moral Realism: All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory morphism, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.
But most modern philosophers who call themselves "realists" don't mean anything nearly this strong. They mean that there are moral "facts", for varying definitions of "fact" that typically fade away into meaninglessness on closer examination, and actually make the same empirical predictions as antirealism.
Suppose you take up Eliezer's "realist" position. Arrangements of spacetime, matter and energy can be "good" in the sense that Eliezer has a "long-list" style definition of goodness up his sleeve: one that decides even contested object-level moral questions, like whether abortion should be allowed, and that tests any arrangement of spacetime, matter and energy, notes to what extent it fits the criteria in the long list, and then decrees goodness or not (possibly with a scalar rather than a binary value).
This kind of "moral realism" behaves, for all intents and purposes, like antirealism.
I might compare the situation to Eliezer's blegg post: it may be that moral philosophers have a mental category for "fact" that seems to be allowed to have a value even once all of the empirically grounded surrounding concepts have been fixed. These might be concepts such as "Would aliens also think this thing?", "Can it be discovered by an independent agent who hasn't communicated with you?", "Do we apply Occam's razor?", etc.
Moral beliefs might work better when they have a Grand Badge Of Authority attached to them. Once all the empirically falsifiable candidates for the Grand Badge Of Authority have been falsified, the only one left is the ungrounded category marker itself, and some people like to stick this on their object level morals and call themselves "realists".
Personally, I prefer to call a spade a spade, but I don't want to get into an argument about the value of an ungrounded category marker. Suffice it to say that for any practical matter, the only parts of the map we should argue about are parts that map onto a part of the territory.