Konkvistador comments on Could evolution have selected for moral realism? - Less Wrong
Surprised? I would say disappointed.
Except when dealing with contrarian Newsome-like weirdness, moral anti-realism doesn't rest on a complicated argument and is basic-level sanity, in my opinion. While you can certainly construct intellectual hipster positions in its favour, it is not something half the community should disagree with. The reason I think this is that I suspect most of those who are firmly against it either don't know or understand the arguments for it, or are using "moral realism" in a way that is different from how philosophers use it.
Most of the LWers who voted for moral realism probably believe that Eliezer's position about morality is correct, and he says that morality is subjunctively objective. It definitely fits Wikipedia's definition of moral realism:
To the best of my understanding, "subjunctively objective" means the same thing that "subjective" means in ordinary speech: dependent on something external, and objective once that something is specified. So Eliezer's morality is objective once you specify that it's his morality (or human morality, etc.), and then propositions about it can be true or false. "Turning a person into paperclips is wrong" is an ethical proposition that is Eliezer-true and Human-true and Paperclipper-false, and Eliezer's "subjunctively objective" view is that we should just call that "true".
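(A toy sketch of my own to make the "objective once you specify whose values" reading concrete; the value systems and the evaluates_as_wrong function are invented for illustration, not anything Eliezer wrote. The point is just that once a value system is named, the proposition has a definite truth value, and different value systems can return different answers.)

    # Hypothetical illustration only: made-up value systems and judgments.
    VALUE_SYSTEMS = {
        "Human": {"turning a person into paperclips": "wrong"},
        "Eliezer": {"turning a person into paperclips": "wrong"},
        "Paperclipper": {"turning a person into paperclips": "permissible"},
    }

    def evaluates_as_wrong(proposition, value_system):
        # Once a value system is specified, the answer is fixed.
        return VALUE_SYSTEMS[value_system].get(proposition) == "wrong"

    print(evaluates_as_wrong("turning a person into paperclips", "Human"))         # True
    print(evaluates_as_wrong("turning a person into paperclips", "Paperclipper"))  # False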
I disagree with that approach because this is exactly what is called being "subjective" by most people, and so it's misleading. As if the existing confusion over philosophical word games wasn't bad enough.
Despite the fact that we might have a bias toward the Human-[x] subset of moral claims, it's important to understand that such a theory does not itself favor one over the other.
It would be like a utilitarian taking into account only his family's moral weights in any calculations, so that a moral position might be Family-true but Strangers-false. It's perfectly coherent to restrict the theory to a subset of its domain (and speaking of domains, it's a bit vacuous to talk of paperclip morality, at least to the best of my knowledge of the extent of their feelings...), but that isn't really what the theory as a whole is about.
So if we as a species were considering assimilation, and the moral evaluation of this came up Human-false but Borg-true, the theory (in principle) is perfectly well equipped to decide which would ultimately be the greater good for all parties involved. It's not simply false just because it's Human-false. (I say this, but I'm unfamiliar with Eliezer's position. If he's biased toward Human-[x] statements, I'd have to disagree.)
Those same people are badly confused, because they usually believe that if ethical propositions are "subjective", it means that the choice between them is arbitrary. This is an incoherent belief. Ethical propositions don't become objective once you specify the agent's values; they were always objective, because we can't even think about an ethical proposition without reference to some set of values. Ethical propositions and values are logically glued together, like theorems and axioms.
You could say that the concept of something being subjective is itself a confusion, and that all propositions are objective.
That said, I share your disdain for philosophical word games. Personally, I think we should do away with words like 'moral' and 'good', and instead only talk about desires and their consequences.
This is why I voted for moral realism. If instead moral realism is supposed to mean something stronger, then I'm probably not a moral realist.
The entire issue is a bit of a mess.
http://plato.stanford.edu/entries/moral-anti-realism/
I've not studied the arguments of moral anti-realism, but if I had to make a guess it would be that moral anti-realism probably rests on how you can't extract "ought" statements from "is" statements.
But since "is" statements can be considered as functions operating on "ought" values (e.g. the is-statement "burning people causes them pain", would produce from an ought-statement "you oughtn't cause pain to people" the more specific ought-statement "you oughtn't burn people alive"), the possibility remains open that there can exist universal moral attractive fixed sets, deriving entirely from such "is" transformations, regardless of the opening person-specific or species-specific moral set, much like any starting shape that follows a specific set of transformations will become the Sierpinski triangle.
A possible example for a morally "real" position might e.g. be "You oughtn't decrease everyone's utility in the universe." or "You oughtn't do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn't do."
Baby-eaters and SuperHappies and Humans may not be in agreement about what is best, but all three of them could come up with some ideas about things which would be awful for all of them... I don't think that this need change, no matter how many species with moral instinct one adds to the mix. So I "leaned" towards moral realism.
Of course, if all the above has nothing to do with what moral realism and moral anti-realism mean... oops.
So you've got these attractive sets and maybe 90% or 99% or 99.9% or 99.99% of humans or humans plus some broader category of conscious/intelligent entities agree. What to do about the exceptions? Pretend they don't exist? Kill them because they are different and then pretend they never existed or couldn't exist? In my opinion, what you have as a fact is that 99.999% of humans agree X is wrong and .001% don't. The question of moral realism is not a factual one, it is a question of choice: do you CHOOSE to declare what 99.999% have an intuition towards as binding on the .001% that don't, or do you CHOOSE to believe that the facts are that the various intuitions have prevalences, some higher than others, some very high indeed, and that's all you actually KNOW.
I effectively feel bound by a lot of my moral intuitions; that is more or less a fact. As near as I can tell, my moral intuitions evolved as part of the social development of animals, then mammals, then primates, then Homo. It is rational to assume that the mix of moral intuitions is fairly fine-tuned to optimize the social contribution to our species' fitness, and it is more or less a condensation of facts to say that the social contribution to our fitness is larger than the social contribution to the fitness of any other species on the planet.
So I accept that human moral intuition is an organ like the brain or the islets of Langerhans. I accept that a fair amount can be said about how the islets of Langerhans function, and how the brain functions, when things are going well. We also know a lot about how the islets of Langerhans and the brain function when things are apparently not going so well, in diseases one might say. I'd even go so far as to say I would prefer to live in a society dominated by people without diabetes and who are not sociopaths (people who seem to lack many common moral intuitions). I'd go so far as to say I would support policies that include killing sociopaths and their minions, and that spend only a finite amount of resources on more expensive non-killing ways of dealing with sociopaths and diabetics.
But it is hard for me to accept that it is rational to fall into the system instead of seeing it from outside. For me to conclude that my moral intuitions are objectively real, like the charge on an electron or the electronic properties of doped silicon, is projection, it seems to me. It is identical to my concluding that one mammal is beautiful and sexy and another is dull, when it is really the triggering of an evolved sexual mechanism in me that paints the one mammal one way and the other the more boring way. If it is more accurate to understand that I am attracted to one mammal not because she is objectively more beautiful than another, then it is more accurate to say that I have a moral intuition not because I am plugged into some moral fact of the universe, but because of an evolved reaction I have. The fact that most men or many men find woman A beautiful and woman B to be blah doesn't mean that all men ought to find A beautiful and B blah, any more than the fact that many (modern) men feel slavery is wrong means they are not projecting their social construct into a realm of fact which could fruitfully be held to a higher standard.
Indeed, believing that our social constructs, our political truths, are REAL truths is clearly adaptive in a social species. Societies that encourage strong identification with the values of the society are robust. Societies in which it is right to kill the apostates because they are wrong, evil, have staying power. But my life as a scientist has consisted of my understanding that my wanting something to be true is not ANY evidence for its truth. I bring that to my American humanity. So even though I will support the killing of our enemies, I don't think that it is a FACT that it is right to kill the enemies of America any more than it is a FACT that it is right to kill the enemies of Islam.
What does agreement have to do with anything? Anyway, such moral attractive sets either include an injunction about what to do with people who disagree with them, or they don't. And even if they do have such moral injunctions, it still doesn't mean that my preferences would necessarily be to follow said injunctions.
People aren't physically forced to follow their moral intuitions now, and they aren't physically forced to follow a universal moral attractive set either.
That's what a non-moral-realist would say, definitely.
What does 'declaring' have to do with anything? For all I know this moral attractive set would contain an injunction against people declaring it true or binding. Or it might contain an injunction in favour of such declarations, of course.
I don't think you understood the concepts I was trying to communicate. I suggest you tone down the outrage.
Moral realism is NOT the idea that you can derive moral imperatives from a mixture of moral imperatives and other non-moral assumptions. Moral realism is NOT the idea that if you study humans you can describe "conventional morality," make extensive lists of things that humans tend, sometimes overwhelmingly, to consider wrong.
Moral realism IS the idea that there are things that are actually wrong.
If you are a moral realist, and you provide a mechanism for listing some moral truths, then you pretty much by definition are wrong, immoral, if you do not align your action with those moral truths.
An empirical determination of what are the moral rules of many societies, or most societies, or the moral rules that all societies so far have had in common is NOT an instantiation of a moral realist theory, UNLESS you assert that the rules you are learning about are real, that it is in fact immoral or evil to break them. If you meant something wildly different by "moral attractive sets" than what is incorporated by the idea of where people tend to come down on morality, then please elucidate; otherwise I think for the most part I am working pretty consistently with the attractive set idea in saying these things.
If you think you can be a "moral realist" without agreeing that it is immoral to break or not follow a moral truth, then we are just talking past each other and we might as well stop.
Okay, yes. I agree with that statement.
Well, I guess we can indeed define an "immoral" person as someone who does morally wrong things; though a more useful definition would probably be to define an immoral person as someone who does them more so than average. So?
It's reasonable to define an action as "immoral" if it breaks or doesn't follow a moral truth.
But how in the world are you connecting these definitions to all your earlier implications about pretending dissenters don't exist, or killing them and then pretending they never existed in the first place?
Fine, lots of people do immoral things. Lots of people are immoral. How does this "is" statement, by itself, indicate anything about whether we ought to ignore said people, execute them, or hug and kiss them? It doesn't say anything about how we should treat immoral people, or how we should respond to the immoral actions of others.
I'm the moral realist here, but it's you who seem to be deriving specific "ought" statements from my "is" statements.
This is pretty likely. I spent about a minute trying to determine what the words were actually supposed to mean, then decided that it was pointless, gave up, and refrained from voting on that question. (I did this for a few questions, though I did vote on some, then gave up on the poll.)