thomblake comments on The Amazing Virgin Pregnancy - Less Wrong
This seems as well-motivated as the position that nothing can exist without someone to create it. That is, it seems intuitively true to a human since we privilege agency, but I don't see any contradiction, logical or otherwise, in having moral facts be real.
This question might reduce to the question of whether mathematical facts are "real", which might not make any more sense. Is there a sense in which there are "two" rocks here, even if there were no agent to count the rocks? Is there a sense in which murder is wrong, even if there were never anyone to murder or observe murder?
I think the only difficulties here are definitional (suggested by the word "sense" above), and the proper thing to do with a definitional dispute is to dissolve it. Most moral realists hereabouts are some sort of relativists (that is, we take it to be a "miracle" that we care about what's right rather than something else; had we cared about something else instead, we would have taken that to be the "miracle", but that doesn't change what's right).
I can understand what physical conditions you are describing when you say "two rocks". What does it mean, in a concrete and substantive sense, for murder to be "wrong"?
What does it mean in a concrete and substantive sense for pi to be an irrational number?
This is doable... Let d be the length of the diameter of some circle, and c the circumference of the same circle. Then if you lay an integer number (m) of sticks of length d in one straight line, and an integer number (n) of sticks of length c in another straight line, the two lines will have different lengths, no matter how you choose your circle or how you choose the two (nonzero) integers m and n.
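A minimal sketch of the same argument in symbols, using the standard relation c = πd (this just restates the point above, nothing new):

```latex
% If the two lines had equal length, \pi would be rational:
m\,d = n\,c = n\pi d \;\Longrightarrow\; \pi = \tfrac{m}{n},
% contradicting the irrationality of \pi. So for nonzero integers
% m and n, the two lines must differ in length.
```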
In general, if the axioms used to prove a theorem are demonstrable in a concrete and substantive way, then any theorem they prove should be similarly demonstrable, by deconstructing it into its component axioms. But I could be missing something.
There are sets of axioms that mathematicians use that aren't really demonstrable in the physical universe, and there are different sets of axioms under which different truths hold, ones that are not in line with the way the universe works. Non-Euclidean geometry, for example, in which Euclid's parallel postulate fails: on a sphere, the analogues of straight lines (great circles) always cross. Any theorem is true only in terms of the axioms that prove it, and the only reason we attribute certain axioms to this universe is that we can test them and the universe always works the way the axiom predicts.
For morality, you can determine right and wrong from a societal/cultural context, with a set of "axioms" for a given society. But I have no idea how you'd test the universe to see if those cultural "axioms" are "true", like you can for mathematical ones. I don't see any reason why the universe should have such axioms.
This is not doable concretely because you can only measure down to some precision.
I can give you two answers to this, one which maps better to this community and one which fits better with the virtue ethics tradition.
There exists (in the sense that mathematical functions exist) a utility function labeled 'morality' in which actions labeled 'murder' bring the universe into a state of lower-utility. I make no particular claims about the proper way to choose such a utility function, just that there is one that is properly called 'morality', and moral disputes can be characterized as either disputes over which function to call 'morality' or disputes over what the output of that function would be given certain inputs.
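As a toy sketch of what that could look like (the state representation, the weights, and the function itself are all illustrative assumptions here, not a claim about the correct choice):

```python
# Toy model: a "utility function" is just a map from world-states to
# numbers, and "murder is wrong" cashes out as: states reached via
# murder score lower. The weights below are arbitrary illustrations.

def morality(world_state):
    """One candidate function someone might label 'morality'."""
    utility = 0.0
    utility += world_state.get("people_alive", 0)      # life counts for something
    utility -= 100.0 * world_state.get("murders", 0)   # murder lowers utility
    return utility

before = {"people_alive": 100, "murders": 0}
after = {"people_alive": 99, "murders": 1}

# A moral dispute is then either over which function deserves the label
# 'morality', or over what that function outputs for given inputs:
assert morality(after) < morality(before)
```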
'Good' and 'bad' are always evaluated in terms of effects upon a particular thing; a good hammer is one which optimally pounds in nails, a good horse is fast and strong, and a good human experiences eudaimonia. Murder is the sort of thing that makes one a bad human; it makes one less virtuous and thus less able to experience eudaimonia.
It could be the case that the terms 'good', 'bad', and 'eudaimonia' should be evaluated based on the preferences of an agent. But even then, that does not make it any less the case that moral facts are facts about the world that one could be wrong about. For instance, if I prefer to live, I should not drink drain cleaner. If I thought it was good to drink drain cleaner, I would be wrong according to my own preferences, and an outside agent with different preferences could tell me I was objectively wrong about what's right for me to do.
As a side note, 'murder' is normative; it is tautologically wrong. Denying wrongness in general denies the existence of murder. It might be better to ask, "What does it mean for a particular sort of killing to be 'wrong'?", or else "What does it mean for a killing to be murder?"
What is eudaimonia for... or does the buck stop there?
And tautologies and other a priori procedures can deliver epistemic objectivity without the need for any appeal to quasi-empiricism.
It was originally defined as where the buck stops. To Aristotle, infinite chains of justification were obviously no good, so the ultimate good was simply that which all other goods were ultimately for.
Regardless of how well that notion stands up, there is a sense in which 'being a good hammer' is not for anything else, but the hammer itself is still for something and serves its purpose better when it's good. Those things are usually unpacked nowadays from the perspective of some particular agent.
Okay, we don't disagree at all.
There is an objective sense in which actions have consequences. I am always surprised when people seem to think I'm denying this. Science works, there is a concrete and objective reality, and we can with varying degrees of accuracy predict outcomes with empirical study. Zero disagreement from me on that point.
So, we judge consequences of actions with our preferences. One can be empirically incorrect about what consequences an action can have, and if you choose to define "wrong" as those actions which reduce the utility of whatever function you happen to care about, then sure, we can determine that objectively too. All I am saying is that there is no objective method for selecting the function to use, and it seems like we're in agreement on that.
Namely, we privilege utility functions which value human life only because of facts about our brains, as shaped by our genetics, evolution, and experiences. If an alien came along and saw humans as a pest to be eradicated, we could say:
"Exterminating us is wrong!"
... and the alien could say:
"LOL. No, silly humans. Exterminating you is right!"
And there is no sense in which either party has an objective "rightness" that the other lacks. They are each referring to the utility functions they care about.
There is a sense in which one party is objectively wrong. The aliens do not want to be exterminated so they should not exterminate.
So, we're working with thomblake's definition of "wrong" as those actions which reduce utility for whatever function an agent happens to care about. The aliens care about themselves not being exterminated, but may actually assign very high utility to humans being wiped out.
Perhaps we would be viewed as pests, like rats or pigeons. Just as humans can assign utility to exterminating rats, the aliens could do so for us.
Exterminating humans has the objectively determinable outcome of reducing the utility in your subjectively privileged function.
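To put the disagreement in code (purely illustrative; the functions and numbers are invented for the example): both parties compute correctly, and each verdict is objective only relative to a chosen function.

```python
# Hypothetical sketch: the same outcome, scored by two different
# utility functions. Nothing in the formalism picks which function
# to privilege; that choice is the whole dispute.

outcome = {"humans_exterminated": True}

def human_utility(o):
    return -1000.0 if o["humans_exterminated"] else 0.0

def alien_utility(o):
    return 50.0 if o["humans_exterminated"] else 0.0

print(human_utility(outcome))  # -1000.0 -> "wrong" by the human function
print(alien_utility(outcome))  # 50.0 -> "right" by the alien function
```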
Inasmuch as we are talking about objective rightness, we are not talking about utility functions, because not everyone is running off the same utility function, and it makes sense to say some UFs are objectively wrong.
What would it mean for a utility function to be objectively wrong? How would one determine that a utility function has the property of "wrongness"?
Please, do not answer "by reasoning about it" unless you are willing to provide that reasoning.
I did provide the reasoning in the alien example.
Let's break this all the way down. Can you give me your thesis?
I mean, I see there is a claim here:
... of the format (X therefore Y). I can understand what the (X) part of it means: aliens with a preference not to be destroyed. Now the (Y) part is a little murky. You're saying that the truth of X implies that they "should not exterminate". What does the word "should" mean there?
Note that the definitional dispute rears its head when the humans say, "Exterminating us is morally wrong!" Strong moral relativists insist the aliens should respond, "No, exterminating you is morally right!", while moral realists insist the aliens should respond, "We don't care that it's morally wrong - it's shmorally right!"
There is also a breed of moral realist who insists that the aliens would somehow also have evolved to care about morality, such as the Kantians, who believe morality follows necessarily from basic reason. I think the burden of proof still falls on them there, but unfortunately there aren't many smart aliens to test it on.