thomblake comments on The Amazing Virgin Pregnancy - Less Wrong

Post author: Eliezer_Yudkowsky 24 December 2007 02:00PM


Comment author: thomblake 11 May 2011 03:09:07PM 1 point

I can understand what physical conditions you are describing when you say "two rocks". What does it mean, in a concrete and substantive sense, for murder to be "wrong"?

I can give you two answers to this, one which maps better to this community and one which fits better with the virtue ethics tradition.

  1. There exists (in the sense that mathematical functions exist) a utility function labeled 'morality' in which actions labeled 'murder' bring the universe into a state of lower-utility. I make no particular claims about the proper way to choose such a utility function, just that there is one that is properly called 'morality', and moral disputes can be characterized as either disputes over which function to call 'morality' or disputes over what the output of that function would be given certain inputs.

  2. 'Good' and 'bad' are always evaluated in terms of effects upon a particular thing; a good hammer is one which optimally pounds in nails, a good horse is fast and strong, and a good human experiences eudaimonia. Murder is the sort of thing that makes one a bad human; it makes one less virtuous and thus less able to experience eudaimonia.
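The framing in point 1 can be made concrete with a toy sketch. Everything here is an invented illustration, not a claim about what the real function looks like: two hypothetical candidate 'morality' functions over world-states, showing that a moral dispute can be over which function deserves the label, or over what a given function outputs.

```python
# Toy illustration only: 'morality' as a utility function over world-states.
# The functions and numbers below are invented for the example.

def utility_a(world):
    # One candidate 'morality': penalizes murders steeply.
    return -10 * world["murders"] + world["flourishing"]

def utility_b(world):
    # A rival candidate: penalizes murders less steeply.
    return -2 * world["murders"] + world["flourishing"]

before = {"murders": 0, "flourishing": 5}
after = {"murders": 1, "flourishing": 5}

# Under either candidate, an action labeled 'murder' lowers utility...
assert utility_a(after) < utility_a(before)
assert utility_b(after) < utility_b(before)
# ...so the remaining disputes are over which function to call
# 'morality' (A vs. B), or over a function's output on given inputs.
```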

It could be the case that the terms 'good', 'bad', and 'eudaimonia' should be evaluated based on the preferences of an agent. But even in that case, moral facts remain facts about the world that one can be wrong about. For instance, if I prefer to live, I should not drink drain cleaner. If I thought it was good to drink drain cleaner, I would be wrong according to my own preferences, and an outside agent with different preferences could tell me I was objectively wrong about what's right for me to do.

As a side note, 'murder' is normative; it is tautologically wrong. Denying wrongness in general denies the existence of murder. It might be better to ask, "What does it mean for a particular sort of killing to be 'wrong'?", or else "What does it mean for a killing to be murder?"

Comment author: Peterdjones 11 May 2011 03:34:24PM 1 point

> 'Good' and 'bad' are always evaluated in terms of effects upon a particular thing; a good hammer is one which optimally pounds in nails, a good horse is fast and strong, and a good human experiences eudaimonia. Murder is the sort of thing that makes one a bad human; it makes one less virtuous and thus less able to experience eudaimonia.

What is eudaimonia for...or does the buck stop there?

> As a side note, 'murder' is normative; it is tautologically wrong.

And tautologies and other a priori procedures can deliver epistemic objectivity without the need for any appeal to quasi-empiricism.

Comment author: thomblake 11 May 2011 06:09:08PM 1 point

> What is eudaimonia for...or does the buck stop there?

It was originally defined as where the buck stops. To Aristotle, infinite chains of justification were obviously no good, so the ultimate good was simply that which all other goods were ultimately for.

Regardless of how well that notion stands up, there is a sense in which 'being a good hammer' is not for anything else, but the hammer itself is still for something and serves its purpose better when it's good. Those things are usually unpacked nowadays from the perspective of some particular agent.

Comment author: NMJablonski 11 May 2011 04:15:52PM 2 points

Okay, we don't disagree at all.

There is an objective sense in which actions have consequences. I am always surprised when people seem to think I'm denying this. Science works, there is a concrete and objective reality, and we can with varying degrees of accuracy predict outcomes with empirical study. Zero disagreement from me on that point.

So, we judge consequences of actions with our preferences. One can be empirically incorrect about what consequences an action can have, and if you choose to define "wrong" as those actions which reduce the utility of whatever function you happen to care about, then sure, we can determine that objectively too. All I am saying is that there is no objective method for selecting the function to use, and it seems like we're in agreement on that.

Namely, we privilege utility functions which value human life only because of facts about our brains, as shaped by our genetics, evolution, and experiences. If an alien came along and saw humans as a pest to be eradicated, we could say:

"Exterminating us is wrong!"

... and the alien could say:

"LOL. No, silly humans. Exterminating you is right!"

And there is no sense in which either party has an objective "rightness" that the other lacks. They are each referring to the utility functions they care about.
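The disagreement described above can be sketched as each party scoring the same action with the utility function it cares about. The functions and numbers are hypothetical, chosen only to show the shape of the standoff:

```python
# Each party evaluates the same action with its own utility function.
# Illustrative values only.

def human_utility(action):
    # Humans assign very low utility to their own extermination.
    return -100 if action == "exterminate_humans" else 0

def alien_utility(action):
    # The aliens, viewing humans as pests, assign it positive utility.
    return 50 if action == "exterminate_humans" else 0

action = "exterminate_humans"
# 'Wrong' relative to the human function, 'right' relative to the
# alien function; neither computation privileges a third function.
assert human_utility(action) < 0
assert alien_utility(action) > 0
```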

Comment author: Peterdjones 11 May 2011 04:17:58PM 0 points

> And there is no sense in which either party has an objective "rightness" that the other lacks. They are each referring to the utility functions they care about.

There is a sense in which one party is objectively wrong. The aliens do not want to be exterminated so they should not exterminate.

Comment author: NMJablonski 11 May 2011 04:24:37PM 1 point

So, we're working with thomblake's definition of "wrong" as those actions which reduce utility for whatever function an agent happens to care about. The aliens care about themselves not being exterminated, but may actually assign very high utility to humans being wiped out.

Perhaps we would be viewed as pests, like rats or pigeons. Just as humans can assign utility to exterminating rats, the aliens could do so for us.

Exterminating humans has the objectively determinable outcome of reducing the utility in your subjectively privileged function.

Comment author: Peterdjones 11 May 2011 04:31:17PM -1 points

Inasmuch as we are talking about objective rightness, we are not talking about utility functions, because not everyone is running the same utility function, and it makes sense to say some utility functions are objectively wrong.

Comment author: NMJablonski 11 May 2011 04:33:10PM 1 point

What would it mean for a utility function to be objectively wrong? How would one determine that a utility function has the property of "wrongness"?

Please, do not answer "by reasoning about it" unless you are willing to provide that reasoning.

Comment author: Peterdjones 11 May 2011 04:41:43PM -1 points

I did provide the reasoning in the alien example.

> There is a sense in which one party is objectively wrong. The aliens do not want to be exterminated so they should not exterminate.

Comment author: NMJablonski 11 May 2011 04:52:43PM 2 points

Let's break this all the way down. Can you give me your thesis?

I mean, I see there is a claim here:

> The aliens do not want to be exterminated so they should not exterminate.

... of the format (X therefore Y). I can understand what the (X) part of it means: aliens with a preference not to be destroyed. Now the (Y) part is a little murky. You're saying that the truth of X implies that they "should not exterminate". What does the word should mean there?

Comment author: thomblake 11 May 2011 06:19:07PM 1 point

> "Exterminating us is wrong!" ... and the alien could say: "LOL. No, silly humans. Exterminating you is right!" And there is no sense in which either party has an objective "rightness" that the other lacks. They are each referring to the utility functions they care about.

Note that the definitional dispute rears its head in the case where the humans say, "Exterminating us is morally wrong!" in which case strong moral relativists insist the aliens should respond, "No, exterminating you is morally right!", while moral realists insist the aliens should respond "We don't care that it's morally wrong - it's shmorally right!"

There is also a breed of moral realist who insists that the aliens would somehow also have evolved to care about morality, like the Kantians who believe morality follows necessarily from basic reason. I think the burden of proof still falls on them for that, but unfortunately there aren't many smart aliens to test.