ArisKatsaris comments on The "Intuitions" Behind "Utilitarianism" - Less Wrong

Post author: Eliezer_Yudkowsky 28 January 2008 04:29PM


Comment author: gRR 20 February 2012 11:51:50PM *  1 point

I notice I'm confused here. The morality is a computation. And my computation, when given the TORTURE vs SPECKS problem as input, unambiguously computes SPECKS. If probed for reasons and justifications, it mentions things like "it's unfair to the tortured person", "specks are negligible", and "the 3^^^3 people would each prefer to get a SPECK rather than let the person be tortured, if I could ask them".

There is an opposite voice in the mix, saying "but if you multiply, then...", but it is overwhelmingly weaker.

I assume, since we're both human, Eliezer's morality computation is not significantly different from mine. Yet, he says I should SHUT UP AND MULTIPLY. His computation gives the single utilitarian voice the majority vote. Isn't this a Paperclip Maximizer-like morality instead of a human morality?

I'm confused => something is probably wrong with my understanding here. Please help?

When lives are at stake, I shut up and multiply. It is more important that lives be saved, than that we conform to any particular ritual in saving them.

This is inconsistent. Why should you shut up and multiply in this specific case and not in others? Especially when you (persuasively) argued against "human life is of infinite worth" several paragraphs above?

What if the ritual matters, in terms of the morality computation?

For example: what if there's a man, accused of murder, of whose guilt we're 50% certain. If guilty and not executed, he'll probably (90%) kill three other random people. Should we execute him?

Comment author: ArisKatsaris 21 February 2012 01:02:26AM *  1 point

For example: what if there's a man, accused of murder, of whose guilt we're 50% certain. If guilty and not executed, he'll probably (90%) kill two other random people. Should we execute him?

If we weigh everyone's lives equally, both guilty and innocent, and ignore other side effects, this reduces to:
- if we execute him, a 100% chance of one death
- if we don't execute him, a 45% chance of two deaths.
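The arithmetic behind the two bullets can be sketched as a quick expected-deaths comparison (an illustrative snippet, not from the thread; the 50% and 90% probabilities are the ones in the comment, with the victim count left as a parameter since the thread uses both two and three):

```python
# Expected-deaths comparison for the thread's example (illustrative only).
# The accused is guilty with probability 0.5; if guilty and released,
# he kills `victims` more people with probability 0.9.

def expected_deaths(victims: int) -> tuple[float, float]:
    p_guilty = 0.5
    p_kills_if_guilty = 0.9
    if_execute = 1.0  # one certain death, whether guilty or innocent
    if_release = p_guilty * p_kills_if_guilty * victims
    return if_execute, if_release

print(expected_deaths(2))  # (1.0, 0.9)  -> releasing kills fewer in expectation
print(expected_deaths(3))  # (1.0, 1.35) -> executing kills fewer in expectation
```

This is why the victim count matters: with two potential victims the expected-death count favors release, and with three it favors execution, which is presumably why gRR revised the example.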

Comment author: NancyLebovitz 21 February 2012 01:49:43AM 0 points

How big are the error bars on the odds that the murderer will kill two more people?

Comment author: gRR 21 February 2012 02:02:14AM 1 point

Does it matter? The point is that (according to my morality computation) it is unfair to execute a 50%-probably innocent person, even though the "total number of saved lives" utility of this action may be greater than that of the alternative. And fairness of the procedure counts for something, even in terms of the "total number of saved lives".

Comment author: pedanterrific 21 February 2012 03:34:13AM *  2 points

So, let's say this hypothetical situation was put to you a hundred times in sequence. The first time you decline on the basis of fairness, and the guy turns out to be innocent. Yay! The second time he walks out and murders three random people. Oops. After the hundredth time, you've saved fifty lives (because if the guy turns out to be a murderer, you end up executing him anyway) and caused a hundred and thirty-five random people to be killed.

Success?
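The tally in this scenario can be checked with a small Monte Carlo sketch (hypothetical code; the 50% guilt rate, 90% kill rate, and three victims are the numbers from the thread):

```python
# Monte Carlo tally of the hundred-trial scenario (illustrative only).
# Each accused is 50% likely to be guilty; a released guilty man kills
# three bystanders with 90% probability, after which he is executed anyway.

import random

random.seed(0)  # reproducible sketch

trials = 100_000  # many trials so the per-hundred averages are stable
innocents_spared = 0
bystanders_killed = 0

for _ in range(trials):
    guilty = random.random() < 0.5
    if not guilty:
        innocents_spared += 1      # released, harms no one
    elif random.random() < 0.9:
        bystanders_killed += 3     # kills three, then is executed

# Per hundred accused: roughly 50 innocents spared and roughly 135
# bystanders killed, matching "saved fifty lives ... a hundred and
# thirty-five random people to be killed".
print(innocents_spared / trials * 100)
print(bystanders_killed / trials * 100)
```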

Comment author: gRR 21 February 2012 04:32:42AM 3 points

No :( Not when you put it like that...

Do you conclude, then, that fairness is worth zero human lives? Not even a 0.0000000001% probability of saving a life should be sacrificed for its sake?

Maybe it's my example that was stupid and better ones exist.

Comment author: orthonormal 21 February 2012 04:42:38AM *  1 point

Upvoted for gracefully conceding a point. (EDIT: I mean, conceding the specific example, not necessarily the argument.)

I think that fairness matters a lot, but a big chunk of the reason for that can be expressed in terms of further consequences: if the connection between crime and punishment becomes more random, then punishment stops working so well as a deterrent, and more people will commit murder.

Being fair even when it's costly affects other people's decisions, not just the current case, and so a good consequentialist is very careful about fairness.

Comment author: gRR 21 February 2012 05:10:02AM 0 points

I thought of trying to assume that fairness only matters when other people are watching. But then, in my (admittedly already discredited) example, wouldn't the solution be: "release the man in front of everybody, but later kill him quietly; or, even better, quietly administer a slow-acting fatal poison before releasing him"? Somehow, this is still unfair.

Comment author: orthonormal 21 February 2012 05:33:55AM 2 points

Well, that gets into issues of decision theory, and my intuition is that if you're playing non-zero-sum games with other agents smart enough to deduce what you might think, it's often wise to be predictably fair/honest.

(The idea you mention seems like "convince your partner to cooperate, then secretly defect", which only works if you're sure you can truly predict them and that they will falsely predict you. More often, it winds up as defect-defect.)
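The point about mutually predicting agents can be sketched with standard Prisoner's Dilemma payoffs (the numbers below are the conventional illustrative ones, not from the thread):

```python
# Standard Prisoner's Dilemma payoffs (row player's utility), illustrating
# why "promise cooperation, then secretly defect" tends to collapse into
# defect-defect between agents who can predict each other.

payoff = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # you cooperate, partner defects
    ("D", "C"): 5,  # secret defection against a cooperator
    ("D", "D"): 1,  # mutual defection
}

# Against a partner who correctly predicts your move and mirrors it,
# the only reachable outcomes are (C, C) and (D, D):
mirrored = {move: payoff[(move, move)] for move in ("C", "D")}
best = max(mirrored, key=mirrored.get)
print(best)  # C -- predictable cooperation beats mutual defection
```

The 5-point temptation payoff is only available if your partner cooperates while you defect, which a good predictor won't let happen; that is the sense in which being predictably fair can be the winning move.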

Comment author: gRR 21 February 2012 05:56:22AM *  2 points

Hmm. Decision theory and corresponding evolutionary advantages explain how the feelings and concepts of fairness/honesty first appeared. But now that they are already here, do we have to assume that these values are purely instrumental?

Well, maybe. I'm less sure than before.

But I'm still miles from relinquishing SPECKS :)

EDIT: Understood your comment better after reading the articles. Love the PD-3 and rationalist ethical inequality, thanks!

Comment author: gRR 21 February 2012 01:52:05AM 0 points

Right. Changed to "three random people".