gRR comments on The "Intuitions" Behind "Utilitarianism" - Less Wrong
I notice I'm confused here. The morality is a computation. And my computation, when given the TORTURE vs SPECKS problem as input, unambiguously computes SPECKS. If probed for reasons and justifications, it mentions things like "it's unfair to the tortured person", "specks are negligible", "if I could ask them, the 3^^^3 people would each prefer to get a SPECK rather than let the person be tortured", etc.
There is an opposite voice in the mix, saying "but if you multiply, then...", but it is overwhelmingly weaker.
I assume, since we're both human, Eliezer's morality computation is not significantly different from mine. Yet, he says I should SHUT UP AND MULTIPLY. His computation gives the single utilitarian voice the majority vote. Isn't this a Paperclip Maximizer-like morality instead of a human morality?
I'm confused => something is probably wrong with my understanding here. Please help?
This is inconsistent. Why should you shut up and multiply in this specific case and not in others? Especially when you (persuasively) argued against "human life is of infinite worth" several paragraphs above?
What if the ritual matters, in terms of the morality computation?
For example: what if there's a man, accused of murder, of whose guilt we're 50% certain. If guilty and not executed, he'll probably (90%) kill three other random people. Should we execute him?
If we weigh everyone's lives equally, both guilty and innocent, and ignore other side effects, this reduces to:
- if we execute him, one death with certainty;
- if we don't execute him, a 45% chance of three deaths.
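For concreteness, here's that expected-value arithmetic as a quick sketch (the 50% and 90% figures are just the ones assumed above; Python purely for illustration):

```python
# Expected deaths under each choice, using the figures assumed above.
p_guilty = 0.50    # we're 50% certain he's guilty
p_kills = 0.90     # a guilty man, if released, kills with 90% probability
victims = 3        # he would kill three random people

deaths_if_executed = 1.0                            # one certain death
deaths_if_released = p_guilty * p_kills * victims   # 0.45 * 3 = 1.35

print(deaths_if_executed, deaths_if_released)       # 1.0 vs 1.35
```

So on expectation, executing him "saves" 0.35 lives per case, which is what gives the dilemma its bite.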
How big are the error bars on the odds that the murderer will kill two more people?
Does it matter? The point is that (according to my morality computation) it is unfair to execute a 50%-probably-innocent person, even though the "total number of saved lives" utility of this action may be greater than that of the alternative. And the fairness of the procedure counts for something, even weighed against the "total number of saved lives".
So, let's say this hypothetical situation was put to you several times in sequence. The first time you decline on the basis of fairness, and the guy turns out to be innocent. Yay! The second time he walks out and murders three random people. Oops. After the hundredth time, you've saved fifty lives (because if the guy turns out to be a murderer you end up executing him anyway) and caused a hundred and thirty-five random people to be killed.
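The bookkeeping behind those numbers, under the same 50%/90% assumptions (a quick sketch, nothing more):

```python
# Tallying a hundred repetitions of the dilemma.
trials = 100
innocents_spared = trials * 0.50        # 50 innocent men not executed: lives saved
guilty_released = trials * 0.50         # 50 guilty men walk free
killers = guilty_released * 0.90        # 45 of them go on to kill
bystanders_killed = killers * 3         # 135 random people murdered
# (The 45 murderers are executed afterwards anyway, as noted above,
# so their deaths appear in both columns.)

print(innocents_spared, bystanders_killed)  # 50 lives saved vs 135 lost
```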
Success?
No :( Not when you put it like that...
Do you conclude, then, that fairness is worth zero human lives? That not even a 0.0000000001% probability of saving a life should be sacrificed for its sake?
Maybe it's my example that was stupid and better ones exist.
Upvoted for gracefully conceding a point. (EDIT: I mean, conceding the specific example, not necessarily the argument.)
I think that fairness matters a lot, but a big chunk of the reason for that can be expressed in terms of further consequences: if the connection between crime and punishment becomes more random, then punishment stops working so well as a deterrent, and more people will commit murder.
Being fair even when it's costly affects other people's decisions, not just the current case, and so a good consequentialist is very careful about fairness.
I thought of trying to assume that fairness only matters when other people are watching. But then, in my (admittedly already discredited) example, wouldn't the solution be "release the man in front of everybody, but later kill him quietly"? Or, even better, "quietly administer a slow fatal poison before releasing him"? Somehow, this is still unfair.
Well, that gets into issues of decision theory, and my intuition is that if you're playing non-zero-sum games with other agents smart enough to deduce what you might think, it's often wise to be predictably fair/honest.
(The idea you mention seems like "convince your partner to cooperate, then secretly defect", which only works if you're sure you can truly predict them and that they will falsely predict you. More often, it winds up as defect-defect.)
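(A minimal sketch of why, using conventional textbook Prisoner's Dilemma payoffs; the specific numbers are illustrative, not from the thread:)

```python
# Illustrative Prisoner's Dilemma payoffs (higher is better for each player).
payoff = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# "Convince them to cooperate, then secretly defect" hopes for (D, C) = 5.
# But a partner smart enough to model you will also predict the defection
# and defect too, landing both of you on (D, D) = 1 instead of the (C, C) = 3
# that predictable fairness would have bought.
print(payoff[("D", "D")], payoff[("C", "C")])
```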
Hmm. Decision theory and corresponding evolutionary advantages explain how the feelings and concepts of fairness/honesty first appeared. But now that they are already here, do we have to assume that these values are purely instrumental?
Well, maybe. I'm less sure than before.
But I'm still miles from relinquishing SPECKS :)
EDIT: Understood your comment better after reading the articles. Love the PD-3 and rationalist ethical inequality, thanks!
Right. Changed to "three random people".
It's not any computation. It's certainly not just what your brain does. What you actually observe is that your brain thinks certain thoughts, not that morality makes certain judgments.
(I don't agree it's a "computation", but that is unimportant for this thread.)
I understood the "computation" theory as: there's this abstract algorithm, approximately embedded in the unreliable hardware of my brain, and the morality judgments are its results, which are normally produced in the form of quick intuitions. But the algorithm is able to flexibly respond to arguments, etc. Then the observation of my brain thinking certain thoughts is how the algorithm feels from the inside.
I think it is at least a useful metaphor. You disagree? Do you have an exposition of your views on this?
It's some evidence about what the algorithm judges, but not the algorithm itself. Humans make errors, while morality is the criterion of correctness of judgment, which can't be reliably observed by the unaided eye, even if that's the best we have.