Vladimir_Nesov comments on The "Intuitions" Behind "Utilitarianism" - Less Wrong

Post author: Eliezer_Yudkowsky 28 January 2008 04:29PM

You are viewing a comment permalink. View the original post to see all comments and the full post content.

Comments (193)



Comment author: gRR 20 February 2012 11:51:50PM *  1 point

I notice I'm confused here. Morality is a computation. And my computation, when given the TORTURE vs SPECKS problem as input, unambiguously outputs SPECKS. If probed for reasons and justifications, it mentions things like "it's unfair to the tortured person", "specks are negligible", and "the 3^^^3 people would each prefer to get a SPECK rather than let the person be tortured, if I could ask them".

There is an opposite voice in the mix, saying "but if you multiply, then...", but it is overwhelmingly weaker.

I assume, since we're both human, Eliezer's morality computation is not significantly different from mine. Yet, he says I should SHUT UP AND MULTIPLY. His computation gives the single utilitarian voice the majority vote. Isn't this a Paperclip Maximizer-like morality instead of a human morality?

I'm confused => something is probably wrong with my understanding here. Please help?

When lives are at stake, I shut up and multiply. It is more important that lives be saved, than that we conform to any particular ritual in saving them.

This is inconsistent. Why should you shut up and multiply in this specific case but not in others? Especially when you (persuasively) argued against "human life is of infinite worth" several paragraphs above?

What if the ritual matters, in terms of the morality computation?

For example: suppose there's a man accused of murder, of whose guilt we're 50% certain. If guilty and not executed, he will probably (90%) go on to kill three other random people. Should we execute him?
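A bare shut-up-and-multiply reading of this hypothetical can be made explicit. The sketch below uses only the probabilities stated above; treating all deaths as equally weighted (and ignoring everything the "ritual" might encode, such as the wrongness of executing the innocent) is precisely the simplifying assumption in question:

```python
# Expected innocent deaths under each choice, from the stated numbers.
p_guilty = 0.5          # we are 50% certain of his guilt
p_kills_if_freed = 0.9  # if guilty and not executed, 90% chance he kills
victims = 3             # number of people he would kill

# Execute: with probability 0.5 he is innocent, so one innocent person dies.
deaths_if_execute = (1 - p_guilty) * 1

# Don't execute: if guilty (0.5) and he kills (0.9), three innocents die.
deaths_if_free = p_guilty * p_kills_if_freed * victims

print(deaths_if_execute)  # 0.5 expected innocent deaths
print(deaths_if_free)     # 1.35 expected innocent deaths
```

On these numbers alone, naive multiplication says that not executing costs more expected innocent lives, which is exactly the kind of conclusion at issue when asking whether the ritual itself matters to the morality computation.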

Comment author: Vladimir_Nesov 21 February 2012 10:01:34AM *  0 points

The morality is a computation. And my computation, when given the TORTURE vs SPECKS problem as input, unambiguously computes SPECKS.

It's not any computation. It's certainly not just what your brain does. What you actually observe is that your brain thinks certain thoughts, not that morality makes certain judgments.

(I don't agree it's a "computation", but that is unimportant for this thread.)

Comment author: gRR 21 February 2012 05:55:12PM 0 points

I understood the "computation" theory as follows: there's this abstract algorithm, approximately embedded in the unreliable hardware of my brain, and moral judgments are its results, normally produced in the form of quick intuitions. But the algorithm is able to respond flexibly to arguments, etc. Then the observation of my brain thinking certain thoughts is how the algorithm feels from the inside.

I think it is at least a useful metaphor. You disagree? Do you have an exposition of your views on this?

Comment author: Vladimir_Nesov 21 February 2012 08:00:11PM 0 points

Then the observation of my brain thinking certain thoughts is how the algorithm feels from the inside.

It's some evidence about what the algorithm judges, but it is not the algorithm itself. Humans make errors, while morality is the criterion of correctness of judgment, and that criterion can't be reliably observed by the unaided eye, even if that's the best we have.