gRR comments on The "Intuitions" Behind "Utilitarianism" - Less Wrong
I notice I'm confused here. Morality is a computation. And my computation, when given the TORTURE vs SPECKS problem as input, unambiguously computes SPECKS. If probed for reasons and justifications, it mentions things like "it's unfair to the tortured person", "specks are negligible", "the 3^^^3 people would prefer to get a SPECK rather than let the person be tortured, if I could ask them", etc.
There is an opposite voice in the mix, saying "but if you multiply, then...", but it is overwhelmingly weaker.
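For concreteness, the "multiply" voice's arithmetic can be sketched with entirely made-up disutility numbers (the speck and torture values below are illustrative assumptions, not anything from the post):

```python
# Illustrative only: hypothetical disutility units for the "multiply" voice.
speck = 1e-12    # assumed disutility of one dust speck (made-up number)
torture = 1e9    # assumed disutility of 50 years of torture (made-up number)

# Population at which total speck disutility overtakes the torture:
breakeven = torture / speck  # 1e21 people

# 3^^^3 is unimaginably larger than 1e21 (or any such break-even point for
# any positive speck disutility), so straight multiplication favors TORTURE.
print(breakeven)
```

Whatever positive value one assigns to a speck, the break-even population is finite, and 3^^^3 exceeds it; that is the whole force of the "shut up and multiply" voice.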
I assume, since we're both human, Eliezer's morality computation is not significantly different from mine. Yet, he says I should SHUT UP AND MULTIPLY. His computation gives the single utilitarian voice the majority vote. Isn't this a Paperclip Maximizer-like morality instead of a human morality?
I'm confused => something is probably wrong with my understanding here. Please help?
This is inconsistent. Why should you shut up and multiply in this specific case and not in others? Especially when you (persuasively) argued against "human life is of infinite worth" several paragraphs above?
What if the ritual matters, in terms of the morality computation?
For example: suppose there's a man accused of murder, and we're 50% certain he's guilty. If guilty and not executed, he'll probably (90%) go on to kill three other random people. Should we execute him?
If we weigh everyone's life equally, guilty and innocent alike, and ignore other side effects, this reduces to:
- if we execute him, a 100% chance of one death
- if we don't execute him, a 45% chance of three deaths (1.35 expected deaths vs 1).
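The expected-value arithmetic behind those two lines, written out (the probabilities are the ones stated in the example above):

```python
# Expected-deaths arithmetic for the execution dilemma above.
p_guilty = 0.5   # probability the accused is actually guilty
p_kills = 0.9    # probability a guilty, un-executed man goes on to kill
victims = 3      # number of random people he would kill

# If we execute him: he dies with certainty, guilty or not.
deaths_if_executed = 1.0

# If we spare him: the victims die only if he is guilty AND goes on to kill.
p_killings_if_spared = p_guilty * p_kills            # 0.45
expected_deaths_if_spared = p_killings_if_spared * victims  # 1.35

print(deaths_if_executed, p_killings_if_spared, expected_deaths_if_spared)
```

So straight multiplication says executing (1 expected death) beats sparing (1.35 expected deaths), which is what makes the "ritual" question bite.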
Right. Changed to "three random people".