dxu comments on Torture vs. Dust Specks - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
(Side note: this conversation is taking a rather strange turn, but whatever.)
If its butt feels itchy, and it would prefer for its butt to not feel itchy, and the best way to make its butt not feel itchy is to scratch it, and there are no external moral consequences to its decision (like, say, someone threatening to kill 3^^^3 people iff it scratches its butt)... well, it's increasing its own utility by scratching its butt, isn't it? If it increases its own utility by doing so and doesn't decrease net utility elsewhere, then that's a net increase in utility. Scratch away, I say.
Sure. I agree I did just handwave a lot of stuff with respect to what an "action" is... but would you agree that, conditional on having a good definition of "action", we can evaluate "actions" morally? (Moral by human standards, of course, not Pebblesorter standards.)
Agreed, but if you come up with a way to make good/moral decisions in the idealized situation of omniscience, you can generalize to uncertain situations simply by applying probability theory.
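The generalization the comment gestures at is ordinary expected-utility reasoning: under omniscience each action has one certain outcome, and under uncertainty you apply the same decision rule to probability-weighted outcomes. A minimal sketch (all action names and numbers are hypothetical illustrations, not anything from the thread):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Omniscient case: every action has exactly one outcome, with probability 1.
certain = {"scratch": [(1.0, 5.0)], "refrain": [(1.0, 0.0)]}

# Uncertain case: same decision rule, applied to probability-weighted outcomes.
uncertain = {"scratch": [(0.9, 5.0), (0.1, -10.0)], "refrain": [(1.0, 0.0)]}
```

Here `best_action(certain)` and `best_action(uncertain)` both use the identical rule; the certain case is just the degenerate instance where every probability is 1.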
Again, I agree... but then, knowledge of the Banach-Tarski paradox isn't of much use to most people.
Fair enough. I don't have enough domain expertise to really analyze your position in depth, but at a glance, it seems reasonable.
The assumption that morality boils down to utility is a rather huge assumption :-)
Conditional on having a good definition of "action" and on having a good definition of "morally".
I don't think so, at least not "simply". An omniscient being has no risk and no risk aversion, for example.
Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries... :-)
It's not an assumption; it's a normative statement I choose to endorse. If you have some other system, feel free to endorse that... but then we'll be discussing morality, and not meta-morality or whatever system originally produced your objection to Jiro's distinction between good and bad.
Agree.
Well, it could have risk aversion. It's just that risk aversion never comes into play during its decision-making process due to its omniscience. Strip away that omniscience, and risk aversion very well might rear its head.
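The point about risk aversion lying dormant under omniscience can be made concrete with a standard toy model (the square-root utility and the 50/50 gamble below are my illustrative assumptions, not anything stated in the thread): a concave utility function penalizes spread in outcomes, so it only matters when there is spread.

```python
import math

def u(wealth):
    # Concave utility (diminishing marginal utility) encodes risk aversion.
    return math.sqrt(wealth)

# Gamble: 50/50 between wealth 0 and wealth 100; sure thing: wealth 50.
gamble_eu = 0.5 * u(0.0) + 0.5 * u(100.0)   # = 5.0
sure_eu = u(50.0)                            # ~ 7.07

# Under uncertainty, risk aversion bites: the sure 50 beats the gamble even
# though both have the same expected wealth (50). Under omniscience every
# outcome is certain, so u's concavity never gets a chance to penalize
# spread: ranking certain outcomes by u(wealth) gives the same order as
# ranking them by wealth, since u is increasing.
```

This is why an omniscient agent's choices look the same whether or not its utility function is risk-averse, and why the difference reappears the moment its knowledge becomes probabilistic.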
I disagree. Take the following two statements:
There is no contradiction in these two statements.
But they have a consequence: morality, as it currently stands, is not useful for practical purposes.
That's... an interesting position. Are you willing to live with it? X-)
You can, of course, define morality in this particular way, but why would you do that?