Christian_Szegedy comments on The Preference Utilitarian’s Time Inconsistency Problem - Less Wrong

Post author: Wei_Dai 15 January 2010 12:26AM

Comment author: Nick_Tarleton 15 January 2010 04:16:34PM, 4 points

One can't really equate risking a life with outright killing.

Even if you can cleanly distinguish them for a human, what's the difference from the perspective of an effectively omniscient and omnipotent agent? (Whether or not an actual AGI would be such, a proposed morality should work in that case.)

If we want any system that is aligned with human morality, we just can't make decisions based on the desirability of the outcome alone. For example: "Is it right to kill a healthy person to give their organs to five terminally ill patients, saving five lives at the cost of one?" Our moral sense says killing an innocent bystander is immoral, even if it saves more lives. (See http://www.justiceharvard.org/)

Er, doesn't that just mean human morality assigns low desirability to the outcome "innocent bystander killed to use their organs"? (That is, if that actually is a pure terminal value - it seems to me that this intuition reflects a correct instrumental judgment based on things like harms to public trust, not a terminal judgment about the badness of a death increasing in proportion to the benefit ensuing from that death or something.)

If we want a system to be well-defined, reflectively consistent, and stable under omniscience and omnipotence, expected-utility consequentialism looks like the way to go. Fortunately, it's pretty flexible.
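To make the expected-utility framing concrete, here is a minimal sketch (in Python, with invented utility numbers that are purely illustrative, not anyone's actual values) of how a consequentialist agent that folds the anti-killing intuition into its utility function, as a large terminal disvalue on the outcome "innocent deliberately killed", would refuse the transplant trade from the quoted example:

```python
# Each action maps to (probability, utility) pairs over its possible
# outcomes. The numbers are hypothetical, chosen only for illustration.
actions = {
    "harvest_organs": [
        # Five patients saved (+10 each), one bystander dies (-10), plus a
        # large terminal disvalue (-1000) on the outcome "innocent
        # deliberately killed" -- the intuition is encoded inside the
        # utility function rather than as a side constraint on actions.
        (1.0, 5 * 10 - 10 - 1000),
    ],
    "do_nothing": [
        # The five patients die (-10 each); the bystander lives.
        (1.0, 5 * -10),
    ],
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities over an action's outcomes."""
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(action, expected_utility(outcomes))  # harvest_organs: -960, do_nothing: -50

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("chosen:", best)  # do_nothing
```

The point of the sketch is only that the intuition can be represented as outcome desirability; whether that disvalue should be terminal or merely instrumental (e.g. derived from harms to public trust) is exactly what the rest of this thread disputes.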

Comment author: Christian_Szegedy 15 January 2010 10:04:56PM, 1 point

Even if you can cleanly distinguish them for a human, what's the difference from the perspective of an effectively omniscient and omnipotent agent? (Whether or not an actual AGI would be such, a proposed morality should work in that case.)

To me, "omniscience" and "omnipotence" seem to be self-contradictory notions. Therefore, I consider it a waste of time to think about beings with such attributes.

reflects a correct instrumental judgment based on things like harms to public trust, not a terminal judgment about the badness of a death increasing in proportion to the benefit ensuing from that death or something.

OK. Do you think that if someone (e.g. an AI) kills random people for a positive overall effect, but manages to convince the public that the deaths were accidents (so that public trust is maintained), that is a morally acceptable option?