Christian_Szegedy comments on The Preference Utilitarian’s Time Inconsistency Problem - Less Wrong

25 Post author: Wei_Dai 15 January 2010 12:26AM




Comment author: Christian_Szegedy 15 January 2010 10:04:56PM

> Even if you can cleanly distinguish them for a human, what's the difference from the perspective of an effectively omniscient and omnipotent agent? (Whether or not an actual AGI would be such, a proposed morality should work in that case.)

To me, "omniscience" and "omnipotence" seem to be self-contradictory notions. Therefore, I consider it a waste of time to think about beings with such attributes.

> reflects a correct instrumental judgment based on things like harms to public trust, not a terminal judgment about the badness of a death increasing in proportion to the benefit ensuing from that death or something.

OK. Do you think that if someone (e.g. an AI) kills random people for a positive overall effect, but manages to convince the public that the deaths were random accidents (and therefore public trust is maintained), then it is a morally acceptable option?