Kaj_Sotala comments on (Almost) every moral theory can be represented by a utility function - Less Wrong

5 points | Post author: lukeprog | 30 April 2012 03:31AM

Comment author: Kaj_Sotala 30 April 2012 07:40:31AM 3 points
Comment author: lukeprog 30 April 2012 08:45:00AM * 2 points

I don't think so, if I understand Alicorn correctly.

Alicorn says that a "consequentialist doppelganger"

applies the following transformation to some non-consequentialist theory X:

  1. What would the world look like if I followed theory X?
  2. You ought to act in such a way as to bring about the result of step 1.

But that's not what Peterson is doing. Instead, his approach (along with several previous, incomplete and failed attempts to do this) merely captures whatever rules and considerations the deontologist cares about in what a decision-theoretic agent (a consequentialist) calls the "outcome." For example, the agent's utility function can be said to assign very, very low utility to an outcome in which (1) the agent has just lied, or (2) the agent has just broken a promise previously sworn to, or (3) the agent has just violated the rights of a being that counts as a moral agent according to criterion C, and so on.
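
To make the trick concrete, here is a minimal sketch in Python. The outcome keys (`agent_lied`, `agent_broke_promise`, `agent_violated_rights`) are hypothetical stand-ins for whatever facts about an outcome the deontologist cares about; the point is only that the rule violations are treated as features of the outcome itself and buried under a large penalty:

```python
# A minimal sketch: encode a deontologist's rules as a utility function
# by assigning a huge penalty to any outcome in which a rule was violated.
# The outcome keys below are hypothetical stand-ins for whatever facts
# about an outcome the deontologist actually cares about.

RULE_VIOLATION_PENALTY = -1_000_000  # dwarfs any ordinary gain in value

def utility(outcome):
    """Return the utility of an outcome, with rule violations folded
    into the outcome rather than treated as constraints on action."""
    u = outcome.get("base_value", 0)  # whatever ordinary value the outcome has
    if outcome.get("agent_lied"):
        u += RULE_VIOLATION_PENALTY
    if outcome.get("agent_broke_promise"):
        u += RULE_VIOLATION_PENALTY
    if outcome.get("agent_violated_rights"):
        u += RULE_VIOLATION_PENALTY
    return u

# Example: a large ordinary payoff still loses to an honest outcome
# once the lying penalty is counted as part of the "outcome".
honest = {"base_value": 10}
dishonest = {"base_value": 10_000, "agent_lied": True}
assert utility(honest) > utility(dishonest)
```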

Comment author: gjm 01 May 2012 10:08:30AM -1 points

What is the important difference between (1) assigning low utilities to outcomes in which the agent has just lied, and (2) attempting, consequentialistically, to make the world look just as it would if the agent didn't lie? I mean, surely the way you do #2 is precisely by assigning low utilities to outcomes in which the agent lies, no?