lukeprog comments on (Almost) every moral theory can be represented by a utility function - Less Wrong

Post author: lukeprog 30 April 2012 03:31AM




Comment author: CarlShulman 03 May 2012 01:57:35AM 4 points

I can represent a rigid prohibition against lying using time-relative lexicographic preferences or hyperreals, e.g. "doing an act that I now (at t1) believe has too high a probability of being a lie has infinite and overriding disutility, but I can infallibly avoid such acts (the high-disutility act is defined in terms of my own beliefs to make this possible), and after taking that into account I can then optimize for my own happiness or the welfare of others, etc."

All well and good for t1, but then I need a new utility function for the next moment, t2, that places infinite weight on lying at t2 (edit: where the t1 utility function did not). The indexical description of the utility function hides the fact that we need a different ranking of consequences for almost every moment and situation. I can't have a stable "Kantian utility function" that values weightings over world-histories and is consistent over time.
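The time-indexing problem above can be made concrete with a toy sketch (my illustration, not anything from the comment): Python compares tuples lexicographically, so a tuple-valued utility with lie-count in the first slot mimics "infinite and overriding disutility" for lying, and indexing the function by a time `t` shows why each moment needs a different ranking of the same world-histories.

```python
# Hedged sketch: lexicographic utility as a tuple, over a toy world model.
# A history is a list of (time, act) pairs; the first tuple component
# (negated count of lies at or after t) strictly overrides the second
# (ordinary welfare), approximating an absolute prohibition on lying.

def utility_at(t, history, welfare):
    """Utility function indexed to time t: lies at or after t are
    lexicographically overriding; earlier lies are sunk and ignored."""
    lies_from_t = sum(1 for (time, act) in history if act == "lie" and time >= t)
    return (-lies_from_t, welfare)

# Two world-histories with identical welfare, differing in when the lie occurs:
h1 = [(1, "lie"), (2, "truth")]   # lie at t1
h2 = [(1, "truth"), (2, "lie")]   # lie at t2

# The t1-indexed function ranks them as equally bad (one lie each)...
assert utility_at(1, h1, welfare=10) == utility_at(1, h2, welfare=10)
# ...but the t2-indexed function ignores h1's past lie, so the two
# rankings disagree: no single time-invariant function reproduces both.
assert utility_at(2, h1, welfare=10) > utility_at(2, h2, welfare=10)
```

The disagreement between `utility_at(1, ...)` and `utility_at(2, ...)` over the same pair of histories is exactly the instability the comment describes: a consistent ranking of world-histories cannot simultaneously match every moment's indexed prohibition.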

There are also problems with defining acts and epistemic procedures so that one can be 100% certain of not violating the deontological rules (otherwise their infinite disutility overrides any lesser consequences).

Comment author: lukeprog 12 August 2012 02:13:33PM 0 points

Also see Brown, Consequentialize This.