lukeprog comments on (Almost) every moral theory can be represented by a utility function - Less Wrong
I can represent a rigid prohibition against lying using time-relative lexicographic preferences or hyperreals, e.g. "doing an act that I now (at t1) believe has too high a probability of being a lie has infinite and overriding disutility, but I can do this infallibly (defining the high disutility act to enable this), and after taking that into account I can then optimize for my own happiness or the welfare of others, etc."
All well and good for t1, but then I need a new utility function for the next moment, t2, one that places infinite weight on lying at t2 (where the t1 utility function did not). The indexical description of the utility function hides the fact that we need a different ranking of consequences for almost every moment and situation. I can't have a stable "Kantian utility function" that values weightings over world-histories and is consistent over time.
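The time-indexing problem can be made concrete with a toy sketch (hypothetical names, Python standing in for the formalism; tuples compared lexicographically play the role of the hyperreal/lexicographic weighting):

```python
# Toy model of a time-relative lexicographic utility function.
# An outcome records the set of times at which the agent lies, plus
# an ordinary welfare score. Python compares tuples lexicographically,
# so the first component (the deontic prohibition) always dominates
# the second (welfare), mimicking "infinite and overriding disutility".

def utility_at(t, outcome):
    """Lexicographic utility for the agent choosing at time t.

    First component: -1 if the act at time t is a lie, else 0
    (the overriding prohibition). Second component: welfare.
    """
    violation = -1 if t in outcome["lies"] else 0
    return (violation, outcome["welfare"])

honest = {"lies": set(), "welfare": 5.0}
lie_now = {"lies": {1}, "welfare": 100.0}    # lie at t1, big welfare gain
lie_later = {"lies": {2}, "welfare": 100.0}  # lie at t2 instead

# The t1 utility function forbids lying at t1, whatever the welfare gain:
assert utility_at(1, honest) > utility_at(1, lie_now)

# But the t1 function does NOT penalize a lie at t2 ...
assert utility_at(1, lie_later) > utility_at(1, honest)

# ... so the agent at t2 needs a *different* utility function,
# one that forbids lying at t2:
assert utility_at(2, honest) > utility_at(2, lie_later)
```

The three assertions exhibit the instability in the comment: no single member of this family of utility functions forbids lying at every time, so the "Kantian" agent must swap utility functions from moment to moment.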
There are also problems in defining acts and epistemic procedures such that one can be 100% certain one is not violating the deontological rules (otherwise the overriding disutility of a possible violation swamps any lesser consequences).
See also Campbell Brown, "Consequentialize This."