gjm comments on (Almost) every moral theory can be represented by a utility function - Less Wrong
What is the important difference between (1) assigning low utilities to outcomes in which the agent has just lied, and (2) attempting, consequentialistically, to make the world look just as it would if the agent didn't lie? I mean, surely the way you do #2 is precisely by assigning low utilities to outcomes in which the agent lies, no?