benelliott comments on Consequentialism FAQ - Less Wrong

Post author: Yvain 26 April 2011 01:45AM




Comment author: benelliott 18 June 2013 03:17:50PM 0 points

In that case, I would say their true utility function was "follow the deontological rules" or "avoid being smitten by divine Clippy", and that maximising paperclips was an instrumental subgoal.

In many other cases, I would be happy to say that the person involved was simply not a utilitarian, if their actions did not seem to maximise anything at all.

Comment author: blacktrance 18 June 2013 07:44:29PM 0 points

If you define "utility function" as "what agents maximize", then your statement above is true but tautological. If you define "utility function" as "a mapping from states of the world to that agent's hedons", then it's not true that you can only maximize your utility function.
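The distinction can be made concrete with a minimal sketch (all names and numbers here are hypothetical, invented purely for illustration): a deontological agent whose choices maximise rule-adherence by construction, while a separate hedonic mapping over the same outcomes is left unmaximised.

```python
# Toy outcomes for a hypothetical agent: each action scores on
# rule-adherence and on hedons (both made-up illustrative values).
outcomes = {
    "follow_rule": {"rule_followed": 1, "hedons": 3},
    "break_rule_small": {"rule_followed": 0, "hedons": 7},
    "break_rule_large": {"rule_followed": 0, "hedons": 10},
}

# Definition 1: utility = "what the agent maximizes" (read off its choices).
def revealed_utility(outcome):
    return outcome["rule_followed"]

# Definition 2: utility = the agent's hedons in each state of the world.
def hedonic_utility(outcome):
    return outcome["hedons"]

# The deontological agent picks whichever action follows the rules.
choice = max(outcomes, key=lambda a: revealed_utility(outcomes[a]))
print(choice)  # → follow_rule
```

Under definition 1 the agent maximises its utility function tautologically, since the function was defined from its choices. Under definition 2 it does not: `break_rule_large` yields more hedons (10 > 3) than the action actually chosen.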

Comment author: benelliott 18 June 2013 09:07:26PM 0 points

I certainly do not define it the second way. Most people care about something other than their own happiness, and some people may care about their own happiness very little, not at all, or negatively. I really don't see why a 'happiness function' would be even slightly interesting to decision theorists.

I think I'd want to define a utility function as "what an agent wants to maximise", but I'm not entirely clear how to unpack the word 'want' in that sentence; I will admit I'm somewhat confused.

However, I'm not particularly concerned about my statements being tautological; they were meant to be, since they are arguing against statements that are tautologically false.