blacktrance comments on Consequentialism FAQ - Less Wrong

20 Post author: Yvain 26 April 2011 01:45AM




Comment author: benelliott 27 April 2011 03:32:27PM *  4 points [-]

Good point.

Perhaps I should have said "it's impossible to intentionally maximise anything other than your utility function".

Comment author: blacktrance 18 June 2013 07:43:53AM 1 point [-]

People can intentionally maximize anything, including the number of paperclips in the universe. Suppose there was a religion or school of philosophy that taught that maximizing paperclips is deontologically the right thing to do - not because it's good for anyone, or because Divine Clippy would smite them for not doing it, just that morality demands that they do it. And so they choose to do it, even if they hate it.

Comment author: benelliott 18 June 2013 03:17:50PM 0 points [-]

In that case, I would say their true utility function was "follow the deontological rules" or "avoid being smitten by Divine Clippy", and that maximising paperclips is an instrumental subgoal.

In many other cases, I would be happy to say that the person involved was simply not utilitarian, if their actions did not seem to maximise anything at all.

Comment author: blacktrance 18 June 2013 07:44:29PM 0 points [-]

If you define "utility function" as "what agents maximize" then your above statement is true but tautological. If you define "utility function" as "an agent's relation between states of the world and that agent's hedons" then it's not true that you can only maximize your utility function.
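The distinction between the two definitions can be made concrete with a minimal sketch (all names and numbers here are hypothetical, chosen only to illustrate the point). Under the first definition, any agent that picks actions by argmax over *some* function trivially "maximizes its utility function"; under the second, the deontological paperclipper demonstrably fails to maximize its hedon function:

```python
def choose(actions, utility):
    """Pick the action that maximizes the given function over actions."""
    return max(actions, key=utility)

# Hypothetical outcomes of each action: (paperclips produced, agent's hedons)
outcomes = {
    "make_clips": (10, -5),  # many paperclips, but the agent hates doing it
    "relax":      (0, 8),    # no paperclips, but the agent enjoys it
}
actions = list(outcomes)

# Definition 1: "utility function" = whatever the agent in fact maximizes.
# For the deontological paperclipper, that is paperclip count.
clip_count = lambda a: outcomes[a][0]

# Definition 2: "utility function" = the agent's hedons.
hedons = lambda a: outcomes[a][1]

print(choose(actions, clip_count))  # the paperclipper's actual choice
print(choose(actions, hedons))      # what hedon-maximization would pick
```

Since the two functions disagree about the best action, the paperclipper's choice of "make_clips" shows an agent maximizing something other than its hedon function, which is blacktrance's point: the claim is only guaranteed true under the first, tautological definition.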

Comment author: benelliott 18 June 2013 09:07:26PM 0 points [-]

I certainly do not define it the second way. Most people care about something other than their own happiness, and some people may care about their own happiness very little, not at all, or negatively. I really don't see why a 'happiness function' would be even slightly interesting to decision theorists.

I think I'd want to define a utility function as "what an agent wants to maximise", but I'm not entirely clear how to unpack the word 'want' in that sentence; I will admit I'm somewhat confused.

However, I'm not particularly concerned about my statements being tautological; they were meant to be, since they are arguing against statements that are tautologically false.