
MadRocketSci comments on Artificial Utility Monsters as Effective Altruism - Less Wrong Discussion

10 [deleted] 25 June 2014 09:52AM



Comment author: MadRocketSci 26 June 2014 12:41:55PM 6 points

The problem that I've always had with the "utility monster" idea is that it's a misuse of what information utility functions actually encode.

In game theory and economics, a utility function is a ranking of more-preferred states over less-preferred states for a single agent (who presumably has some inputs he can adjust to reach his preferred states). That's it. There is no "global" utility function or "collective" utility measure that doesn't run into problems when individual goals conflict.

Given that an agent's utility function only encodes preferences, turning up the gain on it really high (meaning agent A really, reaaaally cares about all of his preferences) doesn't mean that agents B, C, D, etc. should take A's preferences any more or less seriously. Multiplying it by a large number is like multiplying a probability distribution or an eigenvector by a really large number: the relative frequencies, or the direction it points, stay exactly the same.
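The scaling point above can be made concrete with a small sketch (the options and utility values below are made up for illustration): an agent choosing by maximizing utility makes the same choice no matter how large a positive constant its utility function is multiplied by, so "caring a million times harder" changes nothing observable about its behavior.

```python
def best_option(options, utility):
    """Return the option the agent most prefers under its utility function."""
    return max(options, key=utility)

# Hypothetical agent A with three options and (ordinal) utilities over them.
options = ["read", "hike", "nap"]
base_utility = {"read": 3.0, "hike": 7.0, "nap": 1.0}.get

# "Turning up the gain": multiply A's utility function by a huge constant.
amplified_utility = lambda o: 1_000_000 * base_utility(o)

# The preference ordering, and hence the choice, is unchanged.
choice_before = best_option(options, base_utility)
choice_after = best_option(options, amplified_utility)
assert choice_before == choice_after == "hike"
```

The same invariance holds for any positive monotone transformation, which is why ordinal utility gives other agents no handle for interpersonal comparisons of intensity.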

Before some large number of people should sacrifice their own interests on the altar of Carethulu, there should be some new reason why these others (not Carethulu) would want to do so (implying a different utility function for them).

Comment author: [deleted] 27 June 2014 11:08:39AM 1 point

Before some large number of people should sacrifice their previous interests on the altar of Carethulu, there should be some new reason why these others (not Carethulu) should want to do so (implying a different utility function for them).

I think the misunderstanding here is that some of you interpret the post as a call to change your values. It is merely a suggestion for how to implement values that already exist, such as utilitarian preferences.

The idea is clearly never going to be attractive to people who care exactly zero about the subjective well-being (SWB) of others. But those people are not a target group of effective altruism, or of any charity really.