AlexMennen comments on Humans are utility monsters - Less Wrong

67 Post author: PhilGoetz 16 August 2013 09:05PM




Comment author: AlexMennen 22 August 2013 05:32:01PM *  1 point

As far as I can tell, the definition involving increasing marginal returns was invented by some Wikipedian; Wikipedia does not cite a source for that definition. According to every other source, a utility monster is an agent who gets more utility from having resources than anyone else gets from having resources, regardless of how the utility monster's marginal value of resources changes with the amount of resources it already controls.

Either way, the argument for giving the utility monster all the resources comes from maximizing the sum of the utilities of each agent. I'm not sure what you mean by this being incompatible with the assumption of the utility monster.
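To make that argument concrete, here is a toy sketch with made-up utility functions (any pair where the monster's marginal utility exceeds the other agent's everywhere would behave the same way). Note the monster's utility is concave, so it has *decreasing* marginal returns, yet sum-maximization still hands it everything:

```python
# Toy sketch (hypothetical utility functions): a "utility monster" whose
# marginal utility of resources exceeds the peasant's at every level.
import math

def monster_utility(r):
    # Concave -- decreasing marginal returns -- but always steeper than the peasant's.
    return 100 * math.sqrt(r)

def peasant_utility(r):
    return math.sqrt(r)

TOTAL = 10  # discrete units of resources to divide

# Maximize the simple sum of utilities over all possible splits.
best_split = max(range(TOTAL + 1),
                 key=lambda m: monster_utility(m) + peasant_utility(TOTAL - m))
print(best_split)  # -> 10: the sum is maximized by giving the monster everything
```

The point is that the conclusion follows from sum-maximization alone; no assumption of increasing marginal returns is needed.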

Edit: Also, rereading my previous comment, I notice that I was actually not taking a sum across the utilities of all agents. Pareto-optimal does not mean maximizing such a sum. It means a state in which it is impossible to make anyone better off without making someone else worse off.
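The distinction is easy to check mechanically. In the toy setup below (hypothetical two-agent utilities, strictly increasing in resources), *every* split of a fixed pool is Pareto-optimal, even though only one split maximizes the sum:

```python
# Sketch (hypothetical utilities): with a fixed resource pool and utilities
# strictly increasing in resources, every allocation is Pareto-optimal.
def u_monster(r):
    return 10 * r

def u_peasant(r):
    return r

TOTAL = 10
allocations = [(m, TOTAL - m) for m in range(TOTAL + 1)]

def dominates(a, b):
    # a Pareto-dominates b: at least as good for both agents, strictly better for one.
    return (u_monster(a[0]) >= u_monster(b[0])
            and u_peasant(a[1]) >= u_peasant(b[1])
            and (u_monster(a[0]) > u_monster(b[0])
                 or u_peasant(a[1]) > u_peasant(b[1])))

pareto_optimal = [a for a in allocations
                  if not any(dominates(b, a) for b in allocations)]
print(len(pareto_optimal))  # -> 11: every split is Pareto-optimal
```

Giving one agent more always takes something from the other, so no allocation dominates another; Pareto-optimality alone says nothing about favoring the monster.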

Comment author: Decius 22 August 2013 05:40:24PM 0 points

A +utility outcome for one agent is incomparable to a -utility outcome for a different agent at the object level. It is impossible to compare how much the utility monster gains from security to how much the peasant loses from lack of autonomy without taking a third viewpoint; this third viewpoint becomes the only agent at the meta-level (or, if there are multiple agents at the first meta-level, it goes up again, until there is only one agent at some level of meta).

Comment author: AlexMennen 22 August 2013 06:20:21PM *  0 points

This is true; there is no canonical way to aggregate utilities. An agent can only be a utility monster with respect to some scheme for comparing utilities between agents.
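To illustrate (with hypothetical numbers): utilities are only defined up to positive affine transformations, so the weights used to compare agents are part of the aggregation scheme, not facts about the agents. Rescaling one agent's utility flips who the "monster" is:

```python
# Sketch: monster-hood is relative to the interpersonal comparison scheme.
# The weights below encode the scheme; the utility functions are hypothetical.
def u_a(r):
    return 10 * r   # agent A, on one arbitrary scale

def u_b(r):
    return r        # agent B

TOTAL = 10

def best_split(weight_a, weight_b):
    # Weighted-sum aggregation: returns A's share under the given weights.
    return max(range(TOTAL + 1),
               key=lambda a: weight_a * u_a(a) + weight_b * u_b(TOTAL - a))

print(best_split(1, 1))    # -> 10: under equal weights, A is the "monster"
print(best_split(1, 100))  # -> 0: under different weights, B is favored instead
```

Neither weighting is canonically correct, which is the point: an agent is only a utility monster relative to a chosen scheme.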

Comment author: Decius 23 August 2013 06:59:41AM 0 points

Such a scheme is only measuring its own utility over different states of the universe; a utility monster is not a problem for such a scheme/agent, any more than preventing 3^^^3 people from being tortured for a million years at zero cost would be a problem.

Comment author: AlexMennen 23 August 2013 05:57:08PM 0 points

I'm not quite sure what you mean. If you mean that any agent that cares disproportionately about a utility monster would not regret that it cares disproportionately about a utility monster, then that is true. However, if humans propose some method of aggregating their utilities, and then they notice that in practice, their procedure disproportionately favors one of them at the expense of the others, the others would likely complain that it was not a fair aggregation. So a utility monster could be a problem.

Comment author: Decius 24 August 2013 11:23:08AM 1 point

If humans propose some method of aggregating their utilities, and later notice that following that method is non-optimal, it is because the method they proposed does not match their actual values.

That's a characteristic of the method, not of the world.

Comment author: AlexMennen 24 August 2013 04:08:40PM 0 points

That's right; being a utility monster is only with respect to an aggregation. However, the concept was invented and first talked about by people who thought there was a canonical aggregation, and as an unfortunate result, the dependency on the aggregation is typically not mentioned in the definition.

Comment author: Decius 27 August 2013 12:31:34AM 0 points

I can't resolve paradoxes that come up with regard to people who have internally inconsistent value systems. Were they afraid that the canonical aggregation was such that they personally were left out, in a manner that proved they were bad (because they preferred outcomes where they did better than they would at the global maximum of the canonical aggregation)?