conchis comments on Fairness and Geometry - Less Wrong

Post author: cousin_it 22 July 2009 10:44AM

Comment author: conchis 22 July 2009 08:58:21PM 0 points

> The point was that applying a positive affine transformation to utility doesn't change preferences, and so shouldn't change the fair decision.

I get that (although my NB assumed that we were talking about preferences over certain outcomes rather than over lotteries). My point is that this doesn't follow, because fairness can depend on things that needn't affect preferences, like the fact that one player is already incredibly well off.
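To make this concrete, here's a minimal sketch in Python (the utility functions, lotteries, and numbers are invented for illustration, not taken from the thread): an expected-utility maximizer's choices are unchanged by a positive affine transformation, but any judgement that reads off absolute utility levels is not.

```python
# Minimal sketch: choices under expected utility are invariant to positive
# affine transformations u -> a*u + b (a > 0), but comparisons of absolute
# utility levels across representations are not. All numbers are made up.

def best_option(utility, lotteries):
    """Pick the lottery with the highest expected utility.
    A lottery is a list of (probability, outcome) pairs."""
    return max(lotteries, key=lambda lot: sum(p * utility(x) for p, x in lot))

u = lambda x: x            # one representation of the preferences
v = lambda x: 3 * x + 100  # a positive affine transform: same preferences

lotteries = [
    [(1.0, 5.0)],                  # 5 for sure
    [(0.5, 0.0), (0.5, 12.0)],     # fair coin between 0 and 12
]

# Same choice under either representation (expected utilities 5 vs 6, 115 vs 118):
assert best_option(u, lotteries) == best_option(v, lotteries)

# But "how well off is this player?" has no representation-independent answer:
print(u(5.0))  # 5.0
print(v(5.0))  # 115.0 -- same situation, arbitrarily different "level"
```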

Comment author: Vladimir_Nesov 22 July 2009 09:05:56PM 0 points

A utility function isn't even real: it's a point in an equivalence class, and you only ever see the equivalence class. The choice of a particular point should affect the decisions no more than the epiphenomenal consciousness of Searle should affect how the meathead Searle writes his consciousness papers, or than a hidden absolute time should affect the timeless dynamic. The state of the world is a different matter entirely. Only if your preferences for some reason include a term about the specific form of the utility function engraved in your mind should this arbitrary factor matter (but then it won't be exactly about the utility function).

Comment author: ArthurB 23 July 2009 02:23:49PM 0 points

The equivalence class of a utility function should be the set of monotone transformations of a canonical element.

However, what the von Neumann–Morgenstern theorem shows, under mild assumptions, is that within each such class there is a subset of utility functions, generated by the positive affine transforms of a single canonical element, for which you can make decisions by computing expected utility. Therefore, looking at the set of all positive affine transforms of such a utility function really is the same as looking at the whole class. Still, it doesn't make utility commensurable across agents.
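To illustrate (a minimal Python sketch; the transforms, lotteries, and numbers are invented): every monotone transform agrees with the canonical element on sure outcomes, but only the positive affine ones rank lotteries the same way under expected utility.

```python
# Sketch of the vNM point: monotone transforms of a utility function agree
# on sure outcomes, but only positive affine transforms preserve the
# ranking of lotteries under expected-utility maximization.

def expected_utility(utility, lottery):
    return sum(p * utility(x) for p, x in lottery)

u      = lambda x: x            # canonical element of the class
affine = lambda x: 2 * x + 7    # positive affine transform
mono   = lambda x: x ** 3       # monotone, but not affine

sure_thing = [(1.0, 5.0)]              # 5 with certainty
gamble     = [(0.5, 0.0), (0.5, 9.0)]  # expected outcome 4.5

# u and its affine transform rank the lotteries identically:
for f in (u, affine):
    assert expected_utility(f, sure_thing) > expected_utility(f, gamble)

# The merely monotone transform reverses the ranking (125 < 364.5),
# even though it induces the same ordering over sure outcomes:
assert expected_utility(mono, sure_thing) < expected_utility(mono, gamble)
```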

Comment author: conchis 22 July 2009 10:10:40PM 0 points

I'm not sure I understand your final sentence, but I suspect we may just be using different senses of the word utility function. Insofar as I do understand you, I agree with you for utility-functions-defined-as-representations-of-preferences. It's just that I would take utility-functions-defined-in-terms-of-well-being as the relevant informational base for any discussion of fairness. Preferences are not my primitives in this respect.

Comment author: Vladimir_Nesov 22 July 2009 10:27:52PM 2 points

Consider another agent you're cooperating with purely instrumentally: not valued in itself, but only as a means of achieving goals of yours that lie elsewhere. In such an agent, you're interested only in behavior. Preference is a specification of behavior: it says what the agent does in each given state of knowledge (under the simplifying assumption that the optimal action is always selected). How this preference is represented in the agent's mind is irrelevant, since it doesn't influence the agent's behavior, and so can't matter for how you select a cooperative play with that agent.

Comment author: conchis 22 July 2009 10:39:10PM 0 points

> How this preference is represented in the agent's mind is irrelevant, since it doesn't influence the agent's behavior, and so can't matter for how you select a cooperative play with that agent.

Agreed. Which I think brings us back to it not really being about fairness.

Comment author: Wei_Dai 22 July 2009 10:29:48PM 1 point

In other words, conchis is taking a welfarist perspective on fairness, instead of a game-theoretic one. (I'd like to once again recommend Hervé Moulin's Fair Division and Collective Welfare, which covers both approaches.)
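To illustrate the contrast, a minimal Python sketch (the two-option bargaining problem and all payoffs are invented, not taken from Moulin): the Nash bargaining rule depends only on each player's preferences, so independently rescaling a player's utility changes nothing, whereas an egalitarian (welfarist) rule compares levels across players and can flip.

```python
# Sketch: a game-theoretic rule (Nash bargaining) is invariant to rescaling
# either player's utility; a welfarist rule (egalitarian/maximin) is not.
# The feasible payoffs below are invented for illustration.

def nash_pick(options, disagreement=(0.0, 0.0)):
    """Maximize the product of gains over the disagreement point."""
    d1, d2 = disagreement
    return max(options, key=lambda o: (o[0] - d1) * (o[1] - d2))

def egalitarian_pick(options):
    """Maximize the worst-off player's utility level."""
    return max(options, key=min)

options = [(3.0, 5.0), (4.0, 2.0)]  # feasible (u1, u2) pairs

print(nash_pick(options))         # (3.0, 5.0): product 15 > 8
print(egalitarian_pick(options))  # (3.0, 5.0): min 3 > 2

# Rescale player 2's utility by 10 -- same preferences, same game:
scaled = [(u1, 10.0 * u2) for u1, u2 in options]

print(nash_pick(scaled))          # (3.0, 50.0): same option as before
print(egalitarian_pick(scaled))   # (4.0, 20.0): the welfarist verdict flips
```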

In this case, the agents are self-modifying AIs. How do we measure and compare the well-being of such creatures? Do you have ideas or suggestions?

Comment author: conchis 22 July 2009 10:42:19PM 0 points

> How do we measure and compare the well-being of such creatures? Do you have ideas or suggestions?

None, I'm afraid. I'm not even sure whether I'd care about their well-being even if I could conceive of what that would mean. (Maybe I would; I just don't know.)