Wei_Dai comments on Fairness and Geometry - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm not sure I understand your final sentence, but I suspect we may just be using different senses of the term "utility function". Insofar as I do understand you, I agree with you for utility-functions-defined-as-representations-of-preferences. It's just that I would take utility-functions-defined-in-terms-of-well-being as the relevant informational base for any discussion of fairness. Preferences are not my primitives in this respect.
In other words, conchis is taking a welfarist perspective on fairness rather than a game-theoretic one. (I'd like to once again recommend Hervé Moulin's Fair Division and Collective Welfare, which covers both of these approaches.)
In this case, the agents are self-modifying AIs. How do we measure and compare the well-being of such creatures? Do you have ideas or suggestions?
None, I'm afraid. I'm not even sure whether I'd care about their well-being even if I could conceive of what that would mean. (Maybe I would; I just don't know.)