
Qiaochu_Yuan comments on A Difficulty in the Concept of CEV - Less Wrong Discussion

5 [deleted] 27 March 2013 01:20AM




Comment author: Qiaochu_Yuan 27 March 2013 02:47:47AM * 18 points

Harsanyi's social aggregation theorem seems more relevant than Arrow.
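For reference, here is an informal sketch of what Harsanyi's theorem says (a paraphrase, not the exact statement): if each individual and the social planner all have von Neumann–Morgenstern preferences over lotteries, and society is indifferent whenever every individual is indifferent (Pareto indifference), then social utility must be an affine combination of the individual utilities:

```latex
% Harsanyi (1955), informal sketch: under vNM rationality for each
% individual i and for society, plus Pareto indifference, social
% utility is an affine combination of individual utilities:
W(x) \;=\; c + \sum_{i=1}^{n} w_i \, u_i(x)
```

where the weights \(w_i\) can be taken nonnegative under a stronger Pareto assumption. Unlike Arrow's setting, this works with cardinal (vNM) utilities rather than bare orderings, which is why it bears more directly on aggregating utility functions.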

And for anyone else who was wondering which condition in Kalai and Schmeidler's theorem fails for adding up utility functions: as far as I can tell, the answer is cardinal independence of alternatives, but the reason is unsatisfying (again, as far as I can tell). Restricting a utility function to a subset of outcomes changes the normalization used in their definition of adding up utility functions. If you're willing to bite the bullet and work with actual utility functions rather than equivalence classes of them, this won't matter to you, but then you have other issues (e.g. utility monsters).
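To make the normalization point concrete, here is a small illustrative sketch (not taken from Kalai and Schmeidler's paper; the min–max rescaling is just one common normalization convention). Each agent's utility is rescaled to [0, 1] over the currently available outcomes before summing, so dropping an outcome changes the rescaling and can flip the aggregate ranking of outcomes that remain available:

```python
# Illustrative sketch: aggregate two agents' utilities by rescaling each
# to [0, 1] over the available outcomes (one common normalization
# convention), then summing. Restricting attention to a subset of
# outcomes changes each rescaling, which can flip the aggregate ranking
# of outcomes present in both sets.

def normalize(u, outcomes):
    """Rescale a utility dict to [0, 1] over the given outcomes."""
    lo = min(u[o] for o in outcomes)
    hi = max(u[o] for o in outcomes)
    return {o: (u[o] - lo) / (hi - lo) for o in outcomes}

def aggregate(utilities, outcomes):
    """Sum of normalized utilities for each available outcome."""
    normed = [normalize(u, outcomes) for u in utilities]
    return {o: sum(n[o] for n in normed) for o in outcomes}

u1 = {"A": 0, "B": 1, "C": 0, "D": 10}  # agent 1 cares mostly about D
u2 = {"A": 1, "B": 0, "C": 5, "D": 0}   # agent 2 cares mostly about C

full = aggregate([u1, u2], ["A", "B", "C", "D"])
restricted = aggregate([u1, u2], ["A", "B", "C"])  # D removed

print(full["A"] > full["B"])              # True: A beats B over the full set
print(restricted["A"] > restricted["B"])  # False: B beats A once D is gone
```

Removing D compresses agent 1's utility range from 10 down to 1, so B jumps from 0.1 to 1.0 after renormalization and the A-versus-B comparison reverses, even though neither agent's underlying preferences changed.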

Edit: I would also like to issue a general warning against taking theorems too seriously. Theorems are very delicate creatures; often if their assumptions are relaxed even slightly they totally fall apart. They aren't necessarily well-suited for reasoning about what to do in the real world (for example, I don't think the Aumann agreement theorem is all that relevant to humans).

Comment author: khafra 27 March 2013 04:05:53PM 0 points

I would also like to issue a general warning against taking theorems too seriously. Theorems are very delicate creatures; often if their assumptions are relaxed even slightly they totally fall apart.

Are the criteria for antifragility formal enough that there could be a list of antifragile theorems?

Comment author: AlexMennen 27 March 2013 05:52:09PM 3 points

No. The fragility is in humans' ability to misinterpret theorems, not in the theorems themselves, and humans are complex enough that I highly doubt that you'd be able to come up with a useful list of criteria that could guarantee that no human would ever misinterpret a theorem.