Kaj_Sotala comments on Harsanyi's Social Aggregation Theorem and what it means for CEV - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What if we also add a requirement that the FAI doesn't make anyone worse off in expected utility compared to no FAI? That seems reasonable, but it conflicts with the other axioms. For example, suppose there are two agents: A gets 1 util if 90% of the universe is converted into paperclips and 0 utils otherwise, while B gets 1 util if 90% of the universe is converted into staples and 0 utils otherwise. Without an FAI, they'll probably end up fighting each other for control of the universe; say each has a 30% chance of success. An FAI that doesn't make either of them worse off has to strictly prefer a 50/50 lottery between the universe turning into paperclips and it turning into staples over a certain outcome of either, but a VNM-rational agent can never strictly prefer a lottery to both of its pure outcomes.
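(A minimal sketch of the arithmetic in that example, in Python; the utilities and probabilities are the ones from the comment, the function names are mine.)

```python
# No-FAI baseline: each agent fights for the universe with a 30% chance
# of winning, so each has expected utility 0.3.
baseline = (0.3, 0.3)
paperclips = (1.0, 0.0)   # (utility to A, utility to B)
staples = (0.0, 1.0)

def lottery(p, x, y):
    """Expected utilities of getting outcome x with probability p, else y."""
    return tuple(p * xi + (1 - p) * yi for xi, yi in zip(x, y))

fifty_fifty = lottery(0.5, paperclips, staples)   # (0.5, 0.5)

def nobody_worse_off(option, base):
    """True iff no agent's expected utility falls below the baseline."""
    return all(o >= b for o, b in zip(option, base))

for name, opt in [("certain paperclips", paperclips),
                  ("certain staples", staples),
                  ("50/50 lottery", fifty_fifty)]:
    print(f"{name}: {opt}, acceptable: {nobody_worse_off(opt, baseline)}")

# Only the lottery clears the baseline for both agents, so the FAI must
# strictly prefer it to both pure outcomes.  But a VNM agent values a
# lottery at the probability-weighted average of its outcomes' values,
# which can never exceed the value of the better pure outcome -- hence
# the conflict with the VNM axioms.
```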
And things get really confusing when we also consider issues of logical uncertainty and dynamic consistency.
Sounds obviously unreasonable to me. E.g., consider a situation where a person derives a large part of their utility from having kidnapped and enslaved somebody else: the kidnapper would be made worse off if their slave were freed, but the slave wouldn't be made worse off if their slavery merely continued, so...
The way I said that may have been too much of a distraction from the real problem, so let me restate it: considerations of fairness, which may arise from bargaining or simply from fairness being a terminal value for some people, can imply that the most preferred outcome lies on a flat part of the Pareto frontier of feasible expected utilities. In that case such preferences are not VNM-rational, and the result described in the OP can't be directly applied.
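(A small illustration of that point, with my own toy numbers rather than anything from the comments. With pure outcomes (1, 0) and (0, 1), the feasible expected utilities form the flat segment {(p, 1-p)}. A maximin "fairness" criterion uniquely picks the midpoint, but every linear, VNM-style weighted sum is either indifferent along the whole segment or picks an endpoint, so no such aggregation can single out (0.5, 0.5).)

```python
def argmax(points, score):
    """All points attaining the maximum of score (up to float tolerance)."""
    best = max(score(p) for p in points)
    return [p for p in points if abs(score(p) - best) < 1e-12]

# Lotteries over the two pure outcomes, parameterized by p in [0, 1];
# agent A's expected utility is p, agent B's is 1 - p.
grid = [i / 100 for i in range(101)]

fair = argmax(grid, lambda p: min(p, 1 - p))   # maximin fairness
print("maximin picks p =", fair)               # [0.5], the flat midpoint

for wA, wB in [(1, 1), (2, 1), (1, 3)]:
    linear = argmax(grid, lambda p: wA * p + wB * (1 - p))
    label = "all p (indifferent)" if len(linear) == len(grid) else linear
    print(f"linear weights ({wA}, {wB}) pick:", label)
```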