AlexMennen comments on Harsanyi's Social Aggregation Theorem and what it means for CEV - Less Wrong Discussion

Post author: AlexMennen 05 January 2013 09:38PM

Comment author: Wei_Dai 07 January 2013 09:12:10AM 3 points

What if we also add a requirement that the FAI doesn't make anyone worse off in expected utility compared to no FAI? That seems reasonable, but conflicts with the other axioms. For example, suppose there are two agents: A gets 1 util if 90% of the universe is converted into paperclips, 0 utils otherwise, and B gets 1 util if 90% of the universe is converted into staples, 0 utils otherwise. Without an FAI, they'll probably end up fighting each other for control of the universe, and let's say each has 30% chance of success. An FAI that doesn't make one of them worse off has to prefer a 50/50 lottery of the universe turning into either paperclips or staples to a certain outcome of either: either certain outcome gives one agent 0 expected utils, below their 0.3 baseline, while the lottery gives each 0.5. But a VNM-rational agent can never strictly prefer a lottery to both of its pure outcomes, since the lottery's utility is a weighted average of theirs.

And things get really confusing when we also consider issues of logical uncertainty and dynamic consistency.

Comment author: AlexMennen 07 January 2013 07:26:20PM 1 point

I expect that with actual people, in practice, the FAI would leave no one worse off. But I wouldn't want to hardwire that requirement into the FAI, because then its behavior would be too dependent on the status quo.