
cousin_it comments on Harsanyi's Social Aggregation Theorem and what it means for CEV - Less Wrong Discussion

Post author: AlexMennen, 05 January 2013 09:38PM (21 points)




Comment author: cousin_it 08 January 2013 05:57:23PM 2 points

Looking over my old emails, it seems that my email on Jan 21, 2011 proposed a solution to this problem. Namely, if the agents can agree on a point on the Pareto frontier given their current state of knowledge (e.g. the point where agent A and agent B each have a 50% probability of winning), then they can agree on a procedure (possibly involving coinflips) whose result is guaranteed to be a Bayesian-rational merged agent, and the procedure yields the specified expected utilities to all agents given their current state of knowledge. You didn't reply to that email, though, so I guess you found it unsatisfactory in some way...
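The procedure described above can be sketched concretely. This is a minimal, hypothetical illustration (the utilities, function names, and the two-outcome setup are assumptions for the example, not from the thread): two agents with opposed preferences agree on the Pareto point (0.5, 0.5), fix the coin's bias accordingly, and the flip selects a single fixed utility function, so the merged agent is VNM-rational after the flip, while each agent's ex-ante expected utility matches the agreed point.

```python
import random

# Hypothetical setup: two mutually exclusive outcomes, one per agent.
utility = {
    "A": {"A_wins": 1.0, "B_wins": 0.0},
    "B": {"A_wins": 0.0, "B_wins": 1.0},
}

def merge_by_coinflip(p_a=0.5, rng=random):
    """Flip a (possibly biased) coin and return the utility function the
    merged agent will maximize.  The agreed Pareto point (p_a, 1 - p_a)
    fixes the bias; after the flip the merged agent has a single fixed
    utility function, hence is VNM-rational."""
    winner = "A" if rng.random() < p_a else "B"
    return utility[winner]

def expected_utilities(p_a=0.5):
    """Each agent's ex-ante expected utility under the coin-flip procedure,
    given their current state of knowledge."""
    return {
        "A": p_a * utility["A"]["A_wins"] + (1 - p_a) * utility["A"]["B_wins"],
        "B": p_a * utility["B"]["A_wins"] + (1 - p_a) * utility["B"]["B_wins"],
    }
```

With `p_a = 0.5` this yields expected utilities of 0.5 to each agent, the agreed point on the Pareto frontier; other biases reach other points on the frontier.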

Comment author: Wei_Dai 08 January 2013 08:20:28PM 1 point

I must not have been paying attention to the decision theory mailing list at that time. Thinking it over now, I think it technically works, but it doesn't seem very satisfying, because the individual agents jointly have non-VNM preferences, and are having to do all the work of picking out a specific mixed strategy/outcome. They're then using a coin flip + VNM AI just to reach that specific outcome, without the VNM AI actually embodying their joint preferences.

To put it another way, if your preferences can only be implemented by picking a VNM AI based on a coin flip, then your preferences are not VNM-rational. The fact that any point on the Pareto frontier can be reached by a coin flip + VNM AI seems more like a distraction from figuring out how to get an AI to correctly embody such preferences.
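The claim that such joint preferences are non-VNM can be checked directly (a hypothetical numeric illustration, not from the thread): a VNM agent values a lottery at the probability-weighted mixture of its outcome utilities, so no single VNM utility function can strictly prefer the 50/50 coin-flip lottery to *both* of its pure outcomes. Preferences that insist on the coin flip itself therefore cannot be represented by any one VNM utility function.

```python
# For any VNM utility function, the utility of a lottery is the mixture
# of the outcome utilities, so it always lies between them -- it can
# never strictly exceed both.  (Utility values below are hypothetical.)
def lottery_utility(u_a, u_b, p=0.5):
    """Utility a VNM agent assigns to a p/(1-p) lottery over two outcomes."""
    return p * u_a + (1 - p) * u_b

# The mixture is bounded by the better pure outcome in every case.
for u_a, u_b in [(1.0, 0.0), (0.3, 0.9), (0.6, 0.6)]:
    assert lottery_utility(u_a, u_b) <= max(u_a, u_b)
```

This is exactly why the group's preference for the fair coin flip over either agent winning outright cannot be embodied by the post-flip VNM AI: that AI's utility function ranks one of the pure outcomes at least as high as the lottery.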