Vladimir_Nesov comments on Explicit Optimization of Global Strategy (Fixing a Bug in UDT1) - Less Wrong

Post author: Wei_Dai 19 February 2010 01:30AM


Comment author: Vladimir_Nesov 19 February 2010 07:58:24PM

What about when there are agents with different source codes and different preferences? The result here suggests that one of our big unsolved problems, that of generally deriving a "good and fair" global outcome from agents optimizing their own preferences while taking logical correlations into consideration, may be unsolvable: consideration of logical correlations does not seem powerful enough to always obtain a "good and fair" global outcome even in the single-player case.

I don't understand this statement. What do you mean by "logical correlations", and how does this post demonstrate that they are insufficient for getting the right solution?