wedrifid comments on [SEQ RERUN] Living in Many Worlds - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Are you confusing UDT with AIXI? It is certainly possible for an agent to act as described, and the tricky part has nothing to do with "UDT" itself; it is the possible but difficult task of making the predictions.
The case given is sufficient. Anyone who is capable of one-boxing on Newcomb's problem will, if consistent, also cooperate with agents that cross out of the future light cone, on utility-maximisation grounds, given the payoffs described. If they either two-box or defect, then they are implementing a faulty decision algorithm.
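The payoffs in question aren't reproduced in this excerpt, so here is a minimal expected-value sketch using the standard Newcomb payoffs ($1,000,000 in the opaque box iff the predictor foresaw one-boxing, $1,000 always in the transparent box) and an assumed predictor accuracy `p`; the numbers and the function name are illustrative, not from the comment:

```python
def expected_value(action, p=0.99):
    """Expected payoff in Newcomb's problem against a predictor of accuracy p.

    Standard payoffs assumed: the opaque box contains $1,000,000 iff the
    predictor foresaw one-boxing; the transparent box always holds $1,000.
    """
    big, small = 1_000_000, 1_000
    if action == "one-box":
        # With probability p the predictor foresaw this and filled the big box.
        return p * big
    # Two-boxing: usually the predictor foresaw it and the big box is empty.
    return p * small + (1 - p) * (big + small)

# For any reasonably accurate predictor, one-boxing dominates in expectation.
assert expected_value("one-box") > expected_value("two-box")
```

The same comparison holds for any accuracy above roughly 50.05%, which is the sense in which a consistent one-boxer also cooperates in the analogous light-cone case.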
For an example that doesn't involve any potential exploitation of loved ones, see Belief in the Implied Invisible.
My understanding is that UDT requires agent A to have some prediction for what agent B will do. This is, in general, not computable. (The proof follows from Rice's theorem.)