Eliezer_Yudkowsky comments on Counterfactual Mugging - Less Wrong

52 Post author: Vladimir_Nesov 19 March 2009 06:08AM


Comment author: Eliezer_Yudkowsky 19 March 2009 07:32:03PM 8 points [-]

If you have two agents trying to precommit not to be blackmailed by each other / precommit not to pay attention to the other's precommitment, then any attempt to take a limit of this Newcomblike problem does depend on how you approach the limit. (I don't know how to solve this problem.)
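The path dependence here can be made concrete with a toy example (my own illustration, not from the comment): suppose some outcome of the standoff depends on both agents' precommitment or predictive strengths, a and b. Letting both go to infinity gives different answers depending on how fast each grows relative to the other.

```python
def outcome(a, b):
    """Hypothetical outcome as a function of the two agents' strengths;
    here, the share of the stakes the first agent extracts."""
    return a / (a + b)

# Path 1: both strengths grow together -> limit is 0.5.
balanced = outcome(1e6, 1e6)

# Path 2: the first agent's strength grows much faster -> limit is 1.0.
lopsided = outcome(1e12, 1e6)

print(balanced)   # ~0.5
print(lopsided)   # ~1.0
```

Since the two paths converge to different values, the "limit as both agents become ideal" is simply not well-defined for this function, which is one way to read the difficulty Eliezer points at.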

Comment author: SoullessAutomaton 20 March 2009 02:17:23AM 3 points [-]

The value for which the limit is being taken here is unidirectional predictive power, which is loosely a function of the difference in intelligence between the two agents. Intuitively, I think a case could be made that (assuming ideal rationality) the total accuracy of mutual behavior prediction between two agents is conserved in some fashion: doubling the predictive power of one would roughly halve the predictive power of the other. Omega represents an entity with a delta-g so large vs. us that predictive power is essentially completely one-sided.
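The conservation intuition above can be sketched as a toy model (my own illustration; the conserved-product form and the base-2 scaling are assumptions chosen to match "doubling one halves the other"):

```python
TOTAL = 1.0  # assumed conserved product of mutual predictive powers

def predictive_powers(delta_g):
    """Split a conserved product of predictive power according to the
    intelligence gap delta_g (positive values favor agent A)."""
    p_a = 2.0 ** delta_g   # A's power over B doubles per unit of gap...
    p_b = TOTAL / p_a      # ...so B's power over A halves to compensate
    return p_a, p_b

# As delta_g grows, prediction becomes essentially one-sided (the Omega case):
for gap in (0, 1, 5, 20):
    p_a, p_b = predictive_powers(gap)
    print(gap, p_a, p_b)
```

At gap 0 the powers are symmetric; by gap 20 the weaker agent's predictive power is below one part in a million, which is the regime the comment identifies with Omega.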

On that basis, allowing the unidirectional predictive power of both agents to go to infinity is probably inherently ill-defined, and there's no reason to expect the problem to have a solution.