SoullessAutomaton comments on Counterfactual Mugging - Less Wrong

52 Post author: Vladimir_Nesov 19 March 2009 06:08AM

Comment author: SoullessAutomaton 19 March 2009 11:27:40AM 15 points

In fact, Newcomb-like problems fall naturally out of any ability to simulate and predict the actions of other agents. Omega as described is essentially the limit as predictive power goes to infinity.

Comment deleted 19 March 2009 01:42:59PM
Comment author: pengvado 19 March 2009 03:43:30PM 6 points

If we define an imperfect predictor as a perfect predictor plus noise, i.e. one that produces the correct prediction with probability p regardless of the cognition algorithm it's trying to predict, then Newcomb-like problems are very robust to imperfect prediction: for any p > .5 there is some payoff ratio great enough to preserve the paradox, and the required ratio goes down as the prediction improves. For example, if 1-boxing gets 100 utilons and 2-boxing gets 1 utilon, then the predictor only needs to be more than 50.5% accurate. So the limit in that direction favors 1-boxing.
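The 50.5% break-even figure can be checked with a quick sketch, using the payoffs assumed in the comment (100 utilons for the big box, 1 extra utilon for also taking the small box):

```python
# Expected utility in a Newcomb-like problem with a noisy predictor
# that is correct with probability p regardless of your algorithm.
# Payoff numbers are the ones assumed in the comment above.

def ev_one_box(p, big=100):
    # The big box is full exactly when the predictor is right.
    return p * big

def ev_two_box(p, big=100, small=1):
    # The big box is full only when the predictor is wrong (prob 1 - p),
    # and the small box is always gained.
    return (1 - p) * big + small

def break_even(big=100, small=1):
    # Solve p*big = (1-p)*big + small for p.
    return (big + small) / (2 * big)

print(break_even())                          # 0.505 -> needs > 50.5% accuracy
print(ev_one_box(0.51) > ev_two_box(0.51))   # True: 51.0 vs 50.0
```

Note that the break-even accuracy (big + small) / (2 * big) approaches 0.5 as the payoff ratio grows, matching the claim that the required ratio shrinks as prediction improves.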

What other direction could there be? If the prediction accuracy depends on the algorithm-to-be-predicted (as it would in the real world), then you could try to be an algorithm that is mispredicted in your favor... but a misprediction in your favor can only occur if you actually 2-box, so it only takes a modicum of accuracy before a 1-boxer who tries to be predictable is better off than a 2-boxer who tries to be unpredictable.
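To illustrate with made-up numbers (same 100/1 payoffs as above; accuracies are hypothetical), even a 2-boxer who drives the predictor all the way down to coin-flip accuracy caps their expected payoff, and a predictable 1-boxer needs only slightly better than 51% accuracy to beat that ceiling:

```python
# Payoffs as assumed above: 100 utilons if the big box is full,
# 1 more for also taking the small box.
BIG, SMALL = 100, 1

def ev_one_box(p):
    return p * BIG                   # paid when the predictor is right

def ev_two_box(p):
    return (1 - p) * BIG + SMALL     # big payoff only on a misprediction

# Best case for an evasive 2-boxer: accuracy driven to a coin flip.
two_box_ceiling = ev_two_box(0.5)    # 51.0

# Accuracy a cooperatively predictable 1-boxer needs to beat that ceiling:
needed = two_box_ceiling / BIG       # 0.51 -- "a modicum of accuracy"
assert ev_one_box(needed + 0.01) > two_box_ceiling
```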

I can't see any other way for the limit to turn out.

Comment author: Eliezer_Yudkowsky 19 March 2009 07:32:03PM 8 points

If you have two agents trying to precommit not to be blackmailed by each other / precommit not to pay attention to the other's precommitment, then any attempt to take a limit of this Newcomblike problem does depend on how you approach the limit. (I don't know how to solve this problem.)

Comment author: SoullessAutomaton 20 March 2009 02:17:23AM 3 points

The value for which the limit is being taken here is unidirectional predictive power, which is loosely a function of the difference in intelligence between the two agents. Intuitively, I think a case could be made that (assuming ideal rationality) the total accuracy of mutual behavior prediction between two agents is conserved in some fashion: doubling the predictive power of one would roughly halve the predictive power of the other. Omega represents an entity whose delta-g relative to us is so large that predictive power is essentially completely one-sided.

On that basis, allowing the unidirectional predictive power of both agents to go to infinity simultaneously is probably inherently ill-defined, and there's no reason to expect the problem to have a solution.