MBlume comments on Counterfactual Mugging - Less Wrong

52 Post author: Vladimir_Nesov 19 March 2009 06:08AM


Comment author: kurige 19 March 2009 10:34:18AM *  5 points [-]

That's not the situation in question. The scenario laid out by Vladimir_Nesov does not allow for an equal probability of getting $10000 and paying $100. Omega has already flipped the coin, and it's already been decided that I'm on the "losing" side. Join that with the fact that my giving $100 now does not increase the chance of my getting $10000 in the future, because there is no repetition.

Perhaps there's something fundamental I'm missing here, but the linearity of events seems pretty clear. If Omega really did calculate that I would give him the $100 then either he miscalculated, or this situation cannot actually occur.

-- EDIT --

There is a third possibility after reading Cameron's reply... If Omega is correct and honest, then I am indeed going to give up the money.

But it's a bit of a trick question, isn't it? I'm going to give up the money because Omega says I'm going to give up the money, and everything Omega says is gospel truth. However, if Omega hadn't said that I would give up the money, then I wouldn't have given up the money. Which makes this a bit of an impossible situation.

Assuming the existence of Omega, his intelligence, and his honesty, this scenario is an impossibility.

Comment author: MBlume 19 March 2009 10:52:53AM 15 points [-]

I feel like a man in an Escher painting, with all these recursive hypothetical mes, hypothetical kuriges, and hypothetical omegas.

I'm saying, go ahead and start by imagining a situation like the one in the problem, except it's all happening in the future -- you don't yet know how the coin will land.

You would want to decide in advance that if the coin came up against you, you would cough up $100.

The ability to precommit in this way gives you an advantage. It gives you half a chance at $10000 you would not otherwise have had.

So it's a shame that in the problem as stated, you don't get to precommit.

But the fact that you don't get advance knowledge shouldn't change anything. You can just decide for yourself, right now, to follow this simple rule:

If there is an action to which my past self would have precommitted, given perfect knowledge and my current preferences, I will take that action.

By adopting this rule, in any problem in which the opportunity to precommit would have given you an advantage, you wind up gaining that advantage anyway.
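The advantage of the pay-up policy can be made concrete with a little arithmetic. The sketch below (not from the thread; the payoff numbers are the $10000/$100 stakes stated in the problem) computes the expected value of each policy at the moment the fair coin is flipped, before the outcome is known, given that Omega pays out on heads only if it predicts you would pay on tails:

```python
def expected_value(pays_on_tails: bool) -> float:
    """Expected winnings of a policy, evaluated before the coin flip.

    Omega pays $10000 on heads iff it predicts the agent would hand
    over $100 on tails; on tails the agent pays $100 iff it follows
    the pay-up policy.
    """
    p_heads = 0.5
    payoff_heads = 10000 if pays_on_tails else 0  # Omega's prediction tracks the policy
    payoff_tails = -100 if pays_on_tails else 0
    return p_heads * payoff_heads + (1 - p_heads) * payoff_tails

print(expected_value(True))   # pay-up policy:  4950.0
print(expected_value(False))  # refuse policy:  0.0
```

So the agent who would pay is ahead by $4950 in expectation, which is the sense in which the rule "gains the advantage of precommitment anyway."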

Comment deleted 19 March 2009 11:07:35AM [-]
Comment author: MBlume 19 March 2009 11:19:44AM *  14 points [-]

I'm actually not quite satisfied with it. Probability is in the mind, which makes it difficult to know what I mean by "perfect knowledge". Perfect knowledge would mean I also knew in advance that the coin would come up tails.

I know giving up the $100 is right, I'm just having a hard time figuring out what worlds the agent is summing over, and by what rules.

ETA: I think "if there was a true fact which my past self could have learned, which would have caused him to precommit etc." should do the trick. Gonna have to sleep on that.

ETA2: "What would you do in situation X?" and "What would you like to pre-commit to doing, should you ever encounter situation X?" should, to a rational agent, be one and the same question.

Comment author: conchis 19 March 2009 10:24:56PM *  7 points [-]

"Perfect knowledge would mean I also knew in advance that the coin would come up tails."

This seems crucial to me.

Given what I know when asked to hand over the $100, I would want to have pre-committed to not pre-committing to hand over the $100 if offered the original bet.

Given what I would know if I were offered the bet before discovering the outcome of the flip I would wish to pre-commit to handing it over.

From which information set should I evaluate this? The information set I am actually at seems the most natural choice, and it also seems to be the one that WINS (at least in this world).

What am I missing?

Comment author: fractalman 21 July 2013 04:10:56AM -2 points [-]

I'll give you the quick and dirty patch for dealing with Omega: there is no way to know that, at that moment, you are not inside his simulation. By giving him the $100, there is a chance you are transferring that money from within a simulation (which is about to be terminated) to outside the simulation, with a nice big multiplier.

Comment author: Vladimir_Nesov 19 March 2009 04:52:34PM *  6 points [-]

MBlume:

"What would you do in situation X?" and "What would you like to pre-commit to doing, should you ever encounter situation X?" should, to a rational agent, be one and the same question.

This phrasing sounds about right. Whatever decision-making algorithm produces your decision D when it's in situation X should also arrive at the same conditional decision before situation X appears: "if X, then D". If you actually don't give away $100 in situation X, you should also plan not to give away $100 in case of X, before (or irrespective of whether) X happens. Whichever decision is the right one, there should be no inconsistency of this form. This grows harder if you must preserve the whole preference order.

Comment author: Eliezer_Yudkowsky 19 March 2009 07:51:42PM 8 points [-]

ETA2: "What would you do in situation X?" and "What would you like to pre-commit to doing, should you ever encounter situation X?" should, to a rational agent, be one and the same question.

...and that's an even better way of putting it.

Comment author: The_Duck 23 August 2012 09:11:22PM *  0 points [-]

"What would you do in situation X?" and "What would you like to pre-commit to doing, should you ever encounter situation X?" should, to a rational agent, be one and the same question.

Not if precommitting potentially has other negative consequences. As Caspian suggested elsewhere in the thread, you should also consider the possibility that the universe contains No-megas who punish people who would cooperate with Omega.

Comment author: MBlume 23 August 2012 10:11:45PM 1 point [-]

...why should you also consider that possibility?

Comment author: The_Duck 23 August 2012 11:17:00PM *  5 points [-]

Because if that possibility exists, you should not necessarily precommit to cooperate with Omega, since that risks being punished by No-mega. In a universe of No-megas, precommitting to cooperate with Omega loses. This seems to me to create a distinction between the questions "what would you do upon encountering Omega?" and "what will you now precommit to doing upon encountering Omega?"

I suppose my real objection is that some people seem to have concluded in this thread that the correct thing to do is to, in advance, make some blanket precommitment to do the equivalent of cooperating with Omega should they ever find themselves in any similar problem. But I feel like these people have implicitly made some assumptions about what kind of Omega-like entities they are likely to encounter: for instance that they are much more likely to encounter Omega than No-mega.
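That dependence on priors can be illustrated numerically. The sketch below is hypothetical: the thread never specifies No-mega's penalty, so the $10000 fine and the probabilities of meeting each entity are invented for illustration only.

```python
def policy_value(p_omega: float, cooperate: bool) -> float:
    """Expected value of the 'would cooperate with Omega' policy,
    given the probability of meeting Omega rather than No-mega.

    Assumptions (not from the thread): exactly one entity is met,
    and No-mega fines would-be cooperators $10000.
    """
    omega_ev = 0.5 * 10000 - 0.5 * 100 if cooperate else 0.0  # $4950, as in the thread
    nomega_ev = -10000.0 if cooperate else 0.0                # assumed penalty
    return p_omega * omega_ev + (1 - p_omega) * nomega_ev

print(policy_value(0.9, True))  # mostly Omegas: cooperating wins in expectation
print(policy_value(0.1, True))  # mostly No-megas: cooperating loses in expectation
```

Under these invented numbers, cooperating is the better policy only when the chance of meeting Omega is high enough, which is The_Duck's point about the hidden assumption.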

Comment author: pengvado 24 August 2012 02:01:19AM 1 point [-]

But No-mega also punishes people who didn't precommit but would have chosen to cooperate after meeting Omega. If you think No-mega is more likely than Omega, then you shouldn't be that kind of person either. So it still doesn't distinguish between the two questions.

Comment author: fractalman 21 July 2013 04:07:39AM -1 points [-]

"Perfect knowledge"

Use a quantum coin: it conveniently comes up both ways.