datadataeverywhere comments on You're in Newcomb's Box - Less Wrong

40 Post author: HonoreDB 05 February 2011 08:46PM


Comment author: datadataeverywhere 01 February 2011 04:56:25PM 2 points

Others in this thread have pointed this out, but I will try to articulate my point a little more clearly.

Decision theories that require us to one-box do so because we have incomplete information about the environment. We might be in a universe where Omega thinks that we'll one-box; if we think that Omega is nearly infallible, we increase this probability by choosing to one-box. Note that probability is about our own information, not about the universe. We're not modifying the universe, we're refining our estimates.
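That estimate-refining can be made concrete with a toy expected-utility calculation. This is a minimal sketch of my own, not part of the original argument: the $1,000/$1,000,000 payoffs are the standard ones, and `p` stands for our credence that Omega's prediction matches our actual choice.

```python
# Toy expected-utility comparison for standard (opaque) Newcomb's problem.
# p = our credence that Omega's prediction matches our actual choice.

def eu_one_box(p, big=1_000_000):
    # With credence p, Omega predicted one-boxing and filled box B.
    return p * big

def eu_two_box(p, big=1_000_000, small=1_000):
    # With credence p, Omega predicted two-boxing and left B empty;
    # box A's small payout is collected either way.
    return (1 - p) * big + small

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box {eu_one_box(p):,.0f}, two-box {eu_two_box(p):,.0f}")
```

On this reading, one-boxing wins exactly when p exceeds roughly 0.5005: the better we think Omega's information is, the more our own choice tells us about which universe we are in.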

If the box is transparent, and we can see the money, we simply don't care what Omega says. As long as we trust that the bottom won't fall out (or any number of other possibilities), we can make our decision because our information (about which universe we are in) is not incomplete.

Likewise, our information about whether we exist is not incomplete; we can't change it by choosing to go against the genes that got us here.

For situations where our knowledge is incomplete, we actually can derive information (about what kind of a world we inhabit) from our desires, but it is evidence, not certainty, and certainly not acausal negotiation. We can easily have evidence that outweighs this relatively meager data.

Comment author: Vladimir_Nesov 04 February 2011 01:36:34PM 2 points

If the box is transparent, and we can see the money, we simply don't care what Omega says. As long as we trust that the bottom won't fall out (or any number of other possibilities), we can make our decision because our information (about which universe we are in) is not incomplete.

In transparent Newcomb's, you're uncertain about the probability of what you've observed, even if not about its utility. You need Omega to make this probability what you prefer.

Comment author: datadataeverywhere 04 February 2011 04:04:53PM *  0 points

Is this a MWI concern? I have observed the money with probability 1. There is no probability distribution. The expected long-run frequency distribution of seeing that money is still unknown, but I don't expect this experiment to be repeated, so that's an abstract concern.

Again, if I have reason to believe that (with reasonable probability) I'm being simulated and won't get to experience the utility of that money (unless I one-box), my decision matrix changes, but then I'm back to having incomplete information.

Likewise, perhaps pre-committing to one-box before you see the money makes sense given the usual setup. But if you can break your commitment once the money is already there, that's the right choice (even though it means Omega failed). If you can't, then too bad, but can't != shouldn't.

Under what circumstances would you one-box if you were certain that this was the only trial you would experience, the money was visible under both boxes, and your decision will not impact the amount of money available to any other agent in any other trial?

Comment author: Vladimir_Nesov 04 February 2011 05:42:59PM 1 point

Is this a MWI concern? I have observed the money with probability 1. There is no probability distribution.

No, it's a UDT concern. What you've observed is merely one event among other possibilities, and you should maximize expected utility over all these possibilities.

Comment author: datadataeverywhere 04 February 2011 06:32:58PM 0 points

I'm really not trying to be obtuse, but I still don't understand. The other possibilities don't exist. If my actions don't affect the environment that other agents (including my future or other selves) experience, then I should maximize my utility. If, by construction, my actions have the potential of impacting other agents, then yes, I should take that under consideration, and if my algorithm before I see the money needs to decide to one-box in order for the money to be there in the first place, then that is also relevant.

I'm afraid you'll need to be a little more explicit in describing why I shouldn't two-box if I can be sure that doing so will not impact any other agents.

I probably don't need to harp back on this, but the only other reason I can see is that Omega is infallible and wouldn't have put the money in B if we were also going to take A. If we two-box, then there is a paradox; decision theories needn't and can't deal with paradoxes since they don't exist. Either Omega is fallible or B is empty or we will one-box. If Omega is probabilistic, it is still in our best interest to decide to one-box beforehand, but if we can get away with taking both, we should (it is more important to commit to one-boxing than it is to be able to break that commitment, but the logic still stands).
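The dominance point in that last step can be put in a few lines. This is my own illustration, with `q` standing for our credence that box B was filled once the prediction is already fixed:

```python
# Once Omega's prediction is made, box B is filled with some probability q
# that no longer depends on the choice made now. Taking both boxes then
# strictly dominates: it adds box A's $1,000 whatever B contains.

def ev_after_prediction(q, take_both, big=1_000_000, small=1_000):
    return q * big + (small if take_both else 0)

q = 0.99  # credence that B was filled, fixed before we choose
assert ev_after_prediction(q, True) == ev_after_prediction(q, False) + 1_000
```

The catch, of course, is the point above: if Omega is accurate, agents who reason this way mostly find themselves facing an empty B, which is why committing beforehand matters.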

That is, if given the opportunity to permanently self-modify to exclusively one-box, I would. But if I appear out of nowhere, and Omega shows me the money but assures me I have already permanently self-modified to one-box, I will take both boxes if it turns out that Omega is wrong (and there are no other consequences to me or other agents).

Comment author: Vladimir_Nesov 04 February 2011 06:58:24PM 1 point

I'm really not trying to be obtuse, but I still don't understand. The other possibilities don't exist.

Doesn't matter. See Counterfactual Mugging.
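For readers unfamiliar with it, the trade-off in Counterfactual Mugging can be sketched with its usual stakes ($100 demanded on tails, $10,000 counterfactually paid on heads); the numbers are the standard illustration, not anything from this thread:

```python
# Counterfactual Mugging: Omega flips a fair coin. On tails it asks you
# for $100; on heads it pays $10,000 iff it predicts you would have paid
# on tails. Scored before the flip, the paying policy wins; scored after
# seeing tails, paying is a pure $100 loss.

def ev_policy(pay_on_tails, prize=10_000, cost=100):
    heads = prize if pay_on_tails else 0   # counterfactual reward
    tails = -cost if pay_on_tails else 0   # actual payment
    return 0.5 * heads + 0.5 * tails

print(ev_policy(True))   # 4950.0
print(ev_policy(False))  # 0.0
```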

Comment author: ArisKatsaris 04 February 2011 10:25:34PM 1 point

If this problem is to be seen as equivalent to the counterfactual mugging, then that's evidence against the logic espoused by the counterfactual mugging argument.

I'm far, FAR from certain they're equivalent, mind you -- one point of difference is that I can choose to commit to honoring all favourable bets, even ones made without my specific consent, but there's no point in committing to honor my own non-existence, as there's no alternative me who would honor it likewise.

At some point we must see lunacy for what it is. Achilles can outrun the tortoise; if someone logically proves he can't, then it's the logic used that's wrong, not the reality.