
The paradox is designed to give your decision the practical effect of causing Box B to contain the money or not, without actually labeling this effect "causation." But I think that if Box B acts as though its contents are caused by your choice, then you should treat it as though they were. So I don't think the puzzle is really something deep; rather, it is a word game about what it means to cause something.

Perhaps it would be useful to think about how Omega might be doing its prediction. For example, it might have the ability to travel into the future and observe your action before it happens. In this case, what you do directly affects what the box contains, and the problem's claim that your choice won't affect the contents of the box is simply wrong.

Or maybe it has a copy of the entire state of your brain, and can simulate you in a software sandbox inside its own mind long enough to see what you will do. In this case, if you are the copy in the sandbox, it makes sense to think of the box as neither empty nor full until you've made your choice. If you aren't the copy in the sandbox, you'd be better off choosing both boxes, but the way the problem is set up, you can't tell which you are. You can still try to maximize future wealth, and my arithmetic says that choosing Box B alone is the best strategy in this case. (Mixed strategies, where you hope the sandbox version of yourself will randomly choose Box B alone while the outside one chooses both, are dominated by always choosing Box B. I'm also assuming that if you are in the sandbox, you want to maximize the wealth of the outside agent; this seems reasonable, since there is nothing else for the sandboxed copy to care about, but perhaps someone will disagree.)
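Here's the arithmetic spelled out as a small sketch, assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in Box B if the prediction was "one-box" — those amounts are my assumption, not fixed by anything above) and assuming both copies run the same mixed strategy:

```python
def expected_wealth(p_one_box: float) -> float:
    """Expected wealth of the outside agent when both the sandboxed copy
    and the outside agent one-box with probability p_one_box.

    Box B holds $1,000,000 iff the sandboxed copy one-boxed (prob. p);
    the $1,000 in the transparent box is collected only when the outside
    agent two-boxes (prob. 1 - p). The two draws are independent.
    """
    expected_box_b = 1_000_000 * p_one_box
    expected_box_a = 1_000 * (1 - p_one_box)
    return expected_box_b + expected_box_a

# Expected wealth rises monotonically with p, so the pure strategy
# "always choose Box B" dominates every mixed strategy.
for p in (0.0, 0.5, 1.0):
    print(p, expected_wealth(p))
```

Since each extra point of probability on one-boxing trades $1,000 of Box A money for $1,000,000 of expected Box B money, randomizing can only hurt.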

You could interpret Omega differently from these two stories, but I still think my first point stands: you should think of your choice as causing Omega to put money in the box, or not. The fact that Omega put the money in the box chronologically before you make your decision is irrelevant. Uncertainty about an event that has already happened, but that hasn't been revealed to you, is essentially the same as uncertainty about an event that hasn't happened yet, and it should be modeled the same way.
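A tiny illustration of that last point, using a die roll as a stand-in for Omega's already-made decision (my example, not part of the original comment): conditioning on partial evidence about a roll that already happened gives exactly the same posterior as conditioning on the same evidence about a roll that hasn't happened yet, because the time of the event never enters the model.

```python
from fractions import Fraction

def posterior_six(outcomes, evidence):
    """P(roll == 6 | evidence), by direct enumeration over equally
    likely outcomes. Nothing here depends on whether the roll is in
    the past or the future."""
    consistent = [o for o in outcomes if evidence(o)]
    return Fraction(sum(1 for o in consistent if o == 6), len(consistent))

die = range(1, 7)
is_even = lambda o: o % 2 == 0

# Whether the die was rolled yesterday (result hidden) or will be
# rolled tomorrow, learning "the result is even" yields the same 1/3.
print(posterior_six(die, is_even))  # 1/3
```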