Sorry, but I'm not in the habit of taking one for the quantum superteam.
If you're not willing to "take one for the team" of superyous, I'm not sure you understand the implications of "every implementation of you is you."
And I don't think that it really helps to solve the problem;
It does solve the problem, though, because it's a consistent way to formalize the decision so that, on average, you come out ahead on problems of this class.
it just means that you don't necessarily care so much about winning any more. Not exactly the point.
I think you're missing the point here. Winning in this case is doing the thing that on average nets you the most success for problems of this class, one single instance of it notwithstanding.
Plus we are explicitly told that the coin is deterministic and comes down tails in the majority of worlds.
And this explains why you're missing the point. We are told no such thing. We are told it's a fair coin, and that can only mean that, weighting worlds by their probability density, you win in half of them. That's what "fair" means here.
What seems to be confusing you is that you're told, "in this particular problem, for the sake of argument, assume you're in one of the worlds where you lose." It says nothing about those worlds being overrepresented.
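The "winning on average" claim can be made concrete with a quick expected-value calculation. A minimal sketch, assuming the standard counterfactual-mugging payoffs: you pay $100 on tails, and receive $10,000 on heads if you're the kind of agent who would pay. (The $10,000 figure is from the usual statement of the problem; it isn't quoted in this thread.)

```python
# Expected value per flip of each policy, under a fair coin.
P_HEADS = 0.5
PAYOFF_HEADS = 10_000   # assumed standard payoff; not stated in this thread
COST_TAILS = -100

# Policy "pay": Omega predicts you pay, so heads pays out and tails costs $100.
ev_pay = P_HEADS * PAYOFF_HEADS + (1 - P_HEADS) * COST_TAILS

# Policy "refuse": Omega predicts the refusal, so heads pays nothing
# and tails costs nothing.
ev_refuse = 0.0

print(ev_pay)     # 4950.0
print(ev_refuse)  # 0.0
```

The paying policy wins by $4,950 per flip in expectation, even though it loses $100 in every tails-world considered in isolation.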
That's what the problem is asking!
This is a decision-theoretic problem. Nobody cares about it for immediate practical purposes. "Stick to your decision theory, except when you non-rigorously decide not to" isn't a resolution to the problem, any more than "ignore the calculations, since they're wrong" was a resolution to the ultraviolet catastrophe.
Again, the point of this experiment is that we want a rigorous, formal explanation of exactly how, when, and why you should or should not stick to your precommitment. The original motivation is almost certainly in the context of AI design, where you don't HAVE a human homunculus implementing a decision theory, the agent just is its decision theory.
Well, if we're designing an AI now, then we have the capability to make a binding precommitment, simply by writing code. And we are still in a position where we can hope for the coin to come down heads. So yes, in that privileged position, we should bind the AI to pay up.
However, to the question as stated, "is the decision to give up $100 when you have no real benefit from it, only counterfactual benefit, an example of winning?" I would still answer, "No, you don't achieve your goals/utility by paying up." We're specifically told that the coin has already been flipped. Losing $100 has negative utility, and positive utility isn't on the table.
Alternatively, since it's asking specifically about the decision, I would answer: if you don't make the decision until after the coin comes down tails, then paying is the wrong decision. Only if you're deciding in advance (when you can still hope for heads) can a decision to pay have the best expected value.
Even if deciding in advance, though, it's still not a guaranteed win, but rather a gamble. So I don't see any inconsistency in saying, on the one hand, "You should make a binding precommitment to pay", and on the other hand, "If the coin has already come down tails without a precommitment, you shouldn't pay."
If there were a lottery where the expected value of a ticket was actually positive, and someone offered to sell you their ticket at cost price, then buying it would make sense in advance. But if you didn't buy, and the winners were then announced and that ticket didn't win, buying it no longer makes sense.