Actually, we don't know that our decision affects the contents of Box B. In fact, we're told that it contains a million dollars if and only if Omega predicts we will take only Box B.

It is possible that we could pick Box B even though Omega predicted we would take both boxes. Omega has only been observed to predict correctly 100 times. And if we are sufficiently doubtful that Omega would predict we would take only Box B, it would be rational to take both boxes.

Only if we're sufficiently confident in Omega's prediction can we confidently one-box and rationally expect Box B to contain a million dollars.
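To make "sufficiently confident" concrete, here is a minimal expected-value sketch. It assumes the standard Newcomb payoffs: $1,000 in the transparent box (a figure not stated above, just the usual convention) and $1,000,000 in Box B iff Omega predicted one-boxing. Let p be our credence that Omega's prediction matches our actual choice:

```python
# Expected-value sketch for Newcomb's problem.
# Assumed payoffs (not stated in the comment above): the transparent box
# holds $1,000, and Box B holds $1,000,000 iff Omega predicted one-boxing.
# p = credence that Omega's prediction matches our actual choice.

SMALL = 1_000
BIG = 1_000_000

def ev_one_box(p):
    # If Omega predicted correctly (probability p), Box B holds the million.
    return p * BIG

def ev_two_box(p):
    # We always get the small box; Box B is full only if Omega erred.
    return SMALL + (1 - p) * BIG

# One-boxing beats two-boxing when p * BIG > SMALL + (1 - p) * BIG,
# i.e. when p > (BIG + SMALL) / (2 * BIG).
threshold = (BIG + SMALL) / (2 * BIG)
print(threshold)  # 0.5005
```

So on these assumed stakes, even a modest credence of just over 50.05% that Omega predicts correctly already favors one-boxing; 100 observed successes plausibly supports far more confidence than that.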

Eliezer, I have a question about this: "There is no finite amount of life lived N where I would prefer a 80.0001% probability of living N years to an 0.0001% chance of living a googolplex years and an 80% chance of living forever. This is a sufficient condition to imply that my utility function is unbounded."

I can see that this preference implies an unbounded utility function, given that a longer life has a greater utility. However, simply stated in that way, most people might agree with the preference. But consider this gamble instead:

A: Live 500 years and then die, with certainty.

B: Live forever, with probability 0.000000001%; die within the next ten seconds, with probability 99.999999999%.

Do you choose A or B? Is it possible to choose A and still have an unbounded utility function with respect to life? It seems to me that an unbounded utility function implies choosing B. But then what if the probability of living forever becomes one in a googolplex, or smaller? Of course, this is a kind of Pascal's Wager; but it seems to me that your utility function implies that you should accept the Wager.
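The pivot on boundedness can be sketched numerically. This is only an illustration under my own assumptions: "forever" is approximated by a very large finite lifespan, and the bounded utility u(y) = 1 − e^(−y/τ) is an example saturating form I chose, not anything from the discussion above:

```python
import math

# Illustrative sketch: compare gambles A and B under two utility functions.
# Assumptions (mine, not from the comment): "forever" is approximated by a
# huge finite lifespan, and the bounded utility is u(y) = 1 - exp(-y/SCALE).

SCALE = 100.0    # years; sets where the bounded utility saturates
FOREVER = 1e18   # finite stand-in for an unbounded lifespan
P_B = 1e-11      # the 0.000000001% chance of living forever in gamble B

def ev(gamble, u):
    """Expected utility of a list of (probability, years) outcomes."""
    return sum(p * u(years) for p, years in gamble)

A = [(1.0, 500.0)]                             # live 500 years, certainly
B = [(P_B, FOREVER), (1 - P_B, 10 / 3.15e7)]   # else die within ~10 seconds

linear = lambda y: y                        # unbounded utility in years
bounded = lambda y: 1 - math.exp(-y / SCALE)  # saturates near 1

print(ev(A, linear) < ev(B, linear))    # True: unbounded utility picks B
print(ev(A, bounded) > ev(B, bounded))  # True: bounded utility picks A
```

With linear (unbounded) utility, the tiny probability times the enormous lifespan swamps the certain 500 years; with any utility that saturates, the near-certain immediate death dominates and A wins, which matches the intuition described below.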

It also seems to me that the intuitions suggesting to you and others that Pascal's Mugging should similarly be rejected are based on an intuition of a bounded utility function. Emotions can't react infinitely to anything; as one commenter put it, "I can only feel so much horror." So to the degree that people's preferences reflect their emotions, they have bounded utility functions. In the abstract, not emotionally but mentally, it is possible to have an unbounded function. But if you do, and act on it, others will think you a fanatic. For a fanatic cares infinitely for what he perceives to be an infinite good, whereas normal people do not care infinitely about anything.

This isn't necessarily against an unbounded function; I'm simply trying to draw out the implications.

If this were the only chance you will ever get to determine your lifespan, then choose B.

In the real world, it would probably be a better idea to discard both options and use your natural lifespan to search for alternative paths to immortality.