(This is the first, and most newcomer-accessible, post in a planned sequence.)
Newcomb's Problem:
Joe walks out onto the square. As he walks, a majestic being flies by Joe's head with a box labeled "brain scanner", drops two boxes on the ground, and departs the scene. A passerby, known to be trustworthy, comes over and explains the situation: the being, Omega, has already scanned Joe's brain and predicted his choice. One box is transparent and contains $1,000; the other is opaque and contains $1,000,000 if Omega predicted that Joe would take only the opaque box ("one-boxing"), and nothing if Omega predicted that Joe would take both boxes ("two-boxing"). Omega's predictions have, so far, always been accurate.
If Joe aims to get the most money, should Joe take one box or two?
What are we asking when we ask what Joe "should" do? It is common to cash out "should" claims as counterfactuals: "If Joe were to one-box, he would make more money". This method of translating "should" questions does seem to capture something of what we mean: we do seem to be asking how much money Joe can expect to make "if he one-boxes" vs. "if he two-boxes". The trouble with this translation, however, is that it is not clear what world "if Joe were to one-box" should refer to -- and, therefore, it is not clear how much money we should say Joe would make, "if he were to one-box". After all, Joe is a deterministic physical system; his current state (together with the state of his future self's past light-cone) fully determines what Joe's future action will be. There is no Physically Irreducible Moment of Choice, where this same Joe, with his own exact actual past, "can" go one way or the other.
To restate the situation more clearly: suppose that this Joe, standing here, is poised to two-box. To determine how much money Joe "would have made if he had one-boxed", we imagine reaching in, with a magical sort of world-surgery, and altering the world so that Joe one-boxes instead. We then watch to see how much money Joe receives in this surgically altered world.
The question before us, then, is what sort of magical world-surgery to execute, before we watch to see how much money Joe "would have made if he had one-boxed". And the difficulty in Newcomb's problem is that there are not one but two obvious world-surgeries to consider. First, we might surgically reach in, after Omega's departure, and alter Joe's box-taking only -- leaving Omega's prediction about Joe untouched. Under this sort of world-surgery, Joe will do better by two-boxing:
Expected value (Joe's earnings if he two-boxes | some unchanged probability distribution on Omega's prediction) >
Expected value (Joe's earnings if he one-boxes | the same unchanged probability distribution on Omega's prediction).
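To make this concrete, here is a minimal Python sketch of the first surgery, assuming the standard payoffs described above ($1,000 in the transparent box; $1,000,000 in the opaque box just in case Omega predicted one-boxing); the function names are purely illustrative:

```python
def earnings(action, prediction):
    """Joe's payout, in dollars, given his action and Omega's prediction.
    Both arguments are "one-box" or "two-box"."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque + (transparent if action == "two-box" else 0)

def ev_fixed_prediction(action, q):
    """Expected earnings under the first surgery: Joe's box-taking is
    altered, but the probability q that Omega predicted one-boxing is
    left untouched."""
    return q * earnings(action, "one-box") + (1 - q) * earnings(action, "two-box")

# Whatever fixed q we pick, two-boxing beats one-boxing by exactly
# the $1,000 sitting in the transparent box:
for q in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(q, ev_fixed_prediction("two-box", q) - ev_fixed_prediction("one-box", q))
# prints 1000.0 for every q
```

This is just the dominance argument: with the prediction held fixed, two-boxing adds $1,000 in every possible case.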
Second, we might surgically reach in, after Omega's departure, and simultaneously alter both Joe's box-taking and Omega's prediction concerning Joe's box-taking. (Equivalently, we might reach in before Omega's departure, and surgically alter the insides of Joe's brain -- and, thereby, alter both Joe's behavior and Omega's prediction of Joe's behavior.) Under this sort of world-surgery, Joe will do better by one-boxing:
Expected value (Joe's earnings if he one-boxes | Omega predicts Joe accurately) >
Expected value (Joe's earnings if he two-boxes | Omega predicts Joe accurately).
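Continuing the same sketch (and reusing the illustrative earnings function from it), the second surgery amounts to evaluating each action against a matching prediction:

```python
def ev_accurate_omega(action):
    """Expected earnings under the second surgery: altering Joe's brain
    alters Omega's prediction along with it, so the prediction always
    matches the action."""
    return earnings(action, action)  # earnings() as defined in the sketch above

print(ev_accurate_omega("one-box"))  # 1000000
print(ev_accurate_omega("two-box"))  # 1000
```

Under this surgery the comparison flips: $1,000,000 for one-boxing against $1,000 for two-boxing.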
The point: Newcomb's problem -- the problem of what Joe "should" do, to earn the most money -- is the problem of which type of world-surgery best cashes out the question "Should Joe take one box or two?". Disagreement about Newcomb's problem is disagreement about what sort of world-surgery we should consider when we try to figure out what action Joe should take.
That you will change your mind in response to a bystander's advice is a property of your mind. If the bystander's advice is entirely unexpected by Omega, then perhaps this will work. If, a priori, the bystander is expected to give this advice, and you are expected to heed the advice if it is given, then you are expected to two-box, and thus won't get the million.