One day, you and the presumptuous philosopher are walking along, arguing about the size of the universe, when suddenly Omega jumps out from behind a bush and knocks you both out with a crowbar. While you're unconscious, she builds two hotels, one with a million rooms, and one with just one room. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.
You wake up in a hotel room, in bed with the presumptuous philosopher, with a note on the table from Omega, explaining what she's done.
"Which hotel are we in, I wonder?" you ask.
"The big one, obviously" says the presumptuous philosopher. "Because of anthropic reasoning and all that. Million to one odds."
"Rubbish!" you scream. "Rubbish and poppycock! We're just as likely to be in any hotel omega builds, regardless of the number of observers in that hotel."
"Unless there are no observers, I assume you mean" says the presumptuous philosopher.
"Right, that's a special case where the number of observers in the hotel matters. But except for that it's totally irrelevant!"
"In that case," says the presumptuous philosopher, "I'll make a deal with you. We'll go outside and check, and if we're at the small hotel I'll give you ten bucks. If we're at the big hotel, I'll just smile smugly."
"Hah!" you say. "You just lost an expected five bucks, sucker!"
You run out of the room to find yourself in a huge, ten-thousand-story atrium, filled with throngs of yourselves and smug-looking presumptuous philosophers.
I don't see how. Omega doesn't make the prediction because you made the action - he makes it because he can predict that a person of a particular mental configuration at time T will make decision A at time T+1. If I were to play the part of Omega, I couldn't achieve perfect prediction, but I might manage, say, 90% by studying what people say they would do on blogs about Newcomb's paradox, and then observing what such people actually do (so long as my decision criteria weren't known to the person I was testing).
Am I violating causality by doing this? Clearly not - my prediction is caused by the blog post and my observations, not by the action. The same thing that causes you to say you'd decide one way is also what causes you to act that way. As I get better and better, nothing changes, and I don't see why anything would if I could simulate you perfectly and achieve 100% accuracy (some degree of determinism is assumed there, but that's already in the original thought experiment if we assume literally 100% accuracy).
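To make the common-cause structure concrete, here's a minimal toy sketch (my own illustration, nothing from the original problem): a hidden disposition drives both what a person writes about Newcomb's problem and what they later do, and a predictor that only reads the writing lands at roughly 90% accuracy without ever touching the action. The function names and the 90% figure are purely illustrative.

```python
import random

def simulate_person():
    # Mental configuration at time T: drives both the blog post and the action.
    disposition = random.choice(["one-box", "two-box"])
    blog_post = disposition  # what they say they'd do
    # At time T+1 they act on the disposition most of the time; the 10% slippage
    # is why a blog-reading predictor tops out around 90% here (illustrative only).
    if random.random() < 0.9:
        action = disposition
    else:
        action = "two-box" if disposition == "one-box" else "one-box"
    return blog_post, action

def predict_from_blog(blog_post):
    # The prediction is caused by the blog post, not by the later action.
    return blog_post

trials = 100_000
correct = sum(
    predict_from_blog(blog) == action
    for blog, action in (simulate_person() for _ in range(trials))
)
print(f"prediction accuracy: {correct / trials:.2%}")  # roughly 90%
```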
Assuming I'm understanding it correctly, the same would be true under a manipulationist definition of causation. If we manipulate your mental state, we change both the prediction (assuming Omega factors in this manipulation) and the decision, so your mental state is a cause of both. However, if we could manipulate your action without changing the state that causes it - the state Omega's prediction is based on - our manipulation would not change the prediction. In practice this may be impossible (it requires Omega not to factor in our manipulation, which is contradicted by assuming he is a perfect predictor), but in principle it seems valid.
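The same toy model (again, just a sketch with made-up names) can show the manipulationist point by comparing interventions: setting the mental state changes both the prediction and the action, while forcing the action directly, without touching the state the prediction reads, leaves the prediction where it was.

```python
def run(set_state=None, set_action=None):
    # Default: a fixed mental state that Omega reads and that drives the action.
    state = set_state if set_state is not None else "one-box"
    prediction = state  # Omega predicts from the state alone
    action = set_action if set_action is not None else state
    return prediction, action

print(run())                       # ('one-box', 'one-box')  baseline
print(run(set_state="two-box"))    # ('two-box', 'two-box')  both change
print(run(set_action="two-box"))   # ('one-box', 'two-box')  prediction unchanged
```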