If you update too little, and accept too many bets, you will lose a lot of money to people with better information than you. On the other hand, you can also go too far in the other direction. If your response to being offered a five-cent bet is to immediately update to accept their probabilities (and refuse the bet), you will be very easy to fool (although hard to exploit by betting).
This, incidentally, is a real bookie thing. Bookmakers worry constantly about informed bettors making money off them, but they also want the 'flow' in order to update their odds. So there's a constant cat-and-mouse game between the betting syndicates, who have informational edges of various sorts, and the bookmakers, who try to limit the bettors to small stakes while turning around and updating their odds based on the revealed information (to the bettors' perennial frustration, as the bookmakers can close accounts unilaterally or seize winnings).
Adversarial evidence (evidence specifically crafted to make your beliefs worse) is tricky. It still fits the Bayesian framework, but you need to expand the circle of beliefs you're updating: what are your priors that the bet (or other evidence) comes from Omicron versus Omega?
Well, the perplexing situation doesn't actually arise if the predictors are good enough, because they'll predict both that you won't update and that you won't take the bet. Thus you'll never have been approached in the first place.
Your process of deciding what to do may at some point include simulating Omega and Omicron. If so, this means that when Omega and Omicron are simulating you, they are now trying to solve the Halting Problem. I am skeptical that Omega or Omicron can solve the Halting Problem.
They don't need to solve the whole Halting Problem, for the same reason that a proof that this particular predictor is perfect and utility-maximizing (which I take as an axiom for the sake of the hypothetical) wouldn't contradict Rice's theorem: the theorem rules out deciding such properties for all programs, not knowing one for a single program. Alternatively, we can weaken the claim to say only that there is a high probability that they will do this. Furthermore, you can imagine a restricted subset of Turing machines for which the Halting Problem is computable. And the only computers that exist in reality are really finite state machines anyway.
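The last point can be made concrete: for a deterministic finite state machine, halting is decidable, because a run over finitely many states must either reach a halting state or revisit a state, and a deterministic machine that revisits a state loops forever. A minimal sketch (nothing here is from the original setup; it just illustrates the decidability claim):

```python
# For a deterministic finite state machine, halting is decidable:
# a run either reaches a halting state or revisits some state, and a
# revisit means it will cycle forever.
def fsm_halts(step, start, halt_states):
    """Decide whether the FSM (transition `step`, initial state `start`)
    ever reaches a state in `halt_states`."""
    seen = set()
    state = start
    while state not in halt_states:
        if state in seen:
            return False  # state revisited: deterministic machine loops forever
        seen.add(state)
        state = step(state)
    return True

print(fsm_halts(lambda s: (s + 1) % 5, 0, {3}))  # True: 0 -> 1 -> 2 -> 3
print(fsm_halts(lambda s: (s + 1) % 5, 0, {7}))  # False: cycles through 0..4 forever
```

The same trick doesn't scale to general Turing machines, of course - the tape gives them unboundedly many "states" - which is exactly the gap the restricted-subset argument is exploiting.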
Suppose the US presidential election is tomorrow. You currently assign a probability of 50% to each outcome. (We are ignoring the small possibility that neither of the main party candidates will win).
A man approaches you, and offers you a bet of $10, at 2-1 odds. In other words, if candidate one wins, he pays you $20; if candidate two wins, you pay him $10.
Should you accept this bet? If not, why not? What if the bet were for $10,000 instead? Assume that your utility is linear in dollars (or that the bet is denominated in utilons instead, whatever). Try to think about this before reading on.
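For reference, here is the naive expected-value arithmetic under the 50/50 prior, ignoring for now any information carried by the offer itself. A quick sketch, assuming linear utility in dollars:

```python
# Naive expected value of the $10 bet at 2-1 odds, ignoring any
# information conveyed by the offer itself (an assumption the text
# goes on to question).
p_win = 0.5          # prior probability that candidate one wins
payout_if_win = 20   # he pays you $20
loss_if_lose = 10    # you pay him $10

ev = p_win * payout_if_win - (1 - p_win) * loss_if_lose
print(ev)  # 5.0 - positive, so the naive analysis says accept
```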
The answer is that it depends on your priors - in particular, it depends on how you interpret the evidence of being offered the bet. In general, if someone offers you a large bet on some outcome, it's probably safe to assume they have access to a reasonable amount of information about the outcome. Depending on how much information your own probability estimate is based on, you should update towards the odds they offered you.
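To make that concrete, here is a toy Bayesian update in odds form. The 4:1 likelihood ratio is an invented illustrative number - the assumption being that someone is four times as likely to offer this particular bet when candidate two is going to win as when candidate one is:

```python
# Toy sketch: treat the offer itself as evidence. The 4:1 likelihood
# ratio is a made-up illustrative number, not derivable from the setup.
def update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p_one = update(0.5, 1 / 4)          # the offer is 4:1 evidence against candidate one
ev = p_one * 20 - (1 - p_one) * 10  # re-evaluate the same $10-at-2-1 bet
print(p_one, ev)                    # 0.2 -4.0: the bet now loses in expectation
```

Notice that at exactly p = 1/3 the bet is break-even, which is the implied probability at which the offerer himself breaks even; update past that point and the "free money" disappears.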
Now suppose there are two superintelligences, Omega and Omicron. They are both excellent at modelling both you and the presidential election. Omega has a strong preference for money and a weak preference for having you believe false things about the presidential election. Omicron has this swapped: it wants you to believe that the actual outcome of the election (which it has predicted) is extremely unlikely, and has only a weak preference for money.
Omega executes the following plan: it looks through a large number of possible bets, searching for ones that will win it a lot of money (according to its predicted outcome of the election). For each of them, it predicts whether you will, if offered, take the bet or simply update your belief to be more accurate (since any bet that wins Omega money is staked on its prediction, the update such a bet induces must be "in the correct direction"). It finds the best bet you will take (if any), and offers you this bet.
Omicron does a similar thing, but instead looks for bets that you won't take - bets which will cause you to update strongly in the wrong direction while refusing the bet itself (since Omicron doesn't want to give you money). Again, it finds the best such bet (if any) and approaches you.
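The two search procedures can be sketched as follows. Everything here is a hypothetical toy illustration: `you_accept` and `misleadingness` stand in for the superintelligences' perfect models of you, which of course can't actually be written down in a few lines.

```python
# A toy sketch of Omega's and Omicron's searches. All names here are
# hypothetical stand-ins, not anything from the original scenario.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Bet:
    outcome: str      # the outcome under which you pay the offerer
    your_loss: float  # what you pay if `outcome` happens
    your_win: float   # what the offerer pays you otherwise

def payoff_to_offerer(bet: Bet, predicted: str) -> float:
    """The offerer's winnings under its predicted election result."""
    return bet.your_loss if bet.outcome == predicted else -bet.your_win

def omega_choose(bets: List[Bet], predicted: str,
                 you_accept: Callable[[Bet], bool]) -> Optional[Bet]:
    """Omega: the best bet (by its own payoff) you are predicted to accept."""
    accepted = [b for b in bets if you_accept(b)]
    if not accepted:
        return None
    return max(accepted, key=lambda b: payoff_to_offerer(b, predicted))

def omicron_choose(bets: List[Bet], you_accept: Callable[[Bet], bool],
                   misleadingness: Callable[[Bet], float]) -> Optional[Bet]:
    """Omicron: the most misleading bet you are predicted to refuse."""
    refused = [b for b in bets if not you_accept(b)]
    if not refused:
        return None
    return max(refused, key=misleadingness)

bets = [Bet("two", 10, 20), Bet("two", 10_000, 20_000), Bet("one", 5, 10)]
cautious = lambda b: b.your_loss <= 10  # a toy model of you: only take small bets
print(omega_choose(bets, "two", cautious))                    # the small bet staked on "two"
print(omicron_choose(bets, cautious, lambda b: b.your_loss))  # the huge bet you'd refuse
```

The structural point survives the toyness: the two searches partition the bet space by your own policy, so whichever policy you adopt determines which adversary bothers to show up.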
You are approached by someone and offered a bet, although you don't know if it's Omega or Omicron. What should your policy be?
If you accept a bet, the bet is likely to have come from Omega, and thus be extremely costly for you. So you shouldn't take the bet.
On the other hand, if you update strongly in the direction of the bet, it most likely came from Omicron, and this means it's probably an update in the wrong direction. So you shouldn't update.
This leaves you in the perplexing situation of believing that the bet is probably extremely good, but not wanting to take it.