Well, the quoted version being used here posits that I have "knowledge of the Predictor's infallibility" and doesn't give an error rate. So there's one counterexample, at least.
Of course, "knowledge" doesn't mean I have a confidence of exactly 1 -- Predictor may be infallible, but I'm not. If Predictor is significantly more baseline-accurate than I am, then for EV calculations the primary factor to consider is my level of confidence in the things I "know," and Predictor's exact error rate is noise by comparison.
In practice, I would say that if I somehow found myself in the state where I knew the Predictor was infallible, the first thing I should do is ask myself how I came to know that, and whether, on reflection, I endorse my current confidence in that conclusion.
But I don't think any of that is terribly relevant. I mean, OK, suppose I find myself instead in the state where I know the Predictor is infallible and I remember concluding a moment earlier that I reflectively endorse my current confidence in that conclusion. To re-evaluate yet again seems insane. What do I do next?
Here is Wikipedia's description of Newcomb's problem:
Most of this is a fairly general thought experiment for thinking about different decision theories, but one element stands out as particularly arbitrary: the ratio between the amount the Predictor may place in box B and the amount in box A. In the Newcomb formulation conveyed by Nozick, this ratio is 1000:1, but this is not necessary. Most decision theories that recommend one-boxing do so as long as the ratio is greater than 1.
The 1000:1 ratio strengthens the intuition for one-boxing, which is helpful for illustrating why one might find one-boxing plausible. However, given uncertainty about normative decision theory, the decision to one-box can diverge from one's best guess at the best decision theory: e.g. if I think there is a 1 in 10 chance that one-boxing decision theories are correct, I may one-box on Newcomb's problem with a potential payoff ratio of 1000:1, but not if the ratio is only 2:1.
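One crude way to run that calculation (a sketch of my own, not anything from Nozick): let q be my credence that the one-boxing theories are right, i.e. that my choice really is tied to the contents of box B. Measured in small-prize units, one-boxing then gains (ratio - 1) if the one-boxers are right and forgoes 1 for sure if the two-boxers are right, so it comes out ahead whenever q > 1/ratio:

```python
# Sketch of one-boxing under decision-theoretic uncertainty.
# q     = credence that one-boxing theories are correct (assumed input)
# ratio = big prize / small prize

def one_box_meta_gain(q, ratio):
    # Gain of one-boxing over two-boxing, in small-prize units:
    # (ratio - 1) if the one-boxers are right, -1 if the two-boxers are.
    return q * (ratio - 1) - (1 - q)

for ratio in (1000, 2):
    gain = one_box_meta_gain(0.1, ratio)
    print(f"ratio {ratio}:1, q = 0.1 -> meta-gain {gain:+.1f} "
          f"({'one-box' if gain > 0 else 'two-box'})")
# 1000:1 -> +99.0 (one-box); 2:1 -> -0.8 (two-box)
```

On this way of counting, one-boxing at 1000:1 needs only q > 0.001, while at 2:1 it needs q > 0.5, which is why a modest 10% credence in one-boxing theories flips the verdict between the two ratios.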
So the question, "would you one-box on Newcomb's problem, given your current state of uncertainty?" is not quite the same as "would the best decision theory recommend one-boxing?" This occurred to me in the context of this distribution of answers among target philosophy faculty from the PhilPapers Survey:
Newcomb's problem: one box or two boxes?
If all of these answers are about the correct decision theory (rather than what to do in the actual scenario), then two-boxing is the clear leader, with a 2.85:1 ratio of support (accept or lean) in its favor. But this skew falls far short of the roughly 1000:1 confidence in two-boxing theories that would be needed to justify two-boxing at the standard Newcomb payoffs.
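Plugging the survey skew into the same toy threshold (treating the 2.85:1 accept-or-lean ratio as if it were my own odds on the theories, which of course it isn't):

```python
# Toy calculation: defer to the faculty split and set credence in
# one-boxing theories from the 2.85:1 skew toward two-boxing.
q = 1 / (1 + 2.85)       # ~= 0.26 credence in one-boxing theories
threshold = 1 / 1000     # break-even credence at 1000:1 payoffs
print(f"q = {q:.2f}, threshold = {threshold}, one-box? {q > threshold}")
# q ~= 0.26 is hundreds of times the ~0.001 needed, so the deferential
# move at 1000:1 payoffs is still to one-box.
```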
Here are Less Wrong survey answers for 2012:
NEWCOMB'S PROBLEM
One-box: 726, 61.4%
Two-box: 78, 6.6%
Not sure: 53, 4.5%
Don't understand: 86, 7.3%
No answer: 240, 20.3%
Here one-boxing is overwhelmingly dominant. I'd like to sort out how much of this is disagreement about theory, and how much reflects the extreme payoffs in the standard Newcomb formulation. So, I'll be putting a poll in the comments below.