Pearl's counterfactuals (or even causal diagrams) are unhelpful here, as they ignore the finer points of logical control that are possibly relevant. For example, saying that definitions (facts) are independent should refer to the absence of logical correlation between them, that is, the inability to infer (facts about) one from the other. But even this notion is shaky in the context of this puzzle, where the nature of logical knowledge is itself called into question.
Is this a trivial remark about the probability theory behind Pearl's "causality", or an intuition about future theories that resemble Pearl's approach?
Consider the following thought experiment ("Counterfactual Calculation"):
Should you write "even" on the counterfactual test sheet, given that you're 99% sure that the answer is "even"?
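If the 99% credence is simply taken at face value inside the counterfactual, the betting question reduces to a trivial expected-score comparison. A minimal sketch of that naive calculation, assuming a hypothetical scoring rule (1 point for a correct answer, 0 otherwise), which is precisely the assumption the puzzle calls into question:

```python
def expected_score(answer: str, p_even: float) -> float:
    """Expected points from writing `answer`, given credence p_even
    that the true answer is "even" (hypothetical 1-point scoring)."""
    return p_even if answer == "even" else 1.0 - p_even

# Credence after looking at the calculator display.
p_even = 0.99

# Naively, writing "even" dominates: 0.99 expected points vs. 0.01.
assert expected_score("even", p_even) > expected_score("odd", p_even)
```

The puzzle is whether this comparison is even legitimate: the 99% credence came from an observation made in the actual world, and it is unclear that it transfers to the counterfactual test sheet at all.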
This thought experiment contrasts "logical knowledge" (the usual kind) and "observational knowledge" (what you get when you look at a calculator display). The kind of knowledge you obtain by observing things is not like the kind of knowledge you obtain by thinking for yourself. What is the difference (if there actually is one)? Why does observational knowledge work in your own possible worlds, but not in counterfactuals? How much of logical knowledge is like observational knowledge, and what are the conditions of its applicability? Can things we consider "logical knowledge" fail to apply to some counterfactuals?
(Updateless analysis would say "observational knowledge is not knowledge", or that it is knowledge only in the sense that you should bet a certain way. This doesn't analyze the intuition of knowing the result after looking at a calculator display. There is a very salient sense in which the result becomes known, and the purpose of this thought experiment is to explore some of the counterintuitive properties of such knowledge.)