Consider the following thought experiment:
You have a bag with a red and a blue ball in it. You pull a ball from the bag, but don't look at it. What is the probability that it is blue?
Now imagine a counterfactual world. In this other world you drew the red ball from the bag. Now imagine a hippo eating an octopus. What is the probability that you drew the blue ball?
"Why does observational knowledge work in your own possible worlds, but not in counterfactuals?" is the key question here. Perhaps it's easier to parse like this: "Why isn't anything you can think of evidence?"
EDIT: Note that although that last question makes my answer to Vladimir's question obvious, answering the question itself requires, basically, defining what evidence is. I suppose I may as well be helpful: evidence is what you get when an event happens that lets you apply Bayes' rule to learn something new - not just any old event will do, it has to be an event that gives you different information under different circumstances.
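That definition can be made concrete with a toy calculation (my own illustration, not from the comment): Bayes' rule moves your belief only when an event's likelihood differs across the hypotheses, which is exactly why an arbitrary thought — the hippo, say — isn't evidence about the ball.

```python
# Minimal sketch: Bayes' rule updates a belief only when the event's
# likelihood differs across hypotheses. The numbers are illustrative.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(H | E) via Bayes' rule for a binary hypothesis H."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Hypothesis H: the ball you drew is blue. Prior: 0.5.

# A glimpse of blue-ish colour is far likelier if the ball really is blue,
# so it counts as evidence and shifts the belief:
informative = posterior(0.5, 0.9, 0.1)    # belief moves to 0.9

# Imagining a hippo eating an octopus is equally likely either way,
# so Bayes' rule leaves the prior untouched:
uninformative = posterior(0.5, 0.3, 0.3)  # belief stays at 0.5

print(informative, uninformative)
```

The same machinery explains the first question: before looking, nothing has happened whose likelihood differs between "blue" and "red", so the answer stays 1/2.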
It seems like one answer to "Why isn't anything you can think of evidence?" might be that "anything you can think of" becomes incomputable very quickly.
Let's say you were to ask a computer to consider "anything you can think of" with respect to this problem. Imagine each unique hard drive configuration is a thought, and that the computer can process one thought per second per hertz. Let's make it a 5 GHz computer.

It can think of anything on a 32-bit drive in a bit less than one second, since 2^32 is 4,294,967,296, which is less than 5 billion.
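The arithmetic above can be checked in a few lines (the extension past 32 bits is my own illustration of how quickly the enumeration blows up):

```python
# Back-of-the-envelope check of the comment's arithmetic: enumerating
# every configuration of an n-bit drive at one "thought" per clock cycle.

CLOCK_HZ = 5e9  # the 5 GHz machine from the comment

def enumeration_seconds(n_bits):
    """Seconds needed to visit all 2**n_bits drive configurations."""
    return 2 ** n_bits / CLOCK_HZ

print(enumeration_seconds(32))  # ~0.86 s, a bit less than one second
print(enumeration_seconds(64))  # ~3.7e9 s, on the order of a century
```

Each extra bit doubles the time, so exhaustively considering "anything you can think of" stops being feasible almost immediately.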
The ...
Consider the following thought experiment ("Counterfactual Calculation"):
Should you write "even" on the counterfactual test sheet, given that you're 99% sure that the answer is "even"?
This thought experiment contrasts "logical knowledge" (the usual kind) and "observational knowledge" (what you get when you look at a calculator display). The kind of knowledge you obtain by observing things is not like the kind of knowledge you obtain by thinking it through yourself. What is the difference (if there actually is a difference)? Why does observational knowledge work in your own possible worlds, but not in counterfactuals? How much of logical knowledge is like observational knowledge, and what are the conditions of its applicability? Can things that we consider "logical knowledge" fail to apply to some counterfactuals?
(Updateless analysis would say "observational knowledge is not knowledge", or that it's knowledge only in the sense that you should bet a certain way. This doesn't analyze the intuition of knowing the result after looking at a calculator display. There is a very salient sense in which the result becomes known, and the purpose of this thought experiment is to explore some of the counterintuitive properties of such knowledge.)