In what way, if any, is this problem importantly different from the following "less mathy" problem?
You have a sealed box containing a loose coin. You shake the box and then set it on the table. There is no a priori reason for you to think that the coin is more or less likely to have landed heads than tails. You then take a test, which includes the question: "Did the coin land heads?" Fortunately, you have a scanning device, which you can point at the box and which will tell you whether the coin landed heads or tails. Unfortunately, the opaque box presents some difficulty even to the scanning device, so the device's answer is right only 99% of the time. Furthermore, its errors are stochastic (or even involve quantum randomness), so, for any given coin-in-a-box, the device is probably correct but has a chance of making an error. You point the scanning device at the box and observe the result (it's "heads").
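The "99% sure" figure implied by this setup is just a Bayes'-rule update from the symmetric prior. A minimal sketch (the function name and parameters are illustrative, not from the original):

```python
def posterior_heads(prior_heads=0.5, accuracy=0.99):
    """P(coin landed heads | device reads 'heads'), by Bayes' rule.

    Assumes the device errs symmetrically: it reports the wrong face
    with probability (1 - accuracy) regardless of how the coin landed.
    """
    p_reading_given_heads = accuracy       # device correct on a heads coin
    p_reading_given_tails = 1 - accuracy   # device wrong on a tails coin
    numerator = p_reading_given_heads * prior_heads
    denominator = numerator + p_reading_given_tails * (1 - prior_heads)
    return numerator / denominator

print(posterior_heads())  # 0.99
```

With a 50/50 prior the posterior equals the device's accuracy, which is why observing "heads" leaves you exactly 99% confident.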
Then, unsurprisingly, Omega appears and presents you with the following decision. Consider the counterfactual world where the coin landed the same way it did in your world, but where the scanning device displayed "tails" instead of "heads" after you pointed it at the box. You are to determine what Omega writes on the test sheet in that counterfactual world.
I don't think it's any different. You could put the question Q in the box, and include a person who types it into a calculator as part of the scanning device. Does your variant evoke different intuitions about observational knowledge? It looks similar in all relevant respects to me.
Consider the following thought experiment ("Counterfactual Calculation"):
Should you write "even" on the counterfactual test sheet, given that you're 99% sure that the answer is "even"?
This thought experiment contrasts "logical knowledge" (the usual kind) and "observational knowledge" (what you get when you look at a calculator display). The kind of knowledge you obtain by observing things is not like the kind of knowledge you obtain by thinking for yourself. What is the difference (if there actually is a difference)? Why does observational knowledge work in your own possible worlds, but not in counterfactuals? How much of logical knowledge is like observational knowledge, and what are the conditions of its applicability? Can things that we consider "logical knowledge" fail to apply to some counterfactuals?
(Updateless analysis would say "observational knowledge is not knowledge", or that it's knowledge only in the sense that you should bet a certain way. This doesn't analyze the intuition of knowing the result after looking at a calculator display. There is a very salient sense in which the result becomes known, and the purpose of this thought experiment is to explore some of the counterintuitive properties of such knowledge.)