I agree with Scott Aaronson's objections to the paper. I think an inconsistency can be shown with a simpler argument:
Suppose two agents, each of which can be in one of two states, are prepared as in the paper and in Aaronson's post.
Using the reasoning of the paper: if agent A finds itself in a particular state, it can deduce which state B is in; from that, it can deduce that B is certain that A is in some other state; so A can be certain that it is in that other state, and hence it is not actually in the state it sees itself to be in...
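To make the chain concrete, here is one way to instantiate it (my own labels; I'm assuming the shared state is the Hardy-type state that both the paper and Aaronson's post are built around):

$$|\psi\rangle_{AB} = \tfrac{1}{\sqrt{3}}\big(|0\rangle_A|0\rangle_B + |0\rangle_A|1\rangle_B + |1\rangle_A|0\rangle_B\big) = \tfrac{1}{\sqrt{6}}\big(|+\rangle_A(2|0\rangle_B + |1\rangle_B) + |-\rangle_A|1\rangle_B\big).$$

If A finds itself in $|-\rangle_A$, the right-hand form says B must be in $|1\rangle_B$. Since $|\psi\rangle_{AB}$ has no $|1\rangle_A|1\rangle_B$ term, an agent in $|1\rangle_B$ is certain that A is in $|0\rangle_A$. Chaining the two certainties the way the paper does, A concludes it is in $|0\rangle_A$, even though a system in $|0\rangle_A$ would be found in $|-\rangle_A$ only half the time.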
I'd say that if a superintelligent cat is trying to predict the outcome of someone's measurement of it in a complicated basis, it will be more accurate only if it uses information about its 'true' state as the observer sees it.
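As a toy illustration of this (my numbers, not taken from the post): suppose the observer describes the cat as $|+\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ and measures it in the $\{|+\rangle, |-\rangle\}$ basis. Then

$$\Pr\big[{+}\ \big|\ \text{observer's }|+\rangle\big] = |\langle +|+\rangle|^2 = 1, \qquad \Pr\big[{+}\ \big|\ \text{self-perceived }|0\rangle\big] = |\langle +|0\rangle|^2 = \tfrac{1}{2},$$

so the cat that predicts from the observer's description gets the outcome right every time, while the one that predicts from the definite state it perceives itself to be in can do no better than a coin flip.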