This problem is solved by identifying the prior (the notion of reality) not with explicit data given by psychological anticipation, but with normative anticipation. That is, reality is explained as that which we should expect, where the shouldness of expectation is not a line from the Litany of Tarski, suggesting how one ought to keep an accurate map of reality, but is literally an explanation of what reality is.
I don't understand how this is different from believing in reality-fluid. If it's the same thing, I cannot accept that. If it's different, could you explain how?
This is an explanation of reality in terms of the decision-theoretic heuristics we carry in our heads, treating it as a notion similar to morality and platonic truth. This is of course a mere conceptual step; it doesn't hand you much explanatory power, but I hope it can make reality a bit less mysterious. It's like saying that a Boeing 747 is made out of atoms, without pointing out any specific details about its systems.
I don't understand what exactly you refer to by reality-fluid, in what sense you see an analogy, and what problem that points out. The errors and confusions of evaluating one's anticipation in practice have little bearing on how anticipation should be evaluated.
As argued here, debates about probability can be profitably replaced with decision problems. This often dissolves the debate: there is far more agreement about what decision Sleeping Beauty should take than about what probabilities she should use.
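To make the point concrete, here is a minimal sketch (my own illustration, not from the original post) of how the Sleeping Beauty dispute dissolves into a decision problem. At each awakening, Beauty is offered a bet that pays `win` if the coin landed tails and costs `loss` if it landed heads; heads means one awakening, tails means two, with a fair coin. The function name and parameters are hypothetical.

```python
def expected_payout(win, loss, per_awakening=True):
    """Expected total payout if Beauty accepts the bet at every awakening.

    Heads -> 1 awakening, tails -> 2 awakenings, fair coin.
    """
    awakenings_heads, awakenings_tails = 1, 2
    if per_awakening:
        # The bet settles separately at every awakening.
        return 0.5 * (-loss * awakenings_heads) + 0.5 * (win * awakenings_tails)
    # The bet settles once per experiment, regardless of awakening count.
    return 0.5 * (-loss) + 0.5 * win

# With per-awakening stakes, accepting is profitable whenever loss < 2 * win,
# so Beauty bets as if P(tails) = 2/3; with once-per-experiment stakes she
# bets at even odds. The decisions are uncontroversial either way.
print(expected_payout(win=1, loss=1, per_awakening=True))   # 0.5
print(expected_payout(win=1, loss=1, per_awakening=False))  # 0.0
```

Both halfers and thirders agree on these two numbers; the disagreement only reappears if you insist on reading a single "probability" off the betting behaviour.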
The concept of subjective anticipation, or subjective probability, that causes such difficulty here can, I argue, be similarly replaced by a simple decision problem.
If you are going to be copied, uncopied, merged, killed, propagated through quantum branches, or have your brain tasered with amnesia pills while your parents are busy flipping coins before deciding to reproduce, and are hence unsure whether you should subjectively anticipate being you at a certain point, the relevant question is not whether you feel vaguely connected to the putative future you in some ethereal sense.
Instead the question should be akin to: how many chocolate bars would your putative future self have to be offered, for you to forgo one now? What is the tradeoff between your utilities?
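The chocolate-bar question can be read as eliciting a weight on the future self's utility. This is my own illustrative sketch, not part of the original comment: if you demand `n` future bars before forgoing one bar now, that implies you weight that future self's consumption at `1/n` of your own.

```python
def anticipation_weight(bars_demanded):
    """Implied weight on a putative future self's utility.

    If you would forgo one chocolate bar now only when the future self
    receives at least `bars_demanded` bars, the implied relative weight
    on that self is 1 / bars_demanded (assuming utility linear in bars,
    a simplification for illustration).
    """
    return 1.0 / bars_demanded

print(anticipation_weight(1))   # 1.0: full anticipation, that self counts as you
print(anticipation_weight(10))  # 0.1: weak identification with that copy
```

The linear-utility assumption is obviously crude, but it shows how "do I anticipate being him?" becomes a measurable exchange rate rather than an ethereal feeling.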
Now, altruism is of course a problem for this approach: you might just be very generous with copy #17 down the hallway, he's a thoroughly decent chap and all that, rather than anticipating being him. But humans can generally distinguish between selfish and altruistic decisions, and the setup can be tweaked to encourage the maximum urges towards winning, rather than letting others win. For me, a competitive game with chocolate as the reward would do the trick...
Unlike for the sleeping beauty problem, this rephrasing does not instantly solve the problems, but it does locate them: subjective anticipation is encoded in the utility function. Indeed, I'd argue that subjective anticipation is the same problem as indexical utility, with a temporal twist thrown in.