Your explanation still doesn't work for me, I'm afraid.
Do you mean "prior" as part of my mind's software, or "prior" as something ethereal and universal? If the former, how can my tiny brain have beliefs about all elementary particles in the universe, why did evolution build such a thing if robots using ordinary software can survive just fine, and where should I tweak my mind if I want to win the lottery? If the latter, what makes you believe that there is such a prior, and isn't this "measure" just reality-fluid by another name, which is a well-known antipattern? Or is there some third alternative that I missed?
The disparity between the level of detail in reality (the prior) and the imprecision and mutability of psychological anticipation was an open problem for the attack on the problem I made in autumn (and discussed previously here).
This problem is solved by identifying the prior (the notion of reality) not with the explicit data given by psychological anticipation, but with normative anticipation. That is, reality is explained as that which we should expect, where the shouldness of expectation is not a line from the Litany of Tarski, suggesting how one ought to keep an accurate...
As argued here, debates about probability can be profitably replaced with decision problems. This often dissolves the debate: there is far more agreement on what decision Sleeping Beauty should take than on what probabilities she should use.
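To make the replacement concrete, here is a minimal sketch (my own illustration, not from the original discussion) of the standard betting version of the Sleeping Beauty problem: heads yields one awakening, tails yields two, and on each awakening she is offered the same bet on the coin. The function names and stakes are hypothetical.

```python
# Sleeping Beauty as a decision problem rather than a probability debate.
# A fair coin is flipped: heads -> one awakening, tails -> two awakenings.
# On every awakening she is offered the same bet: win `win_if_heads` if
# the coin landed heads, lose `lose_if_tails` if it landed tails.

def expected_profit(win_if_heads: float, lose_if_tails: float) -> float:
    """Expected total profit over the whole experiment, for a fair coin."""
    heads_branch = 0.5 * (1 * win_if_heads)     # one awakening, one payout
    tails_branch = 0.5 * (2 * -lose_if_tails)   # two awakenings, two losses
    return heads_branch + tails_branch

# She should accept exactly when the expected profit is positive, i.e.
# when offered better than 2:1 odds against heads.
print(expected_profit(2.0, 1.0))      # indifference point: 0.0
print(expected_profit(3.0, 1.0) > 0)  # better than 2:1 -> accept: True
```

Notice that "halfers" and "thirders" both endorse this acceptance rule, even while disagreeing about P(heads); the decision problem has agreement where the probability debate does not.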
The concept of subjective anticipation, or subjective probability, that causes such difficulty here can, I argue, similarly be replaced by a simple decision problem.
If you are going to be copied, uncopied, merged, killed, propagated through quantum branches, or have your brain tasered with amnesia pills while your parents are busy flipping coins before deciding to reproduce, and you are hence unsure whether you should subjectively anticipate being you at a certain point, the relevant question is not whether you feel vaguely connected to the putative future you in some ethereal sense.
Instead the question should be something like: how many chocolate bars would your putative future self have to be offered for you to forgo one now? What is the tradeoff between your utilities?
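The question above can be read as eliciting an exchange rate: a sketch of that arithmetic, under the simplifying (and hypothetical) assumption that utility is linear in chocolate bars.

```python
# Hypothetical sketch: eliciting the weight you place on a putative
# future self from an indifference point between chocolate bars.
# Assumes utility is linear in bars, so at indifference:
#   bars_now == weight * bars_later

def anticipation_weight(bars_now: float, bars_later: float) -> float:
    """Weight on the future self's utility implied by being indifferent
    between `bars_now` bars today and `bars_later` bars for the future
    self."""
    return bars_now / bars_later

# If you would demand 4 future bars to forgo 1 bar now, you weight that
# future self at 0.25 -- a quantitative stand-in for the vague question
# "how much do I anticipate being him?"
print(anticipation_weight(1, 4))  # 0.25
```

The point of the sketch is that the answer is behaviorally measurable, unlike a raw feeling of connectedness.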
Now, altruism is of course a problem for this approach: you might just be very generous with copy #17 down the hallway (he's a thoroughly decent chap and all that), rather than anticipating being him. But humans can generally distinguish between selfish and altruistic decisions, and the setup can be tweaked to encourage the strongest urge towards winning oneself, rather than letting others win. For me, a competitive game with chocolate as the reward would do the trick...
Unlike in the Sleeping Beauty problem, this rephrasing does not instantly solve the problems, but it does locate them: subjective anticipation is encoded in the utility function. Indeed, I'd argue that subjective anticipation is the same problem as indexical utility, with a temporal twist thrown in.