The material I have in mind is Chapter 18 of PT:LOS. You can see the section headings on page 8 (numbered vii because the title page is unnumbered) here. One of the section titles is "Outer and Inner Robots"; when rhollerith says 72%, he's giving the outer robot answer. To give an account of how unstable your probability estimates are, you need to give the inner robot answer.
What does it tell us about that? Doesn't it depend on which piece of evidence we're talking about?
When we receive new evidence, we assign a likelihood function over the probability. (We take the perspective of the inner robot reasoning about what the outer robot will say.) The width of the interval for the probability tells us how sharp the likelihood function has to be to shift the center of that interval by a non-negligible amount.
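As a rough illustration (my own, not from the book), here is a minimal sketch assuming the inner robot's state is a Beta distribution over the probability p: a wide state (little evidence behind the outer robot's 72%) moves a lot under a modest likelihood, while a narrow state giving the same outer-robot answer barely moves.

```python
from scipy.stats import beta

# Hypothetical inner-robot states: both report 72%, with very different widths.
# Beta(a, b) stands in for the inner robot's distribution over the probability p.
states = {
    "unstable": (1.8, 0.7),      # wide distribution: little evidence behind the 72%
    "stable":   (720.0, 280.0),  # narrow distribution: lots of evidence behind it
}

# New evidence: 2 successes and 8 failures (a binomial likelihood).
s, f = 2, 8

for name, (a, b) in states.items():
    prior = beta(a, b)
    post = beta(a + s, b + f)
    lo, hi = prior.interval(0.9)  # 90% interval before the evidence
    print(f"{name:9s} interval ({lo:.2f}, {hi:.2f}), "
          f"mean {prior.mean():.3f} -> {post.mean():.3f}")

# The wide state's mean drops well below 0.72 after the same evidence that
# leaves the narrow state essentially unchanged.
```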
Do you have to specify a prior over which variables you are likely to observe next?
No.
That is a strange little chapter, but I should note that if you talk about the probability that you will make some future probability estimate, then the distribution of that future estimate is a good way of talking about the instability of a state of knowledge, as opposed to talking about the probability of a current probability estimate, which sounds much more like you're doing something wrong.
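To make that concrete, here is a small simulation (again my own illustration, not from the chapter), assuming a Beta state of knowledge about a coin's bias: two agents currently report the same estimate, but the spread of the estimate each expects to report after ten more flips differs sharply, and that spread is the instability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical states of knowledge with the same current estimate of 0.5.
states = {"shaky": (1.0, 1.0), "settled": (100.0, 100.0)}
n_future_flips = 10
n_simulations = 10_000

for name, (a, b) in states.items():
    # Draw a bias from the current state, simulate future flips, and record
    # the estimate the agent would report afterwards (the posterior mean).
    p = rng.beta(a, b, size=n_simulations)
    heads = rng.binomial(n_future_flips, p)
    future_estimate = (a + heads) / (a + b + n_future_flips)
    print(f"{name:8s} current estimate {a / (a + b):.2f}, "
          f"future estimate spread (std) {future_estimate.std():.3f}")

# Both report 0.50 today; "shaky" expects its own estimate to wander widely,
# "settled" expects it to stay put. The distribution of the future estimate
# captures the instability that a single number hides.
```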
There is a common intuition, a feeling, that our most fundamental goals may be uncertain in some sense. What causes this intuition? For this topic I need to be able to pick out one's top-level goals, roughly one's context-insensitive utility function rather than some task-specific utility function, and I do not want to imply that the top-level goals can be interpreted in the form of a utility function. Following Eliezer's CFAI paper, I thus choose the word "supergoal" (sorry Eliezer, but I am fond of that old document and its tendency to coin new vocabulary). In what follows, I will naturalistically explore the intuition of supergoal uncertainty.
To posit a model: goal uncertainty (with supergoal uncertainty as an instance) means that you have a weighted distribution over a set of possible goals, together with a mechanism by which that weight may be redistributed. If we take away the distribution of weights, how can we choose actions coherently? How can we compare? If we take away the weight redistribution mechanism, we end up with a single goal whose state utilities may be defined as the weighted sum of the constituent goals' utilities; the redistribution mechanism is thus what makes goal uncertainty a distinct concept.
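As a toy sketch of that model (my own illustration, with made-up goals and a made-up redistribution rule): with the weights held fixed, the mixture behaves exactly like a single utility function, and only the redistribution step makes the uncertainty do any extra work.

```python
from typing import Callable, Dict

State = str
Utility = Callable[[State], float]

# Two hypothetical constituent goals over toy world-states.
goals: Dict[str, Utility] = {
    "hedonism":  lambda s: {"party": 1.0, "library": 0.2, "gym": 0.4}[s],
    "knowledge": lambda s: {"party": 0.1, "library": 1.0, "gym": 0.3}[s],
}
weights = {"hedonism": 0.6, "knowledge": 0.4}

def mixture_utility(state: State) -> float:
    """With fixed weights, goal uncertainty collapses to one utility function:
    the weighted sum of the constituent goals' utilities."""
    return sum(w * goals[g](state) for g, w in weights.items())

def redistribute(weights: Dict[str, float], toward: str) -> Dict[str, float]:
    """Stand-in for the redistribution mechanism: shift weight toward a goal
    in response to some (unspecified) reflection or moral evidence."""
    shifted = {g: (w * 2.0 if g == toward else w) for g, w in weights.items()}
    total = sum(shifted.values())
    return {g: w / total for g, w in shifted.items()}

best_now = max(["party", "library", "gym"], key=mixture_utility)
print("acting on fixed weights:", best_now)       # behaves like a single goal

weights = redistribute(weights, toward="knowledge")
best_later = max(["party", "library", "gym"], key=mixture_utility)
print("after redistribution:   ", best_later)      # the ranking of actions changes
```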
(ps I may soon post about and explore the effects of supergoal uncertainty, in its various reifications, on making decisions. For instance, what implications, if any, does it have for bounded utility functions (and actions that depend on those bounds) and for negative utilitarianism (or, symmetrically, positive utilitarianism)? Also, if anyone knows of related literature I would be happy to check it out.)
(pps Dang, the concept of supergoal uncertainty is surprisingly beautiful and fun to explore, and I now have a vague wisp of an idea of how to integrate a subset of these reifications with TDT/UDT)