If you have a probability of probabilities, you can just collapse it into one probability. Suppose you're 50% sure that A has 80% probability, and 50% sure it has 60% probability. Let B be the event that A has 80% probability.
P(A|B) = 0.8
P(A|!B) = 0.6
P(B) = 0.5
P(A&B) = P(A|B)P(B) = 0.8*0.5 = 0.4
P(A&!B) = P(A|!B)P(!B) = 0.6*0.5 = 0.3
P(A) = P(A&B)+P(A&!B) = 0.4+0.3 = 0.7
So you can just say that A has 70% probability and be done with it. No need for a confidence interval.
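The collapse above is just the law of total probability; a minimal sketch in Python with the same numbers (variable names are mine, not from the original):

```python
# Collapse a "probability of probabilities" into one number
# via the law of total probability: P(A) = P(A|B)P(B) + P(A|!B)P(!B)

p_b = 0.5              # P(B): chance that A really has 80% probability
p_a_given_b = 0.8      # P(A|B)
p_a_given_not_b = 0.6  # P(A|!B)

p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)
print(p_a)  # ~0.7, up to floating-point rounding
```

The same two-line computation works for any finite set of hypotheses about what the "true" probability is: weight each candidate probability by how likely you think it is, and sum.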
Why don't probabilities come with error margins, or other means of describing uncertainty in their assessments?
If I assign a prior probability P(new glacial period starting within the next 100 years) of, say, 0.1, shouldn't I then also communicate how certain I am about that judgement?
A scientist might make the same estimate but be more sure about its accuracy than I am.
In our everyday judgements we often use such package deals:
A: Where's Jamie?
B: I think he went to the clubhouse, but you know Jamie - he could be anywhere.
High P, high uncertainty.
A: Where's Susie? Do you think she went astray after that heated argument?
B: No, I'm certain she would *never* do that. She must have gone to a friend's place.
High P, low uncertainty.