Probabilities do not necessarily need confidence intervals. A probability is already an assessment of uncertainty.
Qualitatively Confused seems relevant. If you assign probability 0.8 that Jamie is at the club, it doesn't make sense to attach an error margin to this number. An error margin would mean something like "I think there is probability 0.8 that Jamie is at the club, but the real probability might be 0.2-0.9." But this is an error. Probability is map, not territory: there is no "real probability" that Jamie is at the club. He's either there or he's not, but you don't know which, and your degree of uncertainty is quantified as a probability.
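One way to see why the error margin adds nothing for a single yes/no event: suppose you did hold a distribution over "the real probability" p. For predicting the one event, that mixture is indistinguishable from just assigning the expectation E[p]. A minimal sketch (the specific numbers 0.2/0.9 and the 50/50 split are illustrative assumptions, not from the post):

```python
import random

random.seed(0)

def sample_event(n: int = 100_000) -> float:
    """Simulate a two-point mixture over 'the real probability':
    p = 0.2 with 50% credence, p = 0.9 with 50% credence."""
    hits = 0
    for _ in range(n):
        p = 0.2 if random.random() < 0.5 else 0.9  # draw the "real" p
        if random.random() < p:                    # then draw the event
            hits += 1
    return hits / n

# The hit rate lands near E[p] = 0.5*0.2 + 0.5*0.9 = 0.55 -- exactly what
# a single point probability of 0.55 would have predicted.
print(sample_event())
```

For a one-shot event, then, the mixture and the point estimate make identical predictions; the extra structure only starts to matter once there is something repeatable to learn about.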
Why don't probabilities come with error margins, or other means of describing uncertainty in their assessments?
If I put the prior probability P(new glacial period starting within the next 100 years) at, say, 0.1, shouldn't I also communicate how certain I feel about that judgement?

A scientist might make the same estimate but be more confident of its accuracy than I am.
In our everyday judgements we often use such package deals:
A: where's Jamie?
B: I think he went to the club house, but you know Jamie - he could be anywhere.
High P, high uncertainty
A: Where's Susie? Do you think she went astray after that heated argument?
B: No, I'm certain she would *never* do that. She must have gone to a friend's place.
High P, low uncertainty.
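One way to make this "same probability, different uncertainty" intuition precise is to treat the belief as being about a repeatable frequency (say, how often Jamie heads to the club). Two Beta priors with the same mean assign the same probability to the next event, but the weaker one updates far more on new evidence. A hedged sketch; the particular parameters Beta(0.8, 0.2) vs. Beta(80, 20) are illustrative assumptions:

```python
def beta_mean(a: float, b: float) -> float:
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

def update(prior: tuple[float, float], failures: int = 3) -> tuple[float, float]:
    """Bayesian update of a Beta prior on observing `failures` misses."""
    a, b = prior
    return a, b + failures

# Both priors give the next event probability 0.8, but with very
# different "strengths" (effective sample sizes 1 vs. 100).
weak = (0.8, 0.2)     # high P, high uncertainty
strong = (80.0, 20.0) # high P, low uncertainty

# Observe Jamie absent 3 times in a row:
print(beta_mean(*update(weak)))    # collapses toward 0.2
print(beta_mean(*update(strong)))  # stays near 0.78
```

On this reading, the "error margin" isn't a second probability attached to the first; it is a statement about how resilient the estimate is to new evidence, which only becomes meaningful when there is a latent frequency to learn about.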