Forecasters vary on at least three dimensions:
- accuracy - as measured by (e.g.) your average Brier score over time (the Brier score is a measure of error: if you assign probability 0.7 to a proposition p and p turns out to be true, your Brier score on this forecast is (1 - 0.7)^2 = 0.09).
- calibration - how close they are to perfect calibration, where for any x, of the statements they assign a probability of x% to, x% turn out to be true.
- reliability - how much evidence does a given forecast of yours provide for the proposition in question being true? I think of this as: for a given confidence level c, what's the Bayes factor P(you say the probability of x is c | x) / P(you say the probability of x is c | not-x)?
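For concreteness, here's a minimal Python sketch of the three metrics. The data format (a list of (stated probability, outcome) pairs) and the function names are my own illustration, not anything standard:

```python
def brier_score(forecasts):
    """Mean squared error of the stated probabilities."""
    return sum((p - (1.0 if true else 0.0)) ** 2
               for p, true in forecasts) / len(forecasts)

def calibration_at(forecasts, c):
    """Fraction of your c-confidence judgements that came true."""
    bucket = [true for p, true in forecasts if p == c]
    return sum(bucket) / len(bucket)

def bayes_factor_at(forecasts, c):
    """P(you say c | true) / P(you say c | false)."""
    trues = [p for p, true in forecasts if true]
    falses = [p for p, true in forecasts if not true]
    p_c_given_true = sum(1 for p in trues if p == c) / len(trues)
    p_c_given_false = sum(1 for p in falses if p == c) / len(falses)
    return p_c_given_true / p_c_given_false
```

On the single forecast from the accuracy example, `brier_score([(0.7, True)])` gives 0.09.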
I wonder how these three properties relate to each other.
(A) Assume that you are perfectly calibrated at 90% and you say "It will rain today with 90% probability" - how should I update on your claim, given that I know you are perfectly calibrated? My first intuition is that, given your perfect calibration,
P(you say rain with 90% | rain) is 90% and P(you say rain with 90% | no rain) is 10%. But that doesn't follow from the fact that you are perfectly calibrated, does it? Does your calibration have any bearing at all on your reliability (apart from the fact that both correlate positively with forecasting competence)? If it doesn't - why do we care about being calibrated?
(B) How does accuracy relate to reliability? Can I infer something about your reliability from knowing your long-run Brier score?
Hey, thanks for the answer and sorry for my very late response. In particular thanks for the link to the OpenPhil report, very interesting! To your question - I now changed my mind again and tentatively think that you are right. Here's how I think about it now, but I still feel unsure whether I made a reasoning error somewhere:
There's some distribution of your probabilistic judgements showing how frequently you report a given probability for propositions that turned out to be true. It might show, e.g., that for true propositions you report 90% probability in 10% of all your probability judgements. This is compatible with perfect calibration as long as, for false propositions, you report 90% in (10/9)% of all your probability judgements (assuming, for simplicity, that you judge equally many true and false propositions). Then it is still the case that 90% of your 90% probability judgements turn out to be true - and hence you are perfectly calibrated at 90%.
So, given these assumptions, what would the Bayes factor for your 90% judgement in "rain today" be?
P(you give rain 90% | rain) should be 10%, since I'm effectively sampling your 90% judgement at random from the distribution where, among true propositions, a 90% judgement occurs 10% of the time. For the same reason, P(you give rain 90% | no rain) = (10/9)%. Therefore, the Bayes factor is 10% / (10/9)% = 9.
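To sanity-check the arithmetic, here's the same example with concrete (made-up) counts; the equal split between true and false propositions is an assumption the argument needs:

```python
# Hypothetical counts: equally many true and false propositions judged.
n_true = n_false = 9000

true_90 = 0.10 * n_true              # 90% judgements on true propositions
false_90 = (10 / 9) / 100 * n_false  # 90% judgements on false propositions

# Calibration at 90%: share of your 90% judgements that came true.
calibration = true_90 / (true_90 + false_90)   # ≈ 0.9

# Bayes factor: P(you say 90% | true) / P(you say 90% | false).
bayes_factor = (true_90 / n_true) / (false_90 / n_false)   # ≈ 9.0
print(calibration, bayes_factor)
```

So under these assumptions the 90% judgement is consistent with perfect calibration and carries a Bayes factor of 9.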
I suspect that my explanation is overly complicated - feel free to point out more elegant ones :)