A forecaster is well-calibrated if, for every p∈[0,1], of the propositions that they assign probability approximately p to, the fraction of them that are true is approximately p. However, there is no natural probability distribution over propositions, so this notion is not well-defined.
Often, people aren't even using an implicit probability distribution over propositions when they talk about calibration, and instead are referring to limiting densities over a particular sequence of propositions. For instance, a forecaster may be asked to predict every bit in a bitstream, and be judged well-calibrated if, for every p∈[0,1], the fraction of the first n propositions assigned probability approximately p that are true converges to approximately p as n goes to infinity.
Calibration is not just a relationship between probability assignments and the truth, but a relationship between probability assignments, the truth, and some model for what it means to say that some percentage of a set of propositions is true. This model could be a probability distribution over propositions, or an explicit sequence of them. The dependence on what you mean by percentage of propositions is fairly dramatic.
For any atomless probability measure, you can pick a sequence of propositions such that, in terms of limiting frequencies along the sequence, the probability distribution is guaranteed to be well-calibrated, no matter what the ground truth is. To make a sequence of propositions, all of which are given probability n/m∈Q∩[0,1], and a limiting fraction n/m of which are true: first pick a sequence (Xi)i∈N of independent random variables, where each Xi is uniformly distributed on {1,...,m}. For each i∈N and S⊆{1,...,m} with |S|=n, let Pi,S be the proposition that Xi∈S. For each i, exactly a fraction n/m of the propositions {Pi,S∣S⊆{1,...,m},|S|=n} are true, no matter what Xi actually is, since each possible value of Xi lies in exactly (m−1 choose n−1) of the (m choose n) sets S. So if you list {Pi,S∣i∈N,S⊆{1,...,m},|S|=n} in order of increasing i, then the fraction of them that are true converges to n/m as you go along the list. Now you can create a single sequence of propositions with varying assigned probabilities, well-calibrated at every probability level, by interspersing these sequences across the rational numbers between 0 and 1.
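To make the construction concrete, here is a minimal simulation sketch (not from the original argument; the parameters m = 5, n = 2 are arbitrary, so every proposition is assigned probability 2/5). However the Xi come out, exactly a 2/5 fraction of the propositions are true:

```python
import random
from itertools import combinations

m, n = 5, 2                 # every P_{i,S} is assigned probability n/m = 0.4
num_rounds = 1000           # how many i's to include

true_count = 0
total_count = 0
for _ in range(num_rounds):
    x = random.randint(1, m)                        # the realized value of X_i
    for S in combinations(range(1, m + 1), n):      # all S ⊆ {1,...,m} with |S| = n
        total_count += 1
        true_count += x in S                        # P_{i,S} is true iff X_i ∈ S

# Each x lies in exactly C(m-1, n-1) of the C(m, n) sets S, so the fraction of
# true propositions is exactly n/m, regardless of how the x's came out.
print(true_count / total_count)                     # 0.4
```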
Alternatively, for any atomless probability measure, you can pick a sequence of propositions such that, in terms of limiting frequencies along the sequence, the probability distribution is extremely poorly calibrated, no matter what the ground truth is. To do this, let X be a random variable distributed uniformly on [0,1]. For each q∈Q∩[0,1], let Pq be the proposition that X<q. Pq gets assigned probability q. No matter what X actually is, for each p∈[0,1] (except for one, namely p=X), either all of the Pq for rational q≈p are true, or none of them are, rather than the desired fraction p of them. One might object to this example on the grounds that the reason for poor calibration is that every proposition that gets assigned probability approximately p is approximately the same proposition. But this is merely an extreme version of something that could realistically happen with real forecasting questions; sufficiently consequential events can have causal effects on a large fraction of questions a forecaster might predict, so a non-negligible-probability event may throw off even a good forecaster's calibration via correlated effects across many questions.
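And here is a matching sketch of the failure mode (again just an illustration, with arbitrary bucketing): group the propositions Pq by their assigned probability and look at the fraction true in each bucket.

```python
import random

random.seed(0)
x = random.random()                   # the realized value of X, uniform on [0,1]

buckets = {}                          # assigned-probability bucket -> (true, total)
for k in range(1, 1000):
    q = k / 1000                      # P_q is the proposition "X < q", assigned probability q
    b = round(q, 1)                   # group the P_q with assigned probability q ≈ b
    true, total = buckets.get(b, (0, 0))
    buckets[b] = (true + (x < q), total + 1)

for b in sorted(buckets):
    true, total = buckets[b]
    print(f"assigned ≈{b:.1f}: fraction true = {true / total:.2f}")
# Every bucket below x prints ≈0.00 and every bucket above x prints ≈1.00,
# instead of the diagonal that good calibration would require.
```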
The previous two sequences of propositions don't have to be on different subjects; they could be different ways of organizing the exact same information. For instance, the propositions in the first example could be about the digits in the base-m expansion of the random variable in the second example. So this is more fundamental than just that people may be well calibrated on some topics but poorly calibrated on others; how well someone scores on a calibration test will depend on how the test is organized, not just on what information they're forecasting.
So what are people actually measuring when they measure calibration? What it's intended to measure is failure to form a coherent probability distribution at all, rather than any notion of how accurate a given probability distribution is. Forecasters don't give explicit probability distributions over everything that could happen; they just attach numbers to a certain set of propositions, and these numbers are intended to be interpreted as probabilities. But if you produce these numbers by thinking about how qualitatively likely something is and then attempting to represent that with an appropriate-seeming number, then these aren't likely to actually be the probabilities that those propositions have in any actual probability distribution. That is, if you report probabilities by taking some underlying probability distribution and applying a monotonically increasing bijection f:[0,1]→[0,1] (other than the identity) to the probability of every proposition, then you will be poorly calibrated on sequences of propositions on which the original probability distribution is well-calibrated, and you won't be reporting probabilities from any actual probability distribution. For instance, if you take three equally likely possible outcomes, exactly one of which must occur, then the probability you assign to each of them will be f(1/3), and their sum will be 3f(1/3), rather than 1.

A suggested way to train to become calibrated is to give probabilities to large numbers of propositions whose truth-values can be checked later, and then calculate the fraction q of the propositions you assigned probability approximately p to that turned out to be true, so that next time you feel the level of confidence you previously reported as probability p, you can report probability q instead. If your reported probabilities were a monotonically increasing function f of the probabilities of some probability distribution which is well-calibrated on the given propositions, then this allows you to learn and undo the function f, so that you end up accurately reporting the probabilities from your underlying distribution.

But if you already were reporting probabilities from a coherent probability distribution, and that probability distribution was poorly calibrated on the given propositions for whatever reason, then this will make you a worse forecaster, by making the probabilities you give no longer form a coherent probability distribution. For example, if X and Y are independent random variables uniformly distributed on [0,1], then forecasting a bunch of propositions of the form X<p, then discovering the true value of X, and then adjusting so that you only ever give probabilities 0 or 1 (so that you would have been well-calibrated on those questions) will make you worse at forecasting Y. So calibration training implicitly assumes that if you adjust your probabilities so that they are coherent, then they will be well-calibrated on the questions people tend to train on in practice. I expect this assumption is likely to be close to true, provided the questions being forecasted are sufficiently diverse that correlations between them don't throw off calibration.
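A minimal sketch of the happy case, assuming (hypothetically) that the distortion is f(p) = p² and that the questions are independent with a well-calibrated underlying distribution: bucketing past questions by reported probability and reading off the observed frequency approximately recovers f⁻¹, i.e. the underlying probabilities.

```python
import random

random.seed(1)

def f(p):
    return p ** 2                     # hypothetical monotone distortion of the true probabilities

# Simulate many independent questions whose underlying probabilities really are
# well-calibrated, but which get reported as f(p) instead of p.
records = []                          # (reported probability, outcome)
for _ in range(100_000):
    p = random.random()
    records.append((f(p), random.random() < p))

# Calibration training: group past questions by reported probability and record
# the fraction that came true in each bucket.
buckets = {}
for reported, outcome in records:
    b = round(reported, 1)
    true, total = buckets.get(b, (0, 0))
    buckets[b] = (true + outcome, total + 1)

for b in sorted(buckets):
    true, total = buckets[b]
    # The observed frequency roughly tracks f^-1(b) = sqrt(b) (only roughly, since
    # the buckets are coarse); reporting it next time, instead of the felt level b,
    # approximately undoes the distortion f.
    print(f"reported ≈{b:.1f}: observed frequency {true / total:.2f}, sqrt({b:.1f}) = {b ** 0.5:.2f}")
```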
If the things you're predicting are completely independent, then naive "calibration" works fine: if you're good at putting things into an "80% likely" bucket, then in practice ~80% of those predictions will be true.
If the things you're predicting are highly correlated with each other—e.g. questions like "Will company X fail?", "Will company Y fail?", and so on, when the most likely way for company X to fail involves a general economic downturn that affects all the companies—then even if you were perfect at putting propositions into the 5% bucket, the actual outcomes may look a lot more like "0% became true" or "100% became true" than like "5% became true".
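A quick simulation sketch of this contrast (with hypothetical numbers; the correlated case borrows the downturn figures from the worked example below): twenty companies, each with a ≈5% marginal chance of failing, evaluated together as one "5% bucket". With a common cause, the bucket usually comes out near 0% true but occasionally near 70% true.

```python
import random

random.seed(2)
n_companies, n_trials = 20, 50_000

def n_failures(correlated):
    # Hypothetical numbers: either independent 5% failures, or a 4% chance of a
    # downturn shared by all companies, with 70% failure given a downturn and
    # 2.3% failure otherwise (≈5% marginally either way).
    if correlated:
        p_fail = 0.70 if random.random() < 0.04 else 0.023
    else:
        p_fail = 0.05
    return sum(random.random() < p_fail for _ in range(n_companies))

for label, correlated in (("independent", False), ("common cause", True)):
    samples = [n_failures(correlated) for _ in range(n_trials)]
    mean_fraction = sum(samples) / (n_trials * n_companies)
    extreme = sum(s >= 5 for s in samples) / n_trials    # ≥25% of the bucket came true
    print(f"{label:>12}: mean fraction true {mean_fraction:.3f}, "
          f"P(≥25% of the bucket true) = {extreme:.4f}")
```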
Therefore, when evaluating someone's calibration, or creating a set of predictions one plans to evaluate later, one should take these correlations into account.
If one expects correlated outcomes, probably the best thing is to factor out the correlated part into its own prediction—e.g. "Chance of overall downturn [i.e. GDP is below X or something]: 4%" and "Chance of company X failing, conditional on overall downturn: 70%" and "Chance of company X failing, conditional on no downturn: 2.3%" (which comes out to ~5% total).
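For completeness, the total-probability arithmetic that recombines those three factored numbers into the ≈5% headline figure:

```python
# Recombining the factored prediction above into the unconditional probability.
p_downturn = 0.04
p_fail_given_downturn = 0.70
p_fail_given_no_downturn = 0.023

p_fail = (p_downturn * p_fail_given_downturn
          + (1 - p_downturn) * p_fail_given_no_downturn)
print(f"{p_fail:.5f}")    # 0.05008, i.e. ≈5% total
```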
If the predictor didn't do this, but there was an obvious-in-retrospect common cause affecting many propositions... well, you still don't know what probability the predictor would have assigned to that common cause, which is unfortunate, and makes it difficult to judge. Seems like the most rigorous thing you can do is pick one of the correlated propositions, and throw out the rest, so that the resulting set of propositions is (mostly) independent. If this leaves you with too few propositions to do good statistics with, that is unfortunate.
One might think that if you're evaluating buckets separately (e.g. "the 80% bucket", "the 90% bucket"), it's ok if there's a proposition in one bucket that's correlated with a proposition in another bucket; as long as there's no correlation within each bucket, it remains the case that, if the predictor was good, then ~80% of the propositions in the 80% bucket should be true. But then you can't do a meta-evaluation at the end that combines the results of separate buckets: e.g. if they said "5% company X fails, 10% company Y fails, 15% company Z fails, 20% company Q fails", and there was a downturn and they all failed, then saying "The predictor tended to be underconfident" would be illegitimate.