Whenever we talk about the probability of an event we lack perfect information about, we generally use qualitative descriptions (e.g. possible but improbable). When we do use numbers, we usually just stick to a probability range (e.g. 1/4 to 1/3). A Bayesian should be able to assign a probability estimate to any well-defined hypothesis, but for a human, trying to assign a numerical probability estimate is uncomfortable and feels arbitrary. Even when we can give a probability range, we resist collapsing it into a single point estimate. For instance, I'd say that Republicans are more likely than not to take over the House, but the Democrats still have a chance. After pressing myself, I managed to say that the probability of the Democratic party keeping control of the House next election is somewhere between 25% and 40%. Condensing this to 32.5% just feels wrong and arbitrary. Why is this? I have thought of three possible reasons, which I list in order of likelihood:
Maybe our brains are just built like frequentists. If we innately think of probabilities as being properties of hypotheses, it makes sense that we would not give an exact probability. If this is correct, it would mean that the tendency to think in frequentist terms is too entrenched to be easily untrained: I try to think like a Bayesian, yet I still suffer from this effect, and I suspect the same is true of most of you.
Maybe, since our ancestors never needed to express numerical probabilities, our brains never developed the ability to do so. Even if we have data spaces in our brains to represent probabilities of hypotheses, they could be buried in the decision-making portion of our brains, and the signal could get garbled when we try to pull it out in verbal form. However, we also get uncomfortable when forced to make important decisions on limited information, which is evidence against this.
Maybe there is selection pressure against giving specific answers, because a specific answer makes it harder to inflate your accuracy after the fact, resulting in lower status. This seems highly unlikely, but since I thought of it, I felt compelled to write it anyway.
As there are people on this forum who actually know a thing or two about cognitive science, I expect I'll get some useful responses. Discuss.
Edit: I did not mean to imply that it is wise for humans to give a precise probability estimate, only that a Bayesian would, but we don't.
I don't think a mean of the probabilities is the correct way to average; I think the logistic of the mean of the log odds (suggested by Douglas Knight) is better, which averages 25% and 40% to ~32%. Obviously that's not far off, so in this case it's a nitpick. It might be the best way of handling estimates from a group, though; a weighted average would even work if one trusts different members of the group differently.
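For the concrete-minded, here's a minimal Python sketch of that averaging scheme (the helper names are mine, nothing standard): convert each probability to log odds, take the (optionally weighted) mean, and map back through the logistic function.

```python
import math

def log_odds(p):
    """Convert a probability to log odds."""
    return math.log(p / (1 - p))

def logistic(x):
    """Convert log odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def average_probabilities(ps, weights=None):
    """Logistic of the (optionally weighted) mean of the log odds."""
    if weights is None:
        weights = [1] * len(ps)
    mean = sum(w * log_odds(p) for p, w in zip(ps, weights)) / sum(weights)
    return logistic(mean)

print(average_probabilities([0.25, 0.40]))  # ~0.320, vs. 0.325 for the plain mean
```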
For the truly crazy (crazy as in wanting to go to lots of extra work for not much gain), I think we can subvert our mental faculties by asking ourselves for the probabilities in the absolute best and worst cases for each side; since we're not very good at those estimations, treat them as the 25th and 75th percentiles, and construct a beta distribution that matches those parameters. This, however, is a huge pain, because not only do you need to find two parameters, but the CDF of the beta distribution is not terribly convenient. You'd then have a mean and a standard deviation, and if those seem way off base, you might want to revise your estimates.
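A numerical root-finder takes most of the pain out of matching the percentiles. Here's a sketch using scipy (the fitting function is my own, with the 25%–40% range from above standing in as the quartiles); searching over log-parameters just keeps the shape parameters positive during the solve.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import beta

def fit_beta_to_quartiles(q25, q75):
    """Solve for shape parameters (a, b) of a beta distribution whose
    25th and 75th percentiles match the given values."""
    def residuals(log_params):
        a, b = np.exp(log_params)  # exponentiate so a, b stay positive
        return [beta.ppf(0.25, a, b) - q25,
                beta.ppf(0.75, a, b) - q75]
    log_a, log_b = fsolve(residuals, [np.log(2.0), np.log(2.0)])
    return np.exp(log_a), np.exp(log_b)

a, b = fit_beta_to_quartiles(0.25, 0.40)
dist = beta(a, b)
print(f"a={a:.2f}, b={b:.2f}, mean={dist.mean():.3f}, sd={dist.std():.3f}")
```

For this particular range the fitted mean lands close to the log-odds average above; the standard deviation is the extra piece of information, and if it looks absurd, that's the cue to revisit the original quartile estimates.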