Whenever we talk about the probability of an event we do not have perfect information about, we generally use qualitative descriptions (e.g. possible but improbable). When we do use numbers, we usually stick to a probability range (e.g. 1/4 to 1/3). A Bayesian should be able to assign a probability estimate to any well-defined hypothesis, but for a human, trying to assign a numerical probability estimate is uncomfortable and seems arbitrary. Even when we can give a probability range, we resist collapsing it to a single number. For instance, I'd say that Republicans are more likely than not to take over the House, but the Democrats still have a chance. After pressing myself, I managed to say that the probability of the Democratic party keeping control of the House in the next election is somewhere between 25% and 40%. Condensing this to 32.5% just feels wrong and arbitrary. Why is this? I have thought of three possible reasons, which I list in order of likelihood:

Maybe our brains are just built like frequentists. If we innately think of probabilities as being properties of hypotheses, it makes sense that we would not give an exact probability. If this is correct, it would mean that the tendency to think in frequentist terms is too entrenched to be easily untrained: I try to think like a Bayesian, yet I still suffer from this effect, and I suspect the same is true of most of you.

Maybe, since our ancestors never needed to express numerical probabilities, our brains never developed the ability to do so. Even if we have data spaces in our brains that represent probabilities of hypotheses, they could be buried in the decision-making portions of our brains, and the signal could get garbled when we try to pull it out in verbal form. However, we also get uncomfortable when forced to make important decisions on limited information, which would be evidence against this.

Maybe there is selection pressure against giving specific answers, because specificity makes it harder to inflate your accuracy after the fact, resulting in lower status. This seems highly unlikely, but since I thought of it, I feel compelled to write it down anyway.

As there are people on this forum who actually know a thing or two about cognitive science, I expect I'll get some useful responses. Discuss.

Edit: I did not mean to imply that it is wise for humans to give a precise probability estimate, only that a Bayesian would, but we don't.

15 comments

I would point out that humans can only count to about 7 (without taking advantage of symmetry), and we only have about that many intuitive levels of probability. For example, you give "between 25% and 40% probability"; I'd guess that humans have an emotional(?) level of probability that's kinda like 1/4 and another kinda like 2/5, and you don't actually feel good about either, so you have to hedge. The suggestion that we only have a few quantum levels of probability dovetails nicely with how rationalists get condemned by many people for giving arbitrarily high probabilities that aren't 100%: humans have a "certainty" level, but no level for certainty minus epsilon, so rationalists tend to come off as relatively unconfident when we give nuanced probabilities. Asking why we can't give intuitive numerical probabilities is pretty stupid; we had to invent numbers - they're not in the source code.

Cyan:

I would point out that humans can only count to about 7...

A nitpick: that's subitizing, not counting.

The answer is that human minds are not Bayesian, nor is it possible for them to become such. For just about any interesting question you may ask, the algorithm that your brain uses to find the answer is not transparent to your consciousness -- and its output doesn't include a numerical probability estimate, merely a vague and coarsely graded feeling of certainty. The only exceptions are situations where a phenomenon can be modeled mathematically in a way that allows you to work through the probability calculations explicitly, but even then, your confidence that the model captures reality ultimately comes down to a common-sense judgment produced by your non-transparent brain circuits.

In your concrete example, if you're knowledgeable about politics, you can have a good hunch for how likely a certain future election outcome is. But this insight is produced by a mostly opaque process in your brain, which doesn't give you any numerical probabilities. This is not a problem you can attack with an explicit mathematical calculation, and even if you devised a way to do so, the output of this calculation would be altogether different from the conclusion you'll make using common sense, and it makes no sense to assign the probability calculated by the former to the latter.

Therefore, insisting on attaching a numerical probability to your common-sense conclusions makes no sense, except insofar as such numbers are sometimes used as vague figures of speech.

matt:

But attaching those estimates is clearly useful.
Consider training: predictionbook.com

Recently I started phrasing probabilities in my head as odds; it feels arbitrary to say "this has a 30% probability", but it doesn't feel arbitrary to say to myself "I would put $20 down against $10 that this will happen and feel like I wasn't losing money", and then notice that this is roughly equivalent to a 33% probability.
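A minimal sketch of the odds-to-probability conversion described above (the function name is mine, and it assumes the reading where you stake the smaller amount on the event happening):

```python
def implied_probability(stake_on_event, stake_against_event):
    # Break-even probability for a bet where you risk `stake_on_event`
    # to win `stake_against_event` if the event happens.
    return stake_on_event / (stake_on_event + stake_against_event)

# Staking $10 to win $20 (odds of 2:1 against the event) implies roughly 1/3.
print(implied_probability(10, 20))  # 0.333...
```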

To somewhat reiterate the edit on the OP, I don't mean for this to be prescriptive or to postulate a serious theory behind it; I'm just observing what makes me feel more or less weird in terms of stating probabilities.

For any given state of information there is a true probability that any given hypothesis is true. Why can't we view people as just estimating this probability when they say something like "the probability of the Democratic party keeping control of the House next election is somewhere between 25% and 40%"? Saying an exact number suggests a great deal of precision in your estimate, which in general you don't have.

I don't think a mean of the probabilities is the correct way to average; I think the logistic of the mean of the log odds (suggested by Douglas Knight) is better, and it averages 25% and 40% to ~32%. Obviously that's not far off, so in this case it's a nitpick. It might be the best way of handling estimates from a group, though; a weighted average would even work if one trusts different members of the group differently.
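A minimal sketch of that averaging rule (the function names are mine), including the weighted variant mentioned for group estimates:

```python
import math

def logit(p):
    # Probability -> log odds.
    return math.log(p / (1 - p))

def logistic(x):
    # Log odds -> probability.
    return 1 / (1 + math.exp(-x))

def average_probabilities(ps, weights=None):
    # Average in log-odds space, then map back to a probability.
    if weights is None:
        weights = [1.0] * len(ps)
    mean_log_odds = sum(w * logit(p) for w, p in zip(weights, ps)) / sum(weights)
    return logistic(mean_log_odds)

print(average_probabilities([0.25, 0.40]))  # ~0.320
```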

For the truly crazy (crazy as in wanting to go to lots of extra work for not much gain), I think we can subvert our mental faculties by asking ourselves for the probabilities in the absolute best and worst cases for each side; since we're not very good at those estimations, treat them as the 25th and 75th percentiles, and construct a beta distribution that matches those parameters. This, however, is a huge pain, because not only do you need to find two parameters, but the CDF of the beta distribution is not terribly convenient. You'd then have a mean and a standard deviation, and if those seem way off base, you might want to revise your estimates.
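For the curious, here is a sketch of that fitting procedure using SciPy (the quantile targets 0.25 and 0.40 come from the example above; the starting guess is mine and may need adjusting):

```python
from scipy import optimize, stats

def fit_beta_to_quartiles(q25, q75):
    # Solve for (a, b) such that the beta distribution's 25th and 75th
    # percentiles match the stated best/worst-case estimates.
    def equations(params):
        a, b = params
        return (stats.beta.ppf(0.25, a, b) - q25,
                stats.beta.ppf(0.75, a, b) - q75)
    return optimize.fsolve(equations, x0=(5.0, 10.0))

a, b = fit_beta_to_quartiles(0.25, 0.40)
print("a, b:", a, b)
print("mean:", stats.beta.mean(a, b), "std:", stats.beta.std(a, b))
```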

This makes me want to see that post on Dempster–Shafer Theory someone had been considering in an open thread a while back.

Perhaps we're used to coming up with quantitative answers only for numerical data, and don't know how to convert impressions or instincts--even well-informed ones--into those numbers. It feels arbitrary to say 32.5% because it is arbitrary. You can't break that number down into other numbers which explain it. Or, well, you could--you could build a statistical model which incorporated past voting data and trends, and then come up with a figure you could back up with quantifiable evidence--but I'm willing to bet that if you did that, you wouldn't feel weird about it any more.

For similar reasons, 32.5% seems just too precise a number for the amount of data you're incorporating into your estimate. You don't know whether it's 25% or 40%, a 15% gap, but you're proposing a mean which adds one significant figure of precision? There's definitely something wrong with that. I think your discomfort is entirely valid.

I find the idea of having a range of probabilities sensible.

If you give a single probability, all you tell someone is your expectation of what will happen.

If you give a probability range, you're transmitting both your probability AND your certainty that your probability is correct; in other words, you are telling the listener how much evidence you believe yourself to have.

So, as you gain more evidence your probability range will narrow.

To clarify:

When you find a probability through bayes-fu, you need a prior. Precisely one prior. Which is going to be pretty arbitrary.

But what if you picked two symmetrical priors, and applied the evidence to those? You start with prior 1: 99.99% and prior 2: 0.01%.

When you have a reasonable amount of evidence, you'll find that prior 1 and prior 2 result in similar values (e.g. prior 1 might give 40% while prior 2 gives 25%). If you were to get more evidence, the difference would gradually vanish; the smaller the probability range, the firmer your probability is.
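A minimal sketch of that two-prior bracketing (the likelihood ratios are made up purely for illustration): apply the same evidence to both extreme priors and watch the interval between the two posteriors narrow as evidence accumulates.

```python
def update(p, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

low, high = 0.0001, 0.9999  # the two symmetrical priors
for step in range(1, 16):
    # Each hypothetical piece of evidence favours the hypothesis 3:1.
    low, high = update(low, 3.0), update(high, 3.0)
    print(f"after {step:2d} pieces of evidence: {low:.4f} .. {high:.4f}")
```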

sfb:

However, we also get uncomfortable when forced to make important decisions on limited information, which would be evidence against this.

You get uncomfortable when forced to make important decisions on limited information; that doesn't mean everybody does.

After pressing myself, I managed to say that the probability of the Democratic party keeping control of the House next election is somewhere between 25% and 40%.

What do you mean by this (apart from that you think it more likely they will lose than that they will win)? If the election were run twice over two days and the same people voted, their votes should stay the same both days, so it's not "this election run a hundred times would lead to a win 25-40 times".

Where is your uncertainty? The probability of ... what, specifically?

I wouldn't assume that the ability to assign numerical probabilities would even be useful for epistemic rationality. Even if there were a way to calculate a consistent measure of 'degree of belief', it might be so computationally intensive that producing the number in any reasonable amount of time would require corner-cutting, leaving the final number less than fully representative of your 'subjective probability' anyway.

(Granted: such an ability might be of aid to gamblers)

It is curious that we definitely have degrees of belief, but we usually can't precisely introspect them. I would guess this is because when we introspect on the reasons for and against a belief, whichever reasons we are currently considering are temporarily made more salient, so we get a feeling of vacillation.

I don't find ranges of probabilities of final binary outcomes (for making decisions) to be useful at all. A single number is all I need. Just because this number may change as I focus on different pieces of evidence doesn't mean that it needs to be represented as a range. But if your model contains parameters that represent the probability of some latent variable, then you should indeed be integrating over the distributions of all those parameters.
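A minimal sketch of what integrating over a latent parameter can look like (the model and the beta-shaped belief over the latent rate are hypothetical):

```python
import random

random.seed(0)

def outcome_probability_given(theta):
    # Hypothetical model: probability of the final binary outcome
    # as a function of an unknown latent rate theta.
    return 0.2 + 0.6 * theta

# Belief over theta, represented as samples from a beta distribution.
samples = [random.betavariate(5, 10) for _ in range(100_000)]

# Marginal probability: average the model output over the belief about theta.
p = sum(outcome_probability_given(t) for t in samples) / len(samples)
print(f"single marginal probability: {p:.3f}")
```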

First, I find it helpful to reframe questions in a way that reminds me to consider what I think I know and how I think I know it. "I think the Republicans will take over the House; what is the probability that my prediction will be wrong?"

Second, when assigning a number to an intuitive judgment, keep in mind that you should be indifferent about taking a small-stakes bet at the odds you chose.