Recently, I found myself in a conversation with someone advocating the use of Knightian uncertainty. I pointed out that it doesn't really matter what uncertainty you call "normal" and what uncertainty you call "Knightian" because, at the end of the day, you still have to cash out all your uncertainty into a credence so that you can actually act.

My conversation partner, who I'm anonymizing as "Sir Percy", acknowledged that this is true if your goal is to maximize your expected gains, but denied that he should maximize expected gains. He proposes maximizing minimum expected gains given Knightian uncertainty ("using the MMEU rule"), and when using such a rule, the distinction between normal uncertainty and Knightian uncertainty does matter. I motivated the MMEU rule in my previous post, and in the next post, I'll explore it in more detail.

In this post, I will be examining Knightian uncertainty more broadly. The MMEU rule is one way of cashing out Knightian uncertainty into decisions in a way that looks non-Bayesian. But this decision rule is only one way in which the concept of Knightian uncertainty could prove useful, and I want to take a post to explore the concept of Knightian uncertainty in its own right.


According to Wikipedia:

In economics, Knightian uncertainty is risk that is immeasurable, not possible to calculate.

There are many ways to interpret this. In Sir Percy's coin toss, we cash out the idea of Knightian uncertainty by saying that we have "Knightian uncertainty" about whether the coin was weighted, and that we can narrow down our credence in the event H to a "Knightian interval" [.4, .6], but no further. This indicates a failure of introspection: an agent with this sort of Knightian uncertainty cannot get a precise credence for every event.
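To make the contrast concrete, here is a rough sketch of the MMEU rule applied to such an interval. The payoffs below are hypothetical stand-ins (the actual bets are described in the previous post); only the Knightian interval [.4, .6] is taken from the setup above.

```python
# Rough sketch of the MMEU rule on Sir Percy's coin toss. The payoffs are
# hypothetical (the actual bets are defined in the previous post); only the
# Knightian interval [0.4, 0.6] comes from the setup above.

KNIGHTIAN_INTERVAL = (0.4, 0.6)

def expected_gain(p_heads, payoff_heads, payoff_tails):
    return p_heads * payoff_heads + (1 - p_heads) * payoff_tails

def min_expected_gain(payoff_heads, payoff_tails):
    # Expected gain is linear in p_heads, so the minimum over the interval
    # is attained at one of its endpoints.
    return min(expected_gain(p, payoff_heads, payoff_tails) for p in KNIGHTIAN_INTERVAL)

# Hypothetical payoffs: each bet looks mildly favorable at a sharp credence of 0.5,
# but unfavorable at the worst end of the Knightian interval.
actions = {
    "take bet on H": (1.0, -0.8),
    "take bet on T": (-0.8, 1.0),
    "refuse both":   (0.0, 0.0),
}

for name, (ph, pt) in actions.items():
    print(name, expected_gain(0.5, ph, pt), min_expected_gain(ph, pt))

# A Bayesian with credence 0.5 takes either bet (expected gain 0.1), while the
# MMEU rule refuses both (worst-case gain -0.08 vs. 0.0 for refusing).
```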

Another common phrase tossed around when people mention Knightian uncertainty is "black swan events", events that are unpredictable in foresight but have very high impact. One common example of a black swan event is the dawn of personal computing: in the 1940s, very few people would have predicted that personal computers would become so pervasive, yet when they did, they completely altered the course of history.

People who expect black swan events to occur often claim they have Knightian uncertainty about the future. This indicates a failure of prediction: an agent with this sort of Knightian uncertainty expects that even their best predictions will be significantly flawed.

I don't like the term "Knightian uncertainty", but I mostly don't like it because it is one label that tries to cover a few very different concepts, including failures of introspection and failures of prediction. (I also dislike the term because it's named for a person instead of for its function, but until I can convince everybody to refer to "Bayesian reasoning" by a better name ("ratience?") I won't complain.)

Regardless, the concepts introduced by "Knightian uncertainty" are not mysterious unknowable immeasurable horrible no-good very bad uncertainty. Indeed, these concepts merely capture certain states of knowledge in bounded Bayesian reasoners. Allow me to repeat that:

Knightian uncertainty is not a special, immeasurable uncertainty. It's just a term that captures a few different states of limited knowledge in a bounded reasoner.

I'll expand upon that point.

Failures of Prediction (black swans)

You can't predict the future, say the advocates of Knightian uncertainty. Or, rather, you can, but you'll be completely wrong. Your Bayesian reasoning allows you to predict what is likely among the outcomes in your hypothesis space, but your hypothesis space is sorely lacking. The correct hypothesis is so far outside your hypothesis space that it hasn't even been brought to your attention, and yet it is so different from everything you can consider that your ability to predict anything about the future is completely ruined.

This is the black swan effect: sometimes, fate throws you a curveball so completely outside your hypothesis space that all your predictions are shattered by a single "black swan" event (which usually has the gall to seem obvious in hindsight). This phenomenon is real, and vindicated by history, but we don't need a new kind of uncertainty to account for it.

Black swan effects occur primarily when a bounded agent fails to consider part of the hypothesis space. A perfect Solomonoff inductor in a computable universe is not vulnerable to black swans: it can have a bad prior, and it can be surprised to find itself in an overly complex universe, but there is no hypothesis which is likely but which the inductor fails to consider. Unbounded reasoners need not encounter this failure mode.

But we are bounded reasoners, and we usually can't consider all available hypotheses. We can't expect to generate even the top ten most likely hypotheses, no matter how long we have to brainstorm. It's not that our evidence doesn't imply the correct hypothesis, it's that we can't generate all the hypotheses that our evidence entails. This is a large part of why black swan events seem obvious in retrospect: once we have the hypothesis, it is obviously entailed by our evidence, and so it seems like it should have been obvious. But it wasn't, because we aren't good at generating the right ideas.

This phenomenon is worrisome when attempting to predict the future, but we don't need a new kind of uncertainty to deal with the failure mode. In fact, this failure mode is nothing but a description of one of the limitations of a bounded Bayesian reasoner.

Bounded Bayesian reasoners should expect that they don't have access to the full hypothesis space. Bounded Bayesian reasoners can expect that their first-order predictions are incorrect due to a want of the right hypothesis, and thus place high credence on "something I haven't thought of", and place high value on new information or other actions that expand their hypothesis space. Bounded Bayesians can even expect that their credence for an event will change wildly as new information comes in.
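To illustrate one way this bookkeeping might look (the hypotheses and numbers below are invented, not any canonical method), a bounded reasoner can carry an explicit catch-all hypothesis and carve mass out of it as new hypotheses are generated:

```python
# Toy sketch of a bounded reasoner's hypothesis bookkeeping. The hypotheses and
# numbers are invented for illustration only.

credences = {
    "status quo continues": 0.45,
    "hypothesis A": 0.20,
    "hypothesis B": 0.15,
    "something I haven't thought of": 0.20,  # explicit catch-all mass
}
assert abs(sum(credences.values()) - 1.0) < 1e-9

def add_new_hypothesis(credences, name, plausibility):
    """When brainstorming or new evidence surfaces a hypothesis we'd missed,
    carve its mass out of the catch-all rather than out of thin air."""
    carved = min(plausibility, credences["something I haven't thought of"])
    credences["something I haven't thought of"] -= carved
    credences[name] = carved

add_new_hypothesis(credences, "hypothesis C (came out of left field)", 0.12)
print(credences)  # total mass is still 1.0; the catch-all shrank to 0.08
```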

Let's make things more concrete. Consider the event "there is a cure for Alzheimer's disease 70 years from now".

As an aspiring Bayesian, I can assign a credence to this event. But as a denizen in a world of chaos, I can also expect black swan events. Dealing with the black swans doesn't require any new type of probability, though: I can account for it within the Bayesian framework.

Let's pretend, to make things simple, that I assign 50% credence to this event. Sir Percy might call this repugnant, claiming Knightian uncertainty. How can I assign a credence when I expect black swans? How can I even claim to know the shape of the distribution?

But, of course, I'm accounting for black swans (insofar as I can) with my 50% credence. Let's consider a few black swans that could affect this event. The average person considering an Alzheimer's cure in seventy years probably imagines the status quo continuing in the interim, and then asks whether medical science (extrapolated out seventy years at the same rate of growth) will lead to an Alzheimer's cure. The average person probably does not consider the following potential black swans:

  1. Within 70 years, human civilization will have collapsed.
  2. Within 70 years, we will have achieved a positive singularity.
  3. Within 70 years, all modern diseases will be eliminated by whole-brain emulation.

Of course, these aren't black swan events to me, because these are in my hypothesis space. But they'd be black swans to the average person, and I was capable of taking them into account when assigning my credence. So in a way, yes, I can account for black swans.
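As a rough illustration of what "taking them into account" amounts to (with made-up numbers, purely for the sake of the example), the credence is just a mixture over scenarios, including the would-be black swans and a catch-all:

```python
# Made-up numbers, for illustration only: folding would-be black swans into a single
# credence for "there is a cure for Alzheimer's disease 70 years from now".

scenarios = {
    # scenario: (P(scenario), P(cure | scenario))
    "status quo medical progress":              (0.50, 0.40),
    "civilizational collapse":                  (0.10, 0.00),
    "positive singularity":                     (0.15, 0.99),
    "whole-brain emulation obsoletes disease":  (0.10, 0.95),
    "something I haven't thought of":           (0.15, 0.50),  # catch-all, no directional pull
}
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

credence = sum(p * q for p, q in scenarios.values())
print(round(credence, 3))  # ~0.52: one number that already accounts for these scenarios
```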

I'm still susceptible to black swans that I don't see coming, of course. My black swans are hypotheses that are just as weird to me as whole-brain emulation is to my grandmother, and there's a decent chance that I'll be blindsided by one of these strange events sometime in the next seventy years.

But I can still account for this. I don't know where to expect black swans, but I can ask questions like "how will the average black swan affect Alzheimer's cures?". If I expect that most black swans will make Alzheimer's cures easier to achieve, then I adjust my credence upwards. If I expect the opposite, then I adjust my credence downwards.

And if I expect that I have absolutely no idea what the black swans will look like but also have no reason to believe black swans will make this event any more or less likely, then even though I won't adjust my credence further, I can still increase the variance of my distribution over my future credence for this event.

In other words, even if my current credence is 50% I can still expect that in 35 years (after encountering a black swan or two) my credence will be very different. This has the effect of making me act uncertain about my current credence, allowing me to say "my credence for this is 50%" without much confidence. So long as I can't predict the direction of the update, this is consistent Bayesian reasoning.
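A minimal sketch of this point, with invented numbers: the spread of my distribution over my future credence can be large even while its mean stays pinned at today's credence.

```python
# Invented numbers: a wide distribution over where my credence might sit in 35 years.
# So long as I can't predict the direction of the update, its mean must equal
# today's credence (conservation of expected evidence); only the spread grows.

future_credences = {
    # credence I might hold in 35 years: probability of ending up there
    0.05: 0.25,
    0.50: 0.50,
    0.95: 0.25,
}

mean = sum(c * p for c, p in future_credences.items())
variance = sum(p * (c - mean) ** 2 for c, p in future_credences.items())

print(mean)      # 0.5  -- today's credence is unchanged
print(variance)  # ~0.10 -- but I expect to be surprised, I just don't know which way
```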

As a bounded Bayesian, I have all the behaviors recommended by those advocating Knightian uncertainty. I put high value on increasing my hypothesis space, and I often expect that a hypothesis will come out of left field and throw off my predictions. I'm happy to increase my error bars, and I often expect my credences to vary wildly over time. But I do all of this within a Bayesian framework, with no need for exotic "immeasurable" uncertainty.

Failures of Introspection (imprecise credences)

Black swan events are not a good reason to fail to produce a credence. Black swan events are a good reason to lower your confidence and increase your error bars, and they are a good reason to expect your credence to vary, but they don't prohibit you from using the (admittedly incomplete) information you have right now to give the best guess you can right now, even if you expect it to be wrong.

There are other scenarios, though, where advocates of Knightian uncertainty claim that they simply cannot generate a sharp credence. This happens during failures of introspection. Humans are not perfect Bayesians, and we can't simply ask our intuitions to take all of the evidence, weigh it appropriately, and output a precise number. First of all, our intuitions don't weigh things very well. Our credence calculations depend upon the framing of the question and on our mood. Our ability to use the evidence depends upon our memory and is limited by our vulnerability to various biases.

Secondly, even if these confounding factors didn't exist, we'd still lack the ability to query our intuitions and get a precise number out. The best we can get is a vague, fuzzy feeling. Even if our brains were doing good Bayesian reasoning, we would lack the introspection to translate our feelings into sharp numbers. All we can get is an interval, or at best a fuzzy distribution over possible credences that we should hold.

This phenomenon occurs whenever an aspiring Bayesian can't generate enough significant digits, for one reason or another. Perhaps the agent didn't start with a precise prior. Perhaps the agent can't do perfect Bayesian updates. Perhaps it simply lacks perfect introspection. As someone without a precise prior who can't do perfect Bayesian updates and lacks perfect introspection, I sympathize.

Have you heard the joke about the Tyrannosaurus Rex?

A tourist goes to the museum, and sees a Tyrannosaurus Rex. "Wow", she says. "This looks old." Turning to the tour guide, she asks "How old is this skeleton?"

"66 million years, three weeks, and two days old!" the tour guide says triumphantly.

"Dang", the tourist says, "how do you know its age with such precision?"

"Well, three weeks and two days ago, I asked the paleontologist, and she said it was 66 million years old."

Advocates of Knightian uncertainty may well feel like aspiring Bayesians are acting like the tour guide. Indeed, this is a possible failure mode among aspiring Bayesians. In general, bounded Bayesian reasoners should not expect that they are able to generate significant digits indefinitely.

If you asked me to guess the age of a Tyrannosaur skeleton, I would say that it's likely between 66 and 67 million years old. But if you asked me to guess the millennium in which the Tyrannosaur lived, I'd be fairly uncomfortable, and if you asked me to guess the year it died, I'd look at you funny.

Given a perfect Bayesian, you could query their sharp credences until you found an event "This Tyrannosaur was born at or before minute X" for some X to which the Bayesian assigns 50% credence. But if you tried that trick with me, I'd be somewhat miffed. And if you made me take a bet about the exact minute in which the T-Rex was born, I'd be quite annoyed.

While I can generate credences for when the dinosaur was born, asking for a prediction down to the minute of its birth is asking for way more significant digits than I have access to.

If you want, you can say I have "Knightian uncertainty" about when the Tyrannosaur was born. I surely don't want to make bets in scenarios where the bet depends upon more significant digits than I am capable of producing.

And yet, there are scenarios where the world will demand more precision than I can produce. So the question is, what then?

The classical Bayesian answer is that you calculate as many significant digits as you can until you have a distribution over which credences you should have, and then you pick the mean. In actual fact, as a bounded agent, you won't be able to get very many significant digits at all, and you won't be able to get a clear credence distribution, so "picking the mean" will be another fuzzy and vague task.
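Here is a toy version of that procedure, with invented weights: a fuzzy distribution over candidate credences, collapsed to a single number by taking its mean.

```python
# Invented weights: a fuzzy distribution over "which credence I should hold",
# collapsed into a single usable number by taking its mean.

credence_distribution = {
    # candidate credence: weight
    0.40: 0.2,
    0.50: 0.5,
    0.60: 0.3,
}

total_weight = sum(credence_distribution.values())
credence = sum(c * w for c, w in credence_distribution.items()) / total_weight
print(round(credence, 3))  # 0.51 -- fuzzy inputs, but still a single number to act on
```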

But you still have to do it, because even though the situation is annoying, cashing out your credences is the best option you've got.

Sure, you say that you're uncomfortable guessing the exact minute for which you assign credence 50% to the event "the T-Rex was born at or before this minute". But consider the following game. Omega comes down to you and says:

Listen. You must pick a precise minute in Earth's history. Then, I'll create a clone of you that has exactly the same knowledge, but exactly opposite preferences. That clone will choose either "before" or "after" your chosen minute. If the T-Rex was born in the timespan chosen by your evil twin, then I'll destroy the world. Otherwise, I'll help you solve global coordination.

In this scenario, you maximize the chances of the world's survival by picking the minute before which you assign a 50% credence to the event 'the T-Rex was born before this minute'.
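To spell out why (this bit of reasoning is mine, not part of the game's statement), a few lines suffice; the candidate values below are arbitrary, chosen only for illustration:

```python
# If the minute you pick carries credence p of the T-Rex being born at or before it,
# your evil twin (same knowledge, opposite preferences) bets on whichever side you
# yourself consider more likely, so P(destruction) = max(p, 1 - p) and
# P(survival) = min(p, 1 - p), which is maximized when p = 0.5.

def survival_chance(p):
    return min(p, 1 - p)

candidate_credences = [0.1, 0.3, 0.5, 0.7, 0.9]  # arbitrary candidates, for illustration
best = max(candidate_credences, key=survival_chance)
print(best, survival_chance(best))  # 0.5 0.5 -- the 50%-credence minute is the best you can do
```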

Now, I agree that this scenario is ridiculous. And that it sucks. And I agree that picking a precise minute feels uncomfortable. And I agree that this is demanding way more precision than you are able to generate. But if you find yourself in the game, you'd best pick the minute as well as you can. When the gun is pressed against your temple, you cash out your credences.

Yes, you can say you have "Knightian uncertainty" about the precise minute in which the T-Rex was born. But this doesn't mean that the uncertainty is "immeasurable" or "not possible to calculate". It just means that nature is demanding more precision than you feel comfortable generating.

Bounded Bayesians have Knightian powers

I take issue with the term "Knightian uncertainty" for a number of reasons. It is one label used for many things, and I find such labels unhelpful. It is touted as "immeasurable" and "impossible to calculate" when actually it only describes certain limitations of bounded agents. The scary description doesn't seem to help.

That said, many of the objections made by advocates of Knightian uncertainty against ideal Bayesian reasoning are sound objections: the future will often defy expectation. In many complicated scenarios, you should expect that the correct hypothesis is inaccessible to you. Humans lack introspective access to their credences, and even if they didn't, such credences often lack precision.

These are shortcomings not found in an idealized Bayesian, but they are prevalent in any bounded reasoner. But none of these shortcomings suggest that standard probability theory is inadequate for reasoning about our environment due to some exotic "immeasurable" uncertainty.

I understand some of the aversion to a Bayesian framework. Bayesians do tend to fetishize bets. When offered the two bets in Sir Percy's coin toss, there is a certain appeal to refusing both bets. Bets often come with stigma, and this (when paired with loss aversion) can make both bets seem unappealing, despite the fact that we are told a Bayesian reasoner always prefers one bet or the other.

But the thing is, a bounded Bayesian reasoner may also prefer not to take the bets. If I expect my credence for H to vary wildly then I may delay my decision as long as possible. Furthermore, if the bets are for money (rather than utility) then I'm all for risk aversion.
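For concreteness, a small sketch of that last point, with invented stakes and log utility standing in for any concave utility of money:

```python
import math

# Invented stakes, with log utility standing in for any concave utility of money:
# a bet that is fair in dollars is negative in expected utility, so refusing it is
# ordinary expected-utility maximization, not a departure from the framework.

wealth = 1000.0
stake = 500.0  # win or lose this amount on a fair coin

eu_refuse = math.log(wealth)
eu_accept = 0.5 * math.log(wealth + stake) + 0.5 * math.log(wealth - stake)

print(eu_accept < eu_refuse)  # True: risk-averse about money, within the Bayesian framework
```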

But from another perspective, every decision in life involves a "bet" of sorts on which action to take. The best available action may involve keeping your options open, delaying decisions, and gathering more information. But even those choices are still "part of the bet". At the end of the day, you still have to choose an action.

Humans can't generate precise credences. Even our fuzzy intuitions vary with framing and context. Our reasoning involves heuristics with known failure modes. We are subject to innumerable biases, and we can't trust introspection. But when it comes time to act, we still have to cash out our uncertainty.

If you expect you're biased one way or the other, then adjust. If you still expect you're biased but you don't know which way, then you've done the best you can. The universe doesn't give you the option to refuse its bets, and any complaints about insufficient precision will fall upon deaf ears.


Failures of prediction and introspection are not the only states which the "Knightian uncertainty" label covers (and indeed, the label seems somewhat fuzzy). I don't mean to imply that the above post completely dispels the term. Rather, I make the claim that all "immeasurable" uncertainties can be dealt with in a bounded Bayesian framework.

You can say that Knightian uncertainty is uncertainty about which you know nothing (not even the shape of the distribution), and you can feel helpless in the face of unknown unknowns, but a bounded Bayesian can handle these feelings: adjust insofar as you can, and then act.

As such, I am initially skeptical of the suggestion that any specific uncertainties (corresponding to limitations in bounded agents) should be given special treatment. If you want to say that "Knightian uncertainty" is somehow different, then I only note that when it comes time to act, you still have to cash it out into Good Old Fashioned Uncertainty — unless you refuse to maximize expected utility.

And this brings us back to the MMEU rule, a decision rule that actually does treat Knightian uncertainty differently. Does this rule give us powers that the Bayesians know not? This question will be explored further in the next post.

Comments

If I expect my credence for H to vary wildly then I may delay my decision as long as possible. Furthermore, if the bets are for money (rather than utility) then I'm all for risk aversion.

My utility for money is so close to linear that, for any bet amount I've ever encountered in real life, any non-linearity can be ignored. There's a better reason to be bet-averse: the very fact that the bet is offered is evidence of potential foul play. In the real world, those who offer gambles often turn out to be con artists. As the saying goes, "If you look around the table and don't know who the sucker is, it's you."

[anonymous]

Ordo ab incertitudinem ("order out of uncertainty"). To increase certainty or to increase tolerance of uncertainty? Asking in itself is seeking certainty. That's an artifact of my Generalised Anxiety Disorder, so maybe I shouldn't care for the answer.

How do I see the next post?

Is infra-Bayesianism insufficient for covering Knightian uncertainty too?