In risk modeling, there is a well-known distinction between aleatory and epistemic uncertainty, sometimes thought of as irreducible versus reducible uncertainty. Epistemic uncertainty exists in our map; as Eliezer put it, “The Bayesian says, ‘Uncertainty exists in the map, not in the territory.’” Aleatory uncertainty, however, exists in the territory. (Well, at least according to our map that uses quantum mechanics, per Bell’s Theorem – like, say, the time at which a radioactive atom decays.) This is what people call quantum uncertainty, indeterminism, true randomness, or recently (and somewhat confusingly, to me) ontological randomness – referring to the fact that our ontology allows randomness, not that the ontology itself is in any way random. It may be better, in LessWrong terms, to think of uncertainty versus randomness – while being aware that the wider world refers to both as uncertainty. But does the distinction matter?

To clarify a key point, many facts that are treated as random, such as dice rolls, are actually mostly uncertain – in that with enough physics modeling and inputs, we could predict them. On the other hand, in chaotic systems, there is the possibility that “true” quantum randomness can propagate upwards into macro-level uncertainty. For example, a sphere of highly refined and shaped uranium that is *exactly* at the critical mass will set off a nuclear chain reaction, or not, based on the quantum physics of whether the neutrons from one of the first decays set off a chain reaction – after enough atoms decay, the sphere will drop below the critical mass, and become increasingly unlikely to set off a chain reaction. Of course, the question of whether the sphere is above or below the critical mass (given its geometry, etc.) can be a difficult-to-measure uncertainty, but it’s not aleatory – though some part of the question of whether it kills the person trying to measure whether it’s just above or just below the critical mass will be random – so maybe it’s not worth finding out. And that brings me to the key point.

In a large class of risk problems, there are factors treated as aleatory – but they may be epistemic, just at a level where finding the “true” factors and outcomes is prohibitively expensive. Potentially, the timing of an earthquake that will happen at some point in the future could be determined exactly via a simulation of the relevant data. Why is it considered aleatory by most risk analysts? Well, determining it might require a destructive, currently technologically impossible deconstruction of the entire earth – making the earthquake irrelevant. We would start with measurement of the position, density, and stress of each relatively macroscopic structure, and then perform a very large physics simulation of the earth as it had existed beforehand. (We have lots of silicon from deconstructing the earth, so I’ll just assume we can now build a big enough computer to run the simulation.) Of course, this is not worthwhile – but doing so would potentially show that the actual aleatory uncertainty involved is negligible. Or it could show that we need to model the macroscopically chaotic system at such high fidelity that microscopic, fundamentally indeterminate factors actually matter – and it was truly aleatory uncertainty. (So we have epistemic uncertainty about whether it’s aleatory; if our map were of high enough fidelity, and were computable, we would know.)

It turns out that most of the time, for the types of problems being discussed, this distinction is irrelevant. If we know that the value of information from determining whether something is aleatory or epistemic is negative, we can treat the uncertainty as randomness. (And usually, we can figure this out via a quick order-of-magnitude calculation: the value of perfect information about which side the die lands on in this game is estimated at $100, building and testing / validating any model for predicting it would take me at least 10 hours, and my time is worth at least $25/hour – so the net value is negative.) But sometimes, slightly improved models and slightly better data are feasible – and then it is worth checking whether there is some epistemic uncertainty that we can pay to reduce. In fact, for earthquakes, we’re doing that – we have monitoring systems that can give several minutes of warning, and geological models that can predict to some degree of accuracy the relative likelihood of different sized quakes.
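As a concrete illustration of that back-of-the-envelope check, here is a minimal sketch using the numbers from the parenthetical above; the dollar figures and hours are illustrative assumptions, not careful estimates:

```python
# Minimal sketch of the order-of-magnitude value-of-information check.
# All figures are the illustrative numbers from the example above.

value_of_perfect_information = 100   # $ gained if we could predict the die exactly
modeling_hours = 10                  # rough lower bound on time to build/validate a model
hourly_rate = 25                     # $/hour, rough value of my time

cost_of_resolving = modeling_hours * hourly_rate            # $250
net_value = value_of_perfect_information - cost_of_resolving

if net_value <= 0:
    print(f"Net value {net_value} $: treat the roll as effectively aleatory randomness")
else:
    print(f"Net value {net_value} $: worth paying to reduce the epistemic uncertainty")
```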

So, in conclusion: most uncertainty is lack of resolution in our map, which we can call epistemic uncertainty. This is true even if lots of people call it “truly random” or irreducibly uncertain – or, if they are fancy, aleatory uncertainty. Some of what we assume is uncertainty really is randomness. But lots of the epistemic uncertainty can be safely treated as aleatory randomness, and value of information is what actually makes a difference. And knowing the terminology used elsewhere can be helpful.


Just a nitpick: not all interpretations treat quantum uncertainty as ontological. Many Worlds indeed says that it's just indexical (and so, epistemic) uncertainty.

Yes, and that means we have epistemic uncertainty about whether there is ontological uncertainty at that level - but again, it's irrelevant to almost any question we would ask in decision making.

I am not convinced that there exists anything like aleatory uncertainty - even QM uncertainty lies in the map. Having said that, I agree with your point: that this doesn't matter, and value of information is the relevant measure (which is clearly not binary).

Having read your response to Dagon I am now confused - you state that:

This is in contrast to Eliezer's point that "Uncertainty exists in the map, not in the territory"

but above you only show the orthogonal point that allowing for irresolvable uncertainty can provide useful models, regardless of the existence of such uncertainty. If this is your main point (along with introducing the standard notation used in these models), how is this a contrast with uncertainty being in the map? Lots of good models have elements that cannot be found in real life, for example smooth surfaces, right angles, or irreducible macroscopic building blocks.

My main point was that it doesn't matter. Whether the irresolvable uncertainty exists in the territory isn't a question anyone can answer - I can only talk about my map.

I'd argue that there's a continuum of uncertainty from "already known" to "easily resolved" to "theoretically resolvable using current technology" to "theoretically resolvable with massive resources" to "theoretically resolvable by advanced civilizations".

There may or may not be an endpoint at "theoretically unknowable", but it doesn't matter. The point is that this isn't a binary distinction, and that categorizing by theory doesn't help us. The question for any decision theory is "what is the cost and precision that I can get with further modeling or measurements". Once the cost of information gathering is higher than the remaining risk due to uncertainty, you have to make the choice, and it's completely irrelevant how much of that remaining uncertainty is embedded in quantum unmeasurables and how much in simple failure to gather facts.

Absolutely - and that continuum is why I think that we should be OK with letting people call things "fundamental uncertainties." People in these circles spend a lot of time trying to define everything, or argue terminology, but we can get past that in order to note that for making most decisions, it's OK to treat some (not all) seemingly "random" things as fundamentally unknowable.

This is in contrast to Eliezer's point that "Uncertainty exists in the map, not in the territory" - not that he's wrong, just that it's usually not a useful argument to have. Instead, as you note, we should ask about value of information and make the decision.

This is in contrast to Eliezer's point that "Uncertainty exists in the map, not in the territory" - not that he's wrong, just that it's usually not a useful argument to have.

I don't know whether he is wrong in the sense that irreducible uncertainty exists in the territory, but the reasoning he uses to reach the conclusion is invalid.

He's discussing a different point - that humans fall prey to the mind projection fallacy about much more consequential parts of what is clearly epistemic uncertainty.

It is not clearly the case that all probability is epistemic uncertainty. There is no valid argument that establishes that. There can be no armchair argument that establishes that, since the existence or otherwise of objective probability is a property of the universe, and has to be established by looking.

OK. But, there is still some important epistemic uncertainty that people nonetheless treat as intrinsic, purely because derp.

Upvoted. The heuristic "uncertainty exists in the map, not in the territory" is in the first place meant as a heuristic against frequentist statistics. One can argue that probabilities are properties of the things themselves even in situations of purely epistemic randomness. The argument "uncertainty exists in the map, not in the territory" is used in this context to show that thinking of probabilities as existing in the "thing itself" can lead to weird conclusions.

is in the first place meant as a heuristic against frequentist statistics.

And why do we need one of those? Most academics think the religious warfare between B-ism and F-ism is silly, and that you should use whichever is the most appropriate.

If you mean me and you... well, we don't. I agree. But maybe one should ask that question having Ronald Aylmer Fisher's ideas about Bayesian statistics in mind: "the theory of inverse probabilities must be fully rejected."

Let me rephrase my quote: the heuristic "uncertainty exists in the map, not in the territory" is in the first place meant as a heuristic against dismissing the Bayesian concept of probability.

Then it is a misleading, unnecessarily metaphysical phrasing of the point, and appears to have misled Yudkowsky among others.

Generally, if you approach probability as an extension of logic, probability is always relative to some evidence. Hardcore frequency dogmatists like John Venn, for example, thought that this is completely wrong: "the probability of an event is no more relative to something else than the area of a field is relative to something else."

So thinking of probabilities as existing in the "thing itself," taken to the extreme, could lead one to the conclusion that one can't say much about, for example, single-case probabilities. Let's say I take an HIV test and it comes back positive. Don't you find it weird to say that it is not OK to judge the probability of me having HIV based on that evidence?

So thinking of probabilities as existing in the "thing itself," taken to the extreme, could lead one to the conclusion that one can't say much about, for example, single-case probabilities.

Thinking probabilities can exist in the territory leads to no such conclusion. Thinking probabilities exist only in the territory may lead to such a conclusion, but that is a strawman of the points that are being made.

It would be insane to deny that frequencies exist, or that they can be represented by a formal system derived from the Kolmogorov (or Cox) axioms.

It would also be insane to deny that beliefs exist, or that they can be represented by a formal system derived from the Kolmogorov (or Cox) axioms.

I think this confusion would all go away if people stopped worrying about the semantic meaning of the word "probability" and just specified whether they are talking about frequency or belief. It puzzles me when people insist that the formal system can only be isomorphic to one thing, and it is truly bizarre when they take sides in a holy war over which of those things it "really" represents. A rational decision maker genuinely needs both the concept of frequency and the concept of belief.

For instance, an agent may need to reason about the proportion (frequency) P of Everett branches in which he survives if he makes a decision, and also about how certain he is about his estimate of that probability. Let's say his beliefs about the probability P follow a beta distribution, or any other distribution bounded by 0 and 1. In order to make a decision, he may do something like calculate a new probability Q, which is the expected value of P under his prior. You can interpret Q as the agent's belief about the probability of surviving, but it also has elements of frequency.
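A minimal sketch of that calculation, assuming a Beta prior with parameters chosen purely for illustration (the numbers are not from the comment above):

```python
# Hypothetical sketch: the agent's belief about the survival frequency P across
# Everett branches is modeled as Beta(alpha, beta); the decision-relevant
# probability Q is the expected value of P under that belief.
alpha, beta_param = 8.0, 2.0       # assumed prior parameters, purely illustrative

# For a Beta(alpha, beta) distribution, E[P] = alpha / (alpha + beta).
Q = alpha / (alpha + beta_param)   # 0.8 with these numbers

print(f"Expected survival probability Q = {Q:.2f}")
```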

You can make the untestable claim that all Everett branches have the same outcome, and therefore that Q is determined exclusively by your uncertainty about whether you will live or die in all Everett branches. This would be Bayesian fundamentalism. You can also go to the other extreme and argue that Q is determined exclusively by P, and that there is no reason to consider uncertainty. That would be Frequentist fundamentalism. However, there is a spectrum between the two and there is no reason we should only allow the two edge cases to be possible positions. The truth is almost certainly somewhere in between.

Thinking "probability exists only in the territory" is exactly taking the idea that probabilities exists as "things itself" to the extreme as i wrote. This view is not a strawman of dogmatic frequentists position, as you can see from the John Venn quote.

I feel the need to point out that I have tried to describe the context of the debate in which the heuristic "uncertainty exists in the map, not in the territory" was given in the first place. This whole historical debate started from the idea that probability as a degree of belief does not mean anything. This was the start. "Fallacious rubbish," as Fisher put it.

I have tried to show that one can hold this very extreme position even if there exists only epistemic uncertainty. One can answer this position by describing how in some situations the uncertainty exists in the map, not in the territory. This is the context in which that general heuristic is used, and the background it should be judged against.

"A rational decision maker genuinely needs both the concept of frequency and the concept of belief." Amen!

Generally, if you approach probability as an extension of logic, probability is always relative to some evidence.

Maybe, but so what? That doesn't establish any point of interest. It doesn't establish Bayesianism over frequentism, since frequentists still need evidence. And it doesn't establish subjectivity over objectivity, because if there are objective probabilities, you still need evidence to know what they are.

The invalid argument I alluded to elsewhere in this thread is the argument that if there is subjective probability, based on limited information, then there is no objective probability.

So thinking of probabilities as existing in the "thing itself," taken to the extreme, could lead one to the conclusion that one can't say much about, for example, single-case probabilities.

"Don't take objective probability to an extreme" is very different to "reject objective probability".