This is the last of four short essays that say explicitly some things that I would tell an intrigued proto-rationalist before pointing them towards Rationality: AI to Zombies (and, by extension, most of LessWrong). For most people here, these essays will be very old news, as they talk about the insights that come even before the sequences. However, I've noticed recently that a number of fledgling rationalists haven't actually been exposed to all of these ideas, and there is power in saying the obvious.

This essay is cross-posted on MindingOurWay.


Once upon a time, three students of human rationality traveled along a dusty path. The first was a novice, new to the art. The second was a student, who had been practicing for a short time. The third was their teacher.

As they traveled, they happened upon a woman sitting beside a great urn attached to a grand contraption. She hailed the travelers, and when they appeared intrigued, she explained that she was bringing the contraption to town (where she hoped to make money off of it), and offered them a demonstration.

She showed them that she possessed one hundred balls, identical except for their color: one was white, ninety-nine were red. She placed them all in the urn, and then showed them how the contraption worked: the contraption consisted of a shaker (which shook the urn violently until none knew which ball was where) and a mechanical arm, which would select a ball from the urn.

"I'll give you each $10 if the white ball is drawn," she said over the roar of the shaker. "Normally, it costs $1 to play, but I'll give you a demonstration for free."

As the shaking slowed, the novice spoke: "I want it to draw the white ball, so I believe that it will draw the white ball. I have faith that the white ball will be drawn, and there's a chance I'm right, so you can't say I'm wrong!"

As the shaking stopped, the student replied, "I am a student of rationality, and I know that it is a virtue to move in tandem with the evidence. In this urn, there are more red balls than white, and so the evidence says that it is more likely that a red ball will be drawn than a white ball. Therefore, I believe that a red ball will be drawn."

As the arm began to unfold, the teacher smiled, and said only, "I assign 1% probability to the proposition 'a white ball will be drawn,' and 99% probability to 'a red ball will be drawn.'"


In order to study the art of human rationality, one must make a solemn pact with themselves. They must vow to stop trying to will reality into being a certain way; they must vow to instead listen to reality tell them how it is. They must recognize "faith" as an attempt to disconnect their beliefs from the voice of the evidence; they must vow to protect the ephemeral correspondence between the real world and their map of it.

It is easy for the student, when making this pact with themselves, to mistake it for a different one. Many rationalists think they've taken a vow to always listen to the evidence, and to let the evidence choose what they believe. They think that it is a virtue to weigh the evidence and then believe the most likely hypothesis, no matter what that may be.

But no: that is red-ball-thinking.

The path to rationality is not the path where the evidence chooses the beliefs. The path to rationality is one without beliefs.

On the path to rationality, there are only probabilities.

Our language paints beliefs as qualitative: we speak of beliefs as if they are binary things. You either know something or you don't. You either believe me or you don't. You're either right or you're wrong.

Traditional science, as it's taught in schools, propagates this fallacy. The statistician's role (they say) is to identify two hypotheses, null and alternative, and then test them, and then it is their duty (they say) to believe whichever hypothesis the data supports. A scientist must make their beliefs falsifiable (they say), and if ever enough evidence piles up against them, they must "change their mind" (from one binary belief to another). But so long as a scientist makes their beliefs testable and falsifiable, they have done their duty, and they are licensed to believe whatever else they will. Everybody is entitled to their own opinion, after all — at least, this is the teaching of traditional science.

But this is not the way of the rationalist.

The brain is an information machine, and humanity has figured out a thing or two about how to make accurate information machines. One of the things we've figured out is this: to build an accurate world-model, do away with qualitative beliefs, and use quantitative credences instead.

An ideal rationalist doesn't say "I want the next ball to be white, therefore I believe it will be." An ideal rationalist also doesn't say, "most of the balls are red, so I believe the next ball will be red." The ideal rationalist relinquishes belief, and assigns a probability.

In order to construct an accurate world-model, you must move in tandem with the evidence. You must use the evidence to figure out the likelihood of each hypothesis. But afterwards, you don't just pick the highest-probability thing and believe that. No.

The likelihoods don't tell you what to believe. The likelihoods replace belief. They're it. You say the likelihoods and then you stop, because you're done.

Most people, upon encountering the parable above, think that it is obvious. Almost everybody who hears me tell it in person just nods, but most of them fail to deeply integrate its lesson.

They hear the parable, and then they go on thinking in terms of "knowing" or "not knowing" (instead of thinking in terms of confidence). They nod at the parable, and then go on thinking in terms of "being right" or "being wrong" (instead of thinking about whether or not they were well-calibrated). They know the parable, but in the next conversation, they still insist "you can't prove that!" or "well that doesn't prove me wrong," as if propositions about reality could be "proven," as if perfect certainty were somehow possible.

No statement about the world can be proven. There is no certainty. All we have are probabilities.

Most people, when they encounter evidence that contradicts something they believe, decide that the evidence is not strong enough to switch them from one binary belief to another, and so they fail to change their mind at all. Most people fail to realize that all evidence against a hypothesis lowers its probability, even if only slightly, because most people are still thinking qualitatively.
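To make that concrete, here is a minimal sketch of a single Bayesian update, in Python with made-up numbers (the 0.90 prior and the two likelihoods below are placeholders chosen for illustration, not anything from the parable). A weak piece of counter-evidence lowers the credence a little, and only a little; nothing flips.

```python
# Minimal sketch of one Bayesian update (hypothetical numbers).
# Weak counter-evidence should lower a credence slightly,
# not flip a binary belief.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

prior = 0.90  # credence before seeing the evidence
posterior = update(prior,
                   p_evidence_if_true=0.60,   # evidence is somewhat surprising if H is true
                   p_evidence_if_false=0.80)  # ...and less surprising if H is false

print(round(posterior, 3))  # 0.871: lower than 0.90, but nowhere near "changed my mind"
```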

In fact, most people still think that they get to choose how to draw conclusions from the evidence they've seen. And this is true — but only for those who are comfortable with avoidable inaccuracy.

This comes as a surprise to many, but humanity has uncovered many of the laws of reasoning.

Given your initial state of knowledge and the observations you have seen, there is only one maximally accurate updated state of knowledge.

Now, you can't actually achieve this perfect posterior state. Building an ideal information-gathering engine is just as impossible as building an ideal heat engine. But the ideal is known. Given what you knew and what you saw, there is only one maximally accurate new state of knowledge.

Contrary to popular belief, you aren't entitled to your own opinion, and you don't get to choose your own beliefs. Not if you want to be accurate. Given what you knew and what you saw, there is only one best posterior state of knowledge. Computing that state is nigh impossible, but the process is well understood. We can't use information perfectly, but we know which path leads towards "better."
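(The well-understood process gestured at here is Bayes' theorem, though the essay never names it. A minimal statement, for readers who haven't seen it: the posterior credence in a hypothesis H after seeing evidence E is

$$
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}
$$

Every quantity on the right is fixed by what you already knew and what you saw; nothing in it is left to choice, which is the precise sense in which there is only one best posterior.)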

If you want to walk that path, if you want to nourish the ephemeral correspondence between your mind and the real world, if you want to learn how to draw an accurate map of this beautiful, twisted, awe-inspiring territory that we live in, then know this:

The Way is quantitative.

To walk the path, you must leave beliefs behind and let the likelihoods guide you. For they are all you'll have.

If this is a path you want to walk, then I now officially recommend starting with Rationality: AI to Zombies Book I: Map and Territory.


As the arm began to unfold, the teacher smiled, and said only, "I assign 1% probability to the proposition 'a white ball will be drawn,' and 99% probability to 'a red ball will be drawn.'"

The woman with the urn cocked her head and said, "Huh, you three are dressed like rationalists, and yet you seem awfully certain that I told the truth about the arm drawing balls from the urn…"

The arm whirred into motion.

30 comments

After reading the story at the beginning, I thought "huh, this teacher seems rather low-level for a teacher". I also thought that back when I was first getting into LW, a depiction of that level as the highest level would not have encouraged me to explore further.

I was more pleased with the bit at the end.

At some point, it might be worth making a few versions of this story which illustrate some of the trickier techniques, with the urn lady trying exploit specific biases. If a short story like that illustrated a bias well enough to trick the reader right up until the reveal, and the reveal were written so that the reader could believably learn to think in such a way as to catch it, I think that would really help convince newcomers that there's material here worth studying.

Also "the teacher smiled"? Damn your smugness, teacher!

[anonymous]:

As the arm began to unfold, the teacher smiled, and said only, "I assign 1% probability to the proposition 'a white ball will be drawn,' and 99% probability to 'a red ball will be drawn.'"

The woman with the urn cocked her head and said, "Huh, you three are dressed like rationalists, and yet you seem awfully certain that I told the truth about the arm drawing balls from the urn…"

The arm whirred into motion.

I am waiting for the woman to ask me to place a bet or pay a price, because surely she wouldn't play ball-and-urn games with passing travelers with nothing in it for her but functioning as a side character in a parable about probability.

Most people, upon encountering the parable above, think that it is obvious. Almost everybody who hears me tell it in person just nods, but most of them fail to deeply integrate its lesson.

Feh. Sorry, but I think most of humanity does think of belief in quantitative terms. Folk epistemology talks of believing strongly or weakly, not of picking maximum-a-posteriori estimates.

Our language paints beliefs as qualitative, we speak of beliefs as if they are binary things. You either know something or you don't. You either believe me or you don't. You're either right or you're wrong.

Language must be succinct. Sometimes, this can make it very confusing.

Traditional science, as it's taught in schools, propagates this fallacy. The statistician's role (they say) is to identify two hypotheses, null and alternative, and then test them, and then it is their duty (they say) to believe whichever hypothesis the data supports.

This depends where you go to school. Years before I took Technion's Intro to Statistics course, I took Reasoning About Uncertainty at UMass Amherst, and David Barrington taught only the Bayesian perspective -- to the point that seeing the Intro to Stats teacher declare, "A likelihood is not a probability!" utterly boggled me. Because after all, in all my previous schooling, a likelihood had just been a funny name for a conditional probability distribution.

Also, I think that as "Bayesian" as we like to be on this site, putting down frequentist statistics is simply a bad idea. When you possess both the data and the computing power to train a Fully Ideal Bayesian generative model, that model minimizes prediction error (the Fable of the Bayes-Optimal Classifier). When you actually need to minimize prediction error in real life, with slow computers and little training data, training a discriminative, noncausal model is often the Right Thing.

And likewise, when you need to prove that some observed qualitative pattern did not happen by experimenter error, bias, or other self-delusion, and you indeed don't have much computing power to build a predictive model at all, then you have found the appropriate place for frequentist statistics. They are the Guards at the Gate of mainstream science precisely because they guard against the demonic enemies that actually assault mainstream science: experimenter error, experimenter egotism, self-promotion, and the human being's inductive bias to see causality where there is none. It is still approximate-Bayesian, bounded-rational to guard against the most common problems first, especially for people who were not explicitly trained in how to form priors with the lagom/Goldilocks amount of informedness, or better yet, explicitly trained in how to handle mixture models that allow for some amount of out-of-model error.

Also, and this relates to that other post I made the other day, I find this wannabe-value-neutral Jedi Knight crap very distasteful. We're all human beings here: we ought to speak as having the genuine concerns of real people. One does not pursue "rationality" out of an abstract love for probability or logic: that path leads to the mad labyrinths of Platonism and Philosophy, and eventually dumps its benighted walkers into the Catholic Church. You pursue winning, and find where that takes you, and avoid being turned from your goal even in the name of Rationality (since rationality, after all, is not a terminal goal). There must be some way in which you will the world outside your mind to change, or you will not be able to chain your mind to the real world.

(Please tell me I just founded the LWian Sith. I have plans for hilarious initiation rituals.)

In order to study the art of human rationality, one must make a solemn pact with themselves. They must vow to stop trying to will reality into being a certain way; they must vow to instead listen to reality tell them how it is.

Nobody asked me to take either vow. Doing so isn't in the spirit of this community. The only reason someone might vow is to create a precommitment. You didn't make any decent argument about why this is a case where a precommitment is useful.

There is nothing to be won by using imprecise language when one wants to teach clear thinking. Simplicity is a virtue.

They must vow to stop trying to will reality into being a certain way;

There is nothing wrong with willing reality to be different. It leads to actions that change reality.


Rationality is also about winning. There are cases where the truth isn't the most important thing.


This whole example suffers from what Nassim Taleb calls the ludic fallacy. Balls in urns do have fixed probabilities but in most cases in life we don't have probabilities that are known in the same way.

dxu:

Nobody asked me to take either vow. Doing so isn't in the spirit of this community.

I believe that's why So8res referred to it as a vow to yourself, not anyone else. Also note that this is a series of posts meant to introduce people to Rationality: AI to Zombies, not "this community" (by which I assume you mean LW).

There is nothing wrong with willing reality to be different. It leads to actions that change reality.

This seems like a willful misreading of the essay's point. It seems obvious from context that So8res is referring here to motivated cognition, which does indeed have something wrong with it.

I believe that's why So8res referred to it as a vow to yourself, not anyone else.

I also haven't heard anybody speak about taking those kinds of vows to oneself before.

This seems like a willful misreading of the essay's point. It seems obvious from context that So8res is referring here to motivated cognition, which does indeed have something wrong with it.

I consider basics to be important. If we allow vague statements about basic principles of rationality to stand, we don't improve our understanding of rationality.

Willing is not the problem with motivated cognition. Having desires for reality to be different is not the problem. You don't need to be a straw Vulcan, without any desire or will, to be rational.

Furthermore "Shut up and do the impossible" from the sequences is about "trying to will reality into being a certain way".

I also haven't heard anybody speak about taking those kinds of vows to oneself before.

It's not literal. It's an attempt at poetic language, like The Twelve Virtues of Rationality.

I think the "The Twelve Virtues of Rationality" actually makes an argument that those things are virtues.

Its start is also quite fitting: "The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth."

It argues against the frame of vows.

Withdrawing into mysticism where anything goes is bad. Obfuscating is bad. It's quite easy to say something that gives rationalist applause lights. Critical thinking, and actually thinking through the implications of using the frame of a vow, is harder. Getting less wrong about what we think is rational is hard.

Mystic writing that's too vague to be questioned doesn't really have a place here.

Sure, I agree with all of that. I was just trying to get at the root of why "nobody asked [you] to take either vow".

The fact that I haven't taken a literal vow is true, but the meaning of what I was saying goes beyond that point.

The root is that nobody asked me in a metaphorical way to take a vow either. Eliezer asked for curiosity instead of a solemn vow in the talk about rationalist virtues.

There are reasons why that's the case.

dxu:

The root is that nobody asked me in a metaphorical way to take a vow either.

Er, yes, someone has. In fact, Eliezer has asked you to do so. From the Twelve Virtues:

The third virtue is lightness. Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can. Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you. Be faithless to your cause and betray it to a stronger enemy. If you regard evidence as a constraint and seek to free yourself, you sell yourself into the chains of your whims. For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.

This is the exact same thing that the article is saying:

In order to study the art of human rationality, one must make a solemn pact with themselves. They must vow to stop trying to will reality into being a certain way; they must vow to instead listen to reality tell them how it is.

[anonymous]:

Furthermore "Shut up and do the impossible" from the sequences is about "trying to will reality into being a certain way".

No, it's about actually finding the way to force reality into some state others considered so implausible that they hastily labeled it impossible. Saying, "If the probability isn't 0%, then to me it's as good as 100%!" isn't saying you can defy probability, but merely that you have a lot of information and compute-power. Or it might even just be expressing a lot of emotional confidence for someone else's sake.

(Or that you can solve your problems with giant robots, which is always the awesomer option.)

The sentence "trying to will reality into being a certain way". doesn't say anything about p=0 or defying probability.

dxu:

This is what is known as "neglecting context". Right after the sentence you originally quoted from the article, we see this:

They must recognize "faith" as an attempt to disconnect their beliefs from the voice of the evidence; they must vow to protect the ephemeral correspondence between the real world and their map of it.

I'm not quite sure why you're having difficulty understanding this. "Willing reality into being a certain way", in this context, does not mean desiring to change the world, but rather shifting one's probability estimates toward one's desired conclusion. For example, I have a strong preference that UFAI not be created. However, it would be a mistake for me to then assign a 0.00001% probability to the creation of UFAI purely because I don't want it to be created; the true probability is going to be higher than that. I might work harder to stop the creation of UFAI, which is what you mean by "willing reality", but that is clearly not the meaning the article is using.

[anonymous]:

Nobody asked me to take either vow. Doing so isn't in the spirit of this community. The only reason someone might vow is to create a precommitment. You didn't make any decent argument about why this is a case where a precommitment is useful.

No point swearing an oath to nothing, yeah. Reality isn't going to listen to you because you took a vow.

This is an extremely important lesson and I am grateful that you are trying to teach it.

In my experience it is almost impossible to actually succeed in teaching it, because you are fighting against human nature, but I appreciate it nonetheless.

(A few objections based on personal taste: Too flowery, does not get to the point fast enough, last paragraph teaches false lesson on cleverness)

last paragraph teaches false lesson on cleverness

What exactly do you believe the false lesson to be and why do you think it's false?

I interpreted it as meaning one should take into account one's prior for whether someone with a gambling machine is telling the truth about how the machine works.

Hm, a fair point, I did not take the context into account.

My objection there is based on my belief that Less Wrong over-emphasizes cleverness, as opposed to what Yudkowsky calls 'winning'. I see too many people come up with clever ways to justify their existing beliefs, or being contrarian purely to sound clever, and I think it's terribly harmful.

The path to rationality is not the path where the evidence chooses the beliefs. The path to rationality is one without beliefs. On the path to rationality, there are only probabilities.

I realized something the other day. I don't believe in cryonics.†

But, I believe that cryonics has a chance of working, a small chance.

If I'm ever asked "Do you believe in cryonics?", I'm going to be careful to respond accurately.

† (By this, I mean that I believe cryonics has a less than 50% chance of working.)

† (By this, I mean that I believe cryonics has a less than 50% chance of working.)

This is a very bad translation, given that most of the people on LW who are signed up for cryonics give it a less than 50% chance of working.

Yeah, that's my point: This is the translation I had been making, myself, and I had to realize that it wasn't correct.

[anonymous]:

But in this case, the degree of belief that becomes relevant is bounded by the utility trade-offs involved in the cost of cryonics and the other things you could do with the money. So, for my example, I assign (admittedly, via an intuitive and informal process of guesstimation) a sufficiently low probability to cryonics working (I have sufficiently little information saying it works...) that I'd rather just give life-insurance money and my remaining assets, when I die, to family, or at least to charity, all of which carry higher expected utility over any finite term (that is, they do good faster than cryonics does, in my belief). Since my family or charity can carry on doing good after I die just as indefinitely as cryonics can supposedly extend my life after I die, the higher derivative-of-good multiplied with the low probability of cryonics working means cryonics has too high an opportunity cost for me.
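(A toy version of that comparison, purely to illustrate the shape of the reasoning; every number below is invented for the example and is not anyone's actual estimate:)

```python
# Toy expected-value comparison of the kind described above.
# All numbers are made up for illustration, not real estimates.

p_cryonics_works = 0.02       # assumed low credence that cryonics works
value_if_it_works = 1_000.0   # value (arbitrary units) of a revived life
value_of_donation = 50.0      # value of giving the same money to family/charity now

ev_cryonics = p_cryonics_works * value_if_it_works  # 20.0
ev_donation = value_of_donation                     # 50.0 (near-certain payoff)

print(ev_cryonics, ev_donation)  # under these made-up numbers, the donation wins
```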

The path to rationality is one without beliefs.

TIL: Eliezer is not a rationalist.

gjm:

TYL: So8res is using the word "beliefs" in a slightly idiosyncratic way, to refer to things one simply treats as true and as fit subjects for logical rather than probabilistic inference.

(Though, even with this more reasonable reading of So8res's statements about "beliefs", I still don't think I agree. A perfect reasoner with unlimited resources would do everything probabilistically, but a human being with limited memory and attention span and calculating ability may often do best to adopt the approximation of just treating some things as true and some as false. With, of course, some policy of always being willing to revisit that when enough evidence against something "true" turns up -- but I don't think the corrigibility of a belief stops it being a belief, even in (what I take to be) So8res's sense.)

TYL: So8res is using the word "beliefs" in a slightly idiosyncratic way, to refer to things one simply treats as true and as fit subjects for logical rather than probabilistic inference.

I'm not sure this is idiosyncratic. As far as I can tell this is one of the most common colloquial meanings of beliefs.

gjm:

Hmm, maybe. I'd have thought most people would say something is a "belief" if you assign it (say) 80% probability and act accordingly, but perhaps I'm wrong.

I'd have thought most people would say something is a "belief" if you assign it (say) 80% probability and act accordingly

They also do that. "Believe" can mean both "confident of" and "somewhat doubtful of". The former contrasts the state of mind with ignorance, the latter with knowledge.

gjm:

In which case, it isn't true that according to their usage rationalists are supposed not to have beliefs.

[anonymous]:

TYL: So8res is using the word "beliefs" in a slightly idiosyncratic way, to refer to things one simply treats as true and as fit subjects for logical rather than probabilistic inference.

Logical inference is probability-preserving. It does not require that you assign infinite certainty to your axioms.

gjm:

If your axioms are a1,a2,a3,...,a10 each with probability 0.75, and if they are independent, then a1 & a2 & ... & a10 (which is a valid logical inference from a1, ..., a10) has probability about 0.06. In the absence of independence, the probability could be anywhere from 0 to 0.75.
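(A quick check of that arithmetic, under the independence assumption:)

```python
# Verify the conjunction probability quoted above (ten independent axioms).
p_each = 0.75
p_all_ten = p_each ** 10
print(round(p_all_ten, 3))  # 0.056 -- i.e. "about 0.06"
```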

Perhaps by "probability-preserving" you mean something like: if you start with a bunch of axioms then anything you infer from them has probability no smaller than Pr(all axioms are correct). I agree, logical inference is probability-preserving in that sense, but note that that's fully compatible with (e.g.) it being possible to draw very improbable conclusions from axioms each of which on its own has probability very close to 1.