Related: What Do We Mean By "Rationality?"

Epistemic rationality and instrumental rationality are both useful. However, some things may benefit one form of rationality yet detract from the other. These tradeoffs are often not obvious, but they can have serious consequences.

Take learning debate skills, for instance. While involved in debate in high school, I learned how to argue a position quite convincingly, muster strong supporting evidence, prepare rebuttals for counterarguments, prepare deflections for counterarguments that are difficult to rebut, and so on.

I also learned how to do so regardless of what side of a topic I was assigned to.

My debate experience has made me a more convincing and more charismatic person, improved my public speaking skills, and bolstered my ability to win arguments. Instrumentally speaking, this can be a very useful skillset. Epistemically speaking, this sort of preparation is very dangerous, and I later had to unlearn many of these thought patterns in order to become better at finding the truth.

For example, when writing research papers, the kind of motivated cognition used to search for evidence bolstering a position in a debate is often counterproductive. Similarly, when discussing the best move for my business to make, the ability to argue convincingly for a position regardless of whether it is right is outright dangerous, and lessons learned from debate may actually decrease the odds of making the correct decision: if I'm wrong but convincing and my colleagues are right but unconvincing, we could very well end up going down the wrong path!

Epistemic and instrumental goals may also conflict in other ways. For instance, Kelly (2003)[1] points out that, from an epistemic rationality perspective, learning movie spoilers is desirable, since they will improve your model of the world. Nevertheless, many people consider spoilers to be instrumentally negative, since they prefer the tension of not knowing what will happen while they watch a movie.

Bostrom (2011)[2] describes many more situations where having a more accurate model of the world can be hazardous to various instrumental objectives. For instance, knowing where the best parties are held on campus can be a very useful piece of knowledge to have in many contexts, but can become a distracting temptation when you're writing your thesis. Knowing that one of your best friends has just died can be very relevant to your model of the world, but can also cause you to become dangerously depressed. Knowing that Stalin's wife didn't die from appendicitis can be useful for understanding certain motivations, but can be extraordinarily dangerous to know if the secret police come calling.

Thus, epistemic and instrumental rationality can in some cases come into conflict. Some instrumental skillsets might be better off neglected for reasons of epistemic hygiene; similarly, some epistemic ventures might yield information that it would be instrumentally better not to know. When developing rationality practices and honing one's skills, we should take care to acknowledge these tradeoffs and plan accordingly.

[1] Kelly, T. (2003). Epistemic Rationality as Instrumental Rationality: A Critique. Philosophy and Phenomenological Research, 66(3), pp. 612-640.

[2] Bostrom, N. (2011). Information Hazards: A Typology of Harms from Knowledge. Review of Contemporary Philosophy, 10, pp. 44-79.

Comments
Error writes:

The distinction between instrumental and epistemic rationality is dramatic enough that I wonder if we should really be using the same word for both.

ETA: Just as I was posting this I came up with a response: The two variants might be well described as seeking correct knowledge and correct action respectively, with the common factor being correctness. So maybe using the same word plus a modifier does make sense.

ETA2: As army1987 points out below, I've just exhibited the same conflation I was concerned about. Embarrassing, but consider it evidence for my original point.

The former instance of “correct” means ‘true’, the latter means ‘good’. Still not the same thing.

Point. I wonder how I managed to notice the conflation of meanings for the word "rational" but not the word "correct." That's irritating. I was closer to being right before the edit.

Nitpick: I wouldn't say the latter means "good" precisely, but your point still stands.

There are so many words in English (and in Italian, for that matter) that can be interpreted either normatively or descriptively (e.g. “should” can mean either ‘is most likely to’ or ‘had better’; likewise “right”, etc.) that a word that is unambiguous between the two readings is the exception rather than the rule.

I guess the reason for that is that, for social norms, the two coincide: the side of the road on which someone in a given country had better drive is the one on which people in that country are most likely to drive; the past tense of a verb one had better use in a given language and register is the one speakers of that language and register are most likely to use; the attire you had better wear to a job interview is the one people usually wear to job interviews; etc.

I think this is an interesting point. To be honest, I think whether we should be using the term "rationality" at all is very much an open question. Just as MIRI changed its name thanks to unwanted associations with other things using the term "singularity," we might be better served by avoiding the term "rationality" and coming up with something else.

> we might be better served by avoiding the term "rationality" and coming up with something else.

My friends label me as "rational" (or alternatively, "hyper-rational") when talking about my stereotypically Spock-like characteristics. I'm all in favor of splitting up the overload on the word.

If we invent a new word, and the word becomes famous enough, sooner or later Hollywood will make a movie that uses it to describe a protagonist who does not completely satisfy the definition and also has many unrelated weird traits, and then we are back at the beginning.

Lest you think the conflict between epistemic and instrumental rationality is merely a theoretical possibility, consider the following anecdote.

Since sometime last year, I have been using PredictionBook to track my daily, weekly, monthly, and yearly goals (I stopped making them public after someone complained) in order to develop a large enough reference class to draw from when making business decisions (in order to eliminate the need for my "gut feeling" variable in my forecasting rule). Over the course of half a year I almost completely overcame my susceptibility to the planning fallacy (at least in the domain of medium-term business plans).

Then, in February, my first son was born. I had badly underestimated the adjustments I would have to make to my routines (stupid, I know) and started failing lots of my goals. This caused my forecasts to show that I was probably going to fail massively on several important projects. If you remember your procrastination equation, this essentially tanked my expectancy, decimating my motivation. This started a failure spiral that I didn't recover from until only a few weeks ago (with the help of The Motivation Hacker).
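For reference, the procrastination equation alluded to here is usually given (following Piers Steel's formulation) as:

\[
\text{Motivation} = \frac{\text{Expectancy} \times \text{Value}}{\text{Impulsiveness} \times \text{Delay}}
\]

Expectancy is your subjective probability of success, so a forecast that says "you will probably fail" feeds directly into the numerator and drags motivation down with it.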

Anyway, I think the take-away is this: yes, the outside view is very useful for accurately forecasting the future, but keep in mind that your psychological state is causally influenced by the very forecasts you make, and this can easily lead to a self-fulfilling prophecy of failure.

Then again, you should take my analysis with a grain of salt. N=1 and all that.

The article seems quite incomplete without even mentioning value-of-information. Instrumental and epistemic rationality have the same goals when the VOI of learning things is positive, and opposite goals when the total VOI is negative. Now, it may be hard to capture the VOI of, say, movie spoilers and truths that are bad for you, but the typical piece of information has positive VOI. In other words, most information merely lets you make better choices, as opposed to affecting your experiences in a predictably negative way.

This is basically the entire reasoning for going on an information diet. Not all truths are of equal value to you, so if you can deliberately get only the high value truths, you're consistently better off.

satt writes:

And when applying/calculating VoI, allow for the opportunity cost of harvesting information. A truth might have positive VoI in itself, but its effective net VoI might be negative if reaching that truth eats up one's time, money, attention, effort, or other resources.
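To make this concrete, here is a minimal sketch of a VoI calculation for a two-state, two-action decision problem, netting out the acquisition cost as satt suggests. All the numbers, and names like `p_good` and `info_cost`, are invented for illustration:

```python
# Minimal sketch: value of (perfect) information in a two-state,
# two-action decision problem. All numbers are invented.

p_good = 0.6  # prior probability that the world is in the "good" state

# payoffs[action][state]
payoffs = {
    "act":  {"good": 100, "bad": -50},
    "wait": {"good": 10,  "bad": 10},
}

def expected_utility(action, p_good):
    return (p_good * payoffs[action]["good"]
            + (1 - p_good) * payoffs[action]["bad"])

# Best achievable expected utility acting on the prior alone:
eu_prior = max(expected_utility(a, p_good) for a in payoffs)

# With perfect information we learn the state first, then pick the
# best action separately in each state:
eu_informed = (p_good * max(payoffs[a]["good"] for a in payoffs)
               + (1 - p_good) * max(payoffs[a]["bad"] for a in payoffs))

voi = eu_informed - eu_prior  # 64 - 40 = 24
info_cost = 5                 # time/money/attention spent acquiring it
net_voi = voi - info_cost     # satt's point: net out the acquisition cost

print(eu_prior, eu_informed, voi, net_voi)  # 40.0 64.0 24.0 19.0
```

When `net_voi` comes out negative (movie spoilers, Stalin's wife), epistemic and instrumental rationality pull apart; in the typical case it is positive and they agree.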

I agree that VoI and the calculations that allow you to use it effectively are very important. However, this post serves as a basic overview and I think taking the time to explain VoI and how to calculate it wouldn't fit here.

If you think a post on VoI is necessary as a "sequel" to this one, feel free to write it-- I don't have time with my current queue of things to write-- but please link me if and when you do!

> but please link me if and when you do!

I wrote one a while back.

Thanks for the link. I'm not sure that post says exactly what I would try to say about the topic, but it is certainly interesting and useful in its own right.

> I think taking the time to explain VoI and how to calculate it wouldn't fit here.

I disagree. VoI is essentially a formalized way to describe the instrumental value of figuring out how the world is (or is going to be). As such it's a very good way to relate instrumental rationality to epistemic rationality.

Basic but necessary post.

Instrumental rationality is "systematized winning"; epistemic rationality is not, and (often? sometimes? depending on your priorities and situation) may be quite the contrary.

There are also things which are bad to learn for epistemic rationality reasons.

Sampling bias is an obvious case of this. Suppose you want to learn about the demographics of city X. Maybe half of the Xians have black hair, and the other half have blue hair. If you are introduced to 5 blue-haired Xians but no black-haired Xians, you might infer that all or most Xians have blue hair. That is a pretty obvious case of sampling bias. I guess what I'm trying to get at is that learning a few true facts (Xian1 has blue hair, Xian2 has blue hair, ... , Xian5 has blue hair) may lead you to make incorrect inferences later on (all Xians have blue hair).
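To make the Xian example concrete, here's a minimal sketch (numbers invented) of the update those five true observations license if you trust the sample, assuming a uniform Beta(1, 1) prior over the blue-haired fraction:

```python
# Five true observations, one wrong conclusion.
# Uniform Beta(1, 1) prior over the fraction of blue-haired Xians.
blue_seen, black_seen = 5, 0

# Posterior is Beta(1 + blue_seen, 1 + black_seen); its mean is:
posterior_mean = (1 + blue_seen) / (2 + blue_seen + black_seen)
print(round(posterior_mean, 2))  # 0.86

# Each observation was true, yet if the sample was drawn only from the
# blue-haired half of the city, the inference "most Xians have blue
# hair" is badly wrong: the true fraction is 0.50.
```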

The example you give, of debating being harmful to epistemic rationality, seems comparable to sampling bias, because you only hear good arguments for one side of the debate. So you learn a bunch of correct facts supporting position X, but no facts supporting position Y. Thus, your knowledge has increased (seemingly helpful to epistemic rationality), but leads to incorrect inferences (actually bad for epistemic rationality).

There's also the question of what to learn. You could spend all day reading celebrity magazines, and this would give you an increase in knowledge, but reading a math textbook would probably give you a bigger increase in knowledge (not to mention an increase in skills). (Two sets of n facts can, of course, increase your knowledge by different amounts. Information theory!)
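That parenthetical can be made precise: the information gained from learning a fact is its surprisal, -log2(p), where p is the probability you assigned the fact beforehand. A toy sketch, with all the probabilities invented:

```python
import math

def surprisal_bits(p):
    """Bits of information gained by learning a fact you assigned probability p."""
    return -math.log2(p)

# Ten celebrity-magazine facts, each of which you already half-expected:
gossip = [0.5] * 10
# Ten math-textbook facts, each of which genuinely surprised you:
theorems = [0.05] * 10

print(sum(surprisal_bits(p) for p in gossip))    # 10.0 bits
print(sum(surprisal_bits(p) for p in theorems))  # ~43.2 bits
```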

> If you are introduced to 5 blue-haired Xians but no black-haired Xians, you might infer that all or most Xians have blue hair. That is a pretty obvious case of sampling bias.

If, a priori, you had no reason to expect that the population was dominantly blue-haired, then you should begin to suspect some alternative hypothesis, like your sampling being biased for some reason, rather than believe everyone is blue-haired.
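Quantitatively: compare how well each hypothesis predicts the five blue-haired introductions. A sketch with invented numbers, assuming the biased-sampling hypothesis would produce blue-haired introductions 90% of the time:

```python
# Likelihood of meeting 5 blue-haired Xians in a row under two hypotheses.
p_fair = 0.5 ** 5    # 50/50 population, unbiased sample: 1/32
p_biased = 0.9 ** 5  # sampling biased toward blue (assumed rate 0.9)

bayes_factor = p_biased / p_fair
print(round(bayes_factor, 1))  # 18.9: the data favor "biased sample"
```

Unless your prior against biased sampling was very strong, five blue-haired introductions should move you toward "something is selecting who I meet" rather than "everyone here has blue hair".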

Well, one obvious connection is that a sufficient degree of epistemic rationality is a prerequisite for instrumental rationality (but not vice versa).

Basically, if you're looking for the shortest way from A to B (instrumental), you can't find it unless your map is correct (epistemic) { or you're incredibly lucky :-D }. So epistemic rationality is the foundation on which instrumental rationality is built.

One of the implications is that ceteris paribus it's better to sacrifice instrumental for epistemic rather than epistemic for instrumental.

> Well, one obvious connection is that a sufficient degree of epistemic rationality is a prerequisite for instrumental rationality (but not vice versa).

Nope. See Dennett's concept of competence without comprehension. You can be tremendously competent without a representational map.

And for a social species with tremendous power over the rest of the world, it's the opinion of others that matters above all, and that good opinion can be had even in the face of epistemic incompetence.

> Basically, if you're looking for the shortest way from A to B (instrumental), you can't find it unless your map is correct (epistemic) { or you're incredibly lucky :-D }. So epistemic rationality is the foundation on which instrumental rationality is built.

I think that's only the case insofar as we're talking about "general instrumental rationality." In practice lots of skills seem to be gained tacitly-- in fact, I would say this is the default skill acquisition process.

> One of the implications is that ceteris paribus it's better to sacrifice instrumental for epistemic rather than epistemic for instrumental.

That seems to be a common assumption on LW, but I think it may not be a good one.

> In practice lots of skills seem to be gained tacitly-- in fact, I would say this is the default skill acquisition process.

Well, I think that at this point we're venturing into the land of "it depends". I would say that if you have "tacit" skills but the wrong map, you might instrumentally do well within the narrow domain where your misunderstood skills work, but you have a chance for a catastrophic failure once you venture outside of it. For example, you could perfectly well do some restricted thermodynamics using the idea of phlogiston. Or you can do astronomy on the basis of Ptolemaic epicycles. Both would work for certain kinds of problems but both would also fail once you try to expand.

And, it seems to me, in most cases there is no trade-off: epistemic and instrumental rationality match each other, since the process of discovering the shortest way from A to B simultaneously improves your map.

> I would say that if you have "tacit" skills but the wrong map, you might instrumentally do well within the narrow domain where your misunderstood skills work, but you have a chance for a catastrophic failure once you venture outside of it.

I agree, and I think this explains a large amount of human failure. I believe that nearly everyone relies on tacit skills for nearly everything, and that whether "general instrumental rationality" even exists is very much an open question.