Today's post, What Do We Mean By "Rationality"?, was originally published on 16 March 2009. A summary (taken from the LW wiki):

When we talk about rationality, we're generally talking about either epistemic rationality (systematic methods of finding out the truth) or instrumental rationality (systematic methods of making the world more the way we would like it to be). We can discuss these in terms of probability theory and decision theory, but this doesn't fully cover the difficulty of being rational as a human. There is a lot more to rationality than just the formal theories.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was 3 Levels of Rationality Verification, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

7 comments:

The one useful purpose of discussing "meanings" is to draw attention to distinctions between different usages that may get overlooked. The "epistemic" vs. "instrumental" distinction is one such case here.

I suggest there is a third useful sense, which sort of links epistemic rationality and instrumental rationality.

The sequence post takes consistent Bayesian probability with Occam priors as its example of rational modelling of the world. But in what sense is it rational to believe that the world is an ordered place with rules that apply irrespective of location and time, that Occam's razor is the right prior, and so on? The choice of such a method cannot be justified by appeals to evidence that assume the validity of the method.

The only root justification that makes sense to me is a game-theory type argument. If the world does continue to behave the way I broadly expect it to, that has huge implications: I can continue to behave in broadly the same way as I have been behaving. If the universe from this moment on is going to behave according to entirely different rules, I have no basis for so much as putting one foot in front of the other. Assuming that a model which describes the past well will also have validity in the future therefore has great instrumental advantages, so it is 'rational' to make that assumption even though it can't be justified by "scientific" reasoning.

It can be fairly pointed out that the reasoning here is essentially that of Pascal's Wager. However, there is nothing in that particular argument which justifies belief in any one version of "God" rather than another, and if somebody wants to use the word "God" exclusively for the fact that the universe makes sense, I see no reason to object!

Although instrumental rationality is an interesting category, I tend to view it as ultimately boiling down to epistemic rationality. For example, I reason that A leads to B. I want B but I don't want A, and now my motivations start traveling up and down the A -> B causal chain until I reach equilibrium. Or, for another example, I notice that if I choose C for reason R, my rational game-partner will likewise choose C for reason R, because of some symmetry in our properties as agents. Now I need to compare the outcome of choices {C, C} to other possibilities, but I can rule out {C, D}, say.
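A minimal sketch of that second example, assuming two identical agents and made-up payoff numbers (the labels C and D come from the comment; the specific payoffs are purely illustrative):

```python
# Toy illustration: two agents with identical decision procedures facing the
# same choice will pick the same option, so only the symmetric outcomes
# {C, C} and {D, D} need to be compared; {C, D} can be ruled out.
from itertools import product

choices = ["C", "D"]

# Hypothetical payoffs to me for (my choice, partner's choice).
payoff = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

# The symmetry between the agents eliminates the asymmetric outcomes.
feasible = [pair for pair in product(choices, repeat=2) if pair[0] == pair[1]]

best = max(feasible, key=lambda pair: payoff[pair])
print(feasible)  # [('C', 'C'), ('D', 'D')]
print(best)      # ('C', 'C') under these made-up payoffs
```

The point of the sketch is only that the symmetry constraint does the work of ruling out {C, D}; which symmetric outcome wins depends entirely on the payoffs assumed.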

My attraction to various options will change in response to learning these facts. But the role of rationality seems to end with arriving at and facing the facts.

No? Or, beside the point? (But if beside the point, still an interesting new point, I reckon.)

The trouble is that there is nothing in epistemic rationality that corresponds to "motivations" or "goals" or anything like that. Epistemic rationality can tell you that pushing a button will lead to puppies not being tortured, and that not pushing it will lead to puppies being tortured, but unless you have an additional system that incorporates a desire for puppies not to be tortured, as well as a system for achieving that desire, that's all you can do with epistemic rationality.

That's entirely compatible with my point.

I see it as exactly the other way around: the only good reason to care about epistemic rationality is that it helps you be instrumentally rational. Obtaining accurate beliefs and then doing nothing with them is intellectual masturbation.

Shmi:

"Obtaining accurate beliefs and then doing nothing with them is intellectual masturbation."

This could be a goal in itself.

This, too, is entirely compatible with my point. What rationality is, and why we care about it, are distinct questions.