In summary, the goal of this post is to start a discussion on the current meaning of 'rationality' as it is defined on less wrong. I am specifically trying to find out:

  1. What people think of the current definition of 'rationality' on less wrong
  2. What people think a better definition of 'rationality' would include. I am not necessarily looking for a perfect definition. I am more looking for a definition that would better highlight the areas that people should look into if they wish to become less wrong.

I think that the description below from the What Do We Mean By 'Rationality' post sums up the current meaning of 'rationality' as it is used on this site:

Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed "truth" or "accuracy", and we're happy to call it that.

Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

...

"X is rational!" is usually just a more strident way of saying "I think X is true" or "I think X is good". So why have an additional word for "rational" as well as "true" and "good"? Because we want to talk about systematic methods for obtaining truth and winning.

The word "rational" has potential pitfalls, but there are plenty of non-borderline cases where "rational" works fine to communicate what one is getting at, likewise "irrational". In these cases we're not afraid to use it.

Now, I think that the definition or description of 'rationality' above is pretty good. In fact, if I wanted to introduce someone to the concept of rationality, I would probably refer to it, but I would explain that it is a working definition: it conveys the general idea, and most of the time that suffices. I have no problems with working definitions. One of my favorite ideas on less wrong is that words and concepts are pointers to areas in concept space. This idea lets you use working definitions and avoid wasting your time on semantic issues. But an often neglected aspect of this idea is that you still need to ensure that the words you use point to the right, suitably restricted areas in concept space. Saying "I am not here to argue about definitions" does not absolve you of the responsibility to create decent definitions. It is like saying: "hey, I know that this definition is not perfect, but I think it's close enough to convey the general idea I am getting at". If that is all you are trying to do, then not refining your definitions is fine, but the more important and widely cited a concept becomes, the more necessary it is to improve its definition.

I think the definition of rationality above has two major problems:

  • it doesn't highlight all of the important areas, even though it can be extended to cover them. By "highlight" I mean that it should be obvious and clear that the definition is referring to them. When I think of instrumental rationality, I don't, for example, think of seeing things from multiple perspectives, finding the best way to interpret situations, aligning your values, training creativity etc. (there are probably better examples). The point I am getting at is that instrumental rationality ("winning") seems like it can be expanded to include almost anything, but it doesn't necessarily point to all the important areas that a more explicit definition of 'rationality' would.
  • it describes methods for achieving rationality, not what rationality is. Defining 'rationality' this way is a bit like defining 'fit' by referring to 'exercise' and 'eating right'. This matters because epistemic rationality is only valuable instrumentally: it helps you create truer beliefs, but those beliefs need to be applied before they can actually be useful. If you spend lots of effort creating truer beliefs, or trying to understand what rationality means, and then compartmentalize that knowledge, you have effectively gained nothing. Robert Aumann is an example: he knows a lot about rationality, but he doesn't seem to be very rational himself, as he appears to believe in non-overlapping magisteria.

Perhaps the biggest issue I have with the definition is not anything to do with how it currently is, but with how hard it is to improve. This sounds like a good thing, but it's not. The definition is hard to improve not because it is perfect, but because instrumental rationality is just too big. Any plausible ideas or improvements to the definition are likely to be quickly discarded, because they can be made to fall into the instrumental rationality category.

I am not going to provide a better definition of 'rationality' here, as the goal of this post is just to start a discussion, but I do think that a lot of the problems I have mentioned above can best be solved by first choosing a simple core definition of what it means to be rational, and then having a separate myriad of areas in which improvements lead to increases in rationality. In general, the more granular, results-oriented and verified each of these areas is, the better.

A possible parent or base definition for 'rationality' is already on the wiki, which says that 'rationality' is the "characteristic of thinking and acting optimally". This seems to me like a pretty good starting point, although I admit that the definition itself is too concise and doesn't really tell us much, since 'optimal' is also hard to define.

That is not to say that we have no idea of what 'optimal' means; we do. It is just that this understanding (logic, probability, decision theory, etc.) mostly relates to the normative sense of the word. This is a problem because we are agents with certain limitations and adaptations that make our attempts to do things the normative way often impractical, cumbersome and flawed. It is for this reason that any definition of 'rationality' should be about more than just 'get closer to the normative model'. Of course, getting closer to the results of the normative model is always good, but I still think that a decent definition of 'rationality' should take into account, for example, ecological and bounded rationality, as well as the values of the agent under consideration.

  • Ecological rationality takes into account the context and representation of information. If a certain representation of information has been recurrent and stable during an agent's evolution, then that agent's cognitive processes are likely to be better adapted to those representations. There is a big difference between being irrational and performing poorly at specific types of problems because your cognitive processes have not adapted to information in a particular format.
  • Bounded rationality takes into account the fact that most agents are limited by the information they have, the cognitive limitations of their minds, and the time available for them to make decisions. For limited agents, a fast and frugal heuristic approach to a problem may be what is optimal, or at least not as bad as it seems. From What Does It Mean to be Biased: Motivated Reasoning and Rationality: "The rather surprising conclusion from a century of research purporting to show humans as poor at judgment and decision making, prone to motivational distortions, and inherently irrational is that it is far from clear to what extent human cognition exhibits systematic bias that comes with a genuine accuracy cost."
  • It is also important to take into account what the agent values. An alien is not irrational just because it values things differently than we do.

It is possible that 'rationality' isn't the best word to be using, since there already exists an extremely large number of varied opinions on what it means. I would not be against choosing a different word if this would help to better illuminate what allows people to become less wrong. At the end of the day, I don't really care about 'rationality' per se; all I care about is becoming less wrong. If for whatever reason 'rationality' and 'becoming less wrong' become different or divergent, then I will move away from 'rationality'.

To start off the discussion, here are a few questions that I have regarding the current meaning of 'rationality' on less wrong:
  1. Do you think that a discussion on the meaning of 'rationality' would be helpful?
  2. Do you have any issues that were not mentioned above with how 'rationality' is currently defined on less wrong?
  3. Do you think that the issues with 'rationality' that I describe above make sense and are valid criticisms?
  4. Do you think that explaining general areas and topics that lead to improvements in rationality would be helpful?
  5. Is there anything you can think of that is related to becoming less wrong but that you think has little or nothing to do with becoming more rational?
12 comments

Do you think that the issues with 'rationality' that I describe above make sense and are valid criticisms?

Not really. First, you criticize the definition for - well, being too short and broad and ill-defined. If we had a complete definition of rationality, we'd be done with creating AI. That is, by and large, what this site is all about - trying to arrive at a complete definition.

Do you think that explaining general areas and topics that lead to improvements in rationality would be helpful?

As far as I can tell, no, such approaches just make irrational people irrational in a more sophisticated way. Most people don't have the flexibility of self to actually -change- those aspects of themselves that are irrational, they just bury them under increasingly complex rationality models that do nothing but make their rationalizations sound more rational, and anything like calling people on their rationalizations falls into some area of mindkilling or another.

Is there anything you can think of that is related to becoming less wrong but that you think has little or nothing to do with becoming more rational?

Yes. Either develop a healthy ego, or end your ego. If being wrong feels like failure, your failure is deeper than being wrong. If you can be made to feel like a fool, you are a fool regardless of how you feel. To take this out of deep wisdom, there are two fundamental concepts here: First, stop caring what other people think, particularly, for the Less Wrong crowd, about how rational (which is, after all, merely a proxy for intelligent) you are.

Second, stop caring so much about being less wrong than other people. You're just stoking the fires of an unhealthy ego, which makes it difficult for you to actually admit how wrong you are, and thus ends in you -failing- to become less wrong.

This website, for many people, is just a way of filling a void in their sense of self-worth, through an addiction to insight, to revelation, to that awe-inspiring feeling of getting smarter, of understanding the universe just a little bit better - without ever actually changing anything. They feel momentarily like smarter people, the feeling fades leaving them empty, and nothing changes.

This website, for many people, is just a way of filling a void in their sense of self-worth, through an addiction to insight, to revelation, to that awe-inspiring feeling of getting smarter, of understanding the universe just a little bit better - without ever actually changing anything.

Then perhaps we should have this discussion primarily with the remaining people in mind.

Every website has more disengaged readers than real workers.

If we had a complete definition of rationality, we'd be done with creating AI. That is, by and large, what this site is all about - trying to arrive at a complete definition.

I just want to clarify. I am not trying to get a completed definition. That would be good, but I think it would be extremely hard to do. I am just looking for a more useful definition. I think that the current definition may be limited in its usefulness due to the reasons I gave above.

As far as I can tell, no, such approaches just make irrational people irrational in a more sophisticated way. Most people don't have the flexibility of self to actually -change- those aspects of themselves that are irrational.

This sounds too pessimistic to me. I also don't buy into the defeatism of the idea that people don't have the flexibility to become more rational. Perhaps it's true that most people aren't motivated enough, or that the best path to becoming rational is not obvious enough, which means that people have to spend lots of effort if they wish to become more rational. Both of these issues, I think, would be helped if the approaches I alluded to were described.

they just bury them under increasingly complex rationality models that do nothing but make their rationalizations sound more rational, and anything like calling people on their rationalizations falls into some area of mindkilling or another.

This was described in Knowing About Biases Can Hurt People. It is why superficial and unapplied knowledge can be dangerous. The biggest problem with the type of person you describe above is that they know about rationality, but they still distort the feedback they receive. It is a case of compartmentalization: their knowledge of rationality helps them explain why other people are being rational or irrational, but they fail to correctly apply this knowledge to themselves.

If being wrong feels like failure, your failure is deeper than being wrong.

I think we are looking at this in different ways. By becoming less wrong I just mean coming to understand that you are implemented on kludgy and limited wetware (a human brain), which gives you certain propensities and tendencies. Through this understanding you can start to debias, i.e. recognise these propensities and notice when alternative, non-default methods would better help you achieve what you desire.

There is no sense of failure in this, though. Perhaps a better way to phrase it is that I want to become better.

stop caring so much about being less wrong than other people.

This is not a race and I am not comparing myself to other people. Once again, perhaps "becoming better" is a more suitable description. This doesn't necessarily mean better than others; it just means better than you currently are.

rational (which is, after all, merely a proxy for intelligent) you are.

I disagree with you on that. Extremely intelligent people can still be irrational. Intelligence is basically about cognitive abilities whereas rationality is more about how you reason.

Also, this

many biases are not very strongly correlated with measures of intelligence (algorithmic capacity). Additionally, there is reliable variance in rational thinking found even after cognitive ability is controlled,

_

This website, for many people, is just a way of filling a void in their sense of self-worth, through an addiction to insight, to revelation, to that awe-inspiring feeling of getting smarter, of understanding the universe just a little bit better - without ever actually changing anything. They feel momentarily like smarter people, the feeling fades leaving them empty, and nothing changes.

That sounds similar to self-help material in general. You need to apply it. If you don't apply what you have learnt, then it is close to useless.

If there's any kernel to the concept of rationality, it's the idea of proportioning beliefs to evidence (Hume). Everything really flows from that, and the sub-variations (like epistemic and instrumental rationality) are variations of that principle, concrete applications of it in specific domains, etc.

"Ratio" = comparing one thing with another, i.e. (in this context) one hypothesis with another, in light of the evidence.

(As I understand it, Bayes is the method of "proportioning beliefs to evidence" par excellence.)
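
As a minimal illustration of that last point, with made-up numbers (the function name update_odds is just for this sketch): in the odds form of Bayes' rule, the "ratio" reading is explicit, since the posterior odds of one hypothesis over another are simply the prior odds multiplied by the likelihood ratio.

```python
# A minimal sketch of "proportioning beliefs to evidence" in the odds form of
# Bayes' rule: comparing two hypotheses by multiplying prior odds by the
# likelihood ratio of the observed evidence.

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds of H1 over H2 = prior odds * P(evidence | H1) / P(evidence | H2)."""
    return prior_odds * likelihood_ratio

# Made-up numbers: starting from even odds, the evidence is four times as
# likely under H1 as under H2.
posterior_odds = update_odds(prior_odds=1.0, likelihood_ratio=4.0)
print(posterior_odds, "to 1 in favour of H1")  # -> 4.0 to 1
```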

Julia Galef from CFAR gave her definition last year at a panel at TAM titled Can Rationality Be Taught?

Correct me if I am wrong, but I don't think she gave her definition. She described the general types of rationality (normative, descriptive and prescriptive). These aren't new and are described by Baron in Thinking and Deciding, for example.

In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.

Correct me if I am wrong, but I don't think she gave her definition. She described the general types of rationality (normative, descriptive and prescriptive).

What do you think is the difference between those? I haven't argued that she gave an original definition.

When I think of instrumental rationality, I don't, for example, think of seeing things from multiple perspectives, finding the best way to interpret situations, aligning your values, training creativity etc.

I think discussions about applied debiasing are discussions about finding the best way to interpret situations. Discussions about how to choose reference classes for inside/outside views are about seeing things from multiple perspectives.

When it comes to approaches for thinking creatively about a problem, such as letting people come up with ideas before sharing them in brainstorming, I would call that instrumental rationality.

There are techniques from CFAR for aligning System 1 and System 2.

I think discussions about applied debiasing are discussions about finding the best way to interpret situations.

Isn't debiasing more about removing errors in your thinking, or finding situations in which you should take a more formalised and objective approach? By "finding the best way to interpret situations" I meant finding ways to interpret situations so that the solution is simpler. If you can see that a new problem X is really just another version of problem Y, then this can be extremely useful, as you can draw upon pre-existing solutions to problem Y.

Another example from a movie I saw is:

Twenty random cards are placed in a row all face down. A move consists of turning a face down card face up, and turning over the card that is immediately to its right. Show that no matter what the choice of cards to turn, this sequence of moves must terminate (with all the cards facing up).

This problem can be solved simply once you see face-up cards as 1 and face-down cards as 0 in binary: reading the row as a binary number (leftmost card most significant), every move strictly increases that number, so the process must terminate.
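
Here is a rough sketch of that argument in Python (my own illustration, not part of the original puzzle; I read the rules as: pick any face-down card that has a card to its right, turn it face up, and flip its right-hand neighbour, and the function name play and the random move choices are just for the demonstration):

```python
import random

def play(n=20, seed=0):
    """Simulate the card game: all n cards start face down (0 = down, 1 = up).
    A move turns a face-down card face up and flips the card to its right.
    Reading the row as a binary number (leftmost card = most significant bit),
    every move strictly increases that number, so the game has to terminate."""
    rng = random.Random(seed)
    cards = [0] * n
    as_number = lambda c: int("".join(map(str, c)), 2)
    moves = 0
    while True:
        # A move needs a face-down card that has a neighbour to its right.
        legal = [i for i in range(n - 1) if cards[i] == 0]
        if not legal:
            break
        before = as_number(cards)
        i = rng.choice(legal)
        cards[i] = 1           # turn the chosen face-down card face up
        cards[i + 1] ^= 1      # flip the card immediately to its right
        assert as_number(cards) > before, "the binary value should strictly increase"
        moves += 1
    return cards, moves

final, moves = play()
print(f"terminated after {moves} moves; final row (1 = face up): {final}")
```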

Discussions about how to choose reference classes for inside/outside views are about seeing things from multiple perspectives.

That is just one different perspective that is often useful. In general, I view this as an example of looking at a situation from an objective perspective.

Here is a quote from Feynman's Character of Physical Law (p. 53) which I think describes what I mean.

Mathematically each of the three different formulations: Newton’s law, the local field method and the minimum principle, gives exactly the same consequences. [...] They are equivalent scientifically [...] But, psychologically they are very different in two ways. First, philosophically you like them or do not like them; and training is the only way to beat that disease. Second, psychologically they are different because they are completely unequivalent when you are trying to guess new laws. As long as physics is incomplete, and we are trying to understand the other laws, then the different possible formulations may give clues about what might happen in other circumstances. In that case they are no longer equivalent, psychologically, in suggesting to us guesses about what the laws may look like in a wider situation.

_

When it comes to approaches for thinking creatively about a problem, such as letting people come up with ideas before sharing them in brainstorming, I would call that instrumental rationality.

That's my problem with instrumental rationality: almost everything can fit into it. I have two questions:

  1. If this was the first time that you had heard about instrumental rationality, would brainstorming pop into your head?
  2. Do you think that my point about this was a valid criticism?

There are techniques from CFAR for aligning System 1 and System 2.

Do you know what the best resource is for finding out what CFAR has looked into, written or produced? I have read their website and a few posts about CFAR on less wrong, but that is all.

gjm

In the question about cards, the binary observation is absolutely correct but gives the impression that you need more "structure" to solve the problem than you really do. I prefer (even though it may yield a textually longer proof) to do it this way: sort the possible configurations lexicographically ("dictionary order", where things further left always take precedence) with up < down; and then note that every operation you do moves your configuration earlier in the order, at which point you're done.
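
Here is a quick exhaustive check of that ordering argument in Python (my own sketch, not gjm's; the helper names move and key are just for illustration, and checking every configuration of a six-card row is enough to cover every local pattern a single move can touch):

```python
from itertools import product

def move(cards, i):
    """Turn the face-down card at position i face up and flip the card to its right."""
    assert cards[i] == 'D' and i + 1 < len(cards)
    flip = {'U': 'D', 'D': 'U'}
    new = list(cards)
    new[i] = 'U'
    new[i + 1] = flip[new[i + 1]]
    return tuple(new)

def key(cards):
    """Lexicographic order with up < down: map U -> 0, D -> 1, leftmost most significant."""
    return tuple(0 if c == 'U' else 1 for c in cards)

# Check that every legal move produces a configuration strictly earlier in the order.
for cards in product('UD', repeat=6):
    for i in range(len(cards) - 1):
        if cards[i] == 'D':
            assert key(move(cards, i)) < key(cards)

print("every legal move moves the configuration earlier in the order")
```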

... Oh, and the problem as stated above isn't quite right. In the configuration UUUUUUUUUUUUUUUUUUUD there is no legal move according to your statement of the problem, so you don't terminate with all cards face up. More generally, your procedure always flips two cards at once so the parity of the number of D cards can't change, so half of all starting configurations are unable to end with all cards up and in fact will end with UUUUUUUUUUUUUUUUUUUD. (To fix this, either just ask for a proof that the procedure always terminates, or else make flipping the card to the right of the one you turn face-up optional.)

half of all starting configurations are unable to end with all cards up and in fact will end with UUUUUUUUUUUUUUUUUUUD.

That is why the starting configuration is explicitly given as: "Twenty random cards are placed in a row all face down". If the number of starting face down cards is odd, then it will terminate with the last card as down and the rest as up. If the number of starting face down cards is even, then it will terminate with all the cards face up.

gjm

Oops, misread the problem statement. Of course you're right. (Though I think the problem is made slightly more interesting if you allow starting with an arbitrary configuration.)