Related to: What Do We Mean By "Rationality?"

Rationality has many facets, both relatively simple and quite complex. As a result, it can often be hard to determine what aspects of rationality you should or shouldn't stress.

An extremely basic and abstract model of how rationality works might look a little something like this:

  1. Collect evidence about your environment from various sources
  2. Update your model of reality based on evidence collected (optimizing the updating process is more or less what we know as epistemic rationality)
  3. Act in accordance with what your model of reality indicates is best for achieving your goals (optimizing the actions you take is more or less what we know as instrumental rationality)
  4. Repeat continually forever
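
Purely as an illustration of the shape of this loop, here is a minimal code sketch. Everything in it (the coin-flipping environment, the running-average model, the function and variable names) is invented for the example; the abstract model above doesn't specify any such details.

```python
import random

# A toy, purely illustrative rendering of the loop above. The coin-guessing
# setup and every name here are made up for this sketch.

def rationality_loop(steps: int = 1000, true_bias: float = 0.7) -> float:
    estimate = 0.5                             # the agent's current "model of reality"
    for n in range(1, steps + 1):              # 4. repeat (truncated here, not literally forever)
        flip = random.random() < true_bias     # 1. collect evidence from the environment
        estimate += (flip - estimate) / n      # 2. update the model on that evidence (epistemic)
        prediction = estimate >= 0.5           # 3. act on what the model now says (instrumental);
                                               #    here the "action" is just a prediction
    return estimate

if __name__ == "__main__":
    print(rationality_loop())                  # should land near the true bias of 0.7
```
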
A lot of thought, both on LessWrong and within the academic literature on heuristics and biases, has gone into improving epistemic rationality, and while improving instrumental rationality was less of a focus at first, recently the community has been focusing more on it. On the other hand, improving your ability to collect evidence has been relatively neglected-- hence the (in-progress as of this writing) Situational Awareness sequence.

But most neglected of all has been the last step, "repeat continually forever." This sounds like a trivial instruction but is in fact highly important to emphasize. All your skills and training and techniques mean nothing if you don't use them, and unfortunately there are many reasons that you might not use your skills.

You might be offended, angry, hurt, or otherwise emotionally compromised. Similarly, you might be sleepy, inebriated, hungry, or otherwise physically compromised. You might be overconfident in your ability to handle a certain type of problem or situation, and hence not bother to think of other ways that might work better.[1] You might simply not bother to apply your skills because you don't think they're necessary, missing out on potential gains that you don't see at a glance-- or maybe even don't know exist. All in all, there are many times in which you may be missing out on the benefits that your skills can provide.

It may therefore be worthwhile to occasionally check whether or not you are actually applying your skills. Further, try to make this sort of check a habit, especially when encountering circumstances where people would typically be less than rational. If you find that you aren't using your skills as often as you'd expect, that may be cause for alarm, and at the very least is cause for introspection. After all, if rationality skills can constantly be applied to succeeding in everyday life, we should be constantly on the watch for opportunities to apply them, as well as for potential lapses in our vigilance.

I indeed suspect that most LessWrong users would benefit more from being more vigilant in practicing and applying basic rationality skills than they would from learning cool advanced techniques. This principle is generally true in the martial arts, and both the inside and outside view strongly suggest to me that it is true for the art of rationality as well.

All in all, improving your rationality is a matter of serious practice and changing your mindset, not just learning cool new life hacks-- so next time you think about improving your rationality, don't look for new tricks, but new ways to truly integrate the principles you are already familiar with.


[1] The footnote to Humans Are Not Automatically Strategic describes several examples where this might apply.

29 comments

I, and a few others I have spoken with, have noticed a "level up" effect. That is, you grind away at this stuff and one day you suddenly notice that you are noticing and applying the lessons much more effortlessly than before. It feels awesome and is worth striving for.

Shmi

Yes, it does feel awesome. This discontinuity of the effort -> outcome map ([almost] nothing... nothing... nothing... jump!) to me is an instance of the Hegelian/Marxian quantity->quality conversion, something that jumps at me again and again in different contexts. I wonder if there is a way to formalize it.

I wish that I understood this post. I am upvoting you in the hopes that you feel obligated to explain further.

My understanding of the "quantity to quality conversion" phrase is that in many situations the relation between some inputs and outputs is not linear. More specifically, there are many situations where the relation seems linear at the beginning, but at some point the outputs start increasing incredibly fast (incredibly = for people who based their models on extrapolating the linear relationship at the beginning). Even more specifically, you can have one input "A" that has an obvious effect on "X", but almost zero effect on "Y" and "Z". Then at some point, with additional increases of "A", "Y" and "Z" also start growing (which was totally unexpected by the old model).

Specific example: You start playing piano. At the beginning, it feels like it has a simple linear impact on your life. You spend 1 hour playing piano, you get an ability to play a simple song quite well. You spend 2 hours playing piano, you get an ability to play another simple song quite well. Extrapolate this, and you get a model. According to this model, after spending 80000 hours playing piano, you would expect to be able to play 80000 simple songs quite well. -- What happens in reality is that you get an ability to play any simple song well just by looking at the music sheets, an ability to play very complex music, an ability to make money by playing the music, you become famous, get a lot of social capital, lot of friends, lot of sex, lot of drugs, etc. (Both non-linear outputs, and the outputs not predicted by the original model.)

A similar pattern appears in many different situations, so some people invented a mysterious-sounding phrase to describe it. Now it seems like some law of nature. But maybe it is just a selection effect (some situations develop like this, and we notice "oh, the law of quantity to quality conversion", other situations don't, and we ignore them).

In other words, "quantity" seems to mean "linear model", "quality" means "model", and the whole phrase decoded means "if you change variables enough, you may notice that the linear model does not reflect reality well (especially in situations where the curve starts growing slowly, and then it grows very fast)".
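
To make the "linear extrapolation vs. actual curve" point concrete, here is a tiny numerical sketch. The logistic "true" curve and all of the numbers are invented purely for illustration:

```python
import math

# A made-up "true" input->output relationship that starts out looking linear
# and then takes off: a logistic curve. Draw a straight line through the first
# and last of the early observations, then compare its predictions with the
# curve further out.

def true_output(hours: float) -> float:
    return 100 / (1 + math.exp(-(hours - 50) / 8))

early = [(h, true_output(h)) for h in range(0, 20)]        # what the beginner sees
slope = (early[-1][1] - early[0][1]) / (early[-1][0] - early[0][0])
intercept = early[0][1]

for hours in (10, 40, 60, 80):
    linear_guess = intercept + slope * hours
    print(f"{hours:>2}h  linear model: {linear_guess:6.2f}   actual: {true_output(hours):6.2f}")
```

The early points extrapolate to single digits, while the actual curve is near its ceiling by 80 "hours" -- that gap is the "quantity to quality" surprise.
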

Shmi

I was more after some discontinuity than a simple nonlinearity, like a quadratic or even an exponential dependence. And you are right, the selection effect is at work, but it's not a negative in this case. We want to select similar phenomena and find a common model for them, in order to be able to classify new phenomena as potentially leading to the same effects.

For example, if you look at some new hypothetical government policy which legislates indexing the minimum savings account rate to, say, inflation, you should be able to tell whether after a sizable chunk of people shift their savings to this guaranteed investment, the inflation rate will suddenly skyrocket (it happened before in some countries).

Or if you connect billions of computers together, whether it will give rise to a hive mind which takes over the world (it has not happened, despite some dire predictions, mostly in fictional scenarios).

Another example: if you are trying to "level up", what factors would hasten this process, so you don't have to spend 10k hours mastering something, but only, say, 1000.

If you pay attention to this leveling effect happening in various disparate areas, you might get your clues from something like stellar formation, where increasing metallicity significantly decreases the mass required for a star to form (a dust cloud "leveling up").

Classifying, modeling and constructing successful predictions for this "quantity to quality conversion" would be a great example of useful applied philosophy.

There are (at least) two different things going on here that I think it's valuable to separate.

One is, as you say, the general category of systems whose growth rate expressed in delivered value "skyrockets" in some fashion (positive or negative) at an unexpected-given-our-current-model inflection point. I don't know if that's actually a useful reference class for analysis (that is, I don't know if an analysis of the causes of, say, runaway inflation will increase our understanding of the causes of, say, a runaway greenhouse effect), any more than the class of systems with linear growth rates is, but I'll certainly agree that our ability to not be surprised by such systems when we encounter them is improved by encountering other such systems (that is, studying runaway inflation may teach me to not simply assume that the greenhouse effect is linear).

The other has to do with perceptual thresholds and just-noticeable differences. I may experience a subjective "quantity to quality" transition just because a threshold is crossed that makes me pay attention, even if there's no significant inflection point in the growth curve of delivered value.

Shmi

I don't know if that's actually a useful reference class for analysis

I don't know, either, but I feel that some research in this direction would be justified, given the potential payoff.

The other has to do with perceptual thresholds and just-noticeable differences.

This might, in fact, be one of the models: the metric being observed hides the "true growth curve". So a useful analysis, assuming it generalizes, would point to a more sensitive metric.
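
As a toy illustration of that "coarser metric hides the curve" possibility (all numbers and names here are invented, not drawn from the discussion): the underlying quantity can grow perfectly smoothly while the only thing you can observe is whether it has crossed some threshold, which then looks like a sudden jump.

```python
# Invented illustration: smooth underlying growth observed through a coarse
# pass/fail metric looks like a sudden "level up" at the threshold crossing.

def underlying_skill(practice_hours: float) -> float:
    return practice_hours ** 0.5        # smooth, steadily growing "true" curve

THRESHOLD = 20.0                        # the level at which success becomes noticeable

for hours in range(0, 501, 100):
    s = underlying_skill(hours)
    perceived = "leveled up!" if s >= THRESHOLD else "nothing yet"
    print(f"{hours:>3}h  true skill {s:5.1f}  perceived: {perceived}")
```

A more sensitive metric (the square-root curve itself) would show there was never a discontinuity to explain.
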

satt

Phase transitions?

Shmi

Right, it works for a bunch of specific instances of this phenomenon, but how do you construct a model which describes both phase transitions and human learning (and a host of other similar effects in totally dissimilar substrates)?

Very interesting. What skills or practices have you noticed this "level up" associated with in particular?

Planning fallacy.
Being more automatically strategic.
Not falling for mysterious answers.
More consciously noticing the difference between positive/normative statements, or when they are mixed up.
More consciously noticing connotation.
Noticing yourself rationalizing.

There might be others, availability bias. :p

You might be offended, angry, hurt, or otherwise emotionally compromised. Similarly, you might be sleepy, inebriated, hungry, or otherwise physically compromised. You might be overconfident in your ability to handle a certain type of problem or situation, and hence not bother to think of other ways that might work better.[1]

This is in principle good advice, but I'd like to add a note of caution here - I feel that most "rationalists" actually follow it too closely, and end up losing (and rationalists should win).

Evolutionary processes have produced a brain which has different specialized modules for dealing with different situations, and the "purpose" of these modules is more in line with instrumental rationality, not epistemic rationality. Consequently, a good epistemic rationalist often must suppress the contribution of many of these modules (overconfidence, emotion, etc.).

The instrumental rationalist, on the other hand, had better pay close attention to emotions and overconfidence. Don't forget Egan's law - given human cognitive limitations, someone who applies sound epistemic rationality to full effect is not going to behave too differently from the highly successful person next to them who does not care about epistemic rationality at all. In other words, subtracting within reasonable bounds the effects of luck and privilege, anyone who you'd gladly trade most aspects of your life with is a superior instrumental rationalist, regardless of intelligence or learning.

Although I do think that, overall, instrumental rationality does improve when epistemic rationality improves, I think that some of the tensions between them have the unfortunate result of making strong epistemic rationalists err in systematic ways when it comes to instrumental rationality.

What does this mean practically? It means you have emotions for a reason. The parts of your brain which generate emotion are the ones which are calibrated for social behavior. If you feel yourself getting angry, it is likely that the behaviors which anger produces (confronting the aggressor) will in fact produce a positive result. Similarly, if you are sad, sad behaviors (crying, seeking support or temporarily withdrawing from the social scene, depending on the situation) will likely produce a positive result.

Same goes for cognitive biases. Fundamental attribution error produces positive results because it's better to assume that actions are innate to people rather than a result of random circumstances, since the latter don't hold any predictive value. The action resulting from overconfidence bias (risk taking) produces positive results as well. I can't even think of any biases that don't follow this pattern.

Behaviorally speaking, an instrumental rationalist should not correct a bias unless they have understood the reason the bias evolved and have adjusted the other variables accordingly. For example, if you are epistemically well calibrated in confidence, take care not to let that translate into instrumental underconfidence. I think the notion that the portions of your psyche which are useful when it comes to logic, reason, epistemic rationality, etc. will understand enough and react quickly enough to match the performance of systems which are specialized for this purpose is a bit misguided, and it is extremely important to let the appropriate systems guide behavior when it comes to instrumental rationality.

Caveat - Of course, your brain is designed to make viable offspring in the ancestral environment. 1) The environment has changed and 2) your goal isn't necessarily to have offspring. But still - there is a good deal of overlap between the two utility functions.

subtracting within reasonable bounds the effects of luck and privilege

That sounds like an overwhelming exception to me.

Yes, it is an overwhelming exception. In the real world these differences always exist, and you'll have to use your intuition to correct for them.

I'm trying to make the least convenient possible world where two randomly selected people are pulled from a crowd and are given the same, luckless task and one does better. Existing differences in brain-biology, priors, and previously gained knowledge still apply, while differences in resources and non-brain-related biology should be factored out. In these unnatural conditions, when it comes to that specific task, the one who did better is by definition a superior instrumental rationalist.

Agreed, but actually I would call a world where, if people who chew gum get more throat abscesses, one could reliably conclude that refraining from chewing gum is the right choice to prevent throat abscesses, a more convenient world than ours.

given human cognitive limitations, someone who applies sound epistemic rationality to full effect is not going to behave too differently from the highly successful person next to them who does not care about epistemic rationality at all

If it increases the probability of winning like that highly successful irrational person, it's still worth doing. I mean, if an irrational person has a 20% chance of becoming highly successful, and a rationality training could increase it to 40%, then I would prefer to take that rationality training, even if the rewards for the "winners" in both categories are the same.

But yes, we should remember that we use the human hardware, so we don't consistently overestimate the benefits of learning some rationality. Ideas which would work great for a self-improving AI may have less impressive impact on the sapient apes.

If it increases the probability of winning like that highly successful irrational person, it's still worth doing. I mean, if an irrational person has a 20% chance of becoming highly successful, and a rationality training could increase it to 40%, then I would prefer to take that rationality training, even if the rewards for the "winners" in both categories are the same.

The idea here is that even if "rationality training" (or even general intelligence) gives people an overall advantage, there is a possibility that there are systematic disadvantages in some areas which arise when a person repeatedly uses reason to override emotion and instinct.

Relying on reason and suppressing emotion and instinct is a cultural value, especially for people who call themselves "rationalists". We need to be aware of the pitfalls of doing that too much, because instrumentally speaking instinct and emotion do play a part in "computing" rational behavior.

Shmi

How has your own advice been working for you? Any examples would be great.

Can you be more specific as to what you mean? This question seems confused to me, but the fact that it's being upvoted means that others likely have similar questions, so I'd like to know as much as possible about what you're asking me before answering.

Shmi

Presumably, you have noticed some of the issues you describe in your own behavior, not just in others (unless you are far more rational than everyone else). For example, you might have caught yourself "looking for new tricks", or forgetting to "repeat continually forever," or noticing only in retrospect that you were "emotionally compromised" in a certain situation, or some other pitfall you describe in your post. After realizing what happened, you (presumably) did what you preached: "practice and changing your mindset" and found that it worked for you personally after a while. For example, you may have noticed that your training paid off and you behaved much more rationally in a situation similar to one where before you had lost your cool completely.

So, I asked you to share some examples where what you advocate actually worked for you.

Okay, I'll take a stab at answering. I'm kind of loath to do this because one of the main points of this post is that specific techniques are overemphasized and I think specific examples won't help with this, but perhaps a more expansive description on my part can avoid that pitfall.

In 2010, I read Patri Friedman's Self-Improvement or Shiny Distraction, which I consider to be an essentially correct indictment of things around here, or at least things around here circa 2010. This is the post that sort of jolted me out of complacency with regards to my own training.

In my experience with the martial arts, I consistently apply things that I've drilled a lot (to the point where it takes conscious effort to not do some things-- I was once called up to be a dummy for someone demonstrating a certain type of deceptive fencing attack and found it very difficult to not parry the attack, deception or no, since I had drilled the parry to that particular deception so often), I inconsistently apply things that I've drilled only a little, and I don't apply things that I haven't drilled at all.

Rationality is, in my experience, very much the same (others have noticed this too). I consistently apply thought patterns and principles that I've invested serious time and effort into drilling, I occasionally apply thought patterns and principles that I've thought about a fair amount but haven't put really serious effort into, and I don't apply thought patterns or principles that I've heard of but not really thought about. I'm actually rather embarrassed that I didn't notice this until reading Patri's post in 2010, but so it goes.

One example of a specific rationality skill that I have invested time and effort into drilling is that of keeping my identity small. I read a lot and I read fast, and hence when I was first linked to a Paul Graham essay I read all of them in one sitting. Keep Your Identity Small stuck with me the most, but for a while it was something I sort of believed in but hadn't applied. Here's some evidence of me not having applied it-- note the date.

However, at one point in early 2011 I noticed myself feeling personally insulted when someone was making fun of a group that I used to belong to, and more importantly I noticed that that was something that I wasn't supposed to do anymore. How could this be?

Well, quite frankly, it was because despite high degrees of theoretical knowledge about rationality, I lacked the practice hours required to be good at it. Unfortunately, most rationality skills are rare enough that knowing a little bit beyond a password-guessing level makes you seem very advanced relative to others. But rationality, except in certain competitive situations, isn't about being better than others, it's about being the best you can be.

So to make a long story short, I devised methods and put in the practice hours and got better, and now I actually know a few things instead of sort of knowing a few things. I winced at how low-level I used to be when I read that post from 2010, but all in all that's probably a good sign. After all, if I didn't think my old writing was silly and confused, wouldn't that indicate that I hadn't been progressing since then? Three years of progression should yield noticeably different results.

maia

I devised methods and put in the practice hours and got better

Could you unpack the training montage a bit? I don't really know what you mean by this.

That I cannot do, as it really would be just describing specific techniques. I may do so in a later post, though, and will link it here if and when that happens.

I spent a fair amount of time in martial arts and have a similar attitude toward generalization of kata/form. This idea is standing behind my consistent emphasis on the benefits of coding (particularly TDD) for this community. It builds thought patterns that are useful for tasks that computers typically perform better.

right goals

Pardon?

“Collect evidence about your environment from various sources; update your model of reality based on evidence collected; act in accordance with what your model of reality indicates is best for achieving your goals; repeat continually forever” would be a great candidate for The One Sentence.

circumstances where people would

Some words are missing here.

Fixed-- thanks for the heads up.

Having the right goals is somewhat separate from (my view of) rationality in that rationality is a set of methods oriented towards achieving one's goals and can be applied to any sort of goal, right or wrong as that goal may be. While "selecting the right goals" can itself be a goal that you can use rationality to help with, in principle the methods of rationality can be applied to assist you in any goal.

One might (rightly) point out that applying the methods of rationality to goals that are not desirable may be hazardous for you or for those around you, but this is true for nearly any tool. Increasing one's ability to influence the world will always carry a risk of you influencing the world in a negative direction. Luckily, rationality can be used to help verify that what you're doing is likely to have positive consequences-- it is hence one of very few tools that can actually help the user use it better!