An excellent way to improve one's skill as a rationalist is to identify one's strengths and weaknesses, and then to focus effort where it can do the most good (often the areas where one is weakest). This is especially useful if one is specific about the component skills of rationality and describes them in detail. 

In order to facilitate improving my own and others' rationality, I am posting this list of 11 core rationalist skills, thanks almost entirely to Anna Salamon.

  • Keep your eyes on the prize.  Focus your modeling efforts on the issues most relevant to your goals. Be able to quickly refocus a train of thought or discussion on the most important issues, and be able and willing to quickly kill tempting tangents. Periodically stop and ask yourself "Is what I am thinking about at the moment really an effective way to achieve my stated goals?".
  • Entangle yourself with the evidence. Realize that true opinions don't come from nowhere and can't just be painted in by choice or intuition or consensus. Realize that it is information-theoretically impossible to reliably get true beliefs unless you actually get reliably pushed around by the evidence. Distinguish between knowledge and feelings. 
  • Be Curious: Look for interesting details; resist cached thoughts; respond to unexpected observations and thoughts. Learn to acquire interesting angles, and to make connections to the task at hand.
  • Aumann-update: Update to the right extent from others' opinions.  Borrow reasonable practices for grocery shopping, social interaction, etc from those who have already worked out what the best way to do these things is.  Take relevant experts seriously. Use outside views to estimate the outcome of one's own projects and the merit of one's own clever ideas. Be willing to depart from consensus in cases where there is sufficient evidence that the consensus is mistaken or that the common practice doesn't serve its ostensible purposes. Have correct models of the causes of others’ beliefs and psychological states, so that you can tell the difference between cases where the vast majority of people believe falsehoods for some specific reason, and cases where the vast majority actually knows best. 
  • Know standard Biases:  Have conscious knowledge of common human error patterns, including the heuristics and biases literature; practice using this knowledge in real-world situations to identify probable errors; practice making predictions and update from the track record of your own accurate and inaccurate judgments. 
  • Know Probability theory:  Have conscious knowledge of probability theory; practice applying probability theory in real-world instances and seeing e.g. how much to penalize conjunctions, how to regress to the mean, etc.
  • Know your own mind:  Have a moment-to-moment awareness of your own emotions and of the motivations guiding your thoughts. (Are you searching for justifications? Shying away from certain considerations out of fear?) Be willing to acknowledge all of yourself, including the petty and unsavory parts.  Knowledge of your own track record of accurate and inaccurate predictions, including in cases where fear, pride, etc. were strong.
  • Be well calibrated: Avoid over- and under-confidence.  Know how much to trust your judgments in different circumstances.  Keep track of many levels of confidence, precision, and surprisingness; dare to predict as much as you can, and update as you test the limits of your knowledge.  Develop as precise a world-model as you can manage.  (Tom McCabe wrote a quiz to test some simple aspects of your calibration.)  
  • Use analytic philosophy: understand the habits of thought taught in analytic philosophy; the habit of following out lines of thought, of taking on one issue at a time, of searching for counter-examples, and of carefully keeping distinct concepts distinct (e.g. not confusing heat and temperature; free will and lack of determinism; systems for talking about Peano arithmetic and systems for talking about systems for talking about Peano arithmetic).
  • Resist Thoughtcrime.  Keep truth and virtue utterly distinct in your mind.  Give no quarter to claims of the sort "I must believe X, because otherwise I will be {racist / without morality / at risk of coming late to work / kicked out of the group / similar to stupid people}".  Decide that it is better to merely lie to others than to lie to others and to yourself.  Realize that goals and world maps can be separated; one can pursue the goal of fighting against climate change without deliberately fooling oneself into having too high an estimate (given the evidence) of the probability that the anthropogenic climate change hypothesis is correct. 
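Two of the skills above — applying probability theory (penalizing conjunctions, regressing to the mean) and tracking calibration — can be made concrete in code. The sketch below is illustrative only, not from the post; every number in it is made up:

```python
# Penalizing conjunctions: by the product rule, a conjunction can never
# be more probable than its least probable conjunct, so every added
# detail should lower (or at best preserve) your estimate.
p_teller = 0.05                # illustrative: "Linda is a bank teller"
p_feminist_given_teller = 0.3  # illustrative conditional probability
p_both = p_teller * p_feminist_given_teller
assert p_both <= p_teller      # the penalty, guaranteed by the product rule

# Regressing to the mean: shrink an extreme observation toward the
# population mean in proportion to how reliable the measurement is.
def regress_to_mean(observed, mean, reliability):
    """reliability in [0, 1]: 1 = noiseless measurement, 0 = pure noise."""
    return mean + reliability * (observed - mean)

# An extreme score measured with reliability 0.5 is predicted to land
# halfway back toward the mean on remeasurement.
print(regress_to_mean(observed=140, mean=100, reliability=0.5))  # → 120.0

# Tracking calibration: score stated confidences against outcomes with
# the Brier score (lower is better; always guessing 0.5 scores 0.25).
predictions = [
    (0.9, True),   # "90% sure I'll finish the report today" -- it happened
    (0.7, False),  # "70% sure the bus is on time" -- it wasn't
    (0.6, True),
]

def brier_score(preds):
    return sum((p - (1.0 if outcome else 0.0)) ** 2
               for p, outcome in preds) / len(preds)

print(round(brier_score(predictions), 4))  # → 0.22
```

Keeping such a log over many predictions, and binning it by stated confidence, shows directly whether your "90% sure" claims come true about 90% of the time.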
Jordan:

Great post.

If only we could flesh this out into a four-year class and call it high school...

This is an excellent list, and would serve well as an introduction to Less Wrong.

One of the more common irrationalities I see on the internet is the sin of foolish consistency: people post something without much thought, and then, when evidence or arguments arise that undermine their statement, they are overly dismissive because they do not want to appear inconsistent.

Actually make written-down predictions about the future. If you don't make real, written-down predictions, you will never know how bad you are at predicting the world around you.

[anonymous]:

The thing is, if you actually make a written down prediction, you're more likely to alter your actions purely for the sake of making it come true.

You say that like it's a bad thing.

[anonymous]:

It is a bad thing, if you predict that something bad will happen.

If you don't foresee/predict bad things happening, then you can't do anything to prevent them. UnFriendly AI anyone?

This reminds me of an architectural designer I used to work for. When we were laying out what needed to be done, I would try to foresee what could go wrong so I could head it off. I finally quit trying to do that with him because he kept accusing me of "negativity," as though thinking about bad things makes them happen. (Since I was the one actually doing most of the work, I still tried to predict what could go wrong and head it off; I just quit talking to him about it.)

[anonymous]:

I can imagine someone predicting something bad happening, seeing that it probably won't happen, and causing it to happen in order to prove they were right.

It's probably best to do this with things that we have almost no control over.

[anonymous]:

Quite right.

It's the poor craftsman who blames his tools!

(Or as Gibbon says, 'The wind and waves are always on the side of the ablest navigators.')

Your work context may provide you with frequent opportunities to do that.

For instance, if you are a programmer, you can make predictions about how long a given task is going to take, or alternately how many tasks you can take on in a given period.

If you train or teach people, you can predict what they will have understood at the end of a given session, and test those predictions by asking questions at the end of the session.

More generally, predictions of the form "I will achieve objective X by time T" are a useful class, as you normally have quite a lot of the relevant information, which ought to narrow your confidence bounds.

ETA: keeping appointments is another similar class. If you're never late, you're probably underconfident. (See Umeshisms.) You should have a general degree of confidence in your timeliness, e.g. "I will seek to show up on time 80% of the time." You may adjust that depending on criticality in given contexts, e.g. "...except that I hate to disappoint employers, so I'll show up to work on time 95% of the time".

Roko:

How many people actually bother to do this?

Written predictions seem like they would be a lot of work, but maybe this is indeed worth trying.

Prediction Book is a really promising app, but it unfortunately has lots of problems, and it doesn't look like it's being improved very much at all.

Among the problems are:

  • performance is terrible, making it really frustrating to use, because you're forever waiting for pages to load
  • the UI is confusing, which leads to lots of wrong judgments, though it has improved from what it was
  • the right/wrong judgment is global, not per user, so if 2 people make a prediction on "I will lose 5 pounds before Jan. 1, 2010", then it can only be judged right or wrong for both of them, not right for one and wrong for the other. The only alternative is to make everything private, which eliminates the benefits of seeing other people's estimates on the same issue and of the comments and feedback from other users.
Jack:
  1. Despite any problems, it is still a pretty convenient place to record predictions and that was the topic at issue.
  2. I agree with the first two complaints.
  3. I'm not sure I would want predictions with indexicals referring to the user who posted them to apply to all users. That makes little sense, since the prediction that you lose weight is totally different from the prediction that anyone else does. It is a little strange to see "I will do x" and not estimate as if it were yourself, but I have no idea why it would be helpful to see "other people's estimates on the same issue," since it isn't the same issue. What would be the point of a composite estimate? Maybe it would be nice to have a tool that grouped predictions like this so people could talk about them, but that is hardly the most pressing issue.
  4. You've voted in the site's feedback section, yes?

Yes, it's definitely convenient. I guess I am a bit frustrated with PB because I thought it had tremendous potential and was very excited about using it. After using it for a while now and making more than a hundred predictions with it, the problems have worn me down and there seems little chance that it'll be improved, since they've said they'll only improve it if it's used a lot (and it doesn't get very much use because of all the problems).

The issue with not allowing the judgments of public predictions to differ per user is that there are lots of public predictions that include indexicals and that multiple people vote on (I've done it myself, before I realized what it leads to), and there will probably continue to be plenty of those kinds of predictions, since there is no suggestion from PB that public predictions should not include indexicals or that people should avoid providing estimates on public predictions that include indexicals.

What happens is that multiple people make predictions, and then the judgment swings back and forth between Right and Wrong as different people judge it Right or Wrong for them, many of them probably thinking that they're rendering judgment for themselves and not for everybody else as well. Now, I make everything private to avoid these kinds of problems, but a site like that with most content private is much less useful than it would be if things were more public, and people could get ideas about things to make predictions on from other people and could comment on each others' predictions.

Another problem is that if, every time I see a prediction with an indexical (of which there are many) that I would like to add an estimate for, I have to create a new prediction and copy/paste the text or type it out again, then it becomes too much of a hassle -- especially given how slow everything is. I don't care how it's implemented, but I should just have to add an estimate and click a button. Anything more than that is too much work when the exact wording for the prediction I want to make already exists and I'm looking at it. Perhaps they could add another button to allow making a private estimate for that prediction, and then allow a private judgment for it that is only visible to the user as well.

And no, I didn't vote in the feedback. Requiring your users to sign up for a different account in order to provide feedback is just obnoxious.

matt:

Dude, we're trying to help on many fronts. We host OB at no charge; we developed and host LW at no charge; we wrote PB and offer it at no charge. If you tried a little harder, do you think you could come up with an explanation for why we'd use an external feedback service other than that we're obnoxious?

ETA 08:39:51 UTC: Sorry - that was overly snarky. You obviously want to be a passionate user but are being let down by our lack of time to tune and improve the site. Watch for a top level post on PB and its future coming soon.

I didn't say you or anybody else developing PB is obnoxious. I said a certain behavior (requiring signing up for 2 accounts) was obnoxious. And since behaviors aren't intrinsically obnoxious or not, I obviously meant that I judge that requiring users to sign up for a second account to give feedback is obnoxious. Colloquially, this just means that I find it annoying, and it doesn't imply that you're trying to annoy anyone or say anything about you as a person. I find the behavior annoying, which I gave as an explanation for why I didn't bother to provide feedback.

I can of course imagine plenty of reasons why you'd use an external feedback service, just like I can imagine plenty of reasons that the performance would be what it is, none of them involving any kind of malevolent intention or lack of skill on your part. Nevertheless, I and quite a few others find PB frustrating to use, which is a real shame for an app that holds so much promise.

For what it's worth, I applaud your pro bono work for OB and LW, and I hope you keep up the good work. I think PB holds incredible promise, and I hope that you do find a way to improve it.

No apology necessary. I'd probably react similarly if I felt that somebody was being unconstructively critical of an app that I created.

I was just frustrated, as you guessed, because I really care about the idea and the app, and I see so much promise there. I should have just said to the original poster that I didn't provide feedback because I didn't want to sign up for a second account, but my frustration made me get snarky, which I apologize for.

Thanks again to you and everybody else at Tricycle.

It's not really that there's much time involved.

Let's say you are waiting in line at the supermarket to pay. It costs you no additional time to take out a piece of paper and note down an estimate of the amount you'll have to pay.

It's probably rather like quitting smoking. Going through life while being fuzzy about your expectations is just easier than making predictions.

However, all that talk about cognitive biases doesn't do much if you just gather knowledge but don't change any of your deep-seated habits.

Roko:

It would cost me effort and thought cycles; I would pay $100 a year at least to have this prediction/calibration thing done by magic.

Roko:

Though, one alternative that seems to work is making bets with people. It seems to work because the thought of gaining coolness/status points over the other person overcomes the resistance to putting in the effort of thinking about the prediction. Apparently some major financial firms (Renaissance?) have a culture where you are actively encouraged to do this.

Actually want an accurate map, because you have Something to protect.

Why does protection have to be everyone's Capitalized goal?

Yeah, what's wrong with having Something to Destroy?

:-)

You probably need to find Something to understand that.

I am still looking, myself.

So your Capitalized Goal is having Something to Look For?

Ha. You could put it that way. I still don't know what Something is, though.

Or more succinctly and broadly, learn to:

  • pay attention

  • correct bias

  • anticipate bias

  • estimate well

With a single specific enumeration of the means to accomplish these competencies, you risk ignoring other possible curricula. And you encourage the same blind spots in the entire community of aspiring rationalists so educated.

Proof of how dangerous this sort of list can be.

I entirely forgot about:

  • act effectively

After all, how can you advance even pure epistemic rationality without constructing your own experiments on the world?

Also, the first eleven Virtues of Rationality should be removed from the list.

I want to add "be wary of conclusions that make you feel safer or require less action," but that may just be one of the "standard biases". (I have come to the conclusion that I don't have time to read the referenced book just now, but I suppose I should be suspicious of that conclusion, because the alternative requires more work and may challenge the validity of this comment, thus making me feel less safe in making it...)

djcb:

I like the idea of a list -- maybe it should really be limited to some fixed number -- say, 13, to clarify the rationalist stance on superstition :)

Anyway, perhaps the current list is somewhat unbalanced -- for example, before including analytic philosophy, I think familiarity with, e.g., game theory is much more important.

Also, the first couple of points are simple rules to follow, while many of the later ones point to fields of knowledge rather than giving something short to keep in mind. There's something to be said for both, but it might be clearer to have separate lists: a list of short rules one can remember ("The map is not the territory"), and a list of fields that are important, such as parts of economics, game theory, information theory, and so on.

Bravo. Were this a religion, I'd be a member. Wait, I already am. Or is that self-contradictory?