Related to: Go forth and create the art!, A sense that more is possible.
If you talk to any skilled practitioner of an art, they have a sense of the depths beyond their present skill level. This sense is important. To create an art, or to learn one, one must have a sense of the goal.
By contrast, when I chat with many at Less Wrong meet-ups, I often hear a sense that mastering the sequences will take one most of the way to "rationality", and that the main thing to do, after reading the sequences, is to go and share the info with others. I would therefore like to sketch the larger thing that I hope our rationality can become. I have found this picture useful for improving my own rationality; I hope you may find it useful too.
To avoid semantic disputes, I tried to generate my picture of "rationality" by asking not "What should 'rationality' be?" but "What is the total set of simple, domain-general hacks that can help humans understand the most important things, and achieve our goals?" or "What simple tricks can help turn humans -- haphazard evolutionary amalgams that we are -- into coherent agents?"
The branches
The larger "rationality" I have in mind would include some arts that are well-taught on Less Wrong, others that don't exist yet at all, and others that have been developed by outside communities from which we could profitably steal.
Specifically, a more complete art of rationality might teach the following arts:
1. Having beliefs: the art of having one's near-mode anticipations and far-mode symbols work together, with the intent of predicting the outside world. (The Sequences, especially Mysterious Answers to Mysterious Questions, currently help tremendously with these skills.)
2. Making your beliefs less buggy -- about distant or abstract subjects. This art aims to let humans talk about abstract domains in which the data doesn’t hit you upside the head -- such as religion, politics, the course of the future, or the efficacy of cryonics -- without the conversation turning immediately into nonsense. (The Sequences, and other discussions of common biases and of the mathematics of evidence, are helpful here as well.)
3. Making your beliefs less buggy -- about yourself. Absent training, our models of ourselves are about as nonsense-prone as our models of distant or abstract subjects. We often have confident, false models of what emotions we are experiencing, why we are taking a given action, how our skills and traits compare to those around us, how long a given project will take, what will and won’t make us happy, and what our goals are. This holds even for many who've studied the Sequences and who are reasonably decent on abstract topics; other skills are needed.[1]
4. Chasing the most important info: the art of noticing what knowledge would actually help you. A master of this art would continually ask themselves: "What do I most want to accomplish? What do I need to know, in order to achieve that thing?". They would have large amounts of cached knowledge about how to make money, how to be happy, how to learn deeply, how to effectively improve the world, and how to achieve other common goals. They would continually ask themselves where telling details could be found, and they would become interested in any domain that could help them.[2]
As with the art of self-knowledge, Less Wrong has barely started on this one.
5. Benefiting from everyone else's knowledge. This branch of rationality would teach us:
- Which sorts of experts, and which sorts of published studies, are trustworthy, and in what respects; and
- How to do an effective literature search, read effectively, interview experts effectively, or otherwise locate the info we need.
Less Wrong and Overcoming Bias have covered pieces of this, but I'd bet there's good knowledge to be found elsewhere.
6. The art of problem-solving: how to brainstorm up a solution once you already know what the question is. Eliezer has described parts of such an art for philosophy problems[3], and Luke Grecki summarized Polya's "How to Solve It" for math problems, but huge gaps remain.
7. Having goals. In our natural state, humans do not have goals in any very useful sense. This art would change that, e.g. by such techniques as writing down and operationalizing one's goals, measuring progress, making plans, and working through one's emotional responses until one is able, as a whole person, to fully choose a particular course.
Much help with goal-achievement can be found in the self-help and business communities; it would be neat to see that knowledge fused with Less Wrong.[4]
8. Making your goals less buggy. Even insofar as we do act on coherent goals, our goals are often "buggy" in the sense of carrying us in directions we will predictably regret. Some skills that can help include:
- Skill in noticing and naming your emotions and motivations (art #3 above).
- Understanding what ethics is, and what you are. Sorting out religion, free will, fake utility functions, social signaling patterns, and other topics that disorient many.
- Being on the look-out for lost purposes, cached goals or values, defensiveness, wire-heading patterns, and other tricks your brain tends to play on you.
- Being aware of, and accepting, as large a part of yourself as possible.
Parts of a single discipline
Geometry, algebra, and arithmetic are all “branches of mathematics”, rather than stand-alone arts. They are all “branches of mathematics” because they build on a common set of thinking skills, and because skill in each of these branches can boost one’s problem-solving ability in other branches of mathematics.
My impression is that the above arts are all branches of a single discipline ("rationality") in roughly the same sense in which arithmetic, algebra, etc. are branches of mathematics. For one thing, all of these arts have a common foundation: they all involve noticing what one's brain is doing, and asking if those mental habits are serving one's purpose or if some other habits would work better.
For another thing, skill at many of the above arts can help with many of the others. For example, knowing your motivations can help you debug your reasoning, since you’re much more likely to find the truth when you want the truth. Asking “what would I expect to see, if my theory were true? if it were false?” is useful for both modeling the future and modeling yourself. Acquiring coherent goals makes it easier to wholeheartedly debug one's beliefs, without needing to flinch away. And so on.
It therefore seems plausible that jointly studying the entire above discipline (including whatever branches I left out) would give one a much larger cross-domain power boost, and higher performance in each of the above arts, than one gets from only learning the Less Wrong sequences.
[1] That is: Bayes' theorem and other rules of reasoning do work for inferring knowledge about oneself. But Less Wrong hasn't walked us through the basics of applying them to self-modeling, such as noting that one must infer one's motives through a process of ordinary inference (“What actions would I expect to see if I were trying to cooperate? What actions would I expect to see if I were instead trying to vent anger?") and not by consulting one's verbal self-model. It also has said very little about how to gather data about oneself, how to reduce one's biases on the subject, etc. (although Alicorn's Luminosity sequence deserves mention).
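The kind of inference footnote [1] gestures at can be made concrete with a toy Bayesian update. Everything below is invented for illustration: the two candidate motives, the observed actions, and all the probabilities are made-up numbers, not a claim about real psychology.

```python
def posterior(prior, likelihoods, observations):
    """Update a prior over hypotheses given a list of observations,
    treating the observations as independent given each hypothesis.

    prior: dict mapping hypothesis -> prior probability
    likelihoods: dict mapping hypothesis -> {observation: P(obs | hypothesis)}
    observations: list of observed actions
    """
    post = dict(prior)
    for obs in observations:
        # Multiply in the likelihood of this observation under each hypothesis...
        post = {h: p * likelihoods[h][obs] for h, p in post.items()}
        # ...and renormalize so the probabilities sum to 1.
        total = sum(post.values())
        post = {h: p / total for h, p in post.items()}
    return post

# Hypotheses about my motive in a recent conversation (made-up numbers):
prior = {"cooperating": 0.7, "venting anger": 0.3}
likelihoods = {
    "cooperating":   {"interrupted": 0.2, "raised voice": 0.1},
    "venting anger": {"interrupted": 0.6, "raised voice": 0.5},
}
observed = ["interrupted", "raised voice"]

print(posterior(prior, likelihoods, observed))
```

Despite a prior favoring "cooperating", the observed actions shift most of the probability onto "venting anger" — which is the footnote's point: the verdict comes from checking actions against what each motive predicts, not from asking one's verbal self-model.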
[2] Michael Vassar calls this skill “lightness of curiosity”, by analogy to the skill “lightness of beliefs” from Eliezer’s 12 virtues of rationality. The idea here is that a good rationalist should have a curiosity that moves immediately as they learn what information can help them, much as a good rationalist should have beliefs that move immediately as they learn which way the evidence points. Just as a good rationalist should not call reality "surprising", so also a good rationalist should not call useful domains "boring".
[3] I.e., Eliezer's posts describe parts of an art for cases such as "free will" in which the initial question is confused, and must be dissolved rather than answered. He also notes the virtue of sustained effort.
[4] My favorite exceptions are Eliezer's post Something to protect and Alicorn's City of Lights technique. If you're looking for good reading offsite on how to have coherent goals, I'd second Patri's recommendation of Brian Tracy's books.