After trying to figure out where this response would fit best, I'm splitting the difference: I'll put a summary here, and if it's not obviously stupid and garners some comments, I'll post the full thing on its own.
I've read some of the sequences, but not all; I started to, and then wandered off. Here are my theories as to why, with brief explanations.
1) The minimum suggested reading is not just long, it's deceptively long.
The quantity by itself is a pretty big hurdle to someone who's only just developing an interest in its topics, and the way the sequences are indexed hides the actual amount of content behind categorized links. This is the wrong direction in which to surprise the would-be reader. And that's just talking about the core sequences.
2) Many of the sequences are either not interesting to me, or are presented in ways that make them appear not to be.
If the topic actually doesn't interest me, that's fine, because I presumably won't be trying to discuss it, either. But some of the sequence titles are more pithy than informative, and some of the introductory text is dissuasive where it tries to be inviting; few of them give a clear summary of what the subject is and who needs to read it.
3) Even the ones that are interesting to me contain way more information, or at least text, than I needed.
I don't think it's actually true that every new reader needs to read all of the sequences. I'm a bad example, because there's a lot in them I've never heard of or even thought about, but I don't think that's true of everyone who walks up to LW for the first time. On the other hand, just because I'd never heard of Bayes's Theorem by name doesn't mean that I need a huge missive to explain it to me. What I turned out to need was an example problem, the fact that the general form of the math I used to solve it is named after a guy called Bayes, and an explanation of how the term is used in prose. I was frustrated by having to go through a very long introduction in order to get those things (and I didn't entirely get the last one).
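For what it's worth, here's roughly the size of example that would have sufficed -- a sketch, with disease-test numbers I invented for illustration, not drawn from any LW material:

```python
# Bayes's Theorem on a toy diagnostic problem. All numbers are
# invented for illustration: 1% of people have the condition, the
# test catches 90% of true cases, and it false-alarms on 5% of
# healthy people.
p_sick = 0.01               # prior P(sick)
p_pos_given_sick = 0.90     # likelihood P(positive | sick)
p_pos_given_healthy = 0.05  # false-positive rate P(positive | healthy)

# P(positive), by the law of total probability
p_pos = (p_pos_given_sick * p_sick
         + p_pos_given_healthy * (1 - p_sick))

# Bayes's Theorem: P(sick | positive) = P(positive | sick) P(sick) / P(positive)
p_sick_given_pos = p_pos_given_sick * p_sick / p_pos

print(f"P(sick | positive test) = {p_sick_given_pos:.3f}")  # ~0.154
```

A positive result moves the probability from 1% to about 15%, not to 90%. One computation like that, plus a sentence on how "Bayesian" is used in prose, was most of what I needed.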
My proposal for addressing these is to create a single introductory page with inline links to glossary definitions, and from there to further reading. The idea is that more information is available up front, and a new reader can more easily prioritize the articles based on their own knowledge and interests; it would also provide a general overview of the topics LW addresses. (The About page is a good introduction to the site, but not the subjects.) On a quick search, the glossary appears to have been suggested before but does not yet exist--unless I just can't find it, in which case it's not doing much good. There are parts of this I'm not qualified to do, but I'd be happy to donate time to the ones that I am.
To be clear, do you actually think that time spent reading later posts has been more valuable than marginal time on the sequences would have been? To me that seems like reading Discover Magazine after dropping your intro to mechanics textbook because the latter seems to just tell you things that are obvious.
Related to: Humans are not automatically strategic, The mystery of the haunted rationalist, Striving to accept, Taking ideas seriously
I argue that many techniques for epistemic rationality, as taught on LW, amount to techniques for reducing compartmentalization. I argue further that when these same techniques are extended to a larger portion of the mind, they boost instrumental, as well as epistemic, rationality.
Imagine trying to design an intelligent mind.
One problem you’d face is designing its goal.
Every time you designed a goal-indicator, the mind would increase action patterns that hit that indicator[1]. Amongst these reinforced actions would be “wireheading patterns” that fooled the indicator but did not hit your intended goal. For example, if your creature gains reward from internal indicators of status, it will increase those indicators -- including by such methods as surrounding itself with people who agree with it, or convincing itself that it understood important matters others had missed. It would be hard-wired to act as though “believing makes it so”.
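Here is a minimal sketch of that failure mode, under footnote [1]'s assumption of a simple reinforcement learner; the actions, payoffs, and update rule are all invented for illustration:

```python
import random

# A learner rewarded by a goal *indicator* rather than the goal itself.
# "work" advances the real goal and the indicator; "wirehead" inflates
# the indicator more cheaply while the real goal stands still.
# All numbers are invented for illustration.
EFFECTS = {
    #  action:   (indicator reward seen, true-goal progress unseen)
    "work":      (1.0, 1.0),
    "wirehead":  (1.5, 0.0),  # fools the indicator, hits no real goal
}

values = {"work": 0.0, "wirehead": 0.0}  # learned action values
true_goal = 0.0
alpha, epsilon = 0.1, 0.1  # learning rate, exploration rate

for step in range(5000):
    # epsilon-greedy choice: usually take the highest-valued action
    if random.random() < epsilon:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    indicator_reward, goal_progress = EFFECTS[action]
    # the update sees only the indicator, so wireheading gets reinforced
    values[action] += alpha * (indicator_reward - values[action])
    true_goal += goal_progress

print(values)     # "wirehead" ends with the higher learned value
print(true_goal)  # real progress stalls once wireheading dominates
```

Nothing in the update rule can distinguish "the indicator went up because I did the thing" from "the indicator went up because I fooled it"; that blindness is the whole problem.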
A second problem you’d face is propagating evidence. Whenever your creature encounters some new evidence E, you’ll want it to update its model of “events like E”. But how do you tell which events are “like E”? The soup of hypotheses, intuition-fragments, and other pieces of world-model is too large, and its processing too limited, to update each belief after each piece of evidence. Even absent wireheading-driven tendencies to keep rewarding beliefs isolated from threatening evidence, you’ll probably have trouble with accidental compartmentalization (where the creature doesn’t update relevant beliefs simply because your heuristics for what to update were imperfect).
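To make the accidental case concrete, here's a toy sketch; the beliefs, tags, and one-step similarity heuristic are all invented for illustration:

```python
# Accidental compartmentalization: evidence only updates beliefs that a
# cheap similarity heuristic judges to be "like E". Beliefs, tags, and
# the heuristic are invented for illustration.
beliefs = {
    "ghosts exist":             {"tags": {"physics", "spirits"}, "credence": 0.30},
    "haunted houses are risky": {"tags": {"houses", "safety"},   "credence": 0.60},
}

def update(evidence_tags, shift):
    """Heuristic propagation: only beliefs sharing a tag get revised."""
    for belief in beliefs.values():
        if belief["tags"] & evidence_tags:
            belief["credence"] = min(1.0, max(0.0, belief["credence"] + shift))

# Strong evidence against ghosts arrives, filed under physics/spirits:
update({"physics", "spirits"}, shift=-0.25)

print(beliefs["ghosts exist"]["credence"])              # drops to 0.05
print(beliefs["haunted houses are risky"]["credence"])  # stuck at 0.60
```

The second belief logically depends on the first, but because the heuristic files them under different tags, the evidence never reaches it -- no motivated reasoning required.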
Evolution, AFAICT, faced just these problems. The result is a familiar set of rationality gaps:
I. Accidental compartmentalization
a. Belief compartmentalization: We often fail to propagate changes to our abstract beliefs (and we often make predictions using un-updated, specialized components of our soup of world-model). Thus, learning modus tollens in the abstract doesn’t automatically change your answer to the Wason card test. Learning about conservation of energy doesn’t automatically change your fear when a bowling ball is hurtling toward you. Understanding there aren’t ghosts doesn’t automatically change your anticipations in a haunted house. (See Will's excellent post Taking ideas seriously for further discussion.)
b. Goal compartmentalization: We often fail to propagate information about what “losing weight”, “being a skilled thinker”, or other goals would concretely do for us. We also fail to propagate information about what specific actions could further these goals. Thus (absent the concrete visualizations recommended in many self-help books) our goals fail to pull our behavior, because although we verbally know the consequences of our actions, we don’t visualize those consequences on the “near-mode” level that prompts emotions and actions.
c. Failure to flush garbage: We often continue to work toward a subgoal that no longer serves our actual goal (creating what Eliezer calls a lost purpose). Similarly, we often continue to discuss, and care about, concepts that have lost all their moorings in anticipated sense-experience.
II. Reinforced compartmentalization
Type 1: Distorted reward signals. If X is a reinforced goal-indicator (“I have status”; “my mother approves of me”[2]), thinking patterns that bias us toward X will be reinforced. We will learn to compartmentalize away anti-X information.
The problem is not just conscious wishful thinking; it is a sphexish, half-alien mind that distorts your beliefs by reinforcing motives, angles of approach or analysis, choices of reading material or discussion partners, etc., so as to bias you toward X and to compartmentalize away anti-X information.
Impairment to epistemic rationality:
Impairment to instrumental rationality:
Type 2: “Ugh fields”, or “no thought zones”. If we have a large amount of anti-X information cluttering up our brains, we may avoid thinking about X at all, since considering X tends to reduce compartmentalization and send us pain signals. Sometimes, this involves not-acting in entire domains of our lives, lest we be reminded of X.
Impairment to epistemic rationality:
Impairment to instrumental rationality:
Type 3: Wireheading patterns that fill our lives, and prevent other thoughts and actions.[3]
Impairment to epistemic rationality:
Impairment to instrumental rationality:
Strategies for reducing compartmentalization:
A huge portion of both Less Wrong and the self-help and business literatures amounts to techniques for integrating your thoughts -- for bringing your whole mind, with all your intelligence and energy, to bear on your problems. Many fall into the following categories, each of which boosts both epistemic and instrumental rationality:
1. Something to protect (or, as Napoleon Hill has it, definite major purpose[4]): Find an external goal that you care deeply about. Visualize the goal; remind yourself of what it can do for you; integrate the desire across your mind. Then, use your desire to achieve this goal, and your knowledge that actual inquiry and effective actions can help you achieve it, to reduce wireheading temptations.
2. Translate evidence, and goals, into terms that are easy to understand. It’s more painful to remember “Aunt Jane is dead” than “Aunt Jane passed away” because more of your brain understands the first sentence. Therefore use simple, concrete terms, whether you’re saying “Aunt Jane is dead” or “Damn, I don’t know calculus” or “Light bends when it hits water” or “I will earn a million dollars”. Work to update your whole web of beliefs and goals.
3. Reduce the emotional gradients that fuel wireheading. Leave yourself lines of retreat. Recite the litanies of Gendlin and Tarski; visualize their meaning, concretely, for the task or ugh field bending your thoughts. Think through the painful information; notice the expected update, so that you need not fear further thought. On your to-do list, write concrete "next actions", rather than vague goals with no clear steps, to make the list less scary.
4. Be aware of common patterns of wireheading or compartmentalization, such as failure to acknowledge sunk costs. Build habits, and perhaps identity, around correcting these patterns.
I suspect that if we follow up on these parallels, and learn strategies for decompartmentalizing not only our far-mode beliefs, but also our near-mode beliefs, our models of ourselves, our curiosity, and our near- and far-mode goals and emotions, we can create a more powerful rationality -- a rationality for the whole mind.
[1] Assuming it's a reinforcement learner, temporal difference learner, perceptual control system, or similar.
[2] We receive reward/pain not only from "primitive reinforcers" such as smiles, sugar, warmth, and the like, but also from many long-term predictors of those reinforcers (or predictors of predictors of those reinforcers, or...), such as one's LW karma score, one's number theory prowess, or a specific person's esteem. We probably wish to regard some of these learned reinforcers as part of our real preferences.
[3] Arguably, wireheading gives us fewer long-term reward signals than we would achieve from its absence. Why does it persist, then? I would guess that the answer is not so much hyperbolic discounting (although this does play a role) as local hill-climbing behavior; the simple, parallel systems that fuel most of our learning can't see how to get from "avoid thinking about my bill" to "genuinely relax, after paying my bill". You, though, can see such paths -- and if you search for such improvements and visualize the rewards, it may be easier to reduce wireheading. (A toy sketch of this hill-climbing trap appears after the footnotes.)
[4] I'm not recommending Napoleon Hill. But even this unusually LW-unfriendly self-help book seems to get most points right, at least in the linked summary. You might try reading the summary as an exercise in recognizing mostly-accurate statements when expressed in the enemy's vocabulary.
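Finally, the toy sketch of the hill-climbing trap described in [3]; the states, rewards, and greedy acceptance rule are all invented for illustration:

```python
# A greedy learner that only accepts one-step reward improvements never
# crosses the painful intermediate state, even though the far state is
# best. States and rewards are invented for illustration.
rewards = {
    "avoid thinking about bill":  0.0,   # mild, steady relief
    "look at bill":              -2.0,   # painful intermediate step
    "bill paid, truly relaxed":   5.0,   # best long-run state
}
neighbors = {
    "avoid thinking about bill": ["look at bill"],
    "look at bill": ["avoid thinking about bill", "bill paid, truly relaxed"],
    "bill paid, truly relaxed": [],
}

state = "avoid thinking about bill"
while True:
    better = [s for s in neighbors[state] if rewards[s] > rewards[state]]
    if not better:
        break  # every neighbor looks worse, so greedy search halts here
    state = max(better, key=rewards.get)

print(state)  # stuck at "avoid thinking about bill"
```

A search with even two steps of lookahead would accept the painful intermediate state and reach the 5.0 reward; deliberately visualizing that path plays the same role for you.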