Related to: Humans are not automatically strategic, The mystery of the haunted rationalist, Striving to accept, Taking ideas seriously

I argue that many techniques for epistemic rationality, as taught on LW, amount to techniques for reducing compartmentalization.  I argue further that when these same techniques are extended to a larger portion of the mind, they boost instrumental, as well as epistemic, rationality.

Imagine trying to design an intelligent mind.

One problem you’d face is designing its goal.  

Every time you designed a goal-indicator, the mind would increase action patterns that hit that indicator[1].  Amongst these reinforced actions would be “wireheading patterns” that fooled the indicator but did not hit your intended goal.  For example, if your creature gains reward from internal indicators of status, it will increase those indicators -- including by such methods as surrounding itself with people who agree with it, or convincing itself that it understood important matters others had missed.  It would be hard-wired to act as though “believing makes it so”. 
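To make the first problem concrete, here is a toy simulation (an invented illustration -- the action names, payoffs, and update rule are all made up for this sketch):

```python
import random

# Toy model of the wireheading problem above.  The designer wants real
# status; the reinforced indicator only measures *perceived* status, so
# actions that merely fool the indicator are reinforced just as strongly
# as actions that genuinely earn it.

ACTIONS = {
    # action: (effect on designer's real goal, effect on goal-indicator)
    "do_impressive_work":       (+1.0, +1.0),
    "surround_with_yes_men":    ( 0.0, +1.0),  # wireheading: fools the indicator
    "convince_self_of_insight": ( 0.0, +1.0),  # wireheading
    "seek_honest_criticism":    (+1.0, -0.5),  # serves the goal, stings the indicator
}

weights = {a: 1.0 for a in ACTIONS}  # action-selection propensities

def step():
    """Pick an action in proportion to its weight; reinforce on the indicator."""
    total = sum(weights.values())
    r, acc = random.uniform(0, total), 0.0
    for action, w in weights.items():
        acc += w
        if r <= acc:
            break
    real_progress, indicator = ACTIONS[action]
    # Reinforcement sees only the internal indicator, never the real goal:
    weights[action] = max(0.01, weights[action] + 0.1 * indicator)
    return real_progress

random.seed(0)
for _ in range(2000):
    step()
print(sorted(weights.items(), key=lambda kv: -kv[1]))
# The two wireheading actions come to dominate the policy, and
# "seek_honest_criticism" is driven toward zero -- "believing makes it so".
```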

A second problem you’d face is propagating evidence.  Whenever your creature encounters some new evidence E, you’ll want it to update its model of  “events like E”.  But how do you tell which events are “like E”? The soup of hypotheses, intuition-fragments, and other pieces of world-model is too large, and its processing too limited, to update each belief after each piece of evidence.  Even absent wireheading-driven tendencies to keep rewarding beliefs isolated from threatening evidence, you’ll probably have trouble with accidental compartmentalization (where the creature doesn’t update relevant beliefs simply because your heuristics for what to update were imperfect).
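A matching toy sketch of accidental compartmentalization (again invented; the tags stand in for whatever cheap relevance heuristic decides which events are "like E"):

```python
# A belief is updated only if it shares a surface "tag" with the incoming
# evidence -- a cheap relevance heuristic standing in for "events like E".

beliefs = {
    # belief: [credence, tags seen by the relevance heuristic]
    "ghosts exist":                 [0.5, {"abstract", "physics"}],
    "haunted houses are dangerous": [0.5, {"houses", "fear"}],
}

def update(evidence_tags, shift):
    """Shift credence, but only for beliefs the heuristic deems 'like E'."""
    for entry in beliefs.values():
        credence, tags = entry
        if tags & evidence_tags:
            entry[0] = min(1.0, max(0.0, credence + shift))

# Strong evidence against ghosts arrives, filed under "abstract"/"physics":
update({"abstract", "physics"}, -0.4)
print(beliefs)
# "ghosts exist" falls to 0.1, but "haunted houses are dangerous" sits
# untouched at 0.5: it was never flagged as "like E", so it never updated --
# no wireheading required, just an imperfect relevance heuristic.
```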

Evolution, AFAICT, faced just these problems.  The result is a familiar set of rationality gaps:

I.  Accidental compartmentalization

a.  Belief compartmentalization:  We often fail to propagate changes to our abstract beliefs (and we often make predictions using un-updated, specialized components of our soup of world-model).  Thus, learning modus tollens in the abstract doesn’t automatically change your answer to the Wason card test.  Learning about conservation of energy doesn’t automatically change your fear when a bowling ball is hurtling toward you.  Understanding that there aren’t ghosts doesn’t automatically change your anticipations in a haunted house.  (See Will's excellent post Taking ideas seriously for further discussion.)

b.  Goal compartmentalization:  We often fail to propagate information about what “losing weight”, “being a skilled thinker”, or other goals would concretely do for us.  We also fail to propagate information about what specific actions could further these goals.  Thus (absent the concrete visualizations recommended in many self-help books) our goals fail to pull our behavior, because although we verbally know the consequences of our actions, we don’t visualize those consequences on the “near-mode” level that prompts emotions and actions.

c.  Failure to flush garbage:  We often continue to work toward a subgoal that no longer serves our actual goal (creating what Eliezer calls a lost purpose).  Similarly, we often continue to discuss, and care about, concepts that have lost all their moorings in anticipated sense-experience.

II.  Reinforced compartmentalization: 

Type 1:   Distorted reward signals. If X is a reinforced goal-indicator (“I have status”; “my mother approves of me”[2]), thinking patterns that bias us toward X will be reinforced.  We will learn to compartmentalize away anti-X information.

The problem is not just conscious wishful thinking; it is a sphexish, half-alien mind that distorts your beliefs by reinforcing motives, angles of approach or analysis, choices of reading material or discussion partners, etc., so as to bias you toward X and to compartmentalize away anti-X information.

Impairment to epistemic rationality:

  • “[complex reasoning]... and so my past views are correct!” (if I value “having accurate views”, and so I’m reinforced for believing my views accurate)
  •  “... and so my latest original theory is important and worth focusing my career on!” (if I value “doing high-quality research”)
  • “... and so the optimal way to contribute to the world, is for me to continue in exactly my present career...” (if I value both my present career and “being a utilitarian”)
  •  “... and so my friends’ politics are correct.” (if I value both “telling the truth” and “being liked by my friends”)

Impairment to instrumental rationality:

  • “... and so the two-fingered typing method I’ve used all my life is effective, and isn’t worth changing” (if I value “using effective methods” and/or avoiding difficulty)
  •  “... and so the argument was all his fault, and I was blameless” (if I value “treating my friends ethically”)
  • “... and so it’s because they’re rotten people that they don’t like me, and there’s nothing I might want to change in my social habits.”
  • “... and so I don’t care about dating anyhow, and I have no reason to risk approaching someone.”

Type 2:   “Ugh fields”, or “no thought zones”.  If we have a large amount of anti-X information cluttering up our brains, we may avoid thinking about X at all, since considering X tends to reduce compartmentalization and send us pain signals.  Sometimes, this involves not-acting in entire domains of our lives, lest we be reminded of X.

Impairment to epistemic rationality:

Impairment to instrumental rationality:

  • Many of us avoid learning new skills (e.g., taking a dance class, or practicing social banter), because practicing them reminds us of our non-competence, and sends pain signals.
  • The longer we’ve avoided paying a bill, starting a piece of writing, cleaning out the garage, etc., the harder it may be to think about the task at all (if we feel pain about having avoided it); 
  • The more we care about our performance on a high-risk task, the harder it may be to start working on it (so that the highest value tasks, with the most uncertain outcomes, are those we leave to the last minute despite the expected impact of such procrastination);
  • We may avoid making plans for death, disease, break-up, unemployment, or other unpleasant contingencies.

Type 3:   Wireheading patterns that fill our lives, and prevent other thoughts and actions. [3]

Impairment to epistemic rationality:

  • We often spend our thinking time rehearsing reasons why our beliefs are correct, or why our theories are interesting, instead of thinking new thoughts.

Impairment to instrumental rationality:

  • We often take actions to signal to ourselves that we have particular goals, instead of acting to achieve those goals.  For example, we may go through the motions of studying or working, and feel good about our diligence, while paying little attention to the results.
  • We often take actions to signal to ourselves that we already have particular skills, instead of acting to acquire those skills.  For example, we may prefer to play games against folks we often beat, request critiques from those likely to praise our abilities, rehearse yet more projects in our domains of existing strength, etc.

Strategies for reducing compartmentalization:

A huge portion of both Less Wrong and the self-help and business literatures amounts to techniques for integrating your thoughts -- for bringing your whole mind, with all your intelligence and energy, to bear on your problems.  Many fall into the following categories, each of which boosts both epistemic and instrumental rationality:

1.  Something to protect (or, as Napoleon Hill has it, definite major purpose[4]): Find an external goal that you care deeply about. Visualize the goal; remind yourself of what it can do for you; integrate the desire across your mind.  Then, use your desire to achieve this goal, and your knowledge that actual inquiry and effective actions can help you achieve it, to reduce wireheading temptations.

2.  Translate evidence, and goals, into terms that are easy to understand.  It’s more painful to remember “Aunt Jane is dead” than “Aunt Jane passed away” because more of your brain understands the first sentence.  Therefore use simple, concrete terms, whether you’re saying “Aunt Jane is dead” or “Damn, I don’t know calculus” or “Light bends when it hits water” or “I will earn a million dollars”.  Work to update your whole web of beliefs and goals.

3.  Reduce the emotional gradients that fuel wireheading.  Leave yourself lines of retreat.  Recite the litanies of Gendlin and Tarski; visualize their meaning, concretely, for the task or ugh field bending your thoughts.  Think through the painful information; notice the expected update, so that you need not fear further thought.  On your to-do list, write concrete "next actions", rather than vague goals with no clear steps, to make the list less scary.

4.  Be aware of common patterns of wireheading or compartmentalization, such as failure to acknowledge sunk costs.  Build habits, and perhaps identity, around correcting these patterns.

I suspect that if we follow up on these parallels, and learn strategies for decompartmentalizing not only our far-mode beliefs, but also our near-mode beliefs, our models of ourselves, our curiosity, and our near- and far-mode goals and emotions, we can create a more powerful rationality -- a rationality for the whole mind.

 


[1] Assuming it's a reinforcement learner, temporal difference learner, perceptual control system, or similar.
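(For reference, a one-function sketch of the temporal-difference update alluded to here; the variable names are generic rather than from any particular source:)

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """TD(0): nudge the value of state s toward the observed reward plus the
    discounted value of the successor state.  Any action pattern that reliably
    produces reward signals r -- whether or not r tracks the designer's
    intended goal -- makes its preceding states look better, and so gets
    repeated."""
    v = V.get(s, 0.0)
    V[s] = v + alpha * (r + gamma * V.get(s_next, 0.0) - v)
```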

[2] We receive reward/pain not only from "primitive reinforcers" such as smiles, sugar, warmth, and the like, but also from many long-term predictors of those reinforcers (or predictors of predictors of those reinforcers, or...), such as one's LW karma score, one's number theory prowess, or a specific person's esteem. We probably wish to regard some of these learned reinforcers as part of our real preferences.

[3] Arguably, wireheading gives us fewer long-term reward signals than we would achieve from its absence. Why does it persist, then?  I would guess that the answer is not so much hyperbolic discounting (although this does play a role) as local hill-climbing behavior; the simple, parallel systems that fuel most of our learning can't see how to get from "avoid thinking about my bill" to "genuinely relax, after paying my bill".  You, though, can see such paths -- and if you search for such improvements and visualize the rewards, it may be easier to reduce wireheading.
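(The hill-climbing picture, rendered as a toy; the states and rewards are invented.  A greedy learner that only moves toward immediately-better-feeling states never crosses the painful valley that a planner can see across:)

```python
# Immediate reward of each mental state; paying the bill means crossing
# a painful valley before reaching the genuinely relaxed state.
states = ["avoid thinking about bill", "think about bill",
          "pay bill (unpleasant)", "bill paid, relax"]
reward = [0.0, -2.0, -1.0, +5.0]

def greedy_climb(i):
    """Move to an adjacent state only if it feels better *right now*."""
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(states)]
        best = max(neighbors, key=lambda j: reward[j])
        if reward[best] <= reward[i]:
            return states[i]  # stuck at a local optimum
        i = best

print(greedy_climb(0))  # -> "avoid thinking about bill"
# A planner that evaluates whole paths would cross the valley to +5;
# the greedy learner never does -- which is this footnote's point.
```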

[4] I'm not recommending Napoleon Hill. But even this unusually LW-unfriendly self-help book seems to get most points right, at least in the linked summary.  You might try reading the summary as an exercise in recognizing mostly-accurate statements when expressed in the enemy's vocabulary.

Comments

rwallace, as mentioned by whpearson, notes possible risks from de-compartmentalization:

Human thought is by default compartmentalized for the same good reason warships are compartmentalized: it limits the spread of damage.... We should think long and hard before we throw away safety mechanisms, and compartmentalization is one of the most important ones.

I agree that if you suddenly let reason into a landscape of locally optimized beliefs and actions, you may see significant downsides. And I agree that de-compartmentalization, in particular, can be risky. Someone who believes in heaven and hell but doesn’t consider that belief much will act fairly normally; someone who believes in heaven and hell and actually thinks about expected consequences might have fear of hell govern all their actions.

Still, it seems to me that it is within the reach of most LW-ers to skip these downsides. The key is simple: the downsides from de-compartmentalization stem from allowing a putative fact to overwrite other knowledge (e.g., letting one’s religious beliefs overwrite knowledge about how to successfully reason in biology, or letting a simplified ev. psych overwrite one’s experiences of what da...

This reminds me:

When I finally realized that I was mistaken about theism, I did one thing which I'm glad of -- I decided to keep my system of ethics until I had what I saw as really good reasons to change bits of it. (This kept the nihilist period I inevitably passed through from doing too much damage to me and the people I cared about, and of course in time I realized that it was enough that I cared about these things, that the universe wasn't requiring me to act like a nihilist.)

Eventually, I did change some of my major ethical beliefs, but they were the ones that genuinely rested on false metaphysics, and not the ones that were truly a part of me.

pjeby:
This is leaving out the danger that realistic assessments of your ability can be hazardous to your ability to actually perform. People who over-estimate their ability accomplish more than people who realistically estimate it, and Richard Wiseman's luck research shows that believing you're lucky will actually make it so.

I think instrumental rationalists should perhaps follow a modified Tarski litany: "If I live in a universe where believing X gets me Y, and I wish Y, then I wish to believe X." ;-) Actually, more precisely: "If I live in a universe where anticipating X gets me Y, and I wish Y, then I wish to anticipate X, even if X will not really occur."

I can far/symbolically "believe" that life is meaningless and I could be killed at any moment, but if I want to function in life, I'd darn well better not be emotionally anticipating that my life is meaningless now or that I'm actually about to be killed by random chance.

(Edit to add a practical example: a golfer envisions and attempts to anticipate every shot as if it were going to be a hole-in-one, even though most of them will not be... but in the process, achieves a better result than if s/he anticipated performing an average shot. Here, X is the perfect shot, and Y is the improved shot resulting from the visualization. The compartmentalization that must occur for this to work is that the "far" mind must not be allowed to break the golfer's concentration by pointing out that the envisioned shot is a lie, and that one should therefore not be feeling the associated feelings.)

I think instrumental rationalists should perhaps follow a modified Tarski litany, "If I live in a universe where believing X gets me Y, and I wish Y, then I wish to believe X". ;-)

Maybe. The main counter-argument concerns the side-effects of self-deception. Perhaps believing X will locally help me achieve Y, but perhaps the walls I put up in my mind to maintain my belief in X, in the face of all the not-X data that I am also needing to navigate, will weaken my ability to think, care, and act with my whole mind.

Will_Newsome:
You can believe a falsity for the sake of utility while alieving a truth for the sake of sanity. Deep down you know you're not the best golfer, but there's no reason to critically analyze your delusions if believing so's been shown time and time again to make you a better golfer. The problems occur when your occupation is 'FAI programmer' or 'neurosurgeon' instead of 'golfer'. But most of us aren't FAI programmers or neurosurgeons; we just want to actually turn in our research papers on time. It's not even really that dangerous, as rationalists can reasonably expect their future selves to update on evidence that their past-inherited beliefs aren't getting them utility (aren't true): by this theory, passive avoidance of rationality is epistemically safer than active doublethink (which might not even be possible, as Eliezer points out). If something forces you to really pay attention to your false belief then the active process of introspection will lead to it being destroyed by the truth.

Added: You know, now that I think about it more, the real distinction in question isn't aliefs and beliefs but instead beliefs and beliefs in beliefs; at least that's how it works when I introspect. I'm not sure if studies show that performance is increased by belief in belief or if the effect is limited to 'real' belief. Therefore my whole first paragraph above might be off-base; anyone know the literature? I just have the secondhand CliffsNotes pop-psy version. At any rate the second paragraph still seems reasonably clever... which is a bad sign.

Double added: Mike Blume's post indicates my first paragraph may not have been off the mark. Belief in belief seems sufficient for performance enhancement. Actually, as far as I can tell, Blume's post really just kinda wins the debate. Also see JamesAndrix's comment.
pjeby:
Honestly, this sounds to me like compartmentalization to protect the belief that non-compartmentalism is useful, especially since the empirical evidence (both scientific experimentation and simple observation) is overwhelmingly in favor of instrumental advantages to the over-optimistic.

In any case, anticipating an experience has no truth value. I can anticipate having lunch now, for example; is that true or untrue? What if I have something different for lunch than I currently anticipate? Have I weakened my ability to think/care/act with my whole mind?

Also, if we are really talking about the whole mind, then one must consider the "near" mind as well as the "far" one... and they tend to be in resource competition for instrumental goals. To the extent that you think in a purely symbolic way about your goals, you weaken your motivation to actually do anything about them.

What I'm saying is, decompartmentalization of the "far" mind is all well and good, as is having consistency within the "near" mind, and in general, correlation of the near and far minds' contents. But there are types of epistemic beliefs that we have scads of scientific evidence to show are empirically dangerous to one's instrumental output, and should therefore be kept out of "near" anticipation.
Jonathan_Graehl:
The level of mental unity (I prefer this term to "decompartmentalization") that would make it impossible to focus productively on a learnable physical/computational performance task is fortunately impossible to achieve, or at least easy to temporarily drop.
AnnaSalamon:
It seems to me there are two categories of mental events that you are calling anticipations. One category is predictions (which can be true or false, and honest or self-deceptive); the other is declarations, or goals (which have no truth-values). To have a near-mode declaration that you will hit a hole-in-one, and to visualize it and aim toward it with every fiber of your being, is not at all the same thing as near-mode predicting that you will hit a hole-in-one (and so being shocked if you don't, betting piles of money on the outcome, etc.). But you've done more experiments here than I have; do you think the distinction between "prediction" and "declaration/aim" exists only in far mode?
pjeby:
To be clear, one is compartmentalizing -- deliberately separating the anticipation of "this is what I'm going to feel in a moment when I hit that hole-in-one" from the kind of anticipation that would let you place a bet on it. This example is one of many where compartmentalizing your epistemic knowledge from your instrumental experience is a damn good idea, because it would otherwise interfere with your ability to perform.

What I'm saying is that decompartmentalization is dangerous to many instrumental goals, since epistemic knowledge of uncertainty can rob you of necessary clarity during the preparation and execution of your actual action and performance. To perform confidently and with motivation, it is often necessary to think and feel "as if" certain things were true, which may in fact not be true.

Note, though, that with respect to the declaration/prediction divide you propose, Wiseman's luck research doesn't say anything about people declaring intentions to be lucky, AFAICT, only anticipating being lucky. This expectation seems to prime unconscious perceptual filters as well as automatic motivations that do not occur when people do not expect to be lucky. I suspect that one reason this works well for vague expectations such as "luck" is that the expectation can be confirmed by many possible outcomes, and so is more self-sustaining than more-specific beliefs would be.

We can also consider Dweck and Seligman's mindset and optimism research under the same umbrella: the "growth" mindset anticipates only that the learner will improve with effort over time, and the optimist merely anticipates that setbacks are not permanent, personal, or pervasive. In all cases, AFAICT, these are actual beliefs held by the parties under study, not "declarations". (I would guess the same also applies to the medical benefits of believing in a personally-caring deity.)
Will_Newsome:
Compartmentalization only seems necessary when actually doing things: actually hitting golf balls or acting in a play or whatever. But during down time epistemic rationality does not seem to be harmed. Saying 'optimists' indicates that optimism is a near-constantly activated trait, which does sound like it would harm epistemic rationality. Perhaps realists could do as well as or better than optimists if they learned to emulate optimists only when actually doing things like golfing or acting, while switching to 'realist' mode as much as possible to ensure that the decompartmentalization algorithms are running at max capacity. This seems like plausible human behavior; at any rate, if realism as a trait doesn't allow one to periodically be optimistic when necessary, then I worry that optimism as a trait wouldn't allow one to periodically be realistic when necessary. The latter sounds more harmful, but I optimistically expect that such tradeoffs aren't necessary.
pjeby:
I rather doubt that, since one of the big differences between the optimists and pessimists is the motivation to practice and improve, which needs to be active a lot more of the time than just while "doing something". If the choice is between, say, reading LessWrong and doing something difficult, my guess is the optimist will be more likely to work on the difficult thing, while the purely epistemic rationalist will get busy finding a way to justify reading LessWrong as being on task. ;-)

Don't get me wrong, I never said I liked this characteristic of evolved brains. But it's better not to fool ourselves about whether it's better not to fool ourselves. ;-)
JGWeissman:
Suppose there is a lake between the tee and the hole, too big for the golfer to hit the ball all the way across. Should he envision/anticipate a hole in one, and waste his first stroke hitting the ball into the water, or should he acknowledge that this hole will take multiple strokes, and hit the ball around the lake?
pjeby:
Whatever will produce the better result. Remember that the instrumental litany I proposed is, "If believing X will get me Y and I wish Y, then I wish to believe X." If believing I'll get a hole in one won't get me a good golf score, and I want to get a good score, then I wouldn't want to believe it.
wedrifid:
Depends. Do you want to win or do you want to get the girl?
Richard_Kennaway:
Really? That is, is that what the top golfers report doing, that the mediocre ones don't? If so, I am surprised. Aiming at a target does not mean believing I'm going to hit it. Aiming at a target means aiming at a target.
pjeby:
My understanding is that top golfers do indeed pre-visualize every strike, though I doubt they visualize or expect holes-in-one. AFAIK, however, they do visualize something better than what they can reasonably expect to get, and performance always lags the visualization to some degree.

What I'm saying is that if you really aim at it, this is functionally equivalent to believing, in that you are performing the same mental prerequisites: i.e., forming a mental image which you are not designating false, and acting as if it is true. That is more or less what "belief" is, at the "near" level of thinking.

To try to be more precise: the "acting as if" here is not acting in anticipation of hitting the target, but acting so as to bring it about -- the purpose of envisioning the result (not just the action) is to call on the near system's memory of previous successful shots in order to bring about the physical states (reference levels) that brought about the previous successes. IOW, the belief anticipation here isn't "I'm going to make this shot, so I should bet a lot of money"; it's "I'm going to have made this shot, therefore I need to stand in thus-and-such way and use these muscles like so while breathing like this" and "I'm going to make this shot, therefore I can be relaxed and not tense up and ruin it by being uncertain".
Richard_Kennaway:
It looks like a stretch to me, to call this a belief. I've no experience of high-level golf, but I did at one time shoot on the county small-bore pistol team (before the law changed and the guns went away, but that's even more of a mind-killing topic than politics in general). When I aim at a target with the intention of hitting it, belief that I will or won't doesn't come into the picture. Thinking about what is going to happen is just a distraction.

A month ago I made the longest cycle ride I have ever done. I didn't visualise myself as having completed the ride or anything of that sort. I simply did the work.

Whatever wins, wins, of course, but I find either of the following more likely accounts of what this exercise of "belief" really is: (1) what it feels like to single-mindedly pursue a goal; (2) a technique to keep the mind harmlessly occupied and out of the way while the real work happens -- what a coach might tell people to do, to produce that result.

In terms of control theory, a reference signal -- a goal -- is not an imagined perception. It is simply a reference signal.
pjeby:
At which point, we're arguing definitions, because AFAICT the rest of your comment is not arguing that the process consists of something other than "forming a mental image which you are not designating false, and acting as if it is true." You seem to merely be arguing that this process should not be called "belief". What is relevant, however, is that this is a process of compartmentalizing one's thinking, so as to ignore various facts about the situation. Whether you call this a belief or not isn't relevant to the main point: decompartmentalization can be hazardous to performance. As far as I can tell, you are not actually disputing that claim. ;-)
Richard_Kennaway:
You can't call black white and then say that to dispute that is to merely talk about definitions. "Acting as if one believes", if it means anything at all, must mean doing the same acts one would do if one believed. But you explicitly excluded betting on the outcome, a paradigmatic test of belief on LW. Aiming at a target is not acting as if one were sure to hit the target. Visualising hitting the target is not acting as if one believes one will. These are different things, whatever they are called.
pjeby:
Even if you call it "froobling", it doesn't change my point in any way, so I don't see the relevance of your reply... which is still not disputing my point about compartmentalization.
JenniferRM:
I think maybe the problem is that different neurological processes are being taken as the primary prototype of "compartmentalization" by Anna and yourself. Performance-enhancing direction of one's attention so as not to be distracted in the N minutes prior to a critical performance seems much different to me than the way the same person might calculatingly speculate about their own performance three days in advance while placing a side bet on themselves.

Volitional control over the contents of one's working memory, with a thoughtful eye to the harmonization of your performance, your moment-to-moment mind-states, and your long-term mind-structures (like skills and declarative knowledge and such), seems like something that would help the golfer in both cases. In both cases there is some element of explicit calculating prediction (about the value of the bet or the golfing technique) that could be wrong, but whose rightness is likely to correlate with success in either the bet or the technique.

Part of the trick here seems to be that both the pro- and the anti-compartmentalization advice are abstract enough that both describe and might inspire good or bad behavior, and whether you think the advice is good or bad depends on which subsets of vaguely implied behavior are salient to you (based on skill estimates, typical situations, or whatever).

Rationalists, especially early on, still get hurt... they just shouldn't get hurt twice in the same way if they're "doing it right". Any mistake should make you double-check both the theory and its interpretation. The core claim of advocates of rationality is simply that there is a "there" there, that's worth pursuing... that seven "rational iterations" into a process, you'll be in a much better position than if you'd done ten things "at random" (two of which were basically repetitions of an earlier mistake).
pjeby:
See Seligman's optimism research. Optimists out-perform pessimists and realists in the long run, in any task that requires motivation to develop skill. This strongly implies that an epistemically accurate assessment of your ability is a handicap to actual performance in such areas. This kind of research can't just be shrugged off with "seems like something that would help", unless you want to drop epistemic rationality along with the instrumental. ;-)
NancyLebovitz:
I'm a fairly good calligrapher -- the sort of good which comes from lots of attentive hours, though not focused experiments. I've considered it a blessing that my ambition was always just a tiny bit ahead of what I was able to do. If I'd been able to see the difference between what I could do when I started and what I'm able to do now (let alone what people who are much better than I am are able to do), I think I would have given up. Admittedly, it's a mixed blessing -- it doesn't encourage great ambition.

I hear about a lot of people who give up on making music because the difference between the sounds they can hear in their heads and the sounds they can produce at the beginning is simply too large. In Effortless Mastery, Kenny Werner teaches thinking of every sound you make as the most beautiful sound, since he believes that the effort to sound good is a lot of what screws up musicians. I need to reread to see how he gets from there to directed practice, but he's an excellent musician.

I've also gotten some good results on being able to filter out background noise by using "this is the most beautiful sound I've ever heard" rather than trying to make out particular voices in a noisy bar.

Steve Barnes recommends high goal-setting and a minute of meditation every three hours to lower anxiety enough to pursue the goals. It's worked well for him and seems to work well for some people. I've developed an ugh field about my whole fucking life as a result of paying attention to his stuff, and am currently working on undoing it. Surprisingly, draining the certainty out of self-hatred has worked much better than trying to do anything about the hostility.

A quote about not going head-on against psychological defenses
pjeby:
That reminds me of another way in which more epistemic accuracy isn't always useful: projects that I never would have started/finished if I had realized in advance how much work they'd end up being. ;-)
Will_Newsome:
(I did similarly with the Litany of Gendlin in my post.)
jimrandomh:
I wrote a slightly less general version of the Litany of Gendlin on similar lines, based on the one specific case I know of where believing something can produce utility: The last two lines may be truncated off for some values of X, but usually shouldn't be.
Valentine:
I've been wondering about this lately. I don't have a crisp answer as yet, though for practical reasons I'm definitely working on it. That said, I don't think your golfer example speaks to me about the nature of the potential danger. This looks to me like it's highlighting the value of concretely visualizing goals in some situations. Here are a few potential examples of the kind of phenomenon that nags at me:

  • I'm under the impression that I'm as physically strong as I am because I learned early on how to use the try harder for physical tasks. I noticed when I was a really young kid that if I couldn't make something physically budge and then I doubled my effort, I still had room to ramp up my effort but the object often gave way. (I would regularly test this around age 7 by trying to push buildings over.) Today this has cashed out as simple muscular strength, but when I hit resistance I can barely manage to move (such as moving a "portable" dance floor) my first instinct is still to use the try harder rather than to find an easier way of moving the thing.
  • This same instinct does not apply to endurance training, though. I do Tabata intervals and find my mind generating adamant reasons why three cycles is plenty. I attribute this to practicing thinking that I'm "bad at endurance stuff" from a young age.
  • Possibly relatedly, I don't encounter injuries from doing weight-lifting at a gym, but every time I start a jogging regimen I get a new injury (iliotibial band syndrome, overstretching a tendon running inside my ankles, etc.). This could be coincidence, but it's a weird one, and oddly consistent.
  • My impression is that I am emotionally capable of handling whatever I think I'm emotionally capable of handling, and conversely that I can't handle what I think I can't handle. For instance, when I'm in danger of being rejected in a social setting, I seem to have a good sense of whether that's going to throw me emotionally off-kilter (being upset, feeling really
darius:

'Something to protect' always sounded to me like a term for a defensive attitude, a kind of bias; I have to remind myself it's LW jargon for something quite different. 'Definite major purpose' avoids this problem.

EchoingHorror:
I think that, very basically, when it comes to ideas rationalists explicitly don't have anything to protect. Ideas are to be judged by their merits without interference. This has to include the Something to Protect that brought about rationality in the first place, because to the degree that thing isn't rational, there is a contradiction in using rationality to protect irrationality -- the defensive attitude and bias you mentioned. Can "definite major purpose" avoid that problem (beyond sounding unlike what is meant)? I'd shorten it to "major purpose" or make it "prime directive" or "main quest", just to avoid anything definite. It should be subject to change with new information or better thinking while the rational methods used to achieve it stay the same.

I find the analysis presented in this post to be exceptionally good, even by the standards of your usual posting.

Will_Newsome:
Seconded: dense with useful content, unlike this comment.
Will_Sawin:
If you quoted the most useful sentence of the post, your comment would be more than half as dense, which is still pretty dense.
Will_Newsome:
But it would be redundant information, making the post/comment system overall less dense, which would make people sad.
AnnaSalamon:
Not so -- re-emphasizing what points hit home, and how one plans to apply them, often helps the useful parts stand out for others. Self-help/business seminars standardly have attendees summarize takeaways, and what personal experiments they plan, after each session.
Will_Newsome:
Good point.
Jonathan_Graehl:
I find both your comments incredibly dense :)
wedrifid:
And your momma is so dense that.... :P

For example, we may request critiques from those likely to praise our abilities...

In the spirit of learning and not wireheading, could a couple of people for whom this post didn't work well explain what didn't work about it? A few folks praised it, but it seems to be getting fewer upvotes than other posts, and I'd love to figure out how to make posts that are widely useful.

Relsqui:
Personally, I don't have the foundation in relevant knowledge to easily understand much of the post content, so I'm not qualified to vote on it one way or the other. I may come back later, when I do, and vote then.
AnnaSalamon:
Thanks. Was the post useful to you, or just opaque?
Relsqui:
Not entirely opaque, but like reading a language which you've learned the 200 most common words of, enabling you to understand 95% of a text and not come away with the point (because the key parts are in the other 5%). Not an error, just a reader mismatch; it wouldn't have been worth mentioning except that you asked.
MichaelVassar:
Have you read the sequences yet? If not, can you suggest a good way to encourage people who haven't yet done so to do so?

After trying to figure out where the response would be best suited, I'm splitting the difference; I'll put a summary here, and if it's not obviously stupid and seems to garner comments, I'll post the full thing on its own.

I've read some of the sequences, but not all; I started to, and then wandered off. Here are my theories as to why, with brief explanations.

1) The minimum suggested reading is not just long, it's deceptively long.

The quantity by itself is a pretty big hurdle to someone who's only just developing an interest in its topics, and the way the sequences are indexed hides the actual amount of content behind categorized links. This is the wrong direction in which to surprise the would-be reader. And that's just talking about the core sequences.

2) Many of the sequences are either not interesting to me, or are presented in ways that make them appear not to be.

If the topic actually doesn't interest me, that's fine, because I presumably won't be trying to discuss it, either. But some of the sequence titles are more pithy than informative, and some of the introductory text is dissuasive where it tries to be inviting; few of them give a clear summary of what the subject is and w...

To be clear, do you actually think that time spent reading later posts has been more valuable than marginal time on the sequences would have been? To me that seems like reading Discover Magazine after dropping your intro-to-mechanics textbook because the latter seems to just tell you things that are obvious.

I think some of my time spent reading articles in the sequences was well spent, and the rest was split between two alternatives: 1) in a minority of cases where the reading didn't feel useful, it was about something I already felt I understood, and 2) in a majority of such cases, it wasn't connected to something I was already curious about.

It's explained a bit better in the longer version of the above comment (which now appears to be homeless). But I think the sequences, or at least the admonition to read them all, are targeted at someone who has done some reading or at least thinking about their subjects before. Not because they demand prior knowledge, but because they demand prior interest. You may have underestimated how much of a newbie you have on your hands.

It's not that I'm claiming to be so smart that I can participate fully in the discussions without reading up on the fundamentals, it's that participating or even just watching the discussion is the thing that's piquing my interest in the subjects in the first place. It feels less like asking me to read about basic physics before trying to set up a physics experiment, and more like asking me to read about music theory witho...

komponisto:
I am struck by the inclusion of the seemingly unnecessary phrase "with other people", which suggests that your real interest is social in nature. And sure enough, you confirm this later in the comment.

It seems like an important point, and another argument in favor of additional (sub)forums. About that, I'm not sure what I think yet.

Incidentally, against the notion that attending performances is the most enjoyable part of the musical experience, here is Milton Babbitt on the subject.
Relsqui:
Well, to say it's my "real" interest suggests that my interest in rationality is fake, which is false, but I am indeed a very social critter and a lot of the appeal of LW is being able to discuss, not just absorb. (I even get shiny karma points for doing it well!) So, yes -- and I was actually realizing that myself over the course of writing that comment (which necessarily involved thinking about why I'm here). Despite the above, I'm not actually sure why it is.

Well, I voted for 'em, so it's good to hear that's consistent. :)

That quote is pretty funny. We clearly differ in at least these two ways: 1) I either don't know or don't care enough about music to be bothered by period distractions from it (I'm not sure how to tell the difference from inside my own head), and 2) I like the noisy hall. He's right about the novel, though; that would be appalling. (Difference being that verbal language breaks down a lot faster if you miss a piece.)
Relsqui:
Oh, my. Fiction put in a good effort, but truth pulls ahead as always: Source; non-free, but includes a thorough abstract.
MichaelVassar:
Thanks for a very thoughtful answer.
Relsqui:
You're quite welcome. I appreciate how much thought and respect you're giving a newbie's opinion.
Perplexed:
A clever point, but is it really useful to compare the sequences to a textbook? Maybe a textbook at some community college somewhere. I personally found the sequences to be overloaded with anecdote and motivation, and rather lacking in technical substance.

There is one thing that the post-and-comment part of this site has that the sequences do not have: dialog. Posters and commenters are challenged to clarify their positions and to defend their arguments. In the sequences, on the other hand, it often seemed that Eliezer was either busy demolishing strawmen or energetically proving some point which I had never really apprehended.
timtyler:
The "sequences" posts have comment sections too -- no? There are only a few posts with disabled comments, such as this one: http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/

Evidently the definition of "rationality" is not up for debate. Perhaps it is the royal "we".
matt:
I think that's a bug, not a feature. I'll look into it.
Perplexed:
Yes, but I don't think the discussion was all that vigorous. Eliezer was making a full-size posting every day back then. He really didn't have the time to engage commenters, even if the commenters had tried to engage him.

Cute.
timtyler:
Pretty often, when a post has an obvious flaw, or attacks a straw man, that is pointed out in the comments.
Will_Newsome:
Good analysis. Also, briefly explaining where the subjects connect to rationality would help. It's not immediately obvious what e.g. evolutionary biology or quantum physics have to do with human rationality, which probably puts people off. Actually, it's so not-obvious that I think it'd be easy to miss the point if one wasn't somewhat careful about reading most of the posts in the sequence, or the ones explaining how everything's connected.
Relsqui:
By the by, is this a vote for or against making an actual post on this subject (or neither)? I'm trying to get a sense of whether that would be acceptable and useful; I've gotten a handful of upvotes on comments about it, but I don't know if that means to go ahead or not. (This is an area of local etiquette I'm not yet familiar with and don't particularly want to take the karma hit for messing up.)
Will_Newsome:
In general, suggestions for site improvements are frowned upon because very few people here are keen on actually implementing them, and the typical response is "Yeah that'd be great, now let's have a long discussion about how great that is and subtle improvements that could make it even better while not actually doing anything." Less Wrong needs improvements, but more than that it needs people willing to improve it. The Intro Page idea has been around for awhile, but the people who have control over the site have a lot of other stuff to focus on and there's limited time. So overall I don't think a post would be good, but I'm unsure as to how to fix the general problem.
Relsqui:
Thanks, that's the answer I was looking for. If it was done on the wiki, would they need to commit time to it? It seems like a dedicated member or set of members could just write the page and present it to the community as a fait accompli. The only reason I haven't done it is that I don't feel I know enough yet. Maybe I'll do it anyway, and that will inspire more experienced LWers to come fix it. ;)
Douglas_Knight:
Yes, write something on the wiki and ask later for it to be placed somewhere useful. There is the problem that the people who need introductions probably aren't going to write them. If you go back to reading the sequences, it would be a good exercise to write summaries.
Relsqui:
Yup. And for people who don't need them, it's pretty tedious. That occurred to me as well. We'll see how that comes along.
Perplexed:
I'll vote for making a post. I like your characterization of what is "wrong" with the sequences, but I'm not sure what ought to be done about it. I suspect that different people need to read different sequence postings. I would like to have the introduction pages for each sequence expanded to provide roughly a paragraph of description for each posting in the sequence. If you disagree with the paragraph or don't understand it, then you should probably read that posting.

ETA: After reading Will's comment, I will withdraw my vote. Proceed with caution.
Relsqui:
I agree; that's one of the things I wanted to discuss (and something my solution would theoretically address). I might try to find another useful place to put my longer writeup of the subject, e.g. my own talk page on the wiki.
Relsqui:
This is a very good point; I agree that this belongs in the summary. In fact, logically, it would be the thread connecting everything that needed to be summarized.
komponisto:
You'll want to see this post, if you haven't already.
Relsqui:
Ah, thank you. I hadn't seen that one, although I had seen the technical explanation, which did a much better job of explaining the intuitive usage than the intuitive one, and involved less math. ;) I'll check this out too.
Relsqui:
I started a reply to this and then noticed that it was getting to be a solid pageful. Is "why don't newbies read the sequences" a sufficiently commonly addressed topic to warrant a post? What I've got so far includes a breakdown of my theory as to the answer, as well as a suggestion for a solution.
Perplexed:
I would like to see that analysis and suggestion very much. But it does sound a bit risky as a topic for a premier top-level post. Why not just present it as a comment?
Relsqui:
The reason would be if it were of interest to the community at large, but I trust your (pl.) judgment if you say it would be better suited to a comment. I'll post when I'm done tinkering with it.
[anonymous]:
Phenomenon -> Theory(s) -> Experiment! If you make a post, it would probably benefit from a simple poll.
Relsqui:
Good point. I saw Yvain post a poll recently, so I have a general idea of how that works here, but if there's anything non-obvious I need to know, by all means elucidate. (Similarly, I'd welcome any advice on formatting a useful article.)
Relsqui:
Someone else asked, so I pasted my original long answer to this question over here.

Thank you, this was an excellent post. It tied together a lot of discussions that have gone on and continue to go on here, and I expect it to be very useful to me.

Among other things, I suffer from every impairment to instrumental rationality that you mention under Type 2.

The first of those is perhaps my most severe downfall; I term it the "perpetual student" syndrome, and I think that phrasing matches other places where that phrase is used. I'm fantastically good at picking up entry-level understandings of things, but once I lose the rewarding f...

[anonymous]:
I'm naturally diffuse in my interests myself; I get around it with good old-fashioned shame. I can't look myself in the eye unless I hit certain official milestones in a certain period of time.
[anonymous]:

I like the writing here: very clear and useful.

I have a very simple problem when doing mathematics.

I want to write a proof. But I also want to save time. And so I miss nuances and make false assumptions and often think the answer is simpler than it is. It's almost certainly motivated cognition, rather than inadequate preparation or "stupidity" or any other problem.

I know the answer is "Stop wanting to save time" -- but how do you manipulate your own unvoiced desires?

Do you have any ideas, including guesswork, about where your hurry is coming from? For example, are you in a hurry to go do other activities? Are you stressing about how many problems you have left in your problem set? Do you feel as though you're stupid if you don't immediately see the answer?

Some strategies that might help, depending:

  1. Block off time, know and visualize that this time is for proof-writing and nothing else (you have this block of time whether you use it or not, and cannot move onto other activities), and visualize that this is the only problem in the world.
  2. Make a plan for the rest of the day (and write your “must hurry to do” activities down on a list, with their own timeslots) so that you can believe the blocked off time in 1. When your brain tells you you have to hurry and do X, remind it that you’ll do X at 4pm (or whenever), that this is the timeslot for proofs, and that focusing slowly will get the most done.
  3. Find a context wherein you have the sort of slow, all-absorbing focus that would be helpful here (whether on proof-writing, conversation, or whatever else). Try to understand the relevant variables/mindset and to set up the outside context similarl...
mathemajician:
The way it works for me is this: First I come up with a sketch of the proof and try to formalise it and find holes in it. This is fairly creative and free and fun. After a while I go away feeling great that I might have proven the result. The next day or so, fear starts to creep in and I go back to the proof with a fresh mind and try to break it in as many ways as possible. What is motivating me is that I know that if I show somebody this half-baked proof, it's quite likely that they will point out a major flaw in it. That would be really embarrassing. Thus, I imagine that it's somebody else's proof and my job is to show why it's broken. After a while of my trying to break it, I'll then show it to somebody kind who won't laugh at me if it's wrong, but is pretty careful at checking these things. Then another person... slowly my fear of having screwed up lifts. Then I'm ready to submit for publication.

So in short: I'm motivated to get proofs right (I have yet to have a published proof corrected, not counting blog posts) out of a fear of looking bad. What motivates me to publish at all is the feeling of satisfaction that I draw from the achievement. In my moderate experience of mathematicians, they often seem to have similar emotional forces at work.
pjeby:
If you think of the brain as having two "programming languages", the "far" (symbolic) and the "near" (experiential), and the "unvoiced desire" as something that's running on the "near" system, then what you need to do is translate from the symbolic to the experiential.

In this case, you'd begin by asking what experiences you anticipate will happen if you don't "save time", and what your emotional reaction to those experiences is. Take care, though, to imagine actually experiencing one specific situation (in sensory detail) where you currently want to "save time", and to anticipate the results in sensory detail as well. Otherwise, you'll only engage the "far" (symbolic) system, and won't get any useful information.
[anonymous]:
Thanks for all the good advice! I think I'll try blocking off time (I've already started tracking how much time a day I spend actually working and found it was much less than I'd assumed) and also try the two-stage process (first try to get something, then try looking for flaws.)
JoshuaZ:
At least based on personal introspection, the part of my mind that comes up with proofs feels very similar to the part that engages in motivated cognition. This is in some ways OK, because if a proof is valid then counterarguments aren't something that need to be thought about. But yes, this can lead to the problem of constructing apparently valid proofs that then don't work. One thing that seems to help is to engage in more or less motivated cognition to make a proof and then go through that proof in close detail looking for flaws. So essentially, use motivated cognition to try to get something good, and then use motivated cognition to try to poke holes in it. If you iterate this enough, you will generally end up with an OK proof.
dco:
This is a well-known issue. Basically, a mathematical problem tends to involve several non-trivial steps. If you are too pessimistic, it is impossible to see all these steps (because you get bogged down in proving details and lose track of the point of the problem). On the other hand, if you are too optimistic, you will take too long to debunk an incorrect sequence of steps, leading to the problem you describe.

One solution is to work with someone else, and take turns being optimistic. (E.g., one person proposes a solution, then the other tries to shoot it down; it's much easier to be pessimistic about other people's ideas.) Another solution is what Mr. Weissman proposes: just investigate the problem, look at similar problems, try to falsify the problem, try to prove something stronger, etc. I'm sure that professional mathematicians deal with this issue all the time, so you might want to ask one of them as well.
Alicorn:
Get someone to pay you by the hour, but not so much that the money swamps your desire to write the proof?
Soki:
Ask yourself what the thrilling aspects of what you want to prove are. Look for what you cannot explain, but feel is true. Before writing, you should be satisfied with your understanding of the problem. Try to find holes in it, as if you were a teacher reading a student's work. You should also ask yourself why you want to write a correct proof, and remember that a proof that is wrong is not a proof.
JGWeissman:
Instead of setting out to prove a proposition, investigate whether or not it is true. Perhaps genuine curiosity will override your desire to save time.

“... and so I don’t care about dating anyhow, and I have no reason to risk approaching someone.”

This doesn't seem like it is a distorted reward pathway. Unless people are valuing being virtuous and not wasting time and money on dating?

If it is a problem it seems more likely to be an Ugh field. I.e. someone who had problems with the opposite sex and doesn't want to explore a painful area.

Apart from that I think rwallace's point needs to be addressed. Lack of compartmentalisation can be a bad thing as well as a good thing. Implicit in this piece is the i...

AnnaSalamon:
People seem to feel better about not achieving things they “don’t care about” than about ignoring or failing at things they care about. Thus the phenomenon of sour grapes (where, after Aesop’s fox fails to get the grapes, it declares that the grapes “were sour anyway”). I’m not sure if sour grapes arises because we don’t want to expect pain and desire-dissatisfaction in our futures (because one e.g. cares about dating, but plans not to ever work toward it) or because we prefer to think of ourselves as the sorts of people who would act on desires instead of fleeing in fear, or what. I agree that ugh fields are also involved in the example.
Jonathan_Graehl:
Sour grapes are essential for one-shot opportunities that we missed (perfect world: first learn from any mistake, then emotionally salve with sour grapes). They're a detriment when the opportunity is ongoing and, fear of more possible failures considered, likely worth the effort.
wedrifid:
Sour grapes are never essential. Not only are there better emotional salves; it is healthier to just not take emotional damage from missed opportunities or mistakes in the first place. (This is a skill that can be developed.)
EchoingHorror:
I take the "Meh, I've had worse" approach to deflecting emotional damage. I'm also partial to considering missed opportunities to be trivial additions to the enormous heap of missed opportunities before them. No need for sour grapes here. In fact, let's keep all grapes sweet and succulent just in case we get them later.
2Relsqui
Thanks, now I'm hungry.
4Jonathan_Graehl
Interesting. Can you be more specific? I don't feel like I can, or need to, make all of my emotional reactions rational. But if it's easy, of course I prefer to be better integrated.
3wedrifid
People certainly don't need to make their emotional reactions rational if they don't want to, but they can do so to some extent when it helps. This is the cornerstone of things like Cognitive Behavioural Therapy and much of pjeby's mind hacking. It's hard to describe without going into huge detail, but something that works is embracing the frustration to its full degree rather than flinching away from it. Then you can release it. Then rinse and repeat. The emotional trigger is reduced as your mind begins to realise that things really aren't as awful as you thought. You can also harness the frustration into renewed motivation for reaching the generalised goal that hit a setback or localised failure. This is nearly (but not quite) the opposite of using the frustration to remove your desire for something.
5Jonathan_Graehl
I've also read about CBT and agree that it seems helpful. I took from it the idea that if you're avoiding some activity you think you would probably benefit from, you should look at the reasons you think it will be hard/painful/whatever, and not only think about and defuse them purely intellectually, but also, through practice (starting with milder efforts), get your toes wet in that direction, comparing the actual results to your overblown negative expectations. Also, in my experience, I've never been disappointed when I honestly describe some negative emotional reaction I'm already having and look for some insight into why I'm having it. That is, I'm already feeling terrible, so coming up with true-seeming stories explaining the feeling (and perhaps deciding that I've learned something, or have some plan for doing better in the future) is a mild relief.
3wedrifid
This reminds me of the popular "what is true is already so; owning up to it doesn't make it worse". Also, see today's SMBC comic. His timing is incredible. :)
5Relsqui
"I must not be frustrated. .... I will face my frustration, permit it to pass over me and through me ..." I honestly use the Litany Against Fear quite like this--for frustration, annoyance, pain, or anything else that I have to put up with for a while. The metaphor of passing over and through works well for me.
3wedrifid
My twist on that is that I use 'will' instead of 'must'. Like Jonathan, I don't think I need to alter my emotional responses, and I reject such demands even from myself. 'Will', 'want' and sometimes 'am' all work better for me. (This can just mean leaving off the first sentence there.)
4Jonathan_Graehl
I won't look for the study hyperlink, but I was also charmed by a result showing that the self-question "will I X?" actually motivated people to do X (more so than something like "I must X"). That is, having a curious, wondering tone seemed helpful. I and the reporters of this result may be missing the actual cause, of course.
1wedrifid
I've seen it, probably while reading through pjeby's work. It's one of his favourite tactics. I don't recall the name he gives it, but that curious, wondering tone seems to work wonders.
2Relsqui
That makes sense to me. "Must" implies a moral code; if you decline to accept responsibility from any external moral code, you could interpret it as "must, according to rational methods of achieving my personal goals," but there's no advantage to that circuitous interpretation over the changes you suggest.
0wedrifid
Exactly the reasoning I use.
1whpearson
Disclaimer: I believe I have a lot less interest in dating than most men (partly introspection, partly revealed preference when opportunity arose). I hadn't thought about that view. One thing worth noting is that it is hard to ignore dating, and people tend to ask for some explanation; I tend to go with "I haven't found the right person yet", though. Although, what would you say the right response is to not being willing to pay the cost for something? Let's say you want a sports car, and you lust after it for a bit. Then you find it costs 3 million dollars, and you could always find better things to do with the money. Should you then say you don't care about the sports car? Or should you leave it as a nagging desire which will never be fulfilled?
1mattnewport
This seems like a false dichotomy. My answer to this question is something along the lines of "the current price of a sports car is more than I am currently willing to pay for the pleasure of owning one; in the future circumstances may be different, but for now I will make higher expected value choices".
1whpearson
To me, the things and people I care about are those that I willingly expend some mental energy on every so often. So when I care about owning a sports car, every so often it pops into my head: "Darn, I wish the car was cheaper." As it is unlikely to become so, it would be an unfulfilled desire taking up mental energy for no reason; I could spend that mental energy elsewhere. Care is different from value. Does that explain what I meant?
2mattnewport
I think I understand what you mean; I just don't think it's a good strategy to try to convince yourself you don't care about something because it is not currently attainable. A better alternative might be to think about what appeals to you about owning a sports car and consider whether there are lower-cost ways of getting some of the same benefits, for example.
1whpearson
Oh, I agree. But once you have done so, would it be a bad idea to say you no longer care about the sports car? Aside: I didn't mean to give the impression it was unattainable. The hypothetical still works if you've got 4 million dollars: you could buy a house and donate some money to x-risk charities, found companies, or put it aside for retirement. All better things than the car.
5mattnewport
If you want a sports car, that implies that there is some point at which the best marginal use of your next 3 million dollars would be to buy the sports car. If there is no such point, then it seems to me that you don't really want it in any meaningful sense.

Thanks! This is excellent material to be using during the "confession" parts of my Yom Kippur faking ceremony. I am taking printouts to shul ;)

2[anonymous]
For once it seems to me that self-improvement is the only useful form of "repentance."
[-][anonymous]30

This post has been very useful to me.

If I had to isolate what was personally most useful (it'd be hard, but) I'd pick the combination of your discussion of distorted reward signals and your advice about something to protect. I now notice status wireheading patterns quite frequently (often multiple times daily), and put a stop to them because I recognize they don't work towards what I actually care about (or maybe because I identify as a rationalist, I'm not sure). Either way, I appreciate being able to halt such patterns before they grow into larger action patterns.

[-][anonymous]20

I suspect that an underrated rationality technique is to scream while updating your plans and beliefs on unpleasant subjects, so that any dismay at the unpleasantness finds expression in the scream rather than in your plans and beliefs.

This is a great post, and I wish to improve only a tiny piece of it:

"Similarly, we often continue to discuss, and care about, concepts that have lost all their moorings in anticipated sense-experience."

In that sentence, I hear a suggestion that the primary or only thing we ought to care about is anticipated sense-experience. However, anticipated sense-experience can be manipulated (via suicide or other eyes-closing techniques), and so cannot be the only or primary thing that we ought to care about.

I admit I don't know precisely what else we ought to c... (read more)

-2Jonathan_Graehl
This doesn't require any amendment to the original statement. Once you decide to cope by closing your eyes, your future sense-experience options are limited; the same goes for suicide. So neither will often be rationally chosen (except perhaps at a scary movie screening).
2Johnicholas
You're right, no amendments are necessary; I was answering a subtle implication that I heard in the sentence, which Anna Salamon probably didn't intend to put there, and it's possible that my "hearing" in this matter is faulty. However, your comment makes me think I haven't been sufficiently clear: a "quantum" suicide strategy would be combining a lottery ticket with a device that kills you if you do not win the lottery (it doesn't really have anything to do with quantum mechanics). If all we cared about was anticipated sense experience, this combination might seem to be a good idea. However, it is (to my common sense, at least) a bad idea; which is an argument that we care about something more than just anticipated sense experience.
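To make this concrete with a toy calculation (the numbers are mine and purely illustrative): suppose the ticket wins with probability p = 10^-6 and pays W, and count death as a large loss D. Conditional on having any future experiences at all, the strategy guarantees a win, so E[payoff | you survive] = W. But the unconditional expectation is E[payoff] = p*W - (1-p)*D, which is hugely negative for any plausible W and D. A decision rule that scores only anticipated sense experience endorses the device; one that scores outcomes does not.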
0AnnaSalamon
It's a good point; thanks. I had indeed missed that when I wrote the sentence.
1wedrifid
Suicide is groovy
It hides the scary movie
And I can take or leave it if I please

enemy's vocabulary.

Is there a war I missed?

6AnnaSalamon
Perhaps I should have used a different term. I just meant that Think and Grow Rich contains much discussion of e.g. "applied faith", and it is easy to hear terms like that and try to spit out the whole book. But if you listen to the concrete actions it is recommending, rather than allowing yourself to react as to an enemy camp, most of them seem sound.
0mattnewport
I wondered about this comment as well. Think and Grow Rich has some fairly serious rationality fails and contains some pretty wacky and unsupported ideas, so maybe that's what the comment was getting at.
1xamdam
The world is rationality fail, by and large. "Enemy" sounds like there is something extra evil there.
0mattnewport
Agreed.
[-][anonymous]00

[2] We receive reward/pain not only from "primitive reinforcers" such as smiles, sugar, warmth, and the like, but also from many long-term predictors of those reinforcers (or predictors of predictors of those reinforcers, or...)

How primitive are these "primitive reinforcers"? For those who know more about the brain: is it known whether and how they are reinforced through lower-level systems? Can these systems be (at least partially) brought under conscious control?
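To pin down what the footnote describes, temporal-difference learning gives one standard formalization of how predictors of reinforcers come to carry reward themselves. Here is a minimal sketch (the parameters and state layout are arbitrary illustrations, not a claim about actual neural wiring):

```python
# Minimal TD(0) sketch: a chain of cues leading to a primary reward.
# Only the last transition delivers "primitive" reward, yet earlier cues
# (predictors of predictors of the reinforcer) end up carrying value.

alpha, gamma = 0.1, 0.9      # learning rate and discount factor (arbitrary)
n_states = 5                 # cue_0 -> cue_1 -> cue_2 -> cue_3 -> reward
V = [0.0] * n_states         # learned value of each state

for episode in range(1000):
    for s in range(n_states - 1):
        r = 1.0 if s == n_states - 2 else 0.0   # primary reward on the final step only
        # TD update: nudge V[s] toward (immediate reward + discounted next value)
        V[s] += alpha * (r + gamma * V[s + 1] - V[s])

print([round(v, 2) for v in V])
# -> roughly [0.73, 0.81, 0.9, 1.0, 0.0]: reward has propagated backward,
#    so each earlier cue acts as a (weaker) reinforcer in its own right.
```

On this picture, the earliest cue carries about gamma^3 of the primary reward's value, which is one sense in which "predictors of predictors" become reinforcers.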

[-][anonymous]00

Besides the technical posts, LW has many good articles that teach a good mindset for epistemic rationality (like the 12 Virtues and the litanies). Much of this applies to instrumental rationality. But I compartmentalize between epistemic and instrumental rationality: I use different words and thoughts when thinking about beliefs than when thinking about actions or plans.

So I have been reading the 12 Virtues and trying to interpret them in terms of plans, actions and activities.

The first virtue (curiosity) would obviously become "something to protect".

The fourth virtu... (read more)

This is an illuminating summary.