Once upon a time, Seth Roberts took a European vacation and found that he started losing weight while drinking unfamiliar-tasting caloric fruit juices.
Now suppose Roberts had not known, and never did know, anything about metabolic set points or flavor-calorie associations—all this high-falutin' scientific experimental research that had been done on rats and occasionally humans.
He would have posted to his blog, "Gosh, everyone! You should try these amazing fruit juices that are making me lose weight!" And that would have been the end of it. Some people would have tried it; it would have worked temporarily for some of them (until the flavor-calorie association kicked in), and there never would have been a Shangri-La Diet per se.
The existing Shangri-La Diet is visibly incomplete—for some people, like me, it doesn't seem to work, and there is no apparent reason for this or any logic permitting it. But the reason why as many people have benefited as they have—the reason why there was more than just one more blog post describing a trick that seemed to work for one person and didn't work for anyone else—is that Roberts knew the experimental science that let him interpret what he was seeing, in terms of deep factors that actually did exist.
One of the pieces of advice on OB/LW that was frequently cited as the most important thing learned was the idea of "the bottom line"—that once a conclusion is written in your mind, it is already true or already false, already wise or already stupid, and no amount of later argument can change that except by changing the conclusion. And this ties directly into another oft-cited most important thing, which is the idea of "engines of cognition", minds as mapping engines that require evidence as fuel.
If I had merely written one more blog post that said, "You know, you really should be more open to changing your mind—it's pretty important—and oh yes, you should pay attention to the evidence too," it would not have been as useful. Not just because it would have been less persuasive, but because the actual operations would have been much less clear without the explicit theory backing it up. What constitutes evidence, for example? Is it anything that seems like a forceful argument? Having an explicit probability theory and an explicit causal account of what makes reasoning effective makes a large difference in the forcefulness and implementational details of the old advice to "Keep an open mind and pay attention to the evidence."
It is also important to realize that causal theories are much more likely to be true when they are picked up from a science textbook than when invented on the fly—it is very easy to invent cognitive structures that look like causal theories but are not even anticipation-controlling, let alone true.
This is the signature style I want to convey from all those posts that entangled cognitive science experiments and probability theory and epistemology with the practical advice—that practical advice actually becomes practically more powerful if you go out and read up on cognitive science experiments, or probability theory, or even materialist epistemology, and realize what you're seeing. This is the brand that can distinguish LW from ten thousand other blogs purporting to offer advice.
I could tell you, "You know, how much you're satisfied with your food probably depends more on the quality of the food than on how much of it you eat." And you would read it and forget about it, and the impulse to finish off a whole plate would still feel just as strong. But if I tell you about scope insensitivity, and duration neglect and the Peak/End rule, you are suddenly aware in a very concrete way, looking at your plate, that you will form almost exactly the same retrospective memory whether your portion size is large or small; you now possess a deep theory about the rules governing your memory, and you know that this is what the rules say. (You also know to save the dessert for last.)
I want to hear how I can overcome akrasia—how I can have more willpower, or get more done with less mental pain. But there are ten thousand people purporting to give advice on this, and for the most part, it is on the level of that alternate Seth Roberts who just tells people about the amazing effects of drinking fruit juice. Or actually, somewhat worse than that—it's people trying to describe internal mental levers that they pulled, for which there are no standard words, and which they do not actually know how to point to. See also the illusion of transparency, inferential distance, and double illusion of transparency. (Notice how "You overestimate how much you're explaining and your listeners overestimate how much they're hearing" becomes much more forceful as advice, after I back it up with a cognitive science experiment and some evolutionary psychology?)
I think that the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms—thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up. And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.
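To show what I mean by a mathematically clear idea, take just one item from that vocabulary: hyperbolic discounting is a one-line formula, value = amount / (1 + k * delay), and the preference reversals behind so much everyday akrasia fall straight out of the arithmetic. Here is a minimal sketch; the dollar amounts, delays, and discount rate are invented purely for illustration:

```python
# Minimal sketch (illustration only, not from any cited study): how hyperbolic
# discounting predicts preference reversals. Amounts, delays, and the discount
# rate k are made-up numbers chosen so the flip is easy to see.

def hyperbolic_value(amount, delay_days, k=0.2):
    """Ainslie-style hyperbolic discounting: value falls as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay_days)

small_soon = (50, 0)     # $50, available `extra_delay` days from now
large_late = (100, 10)   # $100, available 10 days after that

for extra_delay in (30, 0):  # judge the same pair a month in advance, then up close
    v_ss = hyperbolic_value(small_soon[0], small_soon[1] + extra_delay)
    v_ll = hyperbolic_value(large_late[0], large_late[1] + extra_delay)
    choice = "smaller-sooner" if v_ss > v_ll else "larger-later"
    print(f"viewed {extra_delay:2d} days out: SS={v_ss:5.1f}  LL={v_ll:5.1f} -> {choice}")

# A month out, the larger-later reward wins; with the small reward at hand,
# the preference flips to smaller-sooner -- the signature reversal that the
# picoeconomics literature builds on.
```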
Note the grade of increasing difficulty in citing:
- Concrete experimental results (for which one need merely consult a paper, hopefully one that reported p < 0.01, because p < 0.05 may fail to replicate; a quick simulation of why appears after this list)
- Causal accounts that are actually true (which may be most reliably obtained by looking for the theories that are used by a majority within a given science)
- Math validly interpreted (on which I have trouble offering useful advice because so much of my own math talent is intuition that kicks in before I get a chance to deliberate)
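On the first of those, the fragility of a bare p < 0.05 is easy to check for yourself with a quick simulation. This is a rough sketch, and the effect size, per-group sample size, and number of simulated studies are assumptions chosen to resemble a typical small psychology experiment rather than figures from any particular paper:

```python
# Rough sketch of why p < 0.05 may fail to replicate: with a modest true effect
# and a small sample, the design has low statistical power, so an exact
# repetition of the same study usually misses significance. All numbers below
# are illustrative assumptions, not data from any real study.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_effect = 0.3       # modest standardized effect (Cohen's d)
n_per_group = 30        # small-study sample size
n_simulations = 20_000

def one_study():
    """Simulate one two-group experiment and return its two-sided p-value."""
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    return ttest_ind(treatment, control).pvalue

pvalues = np.array([one_study() for _ in range(n_simulations)])
power = (pvalues < 0.05).mean()

# Only about a fifth of these studies reach p < 0.05 even though the effect is
# real -- and that same figure is the chance that an exact replication of a
# lucky p < 0.05 result clears the bar again.
print(f"fraction of studies reaching p < 0.05: {power:.2f}")
```

Raise the sample size or the true effect and the replication odds climb accordingly; the point is just that the p-value on its own doesn't tell you which situation you're in.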
If you don't know who to trust, or you don't trust yourself, you should concentrate on experimental results to start with, move on to thinking in terms of causal theories that are widely used within a science, and dip your toes into math and epistemology with extreme caution.
But practical advice really, really does become a lot more powerful when it's backed up by concrete experimental results, causal accounts that are actually true, and math validly interpreted.
Excellent comment! You have hit the nail very nearly square on the head. Allow me to make one minor adjustment to your aim, and then relate your analogy back to the fields of self-help, NLP, Zen, normal waking consciousness, etc.
See, it's not the content of the thought that switches modes, but how you think the thought, or rather, what portion of your thoughts you pay attention to.
In suspension of disbelief -- and hypnosis, suggestion, etc. -- you simply refrain from commenting on your experience in progress, because it interferes with the perception of the experience itself. (See e.g. current studies on how explicit commenting can reduce satisfaction with decision making and accuracy of classification.)
So if "B" is experience, and "A" is commenting-about-experience, to the extent that you do both at the same time, one or the other will suffer, just like your experience of a movie will be degraded by a running commentary by audience members... unless you prefer the humor of the commentary to the experience of the movie. (But in that case, the movie still suffers relative to the commentary, you just like it better that way!)
Now, whether you refrain from commenting on something is partly determined by what you already believe. Movies that violate my understanding of, say, computer technology will be much more tempting to internally dispute or comment on, thus voiding my enjoyment and use of "B"-mode thinking. In contrast, someone who knows less about computers will not be induced to comment by the same scene, and thus will have no disbelief to suspend in the first place.
Self-help techniques use B-mode thinking, but the more intelligent you are, the more ways you can find to object to the "truthfulness" of thoughts that you nonetheless would find useful to have installed in your "B" system. But if you give in to the temptation to meta-comment on those thoughts, then you will not succeed in installing them in the "B" system... assuming you didn't already throw the book down in disgust, long before even trying to!
Religion works in roughly the same way, of course: you're discouraged from meta-commenting, so various B-mode thoughts can be installed and left running.
Of course, we all know that this is bad, but it's not because B-mode itself is bad, it's because religions include many poor-quality beliefs, in addition to the ones that might have some personal or social utility!
Part of the foundation of NLP, however, is a set of principles known as the "outcome frame" and "ecology" -- attempts to codify quality standards for "B-mode beliefs", based on well-formedness rules for the beliefs themselves, and standards for evaluating the likely long-term systemic effects of carrying that belief.
Most of the original NLP clique have also been very careful, when defining their techniques, to offer guidelines for what kind of beliefs to install in people, and how to avoid "junk beliefs".
(For example, one is cautioned to prefer installing beliefs of capability rather than ability, e.g. "I can learn to do this better", not "I am the best there is".)
Most self-help material -- including much popular work on NLP, alas -- does not adhere to such standards.
My experience of Zen meditation is that it trains you to refrain from commenting on your thoughts and experiences, which is why it provides benefits for learning skills that require you to focus on experience instead of commenting. (See e.g. "The Inner Game of Tennis".) So, AFAICT, it's definitely related to the same "B" mode as other self-help modalities, and really just consists of practicing trying to stay in B mode, no matter what thoughts try to pull you into A mode.
In contrast, hypnosis tries to get you so relaxed that it seems like "too much work" to do any "A" mode thinking, versus just drifting along with your ongoing "B" experience.
NLP techniques, including my own, work on controlled alternation of attention between the A and B modes.
And normal consciousness for most people also alternates between A and B, but "A" dominates, and we actually spend good money (e.g. on movies and other entertainment, hobbies, etc.) so we can spend some quality time in "B".
I'd generally agree with that, but I was recently at an excellent qi gong workshop taught by Yang Yang, who told the students to do qi gong with an attitude of "I am a master". As far as I can tell, this has the advantage of overriding habits of thinking "I'm just a student, I'm not very good at this". It might also override habits of thinking "I have to show how good I am".