
Practical Advice Backed By Deep Theories

44 Post author: Eliezer_Yudkowsky 25 April 2009 06:52PM

Once upon a time, Seth Roberts took a European vacation and found that he started losing weight while drinking unfamiliar-tasting caloric fruit juices.

Now suppose Roberts had not known, and never did know, anything about metabolic set points or flavor-calorie associations—all this high-falutin' scientific experimental research that had been done on rats and occasionally humans.

He would have posted to his blog, "Gosh, everyone!  You should try these amazing fruit juices that are making me lose weight!"  And that would have been the end of it.  Some people would have tried it, it would have worked temporarily for some of them (until the flavor-calorie association kicked in) and there never would have been a Shangri-La Diet per se.

The existing Shangri-La Diet is visibly incomplete—for some people, like me, it doesn't seem to work, and there is no apparent reason for this or any logic permitting it.  But the reason why as many people have benefited as they have—the reason why there was more than just one more blog post describing a trick that seemed to work for one person and didn't work for anyone else—is that Roberts knew the experimental science that let him interpret what he was seeing, in terms of deep factors that actually did exist.

One of the pieces of advice on OB/LW that was frequently cited as the most important thing learned was the idea of "the bottom line"—that once a conclusion is written in your mind, it is already true or already false, already wise or already stupid, and no amount of later argument can change that except by changing the conclusion.  And this ties directly into another oft-cited most important thing, which is the idea of "engines of cognition", minds as mapping engines that require evidence as fuel.

Suppose I had merely written one more blog post that said, "You know, you really should be more open to changing your mind—it's pretty important—and oh yes, you should pay attention to the evidence too."  This would not have been as useful.  Not just because it was less persuasive, but because the actual operations would have been much less clear without the explicit theory backing it up.  What constitutes evidence, for example?  Is it anything that seems like a forceful argument?  Having an explicit probability theory and an explicit causal account of what makes reasoning effective makes a large difference in the forcefulness and implementational details of the old advice to "Keep an open mind and pay attention to the evidence."

It is also important to realize that causal theories are much more likely to be true when they are picked up from a science textbook than when invented on the fly—it is very easy to invent cognitive structures that look like causal theories but are not even anticipation-controlling, let alone true.

This is the signature style I want to convey from all those posts that entangled cognitive science experiments and probability theory and epistemology with the practical advice—that practical advice actually becomes practically more powerful if you go out and read up on cognitive science experiments, or probability theory, or even materialist epistemology, and realize what you're seeing.  This is the brand that can distinguish LW from ten thousand other blogs purporting to offer advice.

I could tell you, "You know, how much you're satisfied with your food probably depends more on the quality of the food than on how much of it you eat."  And you would read it and forget about it, and the impulse to finish off a whole plate would still feel just as strong.  But if I tell you about scope insensitivity, duration neglect, and the Peak/End rule, you are suddenly aware in a very concrete way, looking at your plate, that you will form almost exactly the same retrospective memory whether your portion size is large or small; you now possess a deep theory about the rules governing your memory, and you know that this is what the rules say.  (You also know to save the dessert for last.)

I want to hear how I can overcome akrasia—how I can have more willpower, or get more done with less mental pain.  But there are ten thousand people purporting to give advice on this, and for the most part, it is on the level of that alternate Seth Roberts who just tells people about the amazing effects of drinking fruit juice.  Or actually, somewhat worse than that—it's people trying to describe internal mental levers that they pulled, for which there are no standard words, and which they do not actually know how to point to.  See also the illusion of transparency, inferential distance, and double illusion of transparency.  (Notice how "You overestimate how much you're explaining and your listeners overestimate how much they're hearing" becomes much more forceful as advice, after I back it up with a cognitive science experiment and some evolutionary psychology?)

I think that the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms—thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.

Note the grade of increasing difficulty in citing:

  • Concrete experimental results (for which one need merely consult a paper, hopefully one that reported p < 0.01, because p < 0.05 may fail to replicate; a rough numerical sketch of why follows this list)
  • Causal accounts that are actually true (which may be most reliably obtained by looking for the theories that are used by a majority within a given science)
  • Math validly interpreted (on which I have trouble offering useful advice because so much of my own math talent is intuition that kicks in before I get a chance to deliberate)
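To make the parenthetical about p-values concrete, here is a rough numerical sketch in Python. The numbers are illustrative assumptions of mine (a 10% base rate of true hypotheses among those tested, and 50% statistical power, held fixed across thresholds), not figures from the post:

    # Sketch only: illustrative numbers, not data from the post.
    # P(effect is real | result significant at alpha), by Bayes' rule,
    # followed by a crude estimate of the chance a replication succeeds.

    def prob_real(alpha, power, base_rate):
        true_pos = base_rate * power          # real effects that reach significance
        false_pos = (1 - base_rate) * alpha   # null effects significant by chance
        return true_pos / (true_pos + false_pos)

    base_rate = 0.10   # assumed: 10% of tested hypotheses are real effects
    power = 0.50       # assumed: 50% power, held fixed for simplicity

    for alpha in (0.05, 0.01):
        ppv = prob_real(alpha, power, base_rate)
        # crude replication estimate: the effect is real and is detected again
        # (ignores the small chance of a false positive replicating by luck)
        print(f"alpha={alpha}: P(real) = {ppv:.2f}, "
              f"rough P(replicates) = {ppv * power:.2f}")

Under these assumed numbers, roughly half of bare p < 0.05 results reflect real effects versus about 85% of p < 0.01 results; the exact figures do not matter, only that tightening the threshold raises the odds that a cited result is real.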

If you don't know who to trust, or you don't trust yourself, you should concentrate on experimental results to start with, move on to thinking in terms of causal theories that are widely used within a science, and dip your toes into math and epistemology with extreme caution.

But practical advice really, really does become a lot more powerful when it's backed up by concrete experimental results, causal accounts that are actually true, and math validly interpreted.

 

Part of the sequence The Craft and the Community

Next post: "Less Meta"

Previous post: "Well-Kept Gardens Die By Pacifism"

Comments (112)

Comment author: roland 26 April 2009 03:57:40PM *  25 points [-]

The thing is, it can take a long time until the deep theory supporting a given piece of practical advice is discovered and understood. Moving forward through trial and error can give results that are just as effective, and faster.

If you look at human history you will find several examples, like the making of steel, where practical procedures were discovered through massive experimentation centuries before the theoretical basis for understanding them existed.

Comment author: MrShaggy 28 April 2009 01:38:25AM 6 points [-]

This comment is, I think, an essential counterbalance to the post's valid points. To expand a little, the book Good Calories, Bad Calories by Gary Taubes argues that bad nutritional recommendations were adopted by leading medical and then governmental associations, partly justified by the above advice (we need recommendations to help people now, can't wait for full testing). So someone could refer to this as an example of why the comment above is dangerous in areas that are harder to test than the efficacy of steel production (which I presume they knew worked better than other procedures, whereas some nutritional effects have long-term consequences that aren't clear, or it's not clear which component of the recommendation is affecting what). However, Taubes also shows that this was used to justify overlooking flaws in the evidence, and he points to a group heuristic bias (if that's the right term) of information cascades. There are other biases and failures of rationality (how certain statistical evidence was interpreted) in the story as well. So all this is to say: while trial and error can give faster and equally effective results, the less clear the measurement of the results is, the more care is required in interpreting them. When stated, it sounds obvious and I almost feel dumb for saying it, yet it's one of those rules honored more in the breach, as they say. In the field of nutrition, you'll have headlines that say "Meat causes cancer" based on a study that points to a small statistical correlation between two diets which have very many differences other than the type and amount of meat, and which itself concludes that more studies are called for to examine possible links between meat and cancer, but not other possible causes that are just as strongly pointed to by the study.

Comment author: matt 03 May 2009 06:04:32AM 7 points [-]

The harm didn't come from "leading medical and then governmental associations" adopting recommendations before they were proven, it came from them holding to those recommendations when the evidence had turned.

Comment author: magfrump 21 February 2010 05:42:26AM 1 point [-]

I probably would have voted this comment up had it been formatted more nicely. A lot of your point was lost on me because of the single large paragraph.

Comment author: roland 28 April 2009 02:30:10AM 0 points [-]

In my comment I wasn't thinking particularly about nutrition. Regarding bad nutritional recommendations (and health recommendations in general), they may also be the consequence of studies. The thing is, when will we ever be done with the "full testing"? Science is constantly improving, and in the future we will probably be horrified by some of the things we do now that will later be proven wrong.

The best thing we can do is to be careful and prepared to update swiftly on new evidence.

Comment author: MichaelVassar 25 April 2009 09:00:09PM 13 points [-]

It seems to me that many people don't realize that math results have to be validly interpreted in order to be compelling. LOTS of bad thinking by smart people tends to involve sloppiness in the interpretation of the math. Aumann was prone to this problem, and so are people thinking about his agreement theorem.

Comment author: NancyLebovitz 25 December 2011 06:30:03AM 10 points [-]

This may be pointing at a bias that I don't have a name for-- the belief that the pathway between a possible cause-effect pair can be neglected.

It's believing that all you need is the right laws, without having to pay attention to how they're enforced. It's believing that if you are the right sort of person, your life will automatically work well. It's believing that more education will lead to a more prosperous society without having ways for people to apply what they know.

Comment author: timtyler 26 April 2009 08:24:13AM 1 point [-]

http://sethroberts.net/science/ is totally unconvincing. The main promoter of the diet doesn't seem to have any decent evidence that it works.

Lacking evidence, it seems like another fad diet, whose most obvious purpose is to sell diet books by telling people what they desperately want to hear - that they can diet and lose weight - while still eating whatever they like.

To me, it looks like junk science that distracts people from advice that might actually help them.

Comment author: badger 26 April 2009 08:58:16AM *  4 points [-]

The graph of Roberts's weight compared to fructose water intake on p. 73 of "What makes food fattening?" is very persuasive to my mind. I don't think there is any evidence that it is effective in the population at large, but I think it is clear cut that it worked for Roberts.

I don't think the cynical explanation gets very far. The details of the diet are freely available. There is only a single, cheap, slim book that Roberts published so that someone could learn about the diet in a format other than his website. Roberts could easily be mistaken, but I think his tone has consistently been "here is a little-known, easy technique that was highly effective for me; I have a theory why it could work for you too". It's hard to make money by telling someone to take three tablespoons of extra-light olive oil a day in addition to whatever other diet they are following.

Comment author: timtyler 26 April 2009 11:32:12AM 2 points [-]

One rat is just not statistically significant evidence - especially not when the rat is also the salesman. I don't know whether Roberts is motivated by wealth, fame, or whatever - nor do I care very much.

Comment author: Luke_A_Somers 20 November 2011 10:42:35PM *  3 points [-]

Many tests on the same rat can be statistically significant! Do X, Y changes in the rat. Undo it, Y changes back. Repeat until it's a statistically certain connection...

We just have no particular reason to expect that it'll generalize well to others.

This really stands out to me as a physicist because we do things like one-rat tests all the time. Well, usually we get a few other 'rats', but we rely heavily on the notion that identically prepared matter is... identical. Biology, of course, doesn't allow that shortcut.

Clinicians sometimes have a cohort of 1 for rare diseases... but of course that's simply the best they can do under the circumstances.
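As a quick numerical aside on the point above: here is a minimal sign-test sketch (my own illustration, not from the comment) of how repeated on/off reversals in a single subject can reach conventional significance, assuming that under the null hypothesis each reversal is equally likely to move the measurement with or against the prediction.

    # Sign-test sketch for an ABAB-style single-subject design (illustrative only).
    from math import comb

    def sign_test_p(n_reversals, n_agreeing):
        # One-sided p-value: chance of at least n_agreeing reversals tracking
        # the intervention if each reversal is a fair coin flip under the null.
        return sum(comb(n_reversals, k)
                   for k in range(n_agreeing, n_reversals + 1)) / 2 ** n_reversals

    # If the rat's weight tracks the intervention on all 7 of 7 reversals:
    print(sign_test_p(7, 7))   # 0.0078125, i.e. p < 0.01

Which is exactly the limitation noted above: the result is significant for that one subject, and by itself says nothing about how well it generalizes to others.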

Comment author: timtyler 21 November 2011 12:10:58AM 2 points [-]

Many tests on the same rat can be statistically significant! Do X, Y changes in the rat. Undo it, Y changes back. Repeat until it's a statistically certain connection...

True - but it won't be too convincing if you're experimenting on yourself with your own diet. Science is based on confirmation of experiments by other scientists.

Comment author: Luke_A_Somers 21 November 2011 03:25:20PM 2 points [-]

The rat being the salesman is the more serious issue there, yes.

Comment author: prase 26 April 2009 02:37:56PM 2 points [-]

I agree that the theory is unconvincing. Roberts seems to argue that organisms have a brain-regulated mechanism which forces them to eat more if food is more easily available. Such behaviour could be beneficial because during famines the supplies would be depleted later, but the explanation smells of group selection - I suppose that especially during famines the individual who eats as much as possible and stores it as fat will have a great advantage over more modest members of his group, not to speak of other species. Am I missing something?

Comment author: timtyler 26 April 2009 04:57:55PM 5 points [-]

Pop evo-psych stories are a marketing strategy for diets, not a real reason to follow one. Look at the paleo diet - which apparently promotes the ancestral state of malnourishment and dehydration, on the basis of an evo-psych story.

Diets are best evaluated by testing them, not by telling memorable stories about their origins.

Comment author: prase 27 April 2009 08:47:58AM 0 points [-]

Why evo-psych? Psychology has nothing to do with that.

Diets are, of course, evaluated by testing, but Roberts goes further and offers an explanation of his diet, and whether this explanation is consistent from an evolutionary perspective is a relevant question.

Comment author: timtyler 27 April 2009 06:38:17PM 1 point [-]

Or, in my view, not as far, by promoting an almost totally untested diet.

Comment author: pjeby 26 April 2009 02:49:21PM 2 points [-]

I suppose that especially during famines the individual who eats as much as possible and stores it as fat will have a great advantage over more modest members of his group, not to speak of other species. Am I missing something?

Yes - the cost of gathering the food. Roberts's hypothesis is that if food is not plentiful, it's counterproductive to be so hungry that you burn a lot of calories looking for more food, versus sitting tight and drawing on your fat stores. Conversely, if food is plentiful, you'd be an idiot not to go get as much as you can handle.

Comment author: pjeby 25 April 2009 09:52:53PM -2 points [-]

I think that the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.

Actually, speaking as somebody who's done this, what I can tell you is that a huge number of the experimenters get stuff wrong in their models and conclusions, because their terminology is at cross-purposes to what's really happening.

NLP, on the other hand, actually does have a vocabulary that matches the territory, but one that has been largely unexplored by experimental psychology, in much the way that hypnosis has had limited study. The catch in both is that you need a skilled operator to observe or produce many of the phenomena in question, because people differ in surface characteristics that have to be bypassed before you get to the similarities.

NLP's rep-systems and strategies models actually do have the necessary vocabulary and "behavioral calculus" to discuss subjective experience, and in particular the parts needed to get past surface dissimilarities in processing.

I suggest "Neuro-linguistic Programming, Volume I", by Dilts et al, as an introduction for the theory-minded. A brief excerpt:

...we begin by showing how the five classes of sensory experience ... are the basis for the strategies people have for generating and guiding behavior, rather than more complex and abstract concepts such as "ego", "mind", "human nature", "mechanisms", "morals", "reason", etc., employed by other behavioral models.

Another:

First, for a pattern or generalization regarding human communication to be acceptable or well-formed in NLP, it must include in the description the human agents who are initiating and responding to the pattern being described, their actions, their possible responses. Secondly, the description of the pattern must be represented in sensory grounded terms which are available to the user. [Emphasis added] .... We have been continually struck by the tremendous gap between theory and practice in the behavioral sciences -- this requirement closes that gap.

IOW, if you're looking for a vocabulary, run, don't walk, to get that book. It is generally considered the least-successful/popular book on NLP ever written, for precisely the same reason I'm recommending it to you: it's full of math, big words, and attempts at being precise.

(It is almost 30 years old, btw, so it shouldn't be considered the latest or greatest. There are a LOT of things in it that have been supplanted by more streamlined methods. However, the key underlying model of sensory representation strategy sequences (both in and out of consciousness) is just as valid today. There are just a lot more things known today about how we code things in those sensory representations, and how to obtain information about them, install new representations, etc.)

Comment author: Eliezer_Yudkowsky 26 April 2009 05:57:16PM 10 points [-]

This post was, to some extent, directed particularly at you. It would seem that you haven't taken my advice... I wish I knew of some good experimental results to back it up, as this would render it less ignorable.

What you're talking about above is not a concrete experimental result. Neither is it a standard causal theory, nor is it a causal theory that strikes me as particularly likely to be true in the absence of experimental validation. Nor is it valid math validly interpreted, or logic that seems necessarily true across lawful possible worlds. I don't care if it works for you and for other people you know; that doesn't show anything about the truth of the model; there's this thing called a placebo effect. The advice fails to meet the standard we're accustomed to, and that's why we're ignoring it. It is just one more theory on the Internet at this point, and one more set of orders delivered in a confident tone but not explained well enough to interpret at all, really.

Comment author: roland 26 April 2009 07:22:36PM *  0 points [-]

I'm relieved to read this Eliezer, because I thought it was just me who perceived pjeby's advice as misguided.

Comment author: gjm 26 April 2009 11:47:47PM 1 point [-]

I've been whining at him for a while, though my complaint isn't so much that his advice is misguided, as that he keeps offering pronouncements about how the mind works and how to make it work better, but evidence that his model and methods are sound seems sorely lacking (here, at least).

Comment author: pjeby 27 April 2009 12:24:58AM 12 points [-]

he keeps offering pronouncements about how the mind works and how to make it work better

...much of which has come attached with things that are actually possible to investigate and test on your own, and a few people have actually posted comments describing their results, positive or negative. I've even pointed to bits of research that support various aspects of my models.

But if you're allergic to self-experimentation, have a strong aversion to considering the possibility that your actions aren't as rational as you'd like to think, or just don't want to stop and pay attention to what goes on in your head, non-verbally... then you really won't have anything useful to say about the validity or lack thereof of the model.

I think it's very interesting that so far, nobody has opposed anything I've said on the grounds that they tested it, and it didn't work.

What they've actually been saying is that they don't think it's right, or they don't think it will work, or that NLP has been invalidated, or ANYTHING at all other than: "I tried thus-and-such using so-and-so procedure, and it appears that my results falsify this-or-that portion of the model you are proposing."

In a community of self-professed rationalists, I find that very interesting. Not as interesting, mind you, as I would an actual result falsifying a portion of my model, though.

Because that, I would actually LEARN something from. I could try and replicate the person's result, offer other things to try, or maybe even update my model. It does happen, pretty regularly -- and the updates are almost equally likely to come from:

  1. more-or-less mainstream psych and popularizations thereof,
  2. pop, new age, or NLP stuff,
  3. self-experimentation, and
  4. unexpected events in client work

A recent mainstream psych example would be Dweck's fixed/growth mindsets model, which I've now converted to a more specific model for change work that I call "or"/"more" thinking.

That is, a belief that "either I do this OR I fail" -- a digital control variable of avoidance -- is less useful than one where "the MORE I do this the more/closer I get": an analog variable under your control.

This is a much finer-grained distinction than my older notion that didn't include discrete/continuous, but focused strictly on the approach/avoidance aspect of the variables. It's also a more narrowly-focused understanding of the difference than Dweck's work, which speaks more about the effects of these mindsets than the mechanism of them, or how to change that mechanism in practice.

So now that I have this distinction, I've gone back and reviewed other things I've read that tie into this idea in one way or another, giving it more depth. That is, I can look at other discussions of "naturally successful" behavior, hypnotic techniques or NLP submodality techniques that link an increase in one thing to an increase in another, and so on.

In particular, I've found various techniques by Richard Bandler that describe how certain successful athletes and entertainers he worked with transformed "or" variables into "more" variables (although he didn't use those terms).

I'm now in the process of self-experimenting with some of those techniques, preparatory to selecting ones to add to my personal and training repertoire.

That, more or less, is my method for model refinement: read about ideas, try ideas, figure out what works, update models, find relevant techniques, try techniques w/self, w/clients, get ideas about what other ideas might be worth investigating, rinse and repeat.

Is it "the scientific method". Probably not. Is it closer to the scientific method than the "I read something or believe something that means that won't work, but can't be bothered to tell whether it's the same thing" approach favored by some folks? Hell yeah.

Btw, that attitude is why every new self-help author or guru has to come up with new names for every damn thing: the old names get worn out by people who conclude they already "know" what that thing is, because their brother told them something about something like that once and it sounded kind of like something else they tried that didn't work.

Yet century-old techniques work fine, if you actually know how to do them, and you actually DO them. But surprisingly few people ever actually try, let alone try with all their might, in the "shut up and do the impossible" sense.

Comment author: Torben 27 April 2009 08:31:48AM 4 points [-]

I'm allergic to self-experimentation. I find that I'm not a very good judge of my own reactions. Furthermore, self-experimentation is probably the worst way to go about setting up a true model of the world.

Comment author: Eliezer_Yudkowsky 27 April 2009 12:38:03AM 4 points [-]

I am unable to make enough sense of what you say to try it. It is not written in a language I can read.

Comment author: pjeby 27 April 2009 01:46:43AM *  1 point [-]

I am unable to make enough sense of what you say to try it. It is not written in a language I can read.

And that's not a criticism I have a problem with. Hell, if you actually tried something and it didn't work, and you gave me enough information to be able to tell what you did and what result you got instead, that would be excellent criticism, in my book.

Helpful criticism is helpful, and always welcomed, at least by me.

Comment author: Vladimir_Nesov 27 April 2009 12:06:34PM 0 points [-]

And that's not a criticism I have a problem with.

Why shouldn't you?

Comment author: pjeby 27 April 2009 05:43:18PM 2 points [-]

Why shouldn't you?

I don't understand. Why should I have a problem with Eliezer's criticism, or any considered criticism or honest opinion? It is only ignorant criticism and anti-applause lights that I have a problem with.

Comment author: Vladimir_Nesov 27 April 2009 05:55:15PM *  4 points [-]

Well, that's ambiguity in interpretation of "having a problem with something". I (mis)interpreted your statement to mean "this kind of criticism doesn't bother me", that is you are not going to change anything in yourself in response, which would be unhealthy, whereas you seem to have intended it to say "this kind of criticism doesn't offend me".

Comment author: hargup 25 December 2014 06:51:26AM 0 points [-]

So basically, are you saying Eliezer, gjm, and others are falling for the fallacy fallacy?

Comment author: Vladimir_Nesov 26 April 2009 06:24:38PM *  4 points [-]

It is generally considered the least-successful/popular book on NLP ever written, for precisely the same reason I'm recommending it to you: it's full of math, big words, and attempts at being precise.

When I read this, I get the same feeling as before, when you wrote about changing your ways in order to introduce your techniques to this forum. The feeling is that when you talk of rigor, you see it as a mere custom, something socially required, and quite amusing, really, since all that rigor can't be true, anyway. After all, it's only possible to make attempts at being precise, so who are you kidding. Plus, truth is irrelevant. And here we are, the LessWrong crowd, all for the image, none for the substance, bad for efficiency.

Comment author: pjeby 26 April 2009 06:43:03PM 3 points [-]

And here we are, the LessWrong crowd, all for the image, none for the substance, bad for efficiency.

I wouldn't say that of everybody on LessWrong, but there is certainly a vocal contingent of that stripe. That contingent unfortunately also suffers from the use of cognitive models that, to me, are as primitive as the medieval four-humors model.

So when they push my "ignorance and superstition" buttons in the same posts where they're demanding properly validated rituals and papers for things they could verify for themselves in ten minutes by simple self-experimentation, it's rather difficult to take them seriously as "rationalists". (Especially when they go on to condemn theists for suffering from the same delusions as they are, just externally directed.)

I totally don't mind engaging with people who want to learn something and are willing to actually look at experience, instead of just talking about it and telling themselves they already know what works or what is likely to work, without actually trying it. The other people, I can't do a damn thing for.

If your interest is in "science", I can't help you. I'm not a scientist, and I'm not trying to increase the body of knowledge of science. Science is a movement; I'm interested in individuals. And individual rationalists ought to be able to figure things out for themselves, without needing the stamp of authority.

I also have no interest in being an authority -- the only authority that counts in any field is your own results.

Comment author: Eliezer_Yudkowsky 26 April 2009 07:02:50PM 19 points [-]

they're demanding properly validated rituals and papers for things they could verify for themselves in ten minutes by simple self-experimentation

This is why I hope that the next P. J. Eby starts out by first reading the OBLW sequences, and only then begins his explorations into akrasia and willpower.

You cannot verify anything by self-experimentation to nearly the same strength as by "properly validated rituals and papers". The control group is not there as impressive ritual. It is there because self-experimentation is genuinely unreliable.

I agree with Seth Roberts that self-experimentation can provide a suggestive source of anecdotal evidence in advance of doing the studies. It can tell you which studies to do. But in this case it would appear that formal studies were done and failed to back up the claims previously supported by self-experimentation. This is very, very bad. And it is also very common - the gold standard shows that introspection is not systematically trustworthy.

Comment author: matt 03 May 2009 06:11:27AM *  4 points [-]

I'm a bit confused as to your goal, Eliezer.

Are you trying to find a fully general solution to the akrasia problem, applicable to any human currently alive… or do you want to know how you can overcome akrasia? The first is going to be a fair bit harder than the second, and you probably don't have time to do that and save the world.

If you shoot a little lower on this one and just try to find something that works for you I think your argument will change… quite a lot.

Comment author: pjeby 26 April 2009 08:02:25PM 0 points [-]

But in this case it would appear that formal studies were done and failed to back up the claims previously supported by self-experimentation

If you think that's the case, you didn't read the whole Wikipedia page on that, or the cite I gave to a 2001 paper that independently re-creates a portion of NLP's model of emotional physiology. I've seen more than one other peer-reviewed paper in the past that's recreated some portion of "NLP, Volume I", as in, a new experimental result that supports a portion of the NLP model.

Hell, hyperbolic discounting using the visual representation system was explained by NLP submodalities research two decades ago, for crying out loud. And the somatic marker hypothesis is at the very core of NLP. Affective asynchrony? See discussions of "incongruence" and "anchor collapsing" in NLP vI, which demonstrate and explain the existence of duality of affect.

IOW, none of the real research validation of NLP has the letters "N-L-P" on it.

You cannot verify anything by self-experimentation to nearly the same strength as by "properly validated rituals and papers". The control group is not there as impressive ritual. It is there because self-experimentation is genuinely unreliable.

Unreliable for what purpose? I would think that for any individual's purpose, self-experimentation is the ONLY standard that counts... it's of no value to me if a medicine is statistically proven to work 99% of the time, if it doesn't work for ME.

Comment author: Cyan 26 April 2009 08:23:14PM *  3 points [-]

Unreliable for getting true explanations. Self-experimentation is generally too poorly controlled to give unconfounded data about what really caused a result. (Also, typically sample size is too small to justify generalizability.)

Comment author: MrShaggy 28 April 2009 01:50:05AM 2 points [-]

Unreliable for what purpose? I would think that for any individual's purpose, self-experimentation is the ONLY standard that counts... it's of no value to me if a medicine is statistically proven to work 99% of the time, if it doesn't work for ME.

The way I'd put it for this stuff is that experiments help communicate why someone would try a technique; they help people distinguish signal from noise, because there are a ton of people out there saying "X works for me."

Comment author: Vladimir_Nesov 26 April 2009 08:27:49PM *  3 points [-]

You cannot verify anything by self-experimentation to nearly the same strength as by "properly validated rituals and papers". The control group is not there as impressive ritual. It is there because self-experimentation is genuinely unreliable.

Unreliable for what purpose? I would think that for any individual's purpose, self-experimentation is the ONLY standard that counts... it's of no value to me if a medicine is statistically proven to work 99% of the time, if it doesn't work for ME.

This sounds like being uninterested in the chances of winning a lottery, since the only thing that matters is whether the lottery will be won by ME, and it costs only a buck to try (perform a self-experiment).

Comment author: pjeby 26 April 2009 08:49:16PM *  8 points [-]

This sounds like being uninterested in the chances of winning a lottery, since the only thing that matters is whether the lottery will be won by ME, and it costs only a buck to try (perform a self-experiment).

And yet, this sort of thinking produces people who get better results in life, generally. Successful people know they benefit from learning to do one more useful thing than the other guy, so it doesn't matter if they try fifty things and 49 of them don't work, whether those fifty things are in the same book or different books, because the payoff of something that works is (generally speaking) forever.

Success in learning, IOW, is a black-swan strategy: mostly you lose, and occasionally you win big. But I don't see anybody arguing that black swan strategies are mathematically equivalent to playing the lottery.

IMO, the rational strategy is to try things that might work better, knowing that they might fail, yet trying to your utmost to take them seriously and make them work. Hell, I even read "Dianetics" once, or tried to. I got a third of the way through that huge tome before I concluded that it was just a giant hypnotic induction via boredom. (Things I read later about Scientology's use of the book seem to actually support this hypothesis.)

Comment author: Vladimir_Nesov 26 April 2009 09:02:11PM *  1 point [-]

This became infeasible with the invention of the printing press. There is too much stuff out there for any given person to learn. Or to ever see all the titles of the stuff that exists. Or the names of the fields for which it's written. There is too much science, and even more nonsense. You can't just say "read everything". It's physically impossible.

P.S. See this disclaimer; on second thought, I connotationally disagree with this comment.

Comment author: pjeby 26 April 2009 09:54:28PM 2 points [-]

There is too much stuff out there, for any given person to learn. Or to ever see all the titles of the stuff that exists. Or the names of the fields for which it's written. There is too much science, and even more nonsense. You can't just tell "read everything". It's physically impossible.

What happened to "Shut up and do the impossible"? ;-)

More seriously, what difference does it make? The winning attitude is not that you have to read everything, it's that if you find one useful thing every now and then that improves your status quo, you already win.

Also, when it comes to self-help, you're in luck -- the number of actually different methods that exist is fairly small, but they are infinitely repeated over and over again in different books, using different language.

My personal sorting tool of choice is looking for specificity of language: techniques that are described in as much sensory-oriented, "near" language as possible, with a minimum of abstraction. I also don't bother evaluating things that don't make claims that would offer an improvement over anything else I've tried, and I have a preference for reading authors who've offered insightful models and useful techniques in the past.

Lately, I've gotten over my snobbish tendency to avoid authors who write things I know or suspect aren't true (e.g. stupid quantum mechanics interpretations); I've realized that it just doesn't have as much to do with whether they will actually have something useful to say, as I used to think it did.

Comment author: Vladimir_Golovin 27 April 2009 09:17:10AM 2 points [-]

the number of actually different methods that exist is fairly small, but they are infinitely repeated over and over again in different books, using different language.

PJ, is there a survey / summary / list of these methods online? Could you please link, or, if there's no such survey, summarize the methods briefly?

Comment author: Eliezer_Yudkowsky 26 April 2009 10:02:14PM 1 point [-]

What happened to "Shut up and do the impossible"? ;-)

You keep using that phrase. I do not think it means what you think it does.

Comment author: Vladimir_Nesov 26 April 2009 10:19:40PM *  1 point [-]

Viva randomness! At least it's better than stupidity. And is about as effective as reversed stupidity. Which is not intelligence.

You should know what you need, and what's good for you, better than a random number generator does. And you should work on your field of study being better than a procedure for crafting another random option for such a random choice. I wonder how long it'll take to stumble on success if you use a hypothetical "buy a random popular book" order option on Amazon.

P.S. See this disclaimer; on second thought, I connotationally disagree with this comment.

Comment author: Vladimir_Nesov 27 April 2009 12:03:45AM 0 points [-]

Also, when it comes to self-help, you're in luck -- the number of actually different methods that exist is fairly small, but they are infinitely repeated over and over again in different books, using different language.

My personal sorting tool of choice is looking for specificity of language: techniques that are described in as much sensory-oriented, "near" language as possible, with a minimum of abstraction. I also don't bother evaluating things that don't make claims that would offer an improvement over anything else I've tried, and I have a preference for reading authors who've offered insightful models and useful techniques in the past.

Okay. Another take. Is this really true? How long would it take for a newcomer to walk through every available option? How much would it cost? What is the chance he should expect, before starting the whole endeavor, that any of the available options will help? For the last question, the lottery analogy fits perfectly, no "works only for ME" excuse.

Comment author: CronoDAS 26 April 2009 07:07:17PM *  9 points [-]

I totally don't mind engaging with people who want to learn something and are willing to actually look at experience, instead of just talking about it and telling themselves they already know what works or what is likely to work, without actually trying it. The other people, I can't do a damn thing for.

If your interest is in "science", I can't help you. I'm not a scientist, and I'm not trying to increase the body of knowledge of science. Science is a movement; I'm interested in individuals. And individual rationalists ought to be able to figure things out for themselves, without needing the stamp of authority.

I also have no interest in being an authority -- the only authority that counts in any field is your own results.

The plural of anecdote is not data. Many people will tell you how they were cured by faith healers or other quacks, and, indeed, they had problems that went away after being "treated" by the quack. Does that make the quacks effective or give credibility to their theories about the human body?

The same applies to methods of affecting the human brain. As a non-expert, from the outside I can't tell the difference between NLP, Freudian psychotherapy, and whatever hocus-pocus Scientology says helps people. All have elaborate theories to explain their alleged benefits, and all have had people who swear it works.

To quote Wikipedia:

Because of the absence of any firm empirical evidence supporting its sometimes extravagant claims, NLP has enjoyed little or no support from the scientific community. It continues to make no impact on mainstream academic psychology, and only limited impact on mainstream psychotherapy and counselling.[12] However, it has some influence among private psychotherapists, including hypnotherapists, to the extent that they claim to be trained in NLP and ‘use NLP’ in their work. It has also had an enormous influence in management training, life coaching, and the self-help industry[13].

Until I do see some acceptance among the academic community, I remain unconvinced that NLP is anything more than a self-reinforcing collection of hypotheses, speculation, and metaphors. It could very well be otherwise, but I can't know that it isn't!

Comment author: gjm 26 April 2009 11:54:22PM 4 points [-]

[...] they push my "ignorance and superstition" buttons [...] things they could verify for themselves in ten minutes by simple self-experimentation [...]

Few of your comments here seem to me to describe things that are obviously checkable in ten minutes by simple self-experimentation. (Even ignoring the severe unreliability of self-experimentation, since doubtless there are at least some instances in which self-experimentation can provide substantial evidence.) Perhaps they are so checkable with the help of extra information that you've declined to provide. Perhaps I've just not read the right comments. Perhaps I've read the right comments and forgotten them. Would you care to clarify?

Comment author: pjeby 27 April 2009 01:30:36AM *  1 point [-]

Would you care to clarify?

Mostly, I've offered questions that people could ask themselves in relation to specific procrastination scenarios, that would give them an insight into the process of how they're doing it. IIRC, two people have reported back with positive hits; one of the two also had a second scenario, for which my first question did not produce a result, but it's not clear yet what the answer to my second question was. (I gave both questions up front, along with the sequence to use them in, and criteria for determining whether an answer was "near" or "far", along with instructions to reject the "far" answers. One respondent gave a "far" answer, so I asked them to repeat.)

I've also linked to a video offering a simple motivational technique based on my model; a few people have posted positive comments here, and I've also gotten a number of private emails from users here via the feedback form on my site, expressing gratitude for its usefulness to them. The video is just about 10 minutes long.

In another comment, I described a simple NLP submodalities exercise that could be tried in a few minutes, albeit with the disclaimer that some people find it hard to consciously observe or manipulate submodalities directly. (The technique in my video is a bit more indirect, and designed to avoid conscious interference in the less-conscious aspects of the process.)

I've referenced various books on other techniques I've used, and I believe I even mentioned that Byron Katie's site at thework.org includes a free 20-page excerpt from Loving What Is that provides instructions for a testable technique that operates on the same fundamental basis as my models.

I'm really not sure what the heck else people want. Even if you claim, as Eliezer does, that he can't understand my writing, it's not like I haven't referenced plenty of other people's writing, and even my spoken language (in the video) as alternative options.

Comment author: ciphergoth 27 April 2009 08:12:33AM 6 points [-]

I also find your writing difficult. If you'll accept a recommendation, I think your readership here might get more from shorter comments in which more work has gone into each word.

Comment author: gjm 27 April 2009 10:17:21AM 1 point [-]

So, I watched the video (some time ago, when you posted about it) and gave it one trial. The technique wasn't effective for me on the task I tried it on. The particular failure mode was one you mentioned in the video, and if you are correct about the generality with which it makes the technique not work then I would expect the technique to be generally ineffective for the things I'd benefit from motivational help with.

Your suggestions about identifying the causes of procrastination: I haven't tried that yet, and it sounds interesting; I notice that when someone did try it and got results that didn't perfectly match your theory, your immediate response was not "oh, that's interesting; perhaps my theory needs some tweaking" but "I don't believe you". Can you see how this might make people skeptical?

Referencing books is only helpful in so far as (1) it's not necessary to read the whole of a lengthy book to extract the small piece of information you've been asked for, (2) the book is clearly credible, and (3) the book is actually available (e.g., in lots of libraries, or inexpensive, or online). To those who are skeptical about the whole self-help business, #2 is a pretty difficult criterion to meet.

Comment author: pjeby 27 April 2009 05:04:26PM 0 points [-]

If you are correct about the generality with which it makes the technique not work then I would expect the technique to be generally ineffective for the things I'd benefit from motivational help with.

Indeed. It is supposed to be a free sample, after all. The work I charge for is fixing those things that make it not work. The things that make motivation not work are much, much more diverse than the things that make it actually work.

I notice that when someone did try it and got results that didn't perfectly match your theory your immediate response was not "oh, that's interesting; perhaps my theory needs some tweaking" but "I don't believe you".

My response was, "you didn't follow directions", actually. Unless you're talking about the first part where the only information given was, "it didn't work". If you've ever done software tech support, you already know that "it didn't work" is not a well-formed answer. (Similarly, the later answer given was also not well-formed, by the criteria I laid out in advance.)

Failure to meet entry criterion for a technique does not constitute failure of the technique or the model: if you build a plane without an engine, and it doesn't take off, this does not represent a failure of aerodynamics. Indeed, aerodynamics predicts that failure mode, and so did I.

The response I got was not unexpected; it's common for people to have trouble at first, especially on things they don't want to look too closely at. I've had people spend up to 30 minutes in the "talk around the problem" failure mode before they could actually look at what they were thinking. The other most common failure mode is that somebody does see or hear something, but rejects it as nonsensical or irrelevant, then reports that they didn't get anything.

The third most common failure mode is lack of body awareness or physical suppression, but I know he doesn't have that as a general problem because his first response indicated awareness. His first response also indicated he is capable of perceiving responses, so that pretty much narrows it down to avoidance or assumption of irrelevance. If it's neither, then it might be relevant to a model update, especially if it's a repeatable result.

(At this point, however, he's going to have to repeat the asking of the second question, to test that, though, because these responses don't stick in long-term memory; in a sense, they are long-term memory.)

Comment author: gjm 27 April 2009 07:16:26PM 1 point [-]

Indeed. It is supposed to be a free sample, after all.

I think this (not the fact that it's a free sample, but the fact that apparently it's a feature, not a bug, if it doesn't work well for many people) makes it rather unuseful as a try-it-yourself demonstration of how good your models and techniques are.

My response was "you didn follow directions", actually. Unless you're talking about the first part where the only information given was, "it didn't work".

There was no such first part; even jimrandomh's initial response had more information than that in it. And after he gave more information your reply was still "I don't believe you" rather than "you didn't follow directions". Interested parties can check the thread for themselves.

Failure to meet entry criterion for a technique does not constitute failure of the technique.

No, to be sure. But once you hedge your description of your technique and what it's supposed to achieve with so many qualifications -- once you say, in so many words, that you expect it not to work when tried -- how can it possibly be reasonable for you to use it as an example of how you've supplied us with empirically testable evidence for what you say?

Saying "You can check my ideas by trying this technique -- but of course it's quite likely not to work" is just like saying "You can check my belief in God by praying to him for a miracle -- but of course he works in mysterious ways and often says no."

Comment author: pjeby 27 April 2009 07:52:22PM -1 points [-]

I think this (not the fact that it's a free sample, but the fact that apparently it's a feature, not a bug, if it doesn't work well for many people) makes it rather unuseful as a try-it-yourself demonstration of how good your models and techniques are.

The point of the exercise is that it's targeted to work for as many people as possible for a fairly narrow range of tasks, so as to give a sample of what it's like when it works.

Even chronic procrastinators can achieve success with the technique, as long as they don't use it on the thing they're procrastinating on -- it only works if you don't distract yourself with other thoughts, and if you're stressed about something, you're probably going to distract yourself with other thoughts.

Most people, however, don't seem to have any significant stressors about cleaning their desk. Also, it's not a difficult thing to visualize in its completed form.

Btw, just as a datapoint, what did you try it on, and what failure mode did you encounter? I am, ironically, MORE interested in failure reports than successes; the video continually gets rave reviews, but as much as I enjoy them, I can't learn anything new from another success report!

There was no such first part; even jimrandomh's initial response had more information than that in it. And after he gave more information your reply was still "I don't believe you" rather than "you didn't follow directions". Interested parties can check the thread for themselves.

I just rechecked myself; here are the relevant portions. Jim said:

When I think about it, I get no physical response whatsoever, and the only thoughts that come to mind are directly relevant details of the task. I'm completely unable to begin working on it.

I took this statement as a literal description of what happened, i.e., jim thought about "it" -- whatever "it" was -- got no physical response, and had thoughts about the details of the task. THEN (2nd step) he was unable to begin working on it.

"Unable to begin working on it" is the part I referred to as not well-formed; this does not contain any description of how he arrived at that conclusion. It is the equivalent of "it doesn't work" in tech support.

The unspecified "it" is also potentially relevant; I don't know if he refers there to the task itself, or one of the questions I said to ask about the task; and this is an important distinction. I've also noticed that some people can "think about their task" and not get a response because they are not thinking about actually starting on the task... and Jim's statements would be consistent with a sequence of thinking about the idea of the task, followed by preparing to actually perform the task... at which point an undescribed response is occurring, whereby he is then "unable to" perform the task.

I commented on the conflict between these two statements:

Those two statements are, AFAICT, incompatible

Meaning: as far as I can tell, those statements are not talking about the same thing. I.e., one is a referent to some sort of pre-task preparation unrelated to the problem, and the other is actually about beginning it.

In other words: all the information was in the first sentence, but the second one is where the problem actually is. So I then asked Jim to direct his attention to that part of his thought process, and get more specific:

How do you know you're completely unable to begin working on it? What stops you? What would happen if you DID begin working on it?

He then replied with two more not well-formed statements; instead of describing his thoughts or experiences, he replied with abstract, "far" explanations about the subject matter, instead of his direct response to the subject matter, i.e.:

Apathy. According to the attention-allocating part of my brain, it's of lower priority than games and blogs, even though my conscious mind disagrees.

and:

Knowing what I know now, I'd make some progress. A week ago, I would've stared at my to-do list for awhile, unable to decide which item to start with, until the phone rang or something else diverted my attention.

Neither of these utterances describes a concrete experience; they are verbalizations of precisely the kind I described in the "how to know if you're making shit up" comment beforehand. They are far, not near thinking, and my techniques only use far thinking to ask questions, and determine what questions to ask. The answers sought, however, are exclusively "near".

Thus, when someone replies with a "far" answer, I know that they have not actually answered my question or followed instructions - they are not using the part of their brain that will produce the desired result.

Notice, by the way, that at no time did I say I did not believe him. I took him quite literally at his word, to the extent that he gave me words that map to some sort of experience.

Comment author: gjm 27 April 2009 09:24:40PM 0 points [-]

I tried it on the same example you proposed: desk-clearing. My desk is a mess; I would quite like it to be less of a mess; clearing it is never a high enough priority to make it happen. But I don't react to the thought of a clear desk with the "Mmmmmm..." response that you say is necessary for the technique to work.

As for your discussion with Jim: you did not at any point tell him that he didn't do what you'd told him to, or say anything that implied that; you did say that you think his statements contradict one another (implication: at least one of them is false; implication: you do not believe him). And then when he claimed that what stopped him was apathy and down-prioritizing by "the attention-allocating part of my brain" you told him that that wasn't really an answer, and your justification for that was that his brain doesn't really work in the way he said (implication: what he said was false; aliter, you didn't believe him).

So although you didn't use the words "I don't believe him", you did tell him that what he said couldn't be correct.

Incidentally, I find your usage of the word "incompatible" as described here so bizarre that it's hard not to see it as a rationalization aimed at avoiding admitting that you told jimrandomh he'd contradicted himself when in fact all he'd done was to say two things that couldn't both be true if your model of his mind is correct. However, I'll take your word for it that you really meant what you say you meant, and suggest that when you're using a word in so nonstandard a way you might do well to say so at the time.

Comment author: Eliezer_Yudkowsky 27 April 2009 01:58:59AM 1 point [-]

Can you link to these things? Your comments? Here? There's an LW search box.

Comment author: pjeby 27 April 2009 02:28:38AM *  4 points [-]
Comment author: Eliezer_Yudkowsky 27 April 2009 06:33:32AM 4 points [-]

"How To Tell If You're Making Shit Up" seems useful. Do you see why this would seem useful to me while "NLP Submodalities" doesn't?

Comment author: pjeby 27 April 2009 05:19:19PM *  3 points [-]

Do you see why this would seem useful to me while "NLP Submodalities" doesn't?

For the same reason that yours and Robin's writing on biases is more useful than the source material, I imagine. That is, it's been predigested. It probably also doesn't hurt that I have to teach "how to tell if you're making shit up" to every single client of mine, so I have some practice at doing so! (Albeit mostly in real-time interaction.)

FYI, NLP volume I represents the more detailed "brain software" model from which that summary was derived, which I recommended to you because you said you couldn't follow my writing.

You can also see why I was excited when Robin started posting about near/far stuff on OB -- it fit very nicely into the work I was already doing, and into the NLP presupposition that "conscious verbal responses are to be treated as unsubstantiated rumor unless confirmed by unconscious nonverbal response" -- i.e., don't trust what somebody says about their behavior, because that's not the system that runs the behavior.

The Near/far distinction mainly added an evolutionary explanation that was not a part of NLP, and gave a better why for not trusting the verbal explanation. Near/far in a literal sense, as in "people respond differently based on distance in space/time/abstraction level of visualization", has been part of the NLP models for over 20 years now. But once again, the mainstream experiments are just now being done, presumably by people who've never heard of NLP, or who assume it's crackpottery.

Comment author: [deleted] 27 April 2009 09:05:13AM *  3 points [-]

deleted

Comment author: reg 27 April 2009 03:09:30PM 1 point [-]

"Roberts knew the experimental science that let him interpret what he was seeing, in terms of deep factors that actually did exist."

Are these the same kinds of deep factors that show that watching talking heads on TV in the morning will cure insomnia because "Anthropological research suggests that early humans had lots of face-to-face contact every morning"? - Roberts' solution for insomnia as described in the NYT: http://www.nytimes.com/2005/09/11/magazine/11FREAK.html

Comment author: HCE 28 April 2009 06:38:01AM *  0 points [-]

watching life-sized talking heads in the morning is roberts' way of lifting his spirits, not his cure for insomnia.

Comment author: reg 28 April 2009 10:12:32AM 3 points [-]

ok, but it's still merely a 'just-so' story with no worthwhile evidence behind it.

Comment author: NancyLebovitz 16 May 2012 02:54:27PM 1 point [-]

So far as the Shangri-La Diet is concerned, a boring explanation for the weird pattern of strong success, partial success, and utter failure is that biology is complicated.

There's a little about the biological basis for hunger and satiety in Gina Kolata's book. IIRC, there was only one chapter about hormones, and it was written for a popular audience. I skimmed it anyway, and don't remember the details.

I doubt Seth's evolutionary explanation, though I wouldn't mind a little research on whether success with his diet is correlated with food neophilia and/or food neophobia.

Comment author: adamzerner 22 December 2013 02:07:44AM 0 points [-]

I think that there is a certain level of abstraction for which advice is most effective. The level of abstraction most people use is obviously way too high, but getting into experimental results and math seems to be too low a level of abstraction. The chain of logical steps that link experiments/math to advice is long, and I think below the level of consciousness.

Comment author: lukeprog 04 January 2013 07:58:54AM 0 points [-]

There is nothing so practical as a good theory.

Kurt Lewin, speaking about psychological theories in particular