Once upon a time, Seth Roberts took a European vacation and found that he started losing weight while drinking unfamiliar-tasting caloric fruit juices.

    Now suppose Roberts had not known, and never did know, anything about metabolic set points or flavor-calorie associations—all this high-falutin' scientific experimental research that had been done on rats and occasionally humans.

    He would have posted to his blog, "Gosh, everyone!  You should try these amazing fruit juices that are making me lose weight!"  And that would have been the end of it.  Some people would have tried it, it would have worked temporarily for some of them (until the flavor-calorie association kicked in) and there never would have been a Shangri-La Diet per se.

    The existing Shangri-La Diet is visibly incomplete—for some people, like me, it doesn't seem to work, and there is no apparent reason for this or any logic permitting it.  But the reason why as many people have benefited as they have—the reason why there was more than just one more blog post describing a trick that seemed to work for one person and didn't work for anyone else—is that Roberts knew the experimental science that let him interpret what he was seeing, in terms of deep factors that actually did exist.

    One of the pieces of advice on OB/LW that was frequently cited as the most important thing learned, was the idea of "the bottom line"—that once a conclusion is written in your mind, it is already true or already false, already wise or already stupid, and no amount of later argument can change that except by changing the conclusion.  And this ties directly into another oft-cited most important thing, which is the idea of "engines of cognition", minds as mapping engines that require evidence as fuel.

    Suppose I had merely written one more blog post that said, "You know, you really should be more open to changing your mind—it's pretty important—and oh yes, you should pay attention to the evidence too."  This would not have been as useful.  Not just because it would have been less persuasive, but because the actual operations would have been much less clear without the explicit theory backing them up.  What constitutes evidence, for example?  Is it anything that seems like a forceful argument?  Having an explicit probability theory and an explicit causal account of what makes reasoning effective makes a large difference in the forcefulness and implementational details of the old advice to "Keep an open mind and pay attention to the evidence."

    It is also important to realize that causal theories are much more likely to be true when they are picked up from a science textbook than when invented on the fly—it is very easy to invent cognitive structures that look like causal theories but are not even anticipation-controlling, let alone true.

    This is the signature style I want to convey from all those posts that entangled cognitive science experiments and probability theory and epistemology with the practical advice—that practical advice actually becomes practically more powerful if you go out and read up on cognitive science experiments, or probability theory, or even materialist epistemology, and realize what you're seeing.  This is the brand that can distinguish LW from ten thousand other blogs purporting to offer advice.

    I could tell you, "You know, how much you're satisfied with your food probably depends more on the quality of the food than on how much of it you eat."  And you would read it and forget about it, and the impulse to finish off a whole plate would still feel just as strong.  But if I tell you about scope insensitivity, and duration neglect and the Peak/End rule, you are suddenly aware in a very concrete way, looking at your plate, that you will form almost exactly the same retrospective memory whether your portion size is large or small; you now possess a deep theory about the rules governing your memory, and you know that this is what the rules say.  (You also know to save the dessert for last.)
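    A toy calculation can make that claim concrete. The "average of peak and end" rule and the made-up ratings below are my own illustrative sketch of duration neglect and the Peak/End rule, not figures from the post or the underlying studies:

```python
# Hypothetical moment-by-moment enjoyment ratings for a meal (made-up numbers).
# Under a rough Peak/End rule, remembered enjoyment is driven by the best
# moment and the final moment, with the duration in between largely neglected.

def remembered(ratings):
    """Crude Peak/End approximation of the retrospective memory of an experience."""
    return (max(ratings) + ratings[-1]) / 2

small_portion = [7, 8, 9, 6]                # a few good bites, decent finish
large_portion = [7, 8, 9, 6, 5, 4, 4, 6]    # same peak, extra so-so bites, same finish

print(remembered(small_portion))  # 7.5
print(remembered(large_portion))  # 7.5  (same memory, twice the food)
```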

    I want to hear how I can overcome akrasia—how I can have more willpower, or get more done with less mental pain.  But there are ten thousand people purporting to give advice on this, and for the most part, it is on the level of that alternate Seth Roberts who just tells people about the amazing effects of drinking fruit juice.  Or actually, somewhat worse than that—it's people trying to describe internal mental levers that they pulled, for which there are no standard words, and which they do not actually know how to point to.  See also the illusion of transparency, inferential distance, and double illusion of transparency.  (Notice how "You overestimate how much you're explaining and your listeners overestimate how much they're hearing" becomes much more forceful as advice, after I back it up with a cognitive science experiment and some evolutionary psychology?)

    I think that the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms—thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.
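    As one concrete example of the "mathematically clear ideas" on that list: hyperbolic discounting predicts preference reversals with a one-line formula. The sketch below is my own illustration, with an assumed discount parameter and made-up reward amounts, not a claim about any particular experiment:

```python
# Hyperbolic discounting: the present value of a delayed reward is
# V = amount / (1 + k * delay). Unlike exponential discounting, this makes
# relative preferences depend on how far away both options are, so a person
# can plan to wait for the larger-later reward and then defect at the last
# moment. The parameter k and the dollar amounts are illustrative assumptions.

def hyperbolic_value(amount: float, delay_days: float, k: float = 0.05) -> float:
    """Subjective present value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

small_soon = (50, 0)     # $50 immediately (once the choice point arrives)
large_late = (100, 30)   # $100 thirty days after that same choice point

for lead_time in (60, 0):  # deciding 60 days in advance vs. at the last moment
    v_small = hyperbolic_value(small_soon[0], small_soon[1] + lead_time)
    v_large = hyperbolic_value(large_late[0], large_late[1] + lead_time)
    choice = "wait for the $100" if v_large > v_small else "grab the $50"
    print(f"deciding {lead_time:2d} days ahead: {choice} "
          f"({v_small:.1f} vs {v_large:.1f})")
```

    Under exponential discounting the ranking never flips; the reversal is specific to hyperbolic-style curves, which is one reason the concept keeps showing up in discussions of akrasia.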

    Note the grade of increasing difficulty in citing:

    • Concrete experimental results (for which one need merely consult a paper, hopefully one that reported p < 0.01 because p < 0.05 may fail to replicate; see the sketch after this list)
    • Causal accounts that are actually true (which may be most reliably obtained by looking for the theories that are used by a majority within a given science)
    • Math validly interpreted (on which I have trouble offering useful advice because so much of my own math talent is intuition that kicks in before I get a chance to deliberate)
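    To make the replication point in the first item concrete, here is a minimal simulation sketch of my own. The prior on true effects, the effect size, and the sample size are all illustrative assumptions, not figures from any study:

```python
# Why a result at p < 0.05 fails to replicate more often than one at p < 0.01:
# if only a minority of the hypotheses people test are real, the looser
# threshold admits proportionally more false positives, and those miss when
# the study is rerun. All parameters below are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect, p_real, trials = 50, 0.5, 0.10, 20_000  # per-group n, effect in SDs, share of real effects

def study(true_effect):
    """One two-group study; returns the two-sided t-test p-value."""
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    return stats.ttest_ind(treated, control).pvalue

is_real = rng.random(trials) < p_real
original_p = np.array([study(effect if real else 0.0) for real in is_real])

for alpha in (0.05, 0.01):
    hits = np.flatnonzero(original_p < alpha)
    # Rerun each "significant" study once; count a replication as p < 0.05.
    replicated = np.mean([study(effect if is_real[i] else 0.0) < 0.05 for i in hits])
    print(f"originals significant at p < {alpha}: {replicated:.0%} replicate")
```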

    If you don't know who to trust, or you don't trust yourself, you should concentrate on experimental results to start with, move on to thinking in terms of causal theories that are widely used within a science, and dip your toes into math and epistemology with extreme caution.

    But practical advice really, really does become a lot more powerful when it's backed up by concrete experimental results, causal accounts that are actually true, and math validly interpreted.

    114 comments

    The thing is, it can take a long time until the deep theory supporting a given piece of practical advice is discovered and understood. Moving forward through trial and error can give faster and equally effective results.

    If you look at human history you will find several examples, like the making of steel, where practical procedures were discovered through massive experimentation centuries before the theoretical basis to understand them existed.

    6MrShaggy15y
    This comment is, I think, an essential counterbalance to the post's valid points. To expand a little, the book Good Calories, Bad Calories by Gary Taubes argues that bad nutritional recommendations were adopted by leading medical and then governmental associations, partly justified by the above advice (we need recommendations to help people now, can't wait for full testing). So someone could refer to this as an example of why the comment above is dangerous in areas that are harder to test than the efficacy of steel production (which I presume they knew worked better than other procedures, whereas some nutritional effects have long-term consequences that aren't clear, or it's not clear which component of the recommendation is affecting what). However, Taubes also shows that this was used to justify overlooking flaws in the evidence, and he points to a group heuristic bias (if that's the right term) of information cascades. There are other biases and failures of rationality (how certain statistical evidence was interpreted) in the story as well. So, all this to say: while trial and error can give faster and equally effective results, the less clear the measurement of the results is, the more care is required in interpreting them. When stated, it sounds obvious and I almost feel dumb for saying it, yet it's one of those rules honored more in the breach, as they say. In the field of nutrition, you'll have headlines that say "Meat causes cancer" based on a study that points to a small statistical correlation between two diets which have very many differences other than the type and amount of meat, and itself concludes that more studies are called for to examine possible links between meat and cancer but not other possible causes that are just as much pointed to by the study.
    9matt15y
    The harm didn't come from "leading medical and then governmental associations" adopting recommendations before they were proven, it came from them holding to those recommendations when the evidence had turned.
    1magfrump14y
    I probably would have voted this comment up had it been formatted more nicely. A lot of your point was lost on me because of the single large paragraph.
    0roland15y
    In my comment I wasn't thinking particularly about nutrition. Regarding bad nutritional recommendations (and health recommendations in general), they may also be the consequence of studies. The thing is, when will we ever be done with the "full testing"? Science is constantly improving, and in the future we will probably be horrified by some of the things we do now that will later be proven wrong. The best thing we can do is to be careful and prepared to update swiftly on new evidence.

    It seems to me that many people don't realize that math results have to be validly interpreted in order to be compelling. LOTS of bad thinking by smart people tends to involve sloppiness in the interpretation of the math. Aumann was prone to this problem and so are people thinking about his agreement theorem.

    This may be pointing at a bias that I don't have a name for-- the belief that the pathway between a possible cause-effect pair can be neglected.

    It's believing that all you need is the right laws, without having to pay attention to how they're enforced. It's believing that if you are the right sort of person, your life will automatically work well. It's believing that more education will lead to a more prosperous society without having ways for people to apply what they know.

    "Roberts knew the experimental science that let him interpret what he was seeing, in terms of deep factors that actually did exist."

    Are these the same kinds of deep factors that show that watching talking heads on TV in the morning will cure insomnia because "Anthropological research suggests that early humans had lots of face-to-face contact every morning"? - Roberts' solution for insomnia as described in the NYT: http://www.nytimes.com/2005/09/11/magazine/11FREAK.html

    0HCE15y
    watching life-sized talking heads in the morning is roberts' way of lifting his spirits, not his cure for insomnia.
    4reg15y
    ok, but it's still merely a 'just-so' story with no worthwhile evidence behind it.

    So far as the Shangri-La Diet is concerned, a boring explanation for the weird pattern of strong success, partial success, and utter failure is that biology is complicated.

    There's a little about the biological basis for hunger and satiety in Gina Kolata's Rethinking Thin: The New Science of Weight Loss---and the Myths and Realities of Dieting. IIRC, there was only one chapter about hormones, and it was written for a popular audience. I skimmed it anyway, and don't remember the details.

    I doubt Seth's evolutionary explanation, though I wouldn't mind a litt...

    http://sethroberts.net/science/ is totally unconvincing. The main promoter of the diet doesn't seem to have any decent evidence that it works.

    Lacking evidence, it seems like another fad diet, whose most obvious purpose is to sell diet books by telling people what they desperately want to hear - that they can diet and lose weight - while still eating whatever they like.

    To me, it looks like junk science that distracts people from advice that might actually help them.

    5badger15y
    The graph of Roberts's weight compared to fructose water intake on p. 73 of "What makes food fattening?" is very persuasive in my mind. I don't think there is any evidence that it is effective in the population at large, but I think it is clear cut that it worked for Roberts. I don't think the cynical explanation gets very far. The details of the diet are freely available. There is only a single, cheap, slim book that Roberts published so that someone could learn about the diet in a format other than his website. Roberts could easily be mistaken, but I think his tone has consistently been "here is a little-known, easy technique that was highly effective for me; I have a theory why it could work for you too". It's hard to make money by telling someone to take three tablespoons of extra-light olive oil a day in addition to whatever other diet they are following.
    2timtyler15y
    One rat is just not statistically significant evidence - especially not when the rat is also the salesman. I don't know whether Roberts is motivated by wealth, fame, or whatever - nor do I care very much.
    4Luke_A_Somers12y
    Many tests on the same rat can be statistically significant! Do X, Y changes in the rat. Undo it, Y changes back. Repeat until the connection is statistically certain... We just have no particular reason to expect that it'll generalize well to others. This really stands out to me as a physicist, because we do things like one-rat tests all the time. Well, usually we get a few other 'rats', but we rely heavily on the notion that identically prepared matter is... identical. Biology, of course, doesn't allow that shortcut. Clinicians sometimes have a cohort of 1 for rare diseases... but of course that's simply the best they can do under the circumstances.
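    For readers who want to see the arithmetic behind "repeat until the connection is statistically certain": here is a rough sketch of my own of an on/off (ABAB-style) self-experiment analyzed as a sign test. The switch counts are illustrative, not taken from Roberts's data:

```python
# Treat each on/off switch of the intervention as one observation. Under the
# null hypothesis that the measured change after a switch is equally likely
# to go either way, k switches that all move in the predicted direction have
# a one-sided sign-test p-value of 0.5**k.
from math import comb

def sign_test_p(hits: int, switches: int) -> float:
    """P(at least `hits` predicted-direction changes out of `switches`) under a fair coin."""
    return sum(comb(switches, k) for k in range(hits, switches + 1)) / 2 ** switches

for k in (4, 6, 8, 10):
    print(f"{k} consistent reversals in one subject: p = {sign_test_p(k, k):.4f}")
# Ten consistent reversals already give p < 0.001, strong evidence about this
# one subject, which is exactly the caveat: it says nothing about generalizing.
```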
    2timtyler12y
    True - but it won't be too convincing if you're self-experimenting with your own diet. Science is based on confirmations of experiments by other scientists.
    2Luke_A_Somers12y
    The rat being the salesman is the more serious issue there, yes.
    3prase15y
    I agree that the theory is unconvincing. Roberts seems to argue that organisms have a brain-regulated mechanism which forces them to eat more if food is more easily available. Such behaviour could be beneficial because during famines the supplies would be depleted later, but the explanation smells of group selection - I suppose that especially during famines the individual who eats as much as possible and stores it as fat will have a great advantage over more modest members of his group, not to speak of other species. Am I missing something?
    5timtyler15y
    Pop evo-psych stories are a marketing strategy for diets, not a real reason to follow one. Look at the paleo diet - which apparently promotes the ancestral state of malnourishment and dehydration, on the basis of an evo-psych story. Diets are best evaluated by testing them, not by telling memorable stories about their origins.
    0prase15y
    Why evo-psych? Psychology has nothing to do with that. Diets are, of course, evaluated by testing, but Roberts goes further and offers an explanation of his diet, and whether this explanation is consistent from an evolutionary perspective is a relevant question.
    2timtyler15y
    Or, in my view, not as far, by promoting an almost totally-untested diet.
    3pjeby15y
    Yes - the cost of gathering the food. Roberts's hypothesis is that if food is not plentiful, it's counterproductive to be so hungry that you burn a lot of calories looking for more food, versus sitting tight and drawing on your fat stores. Conversely, if food is plentiful, you'd be an idiot not to go get as much as you can handle.

    What if you trust the author? In that case, perhaps it's a more efficient use of your time to have the author "just tell you what to do".

    Derek Sivers thinks so - https://sivers.org/2do.

    I think that there is a certain level of abstraction for which advice is most effective. The level of abstraction most people use is obviously way too high, but getting into experimental results and math seems to be too low a level of abstraction. The chain of logical steps that link experiments/math to advice is long, and I think below the level of consciousness.

    [This comment is no longer endorsed by its author]

    There is nothing so practical as a good theory.

    Kurt Lewin, speaking about psychological theories in particular

    I think that the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.

    Actually, speaking as somebody who's done this, ...

    9Eliezer Yudkowsky15y
    This post was, to some extent, directed particularly at you. It would seem that you haven't taken my advice... I wish I knew of some good experimental results to back it up, as this would render it less ignorable. What you're talking about above is not a concrete experimental result. Neither is it a standard causal theory, nor is it a causal theory that strikes me as particularly likely to be true in the absence of experimental validation. Nor is it valid math validly interpreted, or logic that seems necessarily true across lawful possible worlds. I don't care if it works for you and for other people you know; that doesn't show anything about the truth of the model; there's this thing called a placebo effect. The advice fails to meet the standard we're accustomed to, and that's why we're ignoring it. It is just one more theory on the Internet at this point, and one more set of orders delivered in a confident tone but not explained well enough to interpret at all, really.
    -1roland15y
    I'm relieved to read this Eliezer, because I thought it was just me who perceived pjeby's advice as misguided.
    0gjm15y
    I've been whining at him for a while, though my complaint isn't so much that his advice is misguided, as that he keeps offering pronouncements about how the mind works and how to make it work better, but evidence that his model and methods are sound seems sorely lacking (here, at least).

    he keeps offering pronouncements about how the mind works and how to make it work better

    ...much of which has come attached with things that are actually possible to investigate and test on your own, and a few people have actually posted comments describing their results, positive or negative. I've even pointed to bits of research that support various aspects of my models.

    But if you're allergic to self-experimentation, have a strong aversion to considering the possibility that your actions aren't as rational as you'd like to think, or just don't want to stop and pay attention to what goes on in your head, non-verbally... then you really won't have anything useful to say about the validity or lack thereof of the model.

    I think it's very interesting that so far, nobody has opposed anything I've said on the grounds that they tested it, and it didn't work.

    What they've actually been saying is, they don't think it's right, or they don't think it will work, or that NLP has been invalidated, or ANYTHING at all other than: "I tried thus-and-such using so-and-so procedure, and it appears that my results falsify this-or-that portion of the model you are proposing."

    In a community of sel...

    4Eliezer Yudkowsky15y
    I am unable to make enough sense of what you say to try it. It is not written in a language I can read.
    3pjeby15y
    And that's not a criticism I have a problem with. Hell, if you actually tried something and it didn't work, and you gave me enough information to be able to tell what you did and what result you got instead, that would be excellent criticism, in my book. Helpful criticism is helpful, and always welcomed, at least by me.
    -1Vladimir_Nesov15y
    Why shouldn't you?
    5pjeby15y
    I don't understand. Why should I have a problem with Eliezer's criticism, or any considered criticism or honest opinion? It is only ignorant criticism and anti-applause lights that I have a problem with.
    4Vladimir_Nesov15y
    Well, that's ambiguity in interpretation of "having a problem with something". I (mis)interpreted your statement to mean "this kind of criticism doesn't bother me", that is you are not going to change anything in yourself in response, which would be unhealthy, whereas you seem to have intended it to say "this kind of criticism doesn't offend me".
    3Torben15y
    I'm allergic to self-experimentation. I find that I'm not a very good judge of my own reactions. Furthermore, self-experimentation is probably the worst way to go about setting up a true model of the world.
    0hargup9y
    So basically, are you saying Eliezer, gjm and others are falling for the fallacy fallacy?
    -2pjeby15y
    Have you read the book? If not, I respectfully suggest you have not the slightest clue what you're talking about.
    4Vladimir_Nesov15y
    This kind of argument is a winner in a war of attrition. It is a true game stopper, better than the responses of ever increasing length. It's only fair that you have to argue the opponent into getting the book first. As a quick preliminary check, I looked it up on Wikipedia, and the following characterization doesn't inspire: -- NLP and science on Wikipedia
    4Eliezer Yudkowsky15y
    Online source? I've read The Gentle Art of Verbal Self-Defense on matching modalities and it did not much impress me; I followed Nesov's link and it says that NLP is currently in a state of having tried and failed to present evidence. I'm not likely to buy another book at that point, but could perhaps be convinced to read an online source which presents the result of an experiment.
    2pjeby15y
    Argh. You edited after I started replying. Here's an online source that presents the result of an experiment, from the "NLP and Science" page on Wikipedia: By the way, as far as I can tell, the entire "NLP and science" page on Wikipedia is devoted to discussion of claims made in books other than NLP volume I, or at any rate claims that are not central to the rep-systems and strategies model presented in volume I. The major popular confusion about NLP is confusing techniques with the modeling method. Volume I is about modeling strategies: understanding what people do in their heads and bodies as a way of communicating those behaviors to other people. This is only tangentially related to therapeutic or persuasive applications of the models. So, the idea of predicate matching is an application of NLP; not NLP itself. I've never read the Gentle Art of Verbal Self-Defense, so I've got no idea what it says or whether it's sensible, any more than I could say whether an arbitrary "science" book is useful or helpful. FWIW, Worldcat says there's a copy at a library roughly 43 miles from SIAI HQ.
    4Torben15y
    This seems... so classically crackpot. I admit to initial skepticism towards NLP, but your posts have done nothing to alleviate that and most everything to confirm it. Are you saying that the best book (and thus the model) is 30 years old and the best experiments are 20 years old? How about the experiments that went into proposing the model? To paraphrase someone, how was this model carved out of existence? Which information led to its identification contrary to the thousands of crackpot 'theories' of the mind? And what is your obsession with self-experimentation? That sounds like Hare Krishna. You're not doing well to distinguish NLP from the run-of-the-mill internet woo.
    9pjeby15y
    No, the best book I know of, about the core model of NLP: that everything we call "thinking" consists of manipulating sensory information, in one form or another, and that cognitive algorithms consist of transforming, combining, and comparing information across different sensory systems. 30 years ago, that was a revolutionary idea; now, it's not actually that far off the beaten track, in that there's recent mainstream support for many of its ideas. (NLP had near/far distinctions 20 years ago, for example, and the critical role of physical sensations in mental recognition of emotions.) Bandler was editing books on therapy, listening to recordings of some very successful therapists, and noticed some interesting commonalities in their language. He talked to a linguistics professor at his college, who noticed it too. Building on Bateson and Korzybski, they put together a linguistic model of information processing, to show how surface language structure reflects deep structure -- i.e., what something says about how you're likely thinking, grounded in what the therapists were doing to identify broken internal models in their clients. In other words, they noticed that the successful therapists were noticing certain patterns of things people said, and then asking questions that forced the clients to reconsider their mental model of a situation. Now, if this sounds familiar, it's because REBT and CBT are based on the exact same thing, just without - AFAIK - as precise a model as the linguistic one developed by B&G. And AFAIK, B&G described it first. In my original version of this post, I went on to describe how they got to other models -- that also now have experimental support -- but it got bloody long. Short version: they got microexpressions first too, AFAIK, although they didn't claim them to be universal. NLP practice drills focus on recognizing what the person in front of you is doing, not what everyone in the world might do. That it produces useful results
    0[anonymous]15y
    Online source?
    1pjeby15y
    The link I gave to Amazon. If you mean a free online version, I don't know of any. The Structure of Magic, Volume I is probably easier to find as a torrent or something, but it deals mostly with the mapping between linguistic structure and inner models. It predates NLP vI, and was the basis for the method by which they discovered the rep system and strategies model that was begun in NLP vI. It has been literally decades since I read it, and I don't own a copy, so offhand I don't know how illustrative it would be by comparison.
    6Vladimir_Nesov15y
    When I read this, I get the same feeling as before, when you wrote about changing your ways in order to introduce your techniques to this forum. The feeling is that when you talk of rigor, you see it as a mere custom, something socially required, and quite amusing, really, since all that rigor can't be true, anyway. After all, it's only possible to make attempts at being precise, so who are you kidding. Plus, truth is irrelevant. And here we are, the LessWrong crowd, all for the image, none for the substance, bad for efficiency.
    7pjeby15y
    I wouldn't say that of everybody on LessWrong, but there is certainly a vocal contingent of that stripe. That contingent unfortunately also suffers from the use of cognitive models that, to me, are as primitive as the medieval four-humors model. So when they push my "ignorance and superstition" buttons in the same posts where they're demanding properly validated rituals and papers for things they could verify for themselves in ten minutes by simple self-experimentation, it's rather difficult to take them seriously as "rationalists". (Especially when they go on to condemn theists for suffering from the same delusions as they are, just externally directed.) I totally don't mind engaging with people who want to learn something and are willing to actually look at experience, instead of just talking about it and telling themselves they already know what works or what is likely to work, without actually trying it. The other people, I can't do a damn thing for. If your interest is in "science", I can't help you. I'm not a scientist, and I'm not trying to increase the body of knowledge of science. Science is a movement; I'm interested in individuals. And individual rationalists ought to be able to figure things out for themselves, without needing the stamp of authority. I also have no interest in being an authority -- the only authority that counts in any field is your own results.

    they're demanding properly validated rituals and papers for things they could verify for themselves in ten minutes by simple self-experimentation

    This is why I hope that the next P. J. Eby starts out by first reading the OBLW sequences, and only then begins his explorations into akrasia and willpower.

    You cannot verify anything by self-experimentation to nearly the same strength as by "properly validated rituals and papers". The control group is not there as impressive ritual. It is there because self-experimentation is genuinely unreliable.

    I agree with Seth Roberts that self-experimentation can provide a suggestive source of anecdotal evidence in advance of doing the studies. It can tell you which studies to do. But in this case it would appear that formal studies were done and failed to back up the claims previously supported by self-experimentation. This is very, very bad. And it is also very common - the gold standard shows that introspection is not systematically trustworthy.

    5matt15y
    I'm a bit confused as to your goal, Eliezer. Are you trying to find a fully general solution to the akrasia problem, applicable to any human currently alive… or do you want to know how you can overcome akrasia? The first is going to be a fair bit harder than the second, and you probably don't have time to do that and save the world. If you shoot a little lower on this one and just try to find something that works for you I think your argument will change… quite a lot.
    2pjeby15y
    If you think that's the case, you didn't read the whole Wikipedia page on that, or the cite I gave to a 2001 paper that independently re-creates a portion of NLP's model of emotional physiology. I've seen more than one other peer-reviewed paper in the past that's recreated some portion of "NLP, Volume I", as in, a new experimental result that supports a portion of the NLP model. Hell, hyperbolic discounting using the visual representation system was explained by NLP submodalities research two decades ago, for crying out loud. And the somatic marker hypothesis is at the very core of NLP. Affective asynchrony? See discussions of "incongruence" and "anchor collapsing" in NLP vI, which demonstrate and explain the existence of duality of affect. IOW, none of the real research validation of NLP has the letters "N-L-P" on it. Unreliable for what purpose? I would think that for any individual's purpose, self-experimentation is the ONLY standard that counts... it's of no value to me if a medicine is statistically proven to work 99% of the time, if it doesn't work for ME.
    4Vladimir_Nesov15y
    This sounds like being uninterested in the chances of winning a lottery, since the only thing that matters is whether the lottery will be won by ME, and it costs only a buck to try (perform a self-experiment).

    This sounds like being uninterested in the chances of winning a lottery, since the only thing that matters is whether the lottery will be won by ME, and it costs only a buck to try (perform a self-experiment).

    And yet, this sort of thinking produces people who get better results in life, generally. Successful people know they benefit from learning to do one more useful thing than the other guy, so it doesn't matter if they try fifty things and 49 of them don't work, whether those fifty things are in the same book or different books, because the payoff of something that works is (generally speaking) forever.

    Success in learning, IOW, is a black-swan strategy: mostly you lose, and occasionally you win big. But I don't see anybody arguing that black swan strategies are mathematically equivalent to playing the lottery.

    IMO, the rational strategy is to try things that might work better, knowing that they might fail, yet trying to your utmost to take them seriously and make them work. Hell, I even read "Dianetics" once, or tried to. I got a third of the way through that huge tome before I concluded that it was just a giant hypnotic induction via boredom. (Things I read later about Scientology's use of the book seem to actually support this hypothesis.)

    1Vladimir_Nesov15y
    This became infeasible with the invention of the printing press. There is too much stuff out there for any given person to learn. Or to ever see all the titles of the stuff that exists. Or the names of the fields for which it's written. There is too much science, and even more nonsense. You can't just say "read everything". It's physically impossible. P.S. See this disclaimer: on second thought, I connotationally disagree with this comment.
    4pjeby15y
    What happened to "Shut up and do the impossible"? ;-) More seriously, what difference does it make? The winning attitude is not that you have to read everything, it's that if you find one useful thing every now and then that improves your status quo, you already win. Also, when it comes to self-help, you're in luck -- the number of actually different methods that exist is fairly small, but they are infinitely repeated over and over again in different books, using different language. My personal sorting tool of choice is looking for specificity of language: techniques that are described in as much sensory-oriented, "near" language as possible, with a minimum of abstraction. I also don't bother evaluating things that don't make claims that would offer an improvement over anything else I've tried, and I have a preference for reading authors who've offered insightful models and useful techniques in the past. Lately, I've gotten over my snobbish tendency to avoid authors who write things I know or suspect aren't true (e.g. stupid quantum mechanics interpretations); I've realized that it just doesn't have as much to do with whether they will actually have something useful to say, as I used to think it did.
    2Vladimir_Golovin15y
    PJ, is there a survey / summary / list of these methods online? Could you please link, or, if there's no such survey, summarize the methods briefly?
    1pjeby15y
    90% of everything is hypnosis, NLP, or the law of attraction -- and in a very significant way, they are all the same thing "under the hood", at different degrees of modeling detail and with different preferred operating channels. NLP has the most precise models, and the greatest emphasis on well-formedness criteria and testing. (At least, the founders had those emphases; "pop NLP" often seems to not even know what well-formedness is.) Hypnosis, OTOH, is just a trancy-form of NLP, LoA, or both. Pretty much everything in the self-help field can be viewed as a special case, application, or "tips and hints" variation of one of those three things, but using individual authors' terminology, metaphors, and case histories. The possible failure modes are pretty much the same across all of them, too. There is, by the way, one author who writes about non-mystical applications of the so-called "law of attraction": Robert Fritz. He's the only person I'm aware of who's brought an almost-NLP level of rigor and precision to that concept, and with absolutely no mystical connotations or bad science whatsoever. He doesn't call it LoA; he refers to it as the "creative process", and shows how it's the process that artists, musicians, and even inventors and entrepreneurs normally use to create results. (i.e., a strictly mental+physical process that engages the brain's planning systems, much like what I showed in my video, but on a larger scale.) His books also contain the largest collection of documented failure modes (biases and broken beliefs) that interfere with this process, based on his workshops and client work. I've found it to be invaluable in my own practice. (The biggest shortcoming of Fritz's work compared to some more mystical LoA works, however, is that he doesn't address general emotional state or "abundance mindset" issues, at least not directly.)
    0Vladimir_Golovin15y
    BTW, I think that the Law of Attraction is basically a manifestation of successful self-priming (plus the other self-conditioning phenomenon Anna Salamon posted about - can't find the post). And yes, the pull motivation trick seems to fit here perfectly.
    2Vladimir_Nesov15y
    Viva randomness! At least it's better than stupidity. And is about as effective as reversed stupidity. Which is not intelligence. You should know better what you need, what's good for you, than a random number generator. And you should work on your field of study being better than a procedure for crafting another random option for such a random choice. I wonder how long it'll take to stumble on success if you use a hypothetical "buy a random popular book" order option on Amazon. P.S. See this disclaimer, on second thought I connotationally disagree with this comment.
    4Nick_Tarleton15y
    Strawman?
    2Vladimir_Nesov15y
    Guilty. It doesn't particularly apply in this case, since the argument is that randomness is the best available option for now, because intelligence doesn't work yet for this case. I'm overidentifying with the general negative move I've made on pjeby, and as a result I've indulged myself in a couple of wrong responses, in a comment above and to an extent in a preceding one, although both also hold a fair amount of truth, but express it with dishonest connotation. This comment was based on an argument with a person who explicitly insisted that tossing a coin is better than deciding for yourself.
    2pjeby15y
    Kindly point to the specific words which you think meant that, so that I can see whether I need to be more clear, or whether you just rounded to a cliche. Edit to add: Whoops, I just did the same thing to you. I see now that your comment was saying that you were rounding to a cached argument from a discussion with somebody else about tossing coins, not implying that that was what I said. Sorry for the confusion.
    0Nick_Tarleton15y
    But pjeby isn't even saying that – even reading completely random books, which AFAICT he doesn't advocate, invokes a powerful optimization process (writers and publishers).
    0Vladimir_Nesov15y
    You always do the random thing relative to the options you are given. That doesn't change the problem, as far as I can see, just applies it to a different situation.
    0Nick_Tarleton15y
    Point taken; still, different from my very literal interpretation of letting a random number generator decide what you need.
    0Vladimir_Nesov15y
    You can't literally make only random actions. You can't make random muscle movements. You may use random long-term goals, which can be analogized to being a fanatic; or random middle-term goals, analogous to a crazy person; or random short-term goals, analogous to being clinically mad. In any case, whatever I could mean by random action, it's necessarily already quite abstract, selected from a few intelligent options.
    2pjeby15y
    You sound like someone arguing that evolution shouldn't be able to work because it's all "blind chance". Learning, like evolution, is "unblind chance": what interests me is a combination of what I encounter plus what I already know. The more I learn, the more I learn about what is and isn't useful, and I've found it useful to drop (or at least reduce the priority of) certain filters that I previously had, while tightening up other filters. That's not really "random", in the same way that natural selection is not "random".
    0Vladimir_Nesov15y
    That still isn't the same as self-experimenting with every procedure that was ever thought up and supported by a visible enough school. As an intelligent being, you should be able to do better than randomness, and well better than evolution. That's the power of intelligence.
    2Nick_Tarleton15y
    Still strawman? pjeby said:
    2Vladimir_Nesov15y
    See? I don't even remember reading it.
    1Eliezer Yudkowsky15y
    You keep using that phrase. I do not think it means what you think it does.
    3sprocket15y
    The phrase makes some kind of sense to me (although not in that particular case), so in case you're not just trying to drop a geeky reference, let me try to explain what I make of this phrase. Assume members of alien species X have two reasoning modes A and B which account for all their thinking. In my mind, I model these "modes" as logical calculi, but I guess you could translate this to two distinct points in the "space of possible minds". An Xian is at any one time instance either in mode A or B, but under certain conditions the mode can flip. Except for these two reasoning modes, there is a heuristic faculty, which guides the application of specific rules in A and B. Some conclusions can be reached in mode A but not in B, and vice versa, so ideally, an Xian would master performing switches between them. Now here's the problem: Switching between A and B can only happen if a certain sequence of seemingly nonsensical reasoning steps is taken. Since the sequence is nonsensical, an Xian with a finely tuned heuristic for either A or B will be unlikely to encounter it in the course of normal reasoning. Now, say that Bloob, an accomplished Xian A-thinker, finds out how to do the switch to B and thus manages to prove a theorem of high value. Bloob will now have major problems communicating his results to his A-thinking peers. They will look at a couple of his proof steps, conclude that they are nonsensical and label him a crackpot. Bloob might instead decide (whatever that word means in my story) to target people who are familiar with the switch from A to B. He can show them one of the proof steps, and hope that their heuristic "remembers" that they lead to something good down the road. Such a nonsensical proof step may be saying "Shut up and do the impossible". So, I suspect that humans do have something like those reasoning modes. They are not necessarily just two, it might not be appropriate to call all of them reasoning, but the main point is that thinking a thought might change the rules of thinking.

    Assume members of alien species X have two reasoning modes A and B which account for all their thinking. In my mind, I model these "modes" as logical calculi, but I guess you could translate this to two distinct points in the "space of possible minds". ...

    An Xian is at any one time instance either in mode A or B, but under certain conditions the mode can flip. Except for these two reasoning modes, there is a heuristic faculty, which guides the application of specific rules in A and B. Some conclusions can be reached in mode A but not in B, and vice versa, so ideally, an Xian would master performing switches between them. ...

    So, I suspect that humans do have something like those reasoning modes. They are not necessarily just two, it might not be appropriate to call all of them reasoning, but the main point is that thinking a thought might change the rules of thinking.

    Excellent comment! You have hit the nail very nearly square on the head. Allow me to make one minor adjustment to your aim, and then relate your analogy back to the fields of self-help, NLP, Zen, normal waking consciousness, etc.

    See, it's not the content of the thought that switches modes, but h...

    0NancyLebovitz12y
    I'd generally agree with that, but I was recently at an excellent qi gong workshop taught by Yang Yang, who told the students to do qi gong with an attitude of "I am a master". As far as I can tell, this has the advantage of overriding habits of thinking "I'm just a student, I'm not very good at this". It might also override habits of thinking "I have to show how good I am".
    0pjeby12y
    Note that "I am a master" is not falsifiable, unless you also have some idea of what being a master consists of. This isn't a problem if you believe (for example) that a master is someone who is always learning and improving, and who makes mistakes. Of course, at that point, you are right back to having a capability belief. ;-)
    -1Vladimir_Nesov15y
    Okay. Another take. Is this really true? How long would it take for a newcomer to walk through every available option? How much would it cost? What is the chance he should expect, before starting the whole endeavor, that any of the available options will help? For the last question, the lottery analogy fits perfectly, no "works only for ME" excuse.
    4MrShaggy15y
    I've read dozens of self-help books and numerous websites, etc. and pjeby's claims of repetition seem mostly true (and his point that some who have unscientific philosophies have great practical advice is definitely true in my experience).
    2pjeby15y
    That huge numbers of books are about the same things, in different language? Absolutely. Books that contain something genuinely new in self-help are exceedingly rare in my experience. Books that have one or two new twists or better metaphors for explaining the same things are enormously common. Take for example, "the law of attraction". I don't believe it has any objective external basis: rather, it's a matter of 1. motivation and 2. making your own luck -- i.e. "chance favors the prepared mind". However, the quality of information about its practical applications varies widely, and some of the most woo-woo crazy books -- like one of the ones supposedly written by a spirit being channeled from another universe -- actually have the best practical information for leveraging the psychological benefits of belief. I'm specifically talking about the "emotional energy scale" model from the book "Ask and It Is Given". Note that I don't know if they invented that model or swiped it from some psych researcher... and I don't really care. By putting that information into a useful context, they gave me more usable information than raw experimental data would have provided. Now, if I were looking for "truth", I'd certainly trust peer-reviewed research more than I'd trust a channeled being from beyond. But if the being from beyond offers a useful model distinction, I don't especially care if it's true. Now, some people reading this are going to think because I mentioned the LoA that I believe all that quantum garbage -- but I do not. I do believe, however, that self-fulfilling prophecies are useful, and the LoA literature is a great source of raw practical data in the application of self-fulfilling prophecy, as long as you ignore all their theories about why anything works, and focus on testing specific physical and mental techniques, and break down the attitudes. For example, one fascinating commonality of themes in this literature: the idea of gratitude or abundance, giving
    4Vladimir_Nesov15y
    I haven't read the above yet, I'll do it later; but I want to make a general observation for now: everybody would be better off if your replies were shorter. You are already talking past many of the people here, so you should focus on communicating clearly, which may mean fast back-and-forth understanding checks, not on communicating lots of stuff, all of which doesn't do any good.
    2Vladimir_Nesov15y
    My initial question was an introduction to the rest, which ask whether the method of looking at everything is going to pay off. I don't ask for details about the content, since the worth of looking at these details is exactly what I'm asking about. I split the following question into its own thread: Now you are talking past my question again. The conversation started where you asserted that it's possible to test all of the available methods on yourself, since there are so few genuinely different ones. In response you recommend sticking to one method. Fine. What are the answers to my questions for a single randomly selected method (among a number of surface-filtered available options)?
    3pjeby15y
    My available samples say: Years, thousands, and slim. Of course, people for whom these things are not the case, will be considerably less likely to be my customer, so it's a severely biased sample. (Which also means that it's possible my techniques work best on people who try lots of self-help and fail, but that seems more like an advantage than a disadvantage to me.) However, I have noticed that highly-successful people also own large self-help libraries, but they are not disappointed in them, because they always find at least ONE thing of use to them in EVERY book. My original point, which you still seem to be ignoring, is that I am not and have never been advocating that a self-help seeker engage in a random walk of self-help books. I am saying that people who succeed in life have the attitude that they can find at least one useful thing in every circumstance they encounter, if they apply themselves to looking for it, and applying it. Cultivating that attitude is what I actually recommended, as you will see if you return to the beginning of the thread.
    0Vladimir_Nesov15y
    My question, however, was about the worth of studying the theories of which you speak, and in particular of interpreting your long comments that try to communicate them. Thank you for answering it.
    0Vladimir_Nesov15y
    What might well be true? The connotation of my question that implies that your field is worthless? I was specifically asking how much it's worth, only the conclusion that you may draw as an expert, not the reflections leading to naught. The rest of your comment also talks past the questions. You note that you receive student feedback that could answer my questions, talk about your book as if it'll answer my questions, and talk about how the efficiency of your methods, still completely unknown to me, improves with personal tutoring.
    3pjeby15y
    Yes. I'm a rather outspoken critic of the field, and not just for marketing reasons. The problem isn't the industry, it's that developing "kicking" skills requires practice, and for practice to work you have to have feedback, even if that feedback is you yourself checking your performance against some model. Most self-help material doesn't even teach explicitly making these checks, let alone giving substantive criteria for telling whether you've done something correctly or not. People are left to blindly stumble on the right method, if they happen to hear a metaphor that works for them or read in someone's story about doing it wrong, how they're doing it wrong. The entire field -- at least in books -- is like teaching people to ride bicycles without giving them any bicycles to practice on. Common practice in workshops isn't a hell of a lot better, but your odds are a lot better of stumbling on a workshop where you can get coached or walked through something. Even there, testability, repeatability, and trainability are not the focus. So yes, the entire self-help field might as well be a lottery right now, if you have no information on where to start. Many of my students, like me, own literally hundreds of self-help books, from which they got little or no help until they "got it" from something I wrote or said or did with them. As for me, I just got lucky enough to get an insight from computer programming that opened my eyes to what was going on, that gave me my first "rosetta stone" for the field.
    4Cyan15y
    Unreliable for getting true explanations. Self-experimentation is generally too poorly controlled to give unconfounded data about what really caused a result. (Also, typically sample size is too small to justify generalizability.)
    2MrShaggy15y
    The way I'd put it for this stuff is that experiments help communicate why someone would try a technique; they help people distinguish signal from noise, because there are a ton of people out there saying "X works for me".

    I totally don't mind engaging with people who want to learn something and are willing to actually look at experience, instead of just talking about it and telling themselves they already know what works or what is likely to work, without actually trying it. The other people, I can't do a damn thing for.

    If your interest is in "science", I can't help you. I'm not a scientist, and I'm not trying to increase the body of knowledge of science. Science is a movement; I'm interested in individuals. And individual rationalists ought to be able to figure things out for themselves, without needing the stamp of authority.

    I also have no interest in being an authority -- the only authority that counts in any field is your own results.

    The plural of anecdote is not data. Many people will tell you how they were cured by faith healers or other quacks, and, indeed, they had problems that went away after being "treated" by the quack. Does that make the quacks effective or give credibility to their theories about the human body?

    The same applies to methods of affecting the human brain. As a non-expert, from the outside I can't tell the difference between NLP, Freudian psychotherap...

    4gjm15y
    Few of your comments here seem to me to describe things that are obviously checkable in ten minutes by simple self-experimentation. (Even ignoring the severe unreliability of self-experimentation, since doubtless there are at least some instances in which self-experimentation can provide substantial evidence.) Perhaps they are so checkable with the help of extra information that you've declined to provide. Perhaps I've just not read the right comments. Perhaps I've read the right comments and forgotten them. Would you care to clarify?
    2pjeby15y
    Mostly, I've offered questions that people could ask themselves in relation to specific procrastination scenarios, that would give them an insight into the process of how they're doing it. IIRC, two people have reported back with positive hits; one of the two also had a second scenario, for which my first question did not produce a result, but it's not clear yet what the answer to my second question was. (I gave both questions up front, along with the sequence to use them in, and criteria for determining whether an answer was "near" or "far", along with instructions to reject the "far" answers. One respondent gave a "far" answer, so I asked them to repeat.) I've also linked to a video offering a simple motivational technique based on my model; a few people have posted positive comments here, and I've also gotten a number of private emails from users here via the feedback form on my site, expressing gratitude for its usefulness to them. The video is just about 10 minutes long. In another comment, I described a simple NLP submodalities exercise that could be tried in a few minutes, albeit with the disclaimer that some people find it hard to consciously observe or manipulate submodalities directly. (The technique in my video is a bit more indirect, and designed to avoid conscious interference in the less-conscious aspects of the process.) I've referenced various books on other techniques I've used, and I believe I even mentioned that Byron Katie's site at thework.org includes a free 20-page excerpt from Loving What Is that provides instructions for a testable technique that operates on the same fundamental basis as my models. I'm really not sure what the heck else people want. Even if you claim, as Eliezer does, that he can't understand my writing, it's not like I haven't referenced plenty of other people's writing, and even my spoken language (in the video) as alternative options.
    7Paul Crowley15y
    I also find your writing difficult. If you'll accept a recommendation, I think your readership here might get more from shorter comments in which more work has gone into each word.
    2Eliezer Yudkowsky15y
    Can you link to these things? Your comments? Here? There's an LW search box.
    pjeby:
    * How To Tell If You're Making Shit Up
    * NLP Submodalities Experiment
    * Motivation technique video

    Edit to add:

    * Useful background on "entry criterion" for these techniques
    Eliezer Yudkowsky:
    "How To Tell If You're Making Shit Up" seems useful. Do you see why this would seem useful to me while "NLP Submodalities" doesn't?
    pjeby:
    For the same reason that yours and Robin's writing on biases is more useful than the source material, I imagine. That is, it's been predigested. It probably also doesn't hurt that I have to teach "how to tell if you're making shit up" to every single client of mine, so I have some practice at doing so! (Albeit mostly in real-time interaction.)

    FYI, NLP volume I represents the more detailed "brain software" model from which that summary was derived, which I recommended to you because you said you couldn't follow my writing.

    You can also see why I was excited when Robin started posting about near/far stuff on OB -- it fit very nicely into the work I was already doing, and into the NLP presupposition that "conscious verbal responses are to be treated as unsubstantiated rumor unless confirmed by unconscious nonverbal response" -- i.e., don't trust what somebody says about their behavior, because that's not the system that runs the behavior. The near/far distinction mainly added an evolutionary explanation that was not a part of NLP, and gave a better "why" for not trusting the verbal explanation.

    Near/far in a literal sense, as in "people respond differently based on distance in space/time/abstraction level of visualization", has been part of the NLP models for over 20 years now. But once again, the mainstream experiments are just now being done, presumably by people who've never heard of NLP, or who assume it's crackpottery.
    gjm:
    So, I watched the video (some time ago, when you posted about it) and gave it one trial. The technique wasn't effective for me on the task I tried it on. The particular failure mode was one you mentioned in the video, and if you are correct about the generality with which it makes the technique not work, then I would expect the technique to be generally ineffective for the things I'd benefit from motivational help with.

    Your suggestions about identifying the causes of procrastination: I haven't tried that yet, and it sounds interesting; I notice that when someone did try it and got results that didn't perfectly match your theory, your immediate response was not "oh, that's interesting; perhaps my theory needs some tweaking" but "I don't believe you". Can you see how this might make people skeptical?

    Referencing books is only helpful in so far as (1) it's not necessary to read the whole of a lengthy book to extract the small piece of information you've been asked for, (2) the book is clearly credible, and (3) the book is actually available (e.g., in lots of libraries, or inexpensive, or online). To those who are skeptical about the whole self-help business, #2 is a pretty difficult criterion to meet.
    pjeby:
    Indeed. It is supposed to be a free sample, after all. The work I charge for is fixing those things that make it not work. The things that make motivation not work are much, much more diverse than the things that make it actually work.

    My response was, "you didn't follow directions", actually. Unless you're talking about the first part, where the only information given was, "it didn't work". If you've ever done software tech support, you already know that "it didn't work" is not a well-formed answer. (Similarly, the later answer given was also not well-formed, by the criteria I laid out in advance.)

    Failure to meet the entry criterion for a technique does not constitute failure of the technique or the model: if you build a plane without an engine, and it doesn't take off, this does not represent a failure of aerodynamics. Indeed, aerodynamics predicts that failure mode, and so did I.

    The response I got was not unexpected; it's common for people to have trouble at first, especially on things they don't want to look too closely at. I've had people spend up to 30 minutes in the "talk around the problem" failure mode before they could actually look at what they were thinking. The second most common failure mode is that somebody does see or hear something, but rejects it as nonsensical or irrelevant, then reports that they didn't get anything. The third most common failure mode is lack of body awareness or physical suppression, but I know he doesn't have that as a general problem, because his first response indicated awareness.

    His first response also indicated he is capable of perceiving responses, so that pretty much narrows it down to avoidance or assumption of irrelevance. If it's neither, then it might be relevant to a model update, especially if it's a repeatable result. (At this point, however, he's going to have to repeat the asking of the second question to test that, because these responses don't stick in long-term memory; in a sense, they are long-term mem
    gjm:
    I think this (not the fact that it's a free sample, but the fact that apparently it's a feature, not a bug, if it doesn't work well for many people) makes it rather unuseful as a try-it-yourself demonstration of how good your models and techniques are.

    There was no such first part; even jimrandomh's initial response had more information than that in it. And after he gave more information your reply was still "I don't believe you" rather than "you didn't follow directions". Interested parties can check the thread for themselves.

    No, to be sure. But once you hedge your description of your technique and what it's supposed to achieve with so many qualifications -- once you say, in so many words, that you expect it not to work when tried -- how can it possibly be reasonable for you to use it as an example of how you've supplied us with empirically testable evidence for what you say? Saying "You can check my ideas by trying this technique -- but of course it's quite likely not to work" is just like saying "You can check my belief in God by praying to him for a miracle -- but of course he works in mysterious ways and often says no."
    pjeby:
    The point of the exercise is that it's targeted to work for as many people as possible for a fairly narrow range of tasks, so as to give a sample of what it's like when it works. Even chronic procrastinators can achieve success with the technique, as long as they don't use it on the thing they're procrastinating on -- it only works if you don't distract yourself with other thoughts, and if you're stressed about something, you're probably going to distract yourself with other thoughts. Most people, however, don't seem to have any significant stressors about cleaning their desk. Also, it's not a difficult thing to visualize in its completed form.

    Btw, just as a datapoint, what did you try it on, and what failure mode did you encounter? I am, ironically, MORE interested in failure reports than successes; the video continually gets rave reviews, but as much as I enjoy them, I can't learn anything new from another success report!

    I just rechecked myself; here are the relevant portions. Jim said:

    I took this statement as a literal description of what happened, i.e., Jim thought about "it" -- whatever "it" was -- got no physical response, and had thoughts about the details of the task. THEN (2nd step) he was unable to begin working on it.

    "Unable to begin working on it" is the part I referred to as not well-formed; this does not contain any description of how he arrived at that conclusion. It is the equivalent of "it doesn't work" in tech support. The unspecified "it" is also potentially relevant; I don't know if he refers there to the task itself, or to one of the questions I said to ask about the task, and this is an important distinction.

    I've also noticed that some people can "think about their task" and not get a response because they are not thinking about actually starting on the task... and Jim's statements would be consistent with a sequence of thinking about the idea of the task, followed by preparing to actually perform the task... at which point an undescri
    gjm:
    I tried it on the same example you proposed: desk-clearing. My desk is a mess; I would quite like it to be less of a mess; clearing it is never a high enough priority to make it happen. But I don't react to the thought of a clear desk with the "Mmmmmm..." response that you say is necessary for the technique to work.

    As for your discussion with Jim: you did not at any point tell him that he didn't do what you'd told him to, or say anything that implied that; you did say that you think his statements contradict one another (implication: at least one of them is false; implication: you do not believe him). And then when he claimed that what stopped him was apathy and down-prioritizing by "the attention-allocating part of my brain", you told him that that wasn't really an answer, and your justification for that was that his brain doesn't really work in the way he said (implication: what he said was false; in other words, you didn't believe him). So although you didn't use the words "I don't believe him", you did tell him that what he said couldn't be correct.

    Incidentally, I find your usage of the word "incompatible" as described here so bizarre that it's hard not to see it as a rationalization aimed at avoiding admitting that you told jimrandomh he'd contradicted himself, when in fact all he'd done was to say two things that couldn't both be true if your model of his mind is correct. However, I'll take your word for it that you really meant what you say you meant, and suggest that when you're using a word in so nonstandard a way you might do well to say so at the time.
    pjeby:
    Did you ask yourself what it is that you would enjoy about it if it were already clean? (Again, this is strictly for my information.) Note that the procedure described in the video asks you to wonder about what sorts of qualities would be good if you already had a clean desk, in order to find something that you like about the idea enough to generate the feeling of pleasure or relief.

    Au contraire, I said:

    That is, I directed him to the "How To Know If You're Making Shit Up" comment -- the comment in which I gave him the directions, and which explained why his utterance was not well-formed. This is an awful lot of projection on your part.

    The contradiction I was pointing to was that he was talking about two different things -- the statements were incompatible with a description of the same thing. That is not anything like the same as "I don't believe you"; from what Jim said, I don't even have enough information to believe or not-believe something! Hence, "as far as I can tell" ("AFAICT"), and the request for more information... not unlike my requests for more information from you about what you tried.

    "It didn't work" is not an answer which provides me any information suitable for updating a model, any more than it is for a programmer trying to find a bug. The programmer needs to know at a minimum what you did, and what you got instead of the desired result. (Well, in the software case you also want to know what the desired result was; in this kind of context it can sometimes be assumed.)

    Because it isn't one: it's a made-up explanation, not a description of an experience. See the comment I referred him to. If someone states something that is not a testable hypothesis, how can I "believe" or "disbelieve" it? They are simply speaking nonsense. Unless Jim has a blueprint of his brain with something marked "attention-allocating part" and he has an EEG or brain scan to show this activity, how can I possibly assign any truth value to that claim? In contrast,
    gjm:
    This discussion is getting waaay too long and distinctly off-topic; but, as briefly as I can manage:

    Yes.

    No, I did not do that. I said that what you're doing looks a lot like post-hoc rationalization, but that I'd take your word that it wasn't. I meant what I said.

    I am updating all the time. Lots of things that you've said have led to adjustments (both ways) in my estimates for Pr(Philip knows exactly what he's talking about) and Pr(Philip is an outright charlatan) and the various intermediate possibilities. Perhaps you mean: what evidence would lead to a large upward change for the "better" possibilities? I'm not sure that any single smallish-sized piece of evidence would do that. But how about: some reasonably precise statements explaining key bits of your model, together with some non-anecdotal and publicly available evidence for their correctness.

    I think that perhaps the problem here is that we are trying to treat you as a colleague, whereas you prefer to treat us as clients. We say "your theories sound interesting; please tell us more about them, and provide some evidence"; you say "well, I want you to do such-and-such, and you have to do exactly what I tell you to". This is unhelpful because (1) it doesn't actually answer the question and (2) it is liable to feel patronizing, and people seldom react well to being patronized. (By "we" it is possible that I really mean "I", but it looks to me as if there are others who feel the same way.)
    pjeby:
    There are two modes of thinking. One directly makes you do things; the other can only do so indirectly. One is based on non-verbal, concrete sensory information, the other on verbal and mathematical abstractions. Verbal abstractions can comment on themselves or on sensory experience, or they can induce sensory experience through the process of self-suggestion -- e.g., priming and reading stories are both examples of translating verbal information to the sensory system, to produce emotional responses and/or actions.

    More specifically, we make decisions and take action by reference to "feelings" (in the technical definition of physical awareness of the body/mind changes produced by an emotional response). Feelings (or more precisely, the emotions that generate the feelings) occur in response to predictions made by our brain, using past sensory experience. But because the sensory system does not "understand", only predict, many of these predictions are based on limited observation, confirmation bias, etc.

    When our behavior is not as we expect -- when we experience being "blocked" -- it is because our conscious verbal/abstract assessment or prediction does not match our sensory-level prediction. We "know" there is no ghost, but run away anyway. Surfacing the actual sensory prediction allows it to be modified, by comparing it to contradicting sensory evidence, whether real or imagined. This is the bulk of the portion of my model that relates to treating chronic procrastination, though most of it has further applications.

    You'll need to define "evidence". But the parts of what I said above that aren't part of the experimentally-backed near/far model and the "somatic marker hypothesis" can be investigated in personal experience. And here's a paper supporting the memory-prediction-emotion-action cycle of my model.

    Actually, it does. I'm trying to tell you how to experience the particular types of experience that demonstrate practical applications of the model give
    Cyan:
    I think the problem here is that the internet is great when you want to share information with people but is not a consistently good venue for convincing people of something, particularly when the initially least convinced people are self-selecting for interaction with you. Pick your battles, I'd say.
    badger:
    Just to check, you agree that to be useful any model of the brain has to correspond to how the brain actually works? To that extent, you are seeking a true model. However, if I understand you correctly, your model is a highly compressed representation of how the mind works, so it might not superficially resemble a more detailed model. If this is correct, I can empathize with your position here: any practically useful model of the brain has to be highly compressed, but at this high level of compression, accurate models are mostly indistinguishable from bullshit at first glance. I am still very unsure about the accuracy of what you are propounding, but anecdotally your comments here have been useful to me.
    pjeby:
    No, it only has to produce the same predictions that a "corresponding" model would, within the area of useful application. Note, for example, that the original model of electricity is backwards -- Benjamin Franklin's convention had the charge flowing from the "positive" end of a battery, but we found out later that the electrons actually move the other way 'round. Nonetheless, this mistake did not keep electricity from working!

    Now, let's compare to the LoA people, who claim that there is a mystical law of the universe that causes nice thoughts to attract nice things. This notion is clearly false... and yet some people are able to produce results that make it seem true. So, while I would prefer to have a "true" model that explains the results (and I think I have a more parsimonious model that does), this does not stop anyone from making use of the "false" model to produce a result, as long as they don't allow their knowledge of its falsity to interfere with them using it.

    See also dating advice, i.e., "pickup" -- some schools of pickup have models of human behavior which may be false, yet still produce results. Others have refined those models to be more parsimonious, and produced improved results. Yet all the models produce results for some people -- most likely the people who devote their efforts to application first, critique second... rather than the other way around. A model can actually BE bullshit and still produce valuable results!

    It's not that the model is too compressed, it's that it includes excessive description. For example, the LoA is bullshit because it's just a made-up explanation for a real phenomenon. If all the LoA people said was, "look, we found that if we take this attitude and think certain thoughts in a certain way, we experience increased perception of ways to exploit circumstances to meet our goals, and increased motivation to act on these opportunities", then that would be a compressed model! NLP is such a model over a slightly different sphere, in that it says,
    pjeby:
    By the way, the technique given in my thoughts-into-action video is based on extracting precisely the above notion, and reproducing the effect on a small scale, with a short timeframe, and without resorting to mysticism or "quantum physics". IOW, the people who successfully used the technique therein have already experienced an "increased perception of ways to exploit the circumstances (of a messy desk) to meet the goal (of a clean one), and increased motivation to act on those opportunities".
    gjm:
    I didn't say "nasty", I said "patronizing". If someone tells you that by praying in a particular way anyone can achieve spiritual union with the creator of the universe, and you ask for evidence, it is Not Helpful if they tell you "just try it and see". (Especially if they add that actually, on past experience, the chances are that if you try it you won't see because you won't really be doing it right; and that to do it right you have to suspend your disbelief in what they're telling you and agree to obey all their instructions. But that's a separate can of worms.) Because (1) you won't know for sure whether you really have achieved spiritual union with the creator of the universe (it might just feel that way), and (2) you'll have discovered scarcely anything about how it works for anyone else. You might be more impressed if they can point to some sort of statistical evidence that shows (say) that people who pray in their preferred way are particularly good at discovering new laws of physics, which they attribute to their intimate connection to the creator of the universe. More briefly: If someone asks for evidence, then "if you do exactly what I tell you to and suspend disbelief, then you might feel what I say you will" is not answering their question. I haven't observed this progressive retreat (it looks more to me like a progressive realisation on your part of what the fussier denizens of LW had wanted all along). But I do have a comment on the last step you described -- "the paper has to say it's about NLP". For anyone who isn't a professional psychologist, neurologist, cognitive scientist, or whatever, determining whether (and how far) a paper like Damasio's supports your claims is a decidedly nontrivial business. (It's easy to verify that some similar words crop up in somewhat-similar contexts, but that's not the same.) Whereas, if if a paper says "Our findings provide strong confirmation for the wibbling hypothesis of NLP" and what you're saying is "I acce
    badger:
    I think this is a little unfair. Extending the Mormon Wednesday discussion, I didn't take my church leader's suggestion to "read the Book of Mormon and pray about it" because, in retrospect, I had an extremely low prior probability that my thoughts could be communicated to a divine being who would respond to them with warm fuzzies. I don't think pjeby's claims that practicing certain mental states/self-hypnosis (I'm unclear on exactly what he is advocating) can influence our subconscious are that implausible. That doesn't mean his theories are right, but they seem plausible enough that even the weak evidence of self-experimentation might say something about them.
    pjeby:
    I'm suggesting that priming, suggestion, hypnosis, NLP, placebo effects, creative visualization, and a host of other psychological and new-age phenomena are ALL functions of the near/far divide, relying on a single precondition that might be called "suspension of disbelief". Or, more precisely, refraining from verbal overshadowing -- or something that's suspiciously close to being able to be described that way.

    From an evolutionary POV, you might say my hypothesis is that verbal overshadowing actually evolved in a "persuasion arms race", specifically as an anti-persuasion defense, to prevent others from verbally exploiting our exposed unconscious processes. IOW, if simple language evolved first, and was hooked directly to the "near" process (because that's all there was), then it could be exploited by others -- we would be "gullible" or "suggestible". We would then evolve more sophisticated verbal intelligence, both to better exploit others, and to better defend ourselves.

    Unfortunately, while this arguably gave rise to "intelligence" and "consciousness" as we know them, it also means that we're cut off from being able to exploit our own near systems, unless we learn how to shut off the shields long enough to put stuff in (or take stuff out, change it, etc.). Most self-help material consists of elaborate explanations to convince people to let down the shields by believing that what they say is true. However, in truth it is only necessary to not engage in disbelieving -- to not shoot down the incoming data, whether it's being provided by one's self, the therapist or hypnotist, or something you read in a book.

    However, instead of "truth" as a guide for what you install in the near system, one should use usefulness, since it is entirely possible to believe different things in the two systems without conflict. I consider the near system to basically be a robot that I program for my own use, so I can feel free to exploit its beliefs based on what results I, the prog
    badger:
    Well, I am genuinely appreciative of your attempts to explain, whether they are getting through or not.
    pjeby:
    Actually, I should be thanking you and the other people I've been replying to, because I just realized what pure gold I ended up with. I didn't actually realize I had an implicit synthesis of the entire self-help field on my hands; in fact, I never consciously synthesized it before. And when I was telling my wife about it this evening, the ramifications of what should be possible under this simplified model hit me like a ton of bricks.

    And it was the questions that Vladimir Nesov, gjm, Vladimir Golovin and others asked -- about the techniques, the model, the self-help field in general, the similarities -- combined with sprocket's post about "A/B" thinking, that primed me with the right context to put it all together in a tightly integrated way.

    The refined model makes everything make a whole lot more sense to me -- failures and successes alike. (For example, I now have an idea of why certain "affirmation" techniques are likely to work better than others, for some people.) As soon as I get some rest, I have some things I want to try. Because if this more-unified model is indeed "less wrong" than my previous one, I just "levelled up" in my art. Frackin' awesome!

    I think my massive investment of time here is actually going to pay off. But whether it enables me to do anything new or not, this revision is still a big step forward in simplified communication regarding what I already do. So either way... Thank you, LWers, I couldn't have done it without you!
    Vladimir_Nesov:
    Hmmm... I wish you well, but usually this kind of revelation, when put into writing and left to dry on a shelf for a couple of weeks, reveals itself as much less wonderful than it originally seemed to be. Although usually it's also a step forward, even if in the direction opposite to where you were walking before.
    gjm:
    One might get the opposite impression, but in fact I am too. One reason why I keep whingeing at Philip is that his style of presentation makes it very difficult to tell where he is on the charlatan-to-expert spectrum, and that wouldn't bother me if I didn't think there was at least a chance that he's near the expert end.
    pjeby:
    No, because the amount of time I've spent attempting to communicate these things might have been better spent teaching more people who actually need the information badly enough to jump at the chance to apply it, and whose primary criterion for the quality of the information is whether it helps them. The only thing that makes it a tossup is that here, I'm forced to search for better and better metaphors, and more compact ways to communicate things... which is good practice/feedback for certain parts of the book I'm currently writing. But given my current inability to quantify the effects of that practice, versus the easily measurable time spent and the equivalent number of words toward a finished book, the tradeoff doesn't look so good.