This reminded me of the post on connectionism. I tried searching "a person who isn't Genghis Khan" and sure enough, the first things it comes up with are all related to Genghis Khan.
I think that "imagine you're using Google" could be a fairly useful heuristic for trying to phrase queries to your brain.
Hi, the post is short, sweet, and gets the point across. However, I feel it could be better with a little more information, including multiple sources. http://en.wikipedia.org/wiki/Trust,_but_verify
To my understanding, what you are describing here is what is called a transderivational search in Neuro-Linguistic Programming. It is basically a "satisficing" (suffice+satisfy) fuzzy search.
http://en.wikipedia.org/wiki/Transderivational_search http://en.wikipedia.org/wiki/Satisficing
Here's a pet peeve of mine: I think this site could benefit A LOT from delving into NLP. I mean, the whole field is basically a quest to find the machine-code of the human psyche. The version of NLP that is represented on sites like SkepDic seems like a poor re...
I think one of my favorite things is to see someone earnestly defend a marginally valuable and slightly controversial theory on LW, because the resulting dynamics cause the good parts of the theory to be revealed while simultaneously producing an object lesson in identifying junk science and filtering poorly tested claims with reasonableness. Most of the regular commenters wouldn't advocate or support a theory like NLP and if it was left to them the community wouldn't produce conversation trees like this one, which I find quite educational.
I wish there were some natural way for me to use the voting system to express "Boo!" to the idea of LW becoming infested with normal NLP jargon and culture, but "Thanks!" for starting and sticking with a massive comment tree defending NLP. As there is no natural way to express this, I'm writing this comment and upvoting here and here explicitly :-)
I agree that it seems worth looking into. I've looked into NLP a little bit. I'm always turned off by the voices of its practitioners. Their tonality, speed, excitement, and rhythm scream "I am trying to sell you snake oil!" to me. This is odd for people who claim to be masters of subcommunication via speech. They often repeat the charlatan pattern I first observed in Tom Brown Jr., of spending as much time telling you how great what they are telling you is, as telling you things.
This also applies to the popular self-improvement gurus, including Tony Robbins. I cannot stand to listen to an audio of him; it's like being trapped in a small room with a door-to-door vacuum-cleaner salesman.
Possibly I'm erroneously assuming that the vacuum-cleaner salesman voice is suboptimal because it annoys me.
I read an interview with a spammer, who said he experimented with different message types, and switched to writing spam in all uppercase with exclamation points because it got more positive responses.
Possibly, good mass-market salesmen optimize to sell to stupid people, whether what they are selling is good or bad.
Yes, I've talked to at least one person who worked as a car salesperson for a while and is surprisingly intelligent for that job. Their take was essentially that for a lot of people the obvious salesmany tactics work. Moreover, they asserted that the people those tactics don't work on are generally also people on whom even more polite tactics won't work, so one isn't losing that much.
I don't know how much this applies to cars, but I'd suspect that this applies even more to spamming.
Whether this applies to the NLP people probably depends to some extent on whether they are trying to attract smart critical thinkers or the general population. I don't know enough about their goals to accurately speculate.
I haven't heard of NLP before, but reading about it now it's setting off all my old skepticism alarms. The claims it makes seem to be very vague and optimistic. I'm especially wary of things like the links you provided that talk about having "over 200 patterns"; I don't buy my textbooks based on their page count.
Self-hacking is cool, but any advice given along those lines needs to be backed up by solid literature something fierce (i.e. see lukeprog's How to Be Happy) to be plausible, and even then you should generally expect that any given piece of advice will only have a moderate chance of working on any given person.
Saying "I'm smart and I think it's worthwhile" isn't enough; lots of smart people think religion is worthwhile. If NLP has a central theory behind it, rather than just being an umbrella term for a bunch of disparate self-hacking techniques, then where can we find a step-by-step explanation and solid justification of that theory? And if there isn't a central theory, then each "pattern" will have to be presented and justified on its own, and survive on its own merits independent of its sisters.
the amazing stuff I am always reading about
I'm sure it's amazing to read about. How amazing have you found it in practice?
Wikipedia suggests that NLP doesn't have any science behind it and its predictions have been tested and disconfirmed. I'd have to hear a good explanation for this before giving NLP much time.
That Wikipedia page confirms that it's widely disrespected, but read the Wikipedia section on the actual studies performed. There is supporting research, some of it fairly impressive. The ratio of supportive studies to dismissive studies on that Wikipedia page is very much skewed in support of NLP.
There are a few issues I see here.
1) NLP sets off big time "SCAM!" flags, since they seem to be trying to use NLP to sell NLP (to idiots).
2) Their theories can be useful, but are still crap. You can test one and "disprove" it by finding a flaw without ever finding the part that made it useful.
Because of these, it's going to take work to extract the value that's there.
3) It's hard to test things that have more than a couple of causal factors. The hypnosis research, which is more respected, falls prey to this all the time. They measure one correlation resulting from a giant mess of factors without holding other factors constant (because they have failed to even identify them) and then are surprised when they can't get consistent results for their oversimplified model.
4) "NLP" is being used too loosely. If they do a study that fails to find evidence for one theoretical cl...
Another huge link full of the jargon isn't helpful. What do the leaders in the field claim for it? I'm looking for straightforward English sentences describing the effects of employing NLP. Like how, if I asked someone what you can do with aerodynamics, they might reply "Build things that fly!".
It seems to be a kind of psychological therapy, but there are hundreds of such methods, some supported by licensed clinicians and others not. All of it is subject to a huge placebo-like effect-- to the point where all of it may be no better than talking to a bartender about your problems. So picking a random set of ideas in the entire therapy/self-help memeplex and saying "Less Wrong should investigate this" is a bit preposterous absent powerful claims or good evidence. Why shouldn't Less Wrong investigate one of the following instead: expressive art therapy, acceptance and commitment therapy, functional analytic therapy, CBT, humanistic psychotherapy, existential psychotherapy, integrative-existential psychotherapy, re-evaluation counseling, psychodynamics, holistic psychotherapy, hypnotherapy, logotherapy, person-centered psychotherapy, primal therapy, psychosynthesis, REBT, RLT, Jungian psychotherapy, Lacanian psychotherapy, DBT, DDP, DNMS, conversation therapy, dance therapy, Daseinanalytic psychotherapy, feminist therapy, Gestalt therapy, Holotropic breathwork, rebirthing-breathwork, IFSM.
Oh one more thing: if you've seen PJ Eby's "How to clean your desk video", then that's pretty much an NLP technique he uses. I think the term is "future-pacing".
If you're not sure whether the correct term is future-pacing, I think that rather than suggesting LW investigate NLP, perhaps you should do some more investigating of it. ;-)
(Hint: technically, you could maybe stretch the term to say I am future-pacing the feeling of enjoyment, but as generally applied in NLP, future-pacing is used to link a behavior to a context, and that is not at all how it's being applied in the video.)
And, as long as I'm commenting here, I'll say that I agree NLP has LW-worthwhile things in it; the linguistic meta-model, for example, is a key rationalist toolkit.
Unfortunately, even though NLP began as an effort to make psychotherapy more evidence-based and results-oriented, the field as a whole went Affective Death Spiral a long time ago (or some other sort of death spiral), and actually extracting wheat from the chaff is incredibly difficult.
I've spent countless hours reading, analyzing, watching videos... and the REAL meat of the subject is almost always in little offhand re...
The main thing I think folks are objecting to here is the idea of 'swallowing the NLP pill.'
You'll see plenty of self-hacks and hacks that work on others (dark arts, etc.) but none of it will be labeled NLP. I imagine plenty of the techniques we have here were even inspired in one way or another by NLP.
But here's my main point. We have kept our ideas' scope down for a reason. We DO NOT WANT lukeprog's How To Be Happy to sound authoritative. The reason for that is if it turns out to be 'more wrong' it will be that much easier to let go of.
Introducing the label NLP to our discussions will lend (for some of us) a certain amount of Argument from Authority to the supporters of whoever takes the NLP side, and we really do not want that.
"We DO NOT WANT lukeprog's How To Be Happy to sound authoritative. The reason for that is if it turns out to be 'more wrong' it will be that much easier to let go of."
This.
Whenever you give a collection of concepts a name, you almost automatically start to create a conceptual "immune system" to defend it, keep it intact in the face of criticism. This sort of getting-attached-to-names strikes me as approximately the opposite of Rationalist Taboo. (Hey, did someone just dis Rationalist Taboo? Lemme at 'em!)
I think the problem is not just giving hypotheses names, but giving large collections of hypotheses names. It bundles them together so that the strongest hypotheses in the group can defend the weakest ones, or the weakest ones can damage the strongest ones, even if the different hypotheses aren't actually related in a technical sense.
Dividing hypotheses into "NLP" and "not-NLP" is an attempt to carve hypothesis-space at its natural joints, and therefore needs to be justified by clear shared dependencies among those hypotheses.
Ahhh! Finally I have a good analogy for negative capability! You are learning to browse the first page of results dispassionately.
My friend kept repeating roughly the same arguments to me about why he couldn't feel better about his situation. I rather suspect I've done something similar in regards to some of my problems.
The nature of self-defeating behavior is to be self-sustaining. Or to put it another way, our problems usually live one meta-level above the place we insist they are. (Or perhaps one assumption-level below?)
IOW, the arguments we repeat about why we can't do something are correct, if viewed from within the assumptions we're making. The trick is that at least one of those assumptions must therefore be wrong, and you have to find out which ones. The original NLP metamodel is one such tool for identifying such assumptions, or at least pointing to where an assumption must exist in order for the argument to appear to make sense.
when I try asking myself about my motivations, they form cycles rather than (as in the book) a straight line to the basic motivations.
There are at least a couple of ways you could end up cycling, that I can think of. One is that you're not actually connecting with your near-mode brain about the subject, and are thus ending up in abstractions. Another is that you're not placing enough well-formedness constraints on your questions. At each level, you have to imagine that you already have ALL the things you wanted before... which would make it kind of difficult to cycle back to wanting a previous thing.
In other words, the most likely cause (assuming you're not just verbalizing in circles and not connecting with actual near-mode feelings and images and such), is that you're not fully imagining having the things that you want, and experiencing what it would be like to already have them.
This is a stumbling block for a lot of techniques, not just Core Transformation. The key to overcoming it is to notice whether you have something preventing you from imagining "what it would be like", like that you think it's unrealistic, bad, or whatever. Noticing and handling these objections are the real meat of almost ANY mindhacking process, because they're the "second meta-level" issues I alluded to above, that are otherwise so very hard to notice or identify.
If you don't address these objections, but instead just plow through the technique (whether it's CT or anything else), you'll get inconsistent results, problems that seem to go away and then come back, etc.
(NLP sometimes refers to these things as "ecology", but relatively little time is spent on the subject in entry-level training. It's something that you need lots of examples of in order to really "get", because the principles by themselves are like saying you can ride a bike by "pumping the pedals and maintaining your balance". Knowing it and doing it just aren't the same.)
I tried going to a practitioner, and I'm now a lot more cynical about certifications.
Sadly, NLP practitioner certification at best means that you learned some REALLY basic stuff and were able to do it when supervised, and while doing it with people who are receiving the same training at the same time.
That is, NLP certification drills are done by trainee groups, who thus already know what's expected of them, which means nobody gets much experience of what it would be like to walk somebody through a technique who didn't receive the same training!
Your idea makes sense that the basis of the problem with Core Transformation is people not letting themselves feel what they're actually feeling.
Not actually what I said: it's about not allowing ourselves to feel good unless certain conditions are met. Or more precisely, our brain's rules about feelings are not reflexive: if you have a rule that says "feel bad when things don't go well", this does NOT imply that you will feel good when things do go well!
And, you will actually be better off having rules that tell you to feel good even when things don't go well, because bad feelings are not very useful when it comes to motivating constructive action. They're much better at telling us to avoid things than getting us to accomplish things.
(By the way, another common cause of self-defeating behavior being self-sustaining is that we tend to filter incoming concepts to match our existing frameworks. So, where my phrasing was ambiguous ("allow ourselves to feel certain things"), your brain may have pattern-matched that to "feel what we're feeling", even though that's almost the opposite of what I intended to say. The "certain things" I was referring to were feelings like the Andreases' notion of "core states": things that most of us aren't already feeling.)
The practitioner I went to was specifically certified in Core Transformation, not just NLP.
I just heard a comment by Braddock of Lovesystems that was brilliant: All that your brain does when you ask it a question is hit "search" and return the first hit it finds. So be careful how you phrase your question.
Say you just arrived at work, and realized you once again left your security pass at home. You ask yourself, "Why do I keep forgetting my security pass?"
If you believe you are a rational agent, you might think that you pass that question to your brain, and it parses it into its constituent parts and builds a query like
X such that cause(X, forget(me, securityPass))
and queries its knowledge base using logical inference for causal explanations specifically relevant to you and your security pass.
But you are not rational, and your brain is lazy; and as soon as you phrase your question and pass it on to your subconscious, your brain just Googles itself with a query like
why people forget things
looks at the first few hits it comes across, maybe finds their most-general unifier, checks that it's a syntactically valid answer to the question, and responds with,
"Because you are a moron."
Your inner Google has provided a plausible answer to the question, and it sits back, satisfied that it's done its job.
If you instead ask your brain something more specific, such as, "What can I do to help me remember my security pass tomorrow?", thus requiring its answer to refer to you and actions to remember things and tomorrow, your brain may come up with something useful, such as, "Set up a reminder now that will notify you tomorrow morning by cell phone to bring your security pass."
So, try to be at least as careful when asking questions of your brain, as when asking them of Google.
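The "first hit wins" behavior described above can be sketched as a toy satisficing search. Everything here is invented for illustration: the knowledge base, its entries, and the matching rule are all hypothetical stand-ins for whatever the brain actually does.

```python
# Toy sketch of a lazy, satisficing "inner Google": it scans a tiny
# hypothetical knowledge base and returns the FIRST entry that overlaps
# the query at all -- no inference, no ranking. All names and entries
# here are made up for illustration.

KNOWLEDGE_BASE = [
    # A very general, unhelpful answer indexed under generic words.
    ("why people forget things", "Because you are a moron."),
    # A specific, actionable answer indexed under specific words.
    ("remember security pass tomorrow",
     "Set a phone reminder for tomorrow morning."),
]

def lazy_brain(query):
    """Return the answer of the first entry sharing any word with the
    query, then stop -- a satisficing search, not an optimizing one."""
    query_words = {w.strip("?.,!") for w in query.lower().split()}
    for key, answer in KNOWLEDGE_BASE:
        if set(key.split()) & query_words:
            return answer  # satisficed: first plausible hit wins
    return None

# A vague "why" question collides with the generic entry first...
print(lazy_brain("Why do I keep forgetting my security pass?"))
# ...while a specific, action-oriented question skips past it and
# reaches the useful entry.
print(lazy_brain("What can I do to remember my security pass tomorrow?"))
```

The point of the sketch is only the ordering effect: the generic entry sits first and soaks up any query that merely mentions "why", so you only reach the useful answer by phrasing a query that shares no words with the junk entry.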