pjeby comments on Practical Advice Backed By Deep Theories - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
When I read this, I get the same feeling as before, when you wrote about changing your ways in order to introduce your techniques to this forum. The feeling is that when you talk of rigor, you see it as a mere custom, something socially required, and quite amusing, really, since all that rigor can't be true, anyway. After all, it's only possible to make attempts at being precise, so who are you kidding. Plus, truth is irrelevant. And here we are, the LessWrong crowd, all for the image, none for the substance, bad for efficiency.
I wouldn't say that of everybody on LessWrong, but there is certainly a vocal contingent of that stripe. That contingent unfortunately also suffers from the use of cognitive models that, to me, are as primitive as the medieval four-humors model.
So when they push my "ignorance and superstition" buttons in the same posts where they're demanding properly validated rituals and papers for things they could verify for themselves in ten minutes by simple self-experimentation, it's rather difficult to take them seriously as "rationalists". (Especially when they go on to condemn theists for suffering from the same delusions as they are, just externally directed.)
I totally don't mind engaging with people who want to learn something and are willing to actually look at experience, instead of just talking about it and telling themselves they already know what works or what is likely to work, without actually trying it. The other people, I can't do a damn thing for.
If your interest is in "science", I can't help you. I'm not a scientist, and I'm not trying to increase the body of knowledge of science. Science is a movement; I'm interested in individuals. And individual rationalists ought to be able to figure things out for themselves, without needing the stamp of authority.
I also have no interest in being an authority -- the only authority that counts in any field is your own results.
This is why I hope that the next P. J. Eby starts out by first reading the OBLW sequences, and only then begins his explorations into akrasia and willpower.
You cannot verify anything by self-experimentation to nearly the same strength as by "properly validated rituals and papers". The control group is not there as impressive ritual. It is there because self-experimentation is genuinely unreliable.
I agree with Seth Roberts that self-experimentation can provide a suggestive source of anecdotal evidence in advance of doing the studies. It can tell you which studies to do. But in this case it would appear that formal studies were done and failed to back up the claims previously supported by self-experimentation. This is very, very bad. And it is also very common - the gold standard shows that introspection is not systematically trustworthy.
I'm a bit confused as to your goal, Eliezer.
Are you trying to find a fully general solution to the akrasia problem, applicable to any human currently alive… or do you want to know how you can overcome akrasia? The first is going to be a fair bit harder than the second, and you probably don't have time to do that and save the world.
If you shoot a little lower on this one and just try to find something that works for you I think your argument will change… quite a lot.
If you think that's the case, you didn't read the whole Wikipedia page on that, or the cite I gave to a 2001 paper that independently re-creates a portion of NLP's model of emotional physiology. I've seen more than one other peer-reviewed paper in the past that's recreated some portion of "NLP, Volume I", as in, a new experimental result that supports a portion of the NLP model.
Hell, hyperbolic discounting using the visual representation system was explained by NLP submodalities research two decades ago, for crying out loud. And the somatic marker hypothesis is at the very core of NLP. Affective asynchrony? See discussions of "incongruence" and "anchor collapsing" in NLP vI, which demonstrate and explain the existence of duality of affect.
IOW, none of the real research validation of NLP has the letters "N-L-P" on it.
Unreliable for what purpose? I would think that for any individual's purpose, self-experimentation is the ONLY standard that counts... it's of no value to me if a medicine is statistically proven to work 99% of the time, if it doesn't work for ME.
Unreliable for getting true explanations. Self-experimentation is generally too poorly controlled to give unconfounded data about what really caused a result. (Also, typically sample size is too small to justify generalizability.)
The way I'd put it for this stuff is that experiments help communicate why someone would try a technique: they help people distinguish signal from noise, because there are a ton of people out there saying "X works for me."
This sounds like being uninterested in the chances of winning a lottery, since the only thing that matters is whether the lottery will be won by ME, and it costs only a buck to try (perform a self-experiment).
And yet, this sort of thinking produces people who get better results in life, generally. Successful people know they benefit from learning to do one more useful thing than the other guy, so it doesn't matter if they try fifty things and 49 of them don't work, whether those fifty things are in the same book or different books, because the payoff of something that works is (generally speaking) forever.
Success in learning, IOW, is a black-swan strategy: mostly you lose, and occasionally you win big. But I don't see anybody arguing that black swan strategies are mathematically equivalent to playing the lottery.
IMO, the rational strategy is to try things that might work better, knowing that they might fail, yet trying to your utmost to take them seriously and make them work. Hell, I even read "Dianetics" once, or tried to. I got a third of the way through that huge tome before I concluded that it was just a giant hypnotic induction via boredom. (Things I read later about Scientology's use of the book seem to actually support this hypothesis.)
This became infeasible with the invention of the printing press. There is too much stuff out there for any given person to learn. Or to ever see all the titles of the stuff that exists. Or the names of the fields for which it's written. There is too much science, and even more nonsense. You can't just say "read everything". It's physically impossible.
P.S. See this disclaimer, on second thought I connotationally disagree with this comment.
What happened to "Shut up and do the impossible"? ;-)
More seriously, what difference does it make? The winning attitude is not that you have to read everything, it's that if you find one useful thing every now and then that improves your status quo, you already win.
Also, when it comes to self-help, you're in luck -- the number of actually different methods that exist is fairly small, but they are infinitely repeated over and over again in different books, using different language.
My personal sorting tool of choice is looking for specificity of language: techniques that are described in as much sensory-oriented, "near" language as possible, with a minimum of abstraction. I also don't bother evaluating things that don't make claims that would offer an improvement over anything else I've tried, and I have a preference for reading authors who've offered insightful models and useful techniques in the past.
Lately, I've gotten over my snobbish tendency to avoid authors who write things I know or suspect aren't true (e.g. stupid quantum mechanics interpretations); I've realized that it just doesn't have as much to do with whether they will actually have something useful to say, as I used to think it did.
PJ, is there a survey / summary / list of these methods online? Could you please link, or, if there's no such survey, summarize the methods briefly?
90% of everything is hypnosis, NLP, or the law of attraction -- and in a very significant way, they are all the same thing "under the hood", at different degrees of modeling detail and with different preferred operating channels.
NLP has the most precise models, and the greatest emphasis on well-formedness criteria and testing. (At least, the founders had those emphases; "pop NLP" often seems to not even know what well-formedness is.) Hypnosis, OTOH, is just a trancy-form of NLP, LoA, or both.
Pretty much everything in the self-help field can be viewed as a special case, application, or "tips and hints" variation of one of those three things, but using individual authors' terminology, metaphors, and case histories. The possible failure modes are pretty much the same across all of them, too.
There is, by the way, one author who writes about non-mystical applications of the so-called "law of attraction": Robert Fritz. He's the only person I'm aware of who's brought an almost-NLP level of rigor and precision to that concept, and with absolutely no mystical connotations or bad science whatsoever. He doesn't call it LoA; he refers to it as the "creative process", and shows how it's the process that artists, musicians, and even inventors and entrepreneurs normally use to create results. (i.e., a strictly mental+physical process that engages the brain's planning systems, much like what I showed in my video, but on a larger scale.)
His books also contain the largest collection of documented failure modes (biases and broken beliefs) that interfere with this process, based on his workshops and client work. I've found it to be invaluable in my own practice.
(The biggest shortcoming of Fritz's work compared to some more mystical LoA works, however, is that he doesn't address general emotional state or "abundance mindset" issues, at least not directly.)
You keep using that phrase. I do not think it means what you think it does.
The phrase makes some kind of sense to me (although not in that particular case), so in case you're not just trying to drop a geeky reference, let me try to explain what I make of this phrase.
Assume members of alien species X have two reasoning modes A and B which account for all their thinking. In my mind, I model these "modes" as logical calculi, but I guess you could translate this to two distinct points in the "space of possible minds".
An Xian is, at any given time, either in mode A or B, but under certain conditions the mode can flip. In addition to these two reasoning modes, there is a heuristic faculty, which guides the application of specific rules in A and B. Some conclusions can be reached in mode A but not in B, and vice versa, so ideally, an Xian would master performing switches between them.
Now here's the problem: Switching between A and B can only happen if a certain sequence of seemingly nonsensical reasoning steps is taken. Since the sequence is nonsensical, an Xian with a finely tuned heuristic for either A or B will be unlikely to encounter it in the course of normal reasoning.
Now, say that Bloob, an accomplished Xian A-thinker, finds out how to do the switch to B and thus manages to prove a high-value theorem. Bloob will now have major problems communicating his results to his A-thinking peers. They will look at a couple of his proof steps, conclude that they are nonsensical, and label him a crackpot.
Bloob might instead decide (whatever that word means in my story) to target people who are familiar with the switch from A to B. He can show them one of the proof steps, and hope that their heuristic "remembers" that it leads to something good down the road. Such a nonsensical proof step may be saying "Shut up and do the impossible".
So, I suspect that humans do have something like those reasoning modes. They are not necessarily just two, it might not be appropriate to call all of them reasoning, but the main point is that thinking a thought might change the rules of thinking.
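The thought experiment above can be sketched as a toy state machine. Everything here is illustrative (the class name, the particular trigger sequence, the string-valued modes are my own inventions); the only thing the sketch preserves is the structure of the story: a thinker stays in one mode until a specific "nonsensical" sequence of steps flips it.

```python
# Toy model of the Xian thought experiment: two reasoning modes, A and B,
# and a "nonsensical" sequence of steps that flips between them.
# All names and the trigger sequence are illustrative, not canonical.

SWITCH_SEQUENCE = ["shut up", "do the impossible"]  # the nonsensical trigger

class XianThinker:
    def __init__(self):
        self.mode = "A"          # start in mode A
        self.recent_steps = []   # history of reasoning steps taken

    def think(self, step):
        """Record a reasoning step; flip modes if the trigger sequence appears."""
        self.recent_steps.append(step)
        tail = self.recent_steps[-len(SWITCH_SEQUENCE):]
        if tail == SWITCH_SEQUENCE:
            self.mode = "B" if self.mode == "A" else "A"
        return self.mode

thinker = XianThinker()
thinker.think("lemma 1")                   # ordinary step: still mode A
thinker.think("shut up")                   # first half of the trigger
mode = thinker.think("do the impossible")  # trigger completed: mode flips to B
```

The point the sketch makes concrete is that an A-tuned heuristic would never *generate* the trigger steps on its own, since they look like noise from inside mode A, which is why Bloob can't simply publish his proof to A-thinkers.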
I think this idea is very close to the whole area of NLP, hypnosis, and some new-age ideas; e.g., Carlos Castaneda explicitly wants to "teach" you how to shift your mind-state around in the space of possible minds (which, incidentally, is egg-shaped). Not that any of these have ever done anything for me, but I also haven't tried following them.
From self-experimentation (sorry), Buddhist meditation seems to be a kind of thinking that can change the rules of thinking, and I think there is some evidence that it actually changes the brain structurally.
Given the possibility of certain thoughts changing the rules of thinking, what is the rational thing to do? If there's a good answer to this I'm grateful for a link.
Viva randomness! At least it's better than stupidity. And is about as effective as reversed stupidity. Which is not intelligence.
You should know what you need, and what's good for you, better than a random number generator does. And your field of study should do better than a procedure for crafting yet another random option for such a random choice. I wonder how long it would take to stumble on success if you used a hypothetical "buy a random popular book" order option on Amazon.
P.S. See this disclaimer, on second thought I connotationally disagree with this comment.
Strawman?
You sound like someone arguing that evolution shouldn't be able to work because it's all "blind chance". Learning, like evolution, is "unblind chance": what interests me is a combination of what I encounter plus what I already know.
The more I learn, the more I learn about what is and isn't useful, and I've found it useful to drop (or at least reduce the priority of) certain filters that I previously had, while tightening up other filters. That's not really "random", in the same way that natural selection is not "random".
Okay. Another take. Is this really true? How long would it take for a newcomer to walk through every available option? How much would it cost? What chance should he expect, before starting the whole endeavor, that any of the available options will help? For the last question, the lottery analogy fits perfectly, with no "works only for ME" excuse.
I've read dozens of self-help books and numerous websites, etc. and pjeby's claims of repetition seem mostly true (and his point that some who have unscientific philosophies have great practical advice is definitely true in my experience).
That huge numbers of books are about the same things, in different language? Absolutely. Books that contain something genuinely new in self-help are exceedingly rare in my experience. Books that have one or two new twists or better metaphors for explaining the same things are enormously common.
Take for example, "the law of attraction". I don't believe it has any objective external basis: rather, it's a matter of 1. motivation and 2. making your own luck -- i.e. "chance favors the prepared mind". However, the quality of information about its practical applications varies widely, and some of the most woo-woo crazy books -- like one of the ones supposedly written by a spirit being channeled from another universe -- actually have the best practical information for leveraging the psychological benefits of belief.
I'm specifically talking about the "emotional energy scale" model from the book "Ask and It Is Given". Note that I don't know if they invented that model or swiped it from some psych researcher... and I don't really care. By putting that information into a useful context, they gave me more usable information than raw experimental data would have provided.
Now, if I were looking for "truth", I'd certainly trust peer-reviewed research more than I'd trust a channeled being from beyond. But if the being from beyond offers a *useful* model distinction, I don't especially care if it's true.
Now, some people reading this are going to think because I mentioned the LoA that I believe all that quantum garbage -- but I do not. I do believe, however, that self-fulfilling prophecies are useful, and the LoA literature is a great source of raw practical data in the application of self-fulfilling prophecy, as long as you ignore all their theories about why anything works, and focus on testing specific physical and mental techniques, and break down the attitudes.
For example, one fascinating commonality of themes in this literature: the idea of gratitude or abundance, giving things freely to others and it will be given unto you, and a "friendly universe". It's interesting that, although some of these writers are borrowing from each other, others seem to have independently stumbled on an idea or attitude that reflects this notion: that in some larger way, "everything happens for a reason" or "the world is an abundant and giving place".
Most will also insist on the importance of adopting this mindset for achieving results, which makes me wonder: could it be that there is some hardwired machinery in our brains that is triggered by conditions of perceived "abundance"? Is it then triggered by acting as-if conditions are abundant, in the same way that smiling can trigger happiness or friendliness?
It's certainly food for further thought, although in my current simplified model of LoA, I assume that this is more of a test condition: i.e., if someone cannot act as-if they are in abundance, then they have not successfully made whatever internal transition is required. This seems a more parsimonious model at this point, than assuming that the actions themselves are relevant.
They would probably be FAR better off picking ONE book and sticking to it with absolute Zen-master determination, especially if they choose a book that offers sensory-based language, and most importantly, a way to tell if you're doing it right in a relatively short period of time. Comparatively few books contain this, but browsing in a bookstore will certainly find you a few. (I've linked to a few here in the past; "Loving What Is" and "Re-create Your Life" are two of the easiest for a beginner to master, if they pay close attention to the extra distinctions about "listening to yourself" that I've thrown out here on LW.)
Sadly, if you limit yourself to books only, this might well be true. Live trainings and coaching are substantially more likely to make a difference, because the feedback loop can be closed.
I have had more than one student report that after live work with me, they were able to go back and understand all the things in self-help books that they were never able to apply before, because now they knew what those books were actually talking about, once they had experiential reference points. (It's unfortunately a lot easier to recognize whether a guru is "for real" once you are one, than before.)
My original goal for the book I am currently writing was to create a kind of Rosetta Stone for self-help material, but I have concluded that all I can really do is make such a Rosetta Stone for the sort of person who already would've found my approach enlightening -- or more precisely, I can write a book that will get past the kind of filters that would keep a lot of those people from learning from the sources I learned things from. But the very fact that I do it that way will be a filter for a different group of people!
And this, by the way, is why we won't see a scientifically-validated model of these things any time soon: learning them really requires a feedback loop of some kind, and most books don't include enough of one to work with EVERYBODY, only for the set of people whose perceptual filters initially match those used by the writer. (Of course, even if there was such a feedback loop, it's not prestigious to test practical ideas that somebody else came up with, versus impractical new ones.)
In the first draft of my book, I listed all sorts of ways to get a certain popular visualization technique wrong, that had bedeviled me and some of my students in the past. My newer students read it... and promptly found NEW ways to get it wrong, that I had to give them live feedback to fix.
I'll add those ways of getting it wrong to the second draft, but I'm now far less confident that it is possible to eliminate ALL the ways that somebody can misinterpret a discussion of how to observe or manipulate their internal experience.
(And if I actually included ALL the ways I know of to get popular techniques or self-help ideas wrong, it would be much longer than the instructions of how to get them right... thereby making an unusable and unmarketable book. Which is probably why most self-help books only give a handful of misinterpretations and hope for the best. It probably doesn't hurt that there are also financial rewards for selling some of your readers on live programs, but I honestly would like there to be a book that doesn't need that option... I've just given up on my current book being that book.)
By far the best way to learn is with someone who can tell from your external behavior whether you're doing it wrong, being a kind of human biofeedback system. The way I learned was definitely the hard way.
However, for the kind of successful person that I was talking about, these caveats don't apply. A person with the attitude I was referring to, will find something useful in virtually anything they read, and promptly apply it. These are also the people who need self-help least, but that was actually part of my original point.
What I probably wasn't clear enough on, was that it's this attitude that determines the person's success in LIFE, not their success in finding good self-help books! We are now way off of that particular reservation.
The plural of anecdote is not data. Many people will tell you how they were cured by faith healers or other quacks, and, indeed, they had problems that went away after being "treated" by the quack. Does that make the quacks effective or give credibility to their theories about the human body?
The same applies to methods of affecting the human brain. As a non-expert, from the outside I can't tell the difference between NLP, Freudian psychotherapy, and whatever hocus-pocus Scientology says helps people. All have elaborate theories to explain their alleged benefits, and all have had people who swear it works.
To quote Wikipedia:
Until I do see some acceptance among the academic community, I remain unconvinced that NLP is anything more than a self-reinforcing collection of hypotheses, speculation, and metaphors. It could very well be otherwise, but I can't know that it isn't!
Few of your comments here seem to me to describe things that are obviously checkable in ten minutes by simple self-experimentation. (Even ignoring the severe unreliability of self-experimentation, since doubtless there are at least some instances in which self-experimentation can provide substantial evidence.) Perhaps they are so checkable with the help of extra information that you've declined to provide. Perhaps I've just not read the right comments. Perhaps I've read the right comments and forgotten them. Would you care to clarify?
Mostly, I've offered questions that people could ask themselves in relation to specific procrastination scenarios, that would give them an insight into the process of how they're doing it. IIRC, two people have reported back with positive hits; one of the two also had a second scenario, for which my first question did not produce a result, but it's not clear yet what the answer to my second question was. (I gave both questions up front, along with the sequence to use them in, and criteria for determining whether an answer was "near" or "far", along with instructions to reject the "far" answers. One respondent gave a "far" answer, so I asked them to repeat.)
I've also linked to a video offering a simple motivational technique based on my model; a few people have posted positive comments here, and I've also gotten a number of private emails from users here via the feedback form on my site, expressing gratitude for its usefulness to them. The video is just about 10 minutes long.
In another comment, I described a simple NLP submodalities exercise that could be tried in a few minutes, albeit with the disclaimer that some people find it hard to consciously observe or manipulate submodalities directly. (The technique in my video is a bit more indirect, and designed to avoid conscious interference in the less-conscious aspects of the process.)
I've referenced various books on other techniques I've used, and I believe I even mentioned that Byron Katie's site at thework.org includes a free 20-page excerpt from Loving What Is that provides instructions for a testable technique that operates on the same fundamental basis as my models.
I'm really not sure what the heck else people want. Even if you claim, as Eliezer does, that he can't understand my writing, it's not like I haven't referenced plenty of other people's writing, and even my spoken language (in the video) as alternative options.
I also find your writing difficult. If you'll accept a recommendation, I think your readership here might get more from shorter comments in which more work has gone into each word.
So, I watched the video (some time ago, when you posted about it) and gave it one trial. The technique wasn't effective for me on the task I tried it on. The particular failure mode was one you mentioned in the video, and if you are correct about the generality with which it makes the technique not work then I would expect the technique to be generally ineffective for the things I'd benefit from motivational help with.
Your suggestions about identifying the causes of procrastination: I haven't tried that yet, and it sounds interesting; I notice that when someone did try it and got results that didn't perfectly match your theory your immediate response was not "oh, that's interesting; perhaps my theory needs some tweaking" but "I don't believe you". Can you see how this might make people skeptical?
Referencing books is only helpful in so far as (1) it's not necessary to read the whole of a lengthy book to extract the small piece of information you've been asked for, (2) the book is clearly credible, and (3) the book is actually available (e.g., in lots of libraries, or inexpensive, or online). To those who are skeptical about the whole self-help business, #2 is a pretty difficult criterion to meet.
Indeed. It is supposed to be a free sample, after all. The work I charge for is fixing those things that make it not work. The things that make motivation not work are much, much more diverse than the things that make it actually work.
My response was, "you didn't follow directions", actually. Unless you're talking about the first part where the only information given was, "it didn't work". If you've ever done software tech support, you already know that "it didn't work" is not a well-formed answer. (Similarly, the later answer given was also not well-formed, by the criteria I laid out in advance.)
Failure to meet entry criterion for a technique does not constitute failure of the technique or the model: if you build a plane without an engine, and it doesn't take off, this does not represent a failure of aerodynamics. Indeed, aerodynamics predicts that failure mode, and so did I.
The response I got was not unexpected; it's common for people to have trouble at first, especially on things they don't want to look too closely at. I've had people spend up to 30 minutes in the "talk around the problem" failure mode before they could actually look at what they were thinking. The other most common failure mode is that somebody does see or hear something, but rejects it as nonsensical or irrelevant, then reports that they didn't get anything.
Third most common failure mode is lack of body awareness or physical suppression, but I know he doesn't have that as a general problem because his first response indicated awareness. His first response also indicated he is capable of perceiving responses, so that pretty much narrows it down to avoidance or assumption of irrelevance. If it's neither, then it might be relevant to a model update, especially if it's a repeatable result.
(At this point, however, he's going to have to ask the second question again to test that, because these responses don't stick in long-term memory; in a sense, they are long-term memory.)
I think this (not the fact that it's a free sample, but the fact that apparently it's a feature, not a bug, if it doesn't work well for many people) makes it rather unuseful as a try-it-yourself demonstration of how good your models and techniques are.
There was no such first part; even jimrandomh's initial response had more information than that in it. And after he gave more information your reply was still "I don't believe you" rather than "you didn't follow directions". Interested parties can check the thread for themselves.
No, to be sure. But once you hedge your description of your technique and what it's supposed to achieve with so many qualifications -- once you say, in so many words, that you expect it not to work when tried -- how can it possibly be reasonable for you to use it as an example of how you've supplied us with empirically testable evidence for what you say?
Saying "You can check my ideas by trying this technique -- but of course it's quite likely not to work" is just like saying "You can check my belief in God by praying to him for a miracle -- but of course he works in mysterious ways and often says no."
The point of the exercise is that it's targeted to work for as many people as possible for a fairly narrow range of tasks, so as to give a sample of what it's like when it works.
Even chronic procrastinators can achieve success with the technique, as long as they don't use it on the thing they're procrastinating on -- it only works if you don't distract yourself with other thoughts, and if you're stressed about something, you're probably going to distract yourself with other thoughts.
Most people, however, don't seem to have any significant stressors about cleaning their desk. Also, it's not a difficult thing to visualize in its completed form.
Btw, just as a datapoint, what did you try it on, and what failure mode did you encounter? I am, ironically, MORE interested in failure reports than successes; the video continually gets rave reviews, but as much as I enjoy them, I can't learn anything new from another success report!
I just rechecked myself; here are the relevant portions. Jim said:
I took this statement as a literal description of what happened, i.e., jim thought about "it" -- whatever "it" was -- got no physical response, and had thoughts about the details of the task. THEN (2nd step) he was unable to begin working on it.
"Unable to begin working on it" is the part I referred to as not well-formed; this does not contain any description of how he arrived at that conclusion. It is the equivalent of "it doesn't work" in tech support.
The unspecified "it" is also potentially relevant; I don't know if he refers there to the task itself, or one of the questions I said to ask about the task; and this is an important distinction. I've also noticed that some people can "think about their task" and not get a response because they are not thinking about actually starting on the task... and Jim's statements would be consistent with a sequence of thinking about the idea of the task, followed by preparing to actually perform the task... at which point an undescribed response is occurring, whereby he is then "unable to" perform the task.
I commented on the conflict between these two statements:
Meaning: as far as I can tell, those statements are not talking about the same thing. I.e., one is a referent to some sort of pre-task preparation unrelated to the problem, and the other is actually about beginning it.
In other words: all the information was in the first sentence, but the second one is where the problem actually is. So I then asked Jim to direct his attention to that part of his thought process, and get more specific:
He then replied with two more not-well-formed statements; instead of describing his thoughts or experiences, he offered abstract, "far" explanations about the subject matter rather than his direct response to it, i.e.:
and:
Neither of these utterances describes a concrete experience; they are verbalizations of precisely the kind I described in the "how to know if you're making shit up" comment beforehand. They are far, not near thinking, and my techniques only use far thinking to ask questions, and determine what questions to ask. The answers sought, however, are exclusively "near".
Thus, when someone replies with a "far" answer, I know that they have not actually answered my question or followed instructions - they are not using the part of their brain that will produce the desired result.
Notice, by the way, that at no time did I say I did not believe him. I took him quite literally at his word, to the extent that he gave me words that map to some sort of experience.
I tried it on the same example you proposed: desk-clearing. My desk is a mess; I would quite like it to be less of a mess; clearing it is never a high enough priority to make it happen. But I don't react to the thought of a clear desk with the "Mmmmmm..." response that you say is necessary for the technique to work.
As for your discussion with Jim: you did not at any point tell him that he didn't do what you'd told him to, or say anything that implied that; you did say that you think his statements contradict one another (implication: at least one of them is false; implication: you do not believe him). And then when he claimed that what stopped him was apathy and down-prioritizing by "the attention-allocating part of my brain" you told him that that wasn't really an answer, and your justification for that was that his brain doesn't really work in the way he said (implication: what he said was false; aliter, you didn't believe him).
So although you didn't use the words "I don't believe him", you did tell him that what he said couldn't be correct.
Incidentally, I find your usage of the word "incompatible" as described here so bizarre that it's hard not to see it as a rationalization aimed at avoiding admitting that you told jimrandomh he'd contradicted himself when in fact all he'd done was to say two things that couldn't both be true if your model of his mind is correct. However, I'll take your word for it that you really meant what you say you meant, and suggest that when you're using a word in so nonstandard a way you might do well to say so at the time.
Did you ask yourself what it is that you would enjoy about it if it were already clean? (Again, this is strictly for my information.) Note that the procedure described in the video asks for you to wonder about what sorts of qualities would be good if you already had a clean desk, in order to find something that you like about the idea enough to generate the feeling of pleasure or relief.
Au contraire, I said:
That is, I directed him to the "How To Know If You're Making Shit Up" comment -- the comment in which I gave him the directions, and which explained why his utterance was not well-formed.
This is an awful lot of projection on your part. The contradiction I was pointing to was that he was talking about two different things -- the statements were incompatible with a description of the same thing.
That is not anything like the same as "I don't believe you"; from what Jim said, I don't even have enough information to believe or not-believe something! Hence, "as far as I can tell" ("AFAICT"), and the request for more information... not unlike my requests for more information from you about what you tried.
"It didn't work" is not an answer which provides me any information suitable for updating a model, any more than it is for a programmer trying to find a bug. The programmer needs to know at a minimum what you did, and what you got instead of the desired result. (Well, in the software case you also want to know what the desired result was; in this kind of context it can sometimes be assumed.)
Because it isn't one: it's a made-up explanation, not a description of an experience. See the comment I referred him to.
If someone states something that is not a testable hypothesis, how can I "believe" or "disbelieve" it? They are simply speaking nonsense. Unless Jim has a blueprint of his brain with something marked "attention-allocating part" and he has an EEG or brain scan to show this activity, how can I possibly assign any truth value to that claim?
In contrast, if Jim presents me with a sensory-specific description of his experience, I have the option of taking him at his word. His experience may be subjective, but it at least is something I can model internally and have a reasonable certainty that I know what he's talking about.
For example, when a client tells me they have a "feeling", my minimum criterion is that they can describe it in sensory terms, including its rough location in the body. If they say, "it's just a feeling", then I have no information I can actually use. Same goes for a vague description like "I just can't do it", or in Jim's case, "I'm completely unable to begin".
If you want to make any sort of progress in an art of thinking and behavior, it is necessary to be excruciatingly precise when you talk about the thinking and behavior. Abstract language is dreadfully imprecise, as you can see from the present exchange. However, people routinely use such abstract language while thinking they're being precise, which is why the first order of business with my clients is breaking through their fuzzy ways of speaking and thinking about their thinking.
That was not "all" he'd done: he also said things that couldn't both be true if they were talking about the same thing, and that is what I was referring to. I then proceeded on the assumption that there were thus two different things, occurring in succession, one of which I had virtually no information about, only assumptions.
You seem to want me to speak as if I don't believe my model is true. However, I have enough experience applying that model to enough different people to know that the probability of someone using imprecise language, or not doing precisely what I asked them to do, is significantly (by which I mean at least one, maybe two orders of magnitude) higher than the probability that they are offering me any information that can update my model, let alone falsify it.
That means I need more bits of data about a hypothetically-disconfirming event, than I do about a confirming event... which is why I asked Jim for more information, and why I've done the same with you.
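The "more bits" claim above is just Bayesian arithmetic, and can be sketched with illustrative numbers (the probabilities below are assumptions for illustration only, not measurements): the weight of a report, in bits, is the log of its likelihood ratio, and a vague failure report that is easily produced by misapplication carries far fewer bits against the technique than a precise one would.

```python
import math

def evidence_bits(p_e_given_not_h: float, p_e_given_h: float) -> float:
    """Bits of evidence a report E carries against hypothesis H
    ("the technique works"): log2 of the likelihood ratio."""
    return math.log2(p_e_given_not_h / p_e_given_h)

# A vague "it didn't work" is easily produced by misapplication even
# when the technique works, so it carries little evidence:
vague = evidence_bits(p_e_given_not_h=0.9, p_e_given_h=0.3)

# A precise, sensory-specific failure description is rarely produced
# when the technique works, so it carries much more:
precise = evidence_bits(p_e_given_not_h=0.6, p_e_given_h=0.02)

print(round(vague, 2))    # ~1.58 bits
print(round(precise, 2))  # ~4.91 bits
```

Under these illustrative numbers, disconfirmation needs roughly three times the evidential weight per report before it moves the model as far as the same number of precise reports would.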
That you are selectively ignoring everything I'm doing to get good information, while simultaneously accusing me of post-hoc rationalization, suggests that it's your own epistemology that needs a bit more work.
Perhaps you should state in advance what criteria it is that you would like me to meet, so that I don't have to keep up with a moving target. That is, what evidence would convince you to update?
Can you link to these things? Your comments? Here? There's an LW search box.
How To Tell If You're Making Shit Up
NLP Submodalities Experiment
Motivation technique video
Edit to add:
"How To Tell If You're Making Shit Up" seems useful. Do you see why this would seem useful to me while "NLP Submodalities" doesn't?
For the same reason that yours and Robin's writing on biases is more useful than the source material, I imagine. That is, it's been predigested. It probably also doesn't hurt that I have to teach "how to tell if you're making shit up" to every single client of mine, so I have some practice at doing so! (Albeit mostly in real-time interaction.)
FYI, NLP: Volume I represents the more detailed "brain software" model from which that summary was derived, which I recommended to you because you said you couldn't follow my writing.
You can also see why I was excited when Robin started posting about near/far stuff on OB -- it fit very nicely into the work I was already doing, and into the NLP presupposition that "conscious verbal responses are to be treated as unsubstantiated rumor unless confirmed by unconscious nonverbal response" -- i.e., don't trust what somebody says about their behavior, because that's not the system that runs the behavior.
The near/far distinction mainly added an evolutionary explanation that was not a part of NLP, and gave a better "why" for not trusting the verbal explanation. Near/far in a literal sense, as in "people respond differently based on distance in space/time/abstraction level of visualization", has been part of the NLP models for over 20 years now. But once again, the mainstream experiments are just now being done, presumably by people who've never heard of NLP, or who assume it's crackpottery.