gjm comments on Practical Advice Backed By Deep Theories - Less Wrong
Mostly, I've offered questions that people could ask themselves in relation to specific procrastination scenarios, to give them insight into the process of how they're doing it. IIRC, two people have reported back with positive hits; one of the two also had a second scenario, for which my first question did not produce a result, but it's not clear yet what the answer to my second question was. (I gave both questions up front, along with the sequence to use them in and criteria for determining whether an answer was "near" or "far", plus instructions to reject the "far" answers. One respondent gave a "far" answer, so I asked them to repeat.)
I've also linked to a video offering a simple motivational technique based on my model; a few people have posted positive comments, and I've also gotten a number of private emails from users here via the feedback form on my site, expressing gratitude for its usefulness to them. The video is just about 10 minutes long.
In another comment, I described a simple NLP submodalities exercise that could be tried in a few minutes, albeit with the disclaimer that some people find it hard to consciously observe or manipulate submodalities directly. (The technique in my video is a bit more indirect, and designed to avoid conscious interference in the less-conscious aspects of the process.)
I've referenced various books on other techniques I've used, and I believe I even mentioned that Byron Katie's site at thework.org includes a free 20-page excerpt from Loving What Is that provides instructions for a testable technique that operates on the same fundamental basis as my models.
I'm really not sure what the heck else people want. Even if you claim, as Eliezer does, that you can't understand my writing, it's not like I haven't referenced plenty of other people's writing, and even offered my own spoken language (in the video) as an alternative.
So, I watched the video (some time ago, when you posted about it) and gave it one trial. The technique wasn't effective for me on the task I tried it on. The particular failure mode was one you mentioned in the video, and if you are correct about the generality with which it makes the technique not work then I would expect the technique to be generally ineffective for the things I'd benefit from motivational help with.
Your suggestions about identifying the causes of procrastination: I haven't tried that yet, and it sounds interesting; I notice that when someone did try it and got results that didn't perfectly match your theory your immediate response was not "oh, that's interesting; perhaps my theory needs some tweaking" but "I don't believe you". Can you see how this might make people skeptical?
Referencing books is only helpful insofar as (1) it's not necessary to read the whole of a lengthy book to extract the small piece of information you've been asked for, (2) the book is clearly credible, and (3) the book is actually available (e.g., in lots of libraries, or inexpensive, or online). To those who are skeptical about the whole self-help business, #2 is a pretty difficult criterion to meet.
Indeed. It is supposed to be a free sample, after all. The work I charge for is fixing those things that make it not work. The things that make motivation not work are much, much more diverse than the things that make it actually work.
My response was, "you didn't follow directions", actually. Unless you're talking about the first part where the only information given was, "it didn't work". If you've ever done software tech support, you already know that "it didn't work" is not a well-formed answer. (Similarly, the later answer given was also not well-formed, by the criteria I laid out in advance.)
Failure to meet entry criterion for a technique does not constitute failure of the technique or the model: if you build a plane without an engine, and it doesn't take off, this does not represent a failure of aerodynamics. Indeed, aerodynamics predicts that failure mode, and so did I.
The response I got was not unexpected; it's common for people to have trouble at first, especially on things they don't want to look too closely at. I've had people spend up to 30 minutes in the "talk around the problem" failure mode before they could actually look at what they were thinking. The other most common failure mode is that somebody does see or hear something, but rejects it as nonsensical or irrelevant, then reports that they didn't get anything.
Third most common failure mode is lack of body awareness or physical suppression, but I know he doesn't have that as a general problem because his first response indicated awareness. His first response also indicated he is capable of perceiving responses, so that pretty much narrows it down to avoidance or assumption of irrelevance. If it's neither, then it might be relevant to a model update, especially if it's a repeatable result.
(At this point, however, he's going to have to repeat the second question in order to test that, because these responses don't stick in long-term memory; in a sense, they are long-term memory.)
I think this (not the fact that it's a free sample, but the fact that apparently it's a feature, not a bug, if it doesn't work well for many people) makes it rather unuseful as a try-it-yourself demonstration of how good your models and techniques are.
There was no such first part; even jimrandomh's initial response had more information than that in it. And after he gave more information your reply was still "I don't believe you" rather than "you didn't follow directions". Interested parties can check the thread for themselves.
No, to be sure. But once you hedge your description of your technique and what it's supposed to achieve with so many qualifications -- once you say, in so many words, that you expect it not to work when tried -- how can it possibly be reasonable for you to use it as an example of how you've supplied us with empirically testable evidence for what you say?
Saying "You can check my ideas by trying this technique -- but of course it's quite likely not to work" is just like saying "You can check my belief in God by praying to him for a miracle -- but of course he works in mysterious ways and often says no."
The point of the exercise is that it's targeted to work for as many people as possible for a fairly narrow range of tasks, so as to give a sample of what it's like when it works.
Even chronic procrastinators can achieve success with the technique, as long as they don't use it on the thing they're procrastinating on -- it only works if you don't distract yourself with other thoughts, and if you're stressed about something, you're probably going to distract yourself with other thoughts.
Most people, however, don't seem to have any significant stressors about cleaning their desk. Also, it's not a difficult thing to visualize in its completed form.
Btw, just as a datapoint, what did you try it on, and what failure mode did you encounter? I am, ironically, MORE interested in failure reports than successes; the video continually gets rave reviews, but as much as I enjoy them, I can't learn anything new from another success report!
I just rechecked myself; here are the relevant portions. Jim said:
I took this statement as a literal description of what happened, i.e., jim thought about "it" -- whatever "it" was -- got no physical response, and had thoughts about the details of the task. THEN (2nd step) he was unable to begin working on it.
"Unable to begin working on it" is the part I referred to as not well-formed; this does not contain any description of how he arrived at that conclusion. It is the equivalent of "it doesn't work" in tech support.
The unspecified "it" is also potentially relevant; I don't know if he refers there to the task itself, or one of the questions I said to ask about the task; and this is an important distinction. I've also noticed that some people can "think about their task" and not get a response because they are not thinking about actually starting on the task... and Jim's statements would be consistent with a sequence of thinking about the idea of the task, followed by preparing to actually perform the task... at which point an undescribed response is occurring, whereby he is then "unable to" perform the task.
I commented on the conflict between these two statements:
Meaning: as far as I can tell, those statements are not talking about the same thing. I.e., one is a referent to some sort of pre-task preparation unrelated to the problem, and the other is actually about beginning it.
In other words: all the information was in the first sentence, but the second one is where the problem actually is. So I then asked Jim to direct his attention to that part of his thought process, and get more specific:
He then replied with two more not well-formed statements; instead of describing his thoughts or experiences, he replied with abstract, "far" explanations about the subject matter, instead of his direct response to the subject matter, i.e.:
and:
Neither of these utterances describes a concrete experience; they are verbalizations of precisely the kind I described in the "how to know if you're making shit up" comment beforehand. They are far, not near thinking, and my techniques only use far thinking to ask questions, and determine what questions to ask. The answers sought, however, are exclusively "near".
Thus, when someone replies with a "far" answer, I know that they have not actually answered my question or followed instructions - they are not using the part of their brain that will produce the desired result.
Notice, by the way, that at no time did I say I did not believe him. I took him quite literally at his word, to the extent that he gave me words that map to some sort of experience.
I tried it on the same example you proposed: desk-clearing. My desk is a mess; I would quite like it to be less of a mess; clearing it is never a high enough priority to make it happen. But I don't react to the thought of a clear desk with the "Mmmmmm..." response that you say is necessary for the technique to work.
As for your discussion with Jim: you did not at any point tell him that he didn't do what you'd told him to, or say anything that implied that; you did say that you think his statements contradict one another (implication: at least one of them is false; implication: you do not believe him). And then when he claimed that what stopped him was apathy and down-prioritizing by "the attention-allocating part of my brain" you told him that that wasn't really an answer, and your justification for that was that his brain doesn't really work in the way he said (implication: what he said was false; aliter, you didn't believe him).
So although you didn't use the words "I don't believe him", you did tell him that what he said couldn't be correct.
Incidentally, I find your usage of the word "incompatible" as described here so bizarre that it's hard not to see it as a rationalization aimed at avoiding admitting that you told jimrandomh he'd contradicted himself when in fact all he'd done was to say two things that couldn't both be true if your model of his mind is correct. However, I'll take your word for it that you really meant what you say you meant, and suggest that when you're using a word in so nonstandard a way you might do well to say so at the time.
Did you ask yourself what it is that you would enjoy about it if it were already clean? (Again, this is strictly for my information.) Note that the procedure described in the video asks for you to wonder about what sorts of qualities would be good if you already had a clean desk, in order to find something that you like about the idea enough to generate the feeling of pleasure or relief.
Au contraire, I said:
That is, I directed him to the "How To Know If You're Making Shit Up" comment -- the comment in which I gave him the directions, and which explained why his utterance was not well-formed.
This is an awful lot of projection on your part. The contradiction I was pointing to was that he was talking about two different things -- the statements were incompatible with a description of the same thing.
That is not anything like the same as "I don't believe you"; from what Jim said, I don't even have enough information to believe or not-believe something! Hence, "as far as I can tell" ("AFAICT"), and the request for more information... not unlike my requests for more information from you about what you tried.
"It didn't work" is not an answer which provides me any information suitable for updating a model, any more than it is for a programmer trying to find a bug. The programmer needs to know at a minimum what you did, and what you got instead of the desired result. (Well, in the software case you also want to know what the desired result was; in this kind of context it can sometimes be assumed.)
Because it isn't one: it's a made-up explanation, not a description of an experience. See the comment I referred him to.
If someone states something that is not a testable hypothesis, how can I "believe" or "disbelieve" it? They are simply speaking nonsense. Unless Jim has a blueprint of his brain with something marked "attention-allocating part" and he has an EEG or brain scan to show this activity, how can I possibly assign any truth value to that claim?
In contrast, if Jim presents me with a sensory-specific description of his experience, I have the option of taking him at his word. His experience may be subjective, but it at least is something I can model internally and have a reasonable certainty that I know what he's talking about.
For example, when a client tells me they have a "feeling", for instance, my minimum criterion is that they can describe it in sensory terms, including its rough location in the body. If they say, "it's just a feeling", then I have no information I can actually use. Same goes for a vague description like "I just can't do it", or in Jim's case, "I'm completely unable to begin".
If you want to make any sort of progress in an art of thinking and behavior, it is necessary to be excruciatingly precise when you talk about the thinking and behavior. Abstract language is dreadfully imprecise, as you can see from the present exchange. However, people routinely use such abstract language while thinking they're being precise, which is why the first order of business with my clients is breaking through their fuzzy ways of speaking and thinking about their thinking.
That was not "all" he'd done: he also said things that couldn't both be true if they were talking about the same thing, and that is what I was referring to. I then proceeded on the assumption that there were thus two different things, occurring in succession, one of which I had virtually no information about, only assumptions.
You seem to want me to speak as if I don't believe my model is true. However, I have enough experience applying that model to enough different people to know that the probability of someone using imprecise language or not doing precisely what I asked them to do is significantly higher (by at least one, maybe two, orders of magnitude) than the probability that they are offering me any information that can update my model, let alone falsify it.
That means I need more bits of data about a hypothetically-disconfirming event, than I do about a confirming event... which is why I asked Jim for more information, and why I've done the same with you.
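(The evidential asymmetry described above can be illustrated with a simple Bayesian update. This is a hedged, hypothetical sketch — the numbers below are invented for illustration, not taken from the discussion. The point it demonstrates: when a failure report is nearly as likely under "model true" as under "model false" — because most failures come from not following directions — a single such report barely shifts the posterior.)

```python
# Illustrative sketch (hypothetical numbers): why a single "it didn't
# work" report carries little evidential weight when most failures are
# procedural rather than model-disconfirming.

def posterior(prior, p_report_given_true, p_report_given_false):
    """Bayes' rule: P(model true | failure report)."""
    num = p_report_given_true * prior
    denom = num + p_report_given_false * (1 - prior)
    return num / denom

prior = 0.9  # hypothetical prior confidence in the model

# If most failure reports arise from not following directions, a failure
# report is almost as likely when the model is true as when it is false,
# so it moves the posterior very little.
p_fail_given_true = 0.5    # e.g. ~half of trials fail for procedural reasons
p_fail_given_false = 0.55  # only slightly likelier if the model is wrong

updated = posterior(prior, p_fail_given_true, p_fail_given_false)
print(round(updated, 3))  # 0.891 -- barely below the prior of 0.9
```

Under these (assumed) likelihoods, one failure report moves the posterior from 0.90 to about 0.89; it would take many such reports, or a much more diagnostic report, to meaningfully update the model.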
That you are selectively ignoring everything I'm doing to get good information, while simultaneously accusing me of post-hoc rationalization, suggests that it's your own epistemology that needs a bit more work.
Perhaps you should state in advance what criteria it is that you would like me to meet, so that I don't have to keep up with a moving target. That is, what evidence would convince you to update?
This discussion is getting waaay too long and distinctly off-topic; but, as briefly as I can manage:
Yes.
No, I did not do that. I said that what you're doing looks a lot like post-hoc rationalization, but that I'd take your word that it wasn't. I meant what I said.
I am updating all the time. Lots of things that you've said have led to adjustments (both ways) in my estimates for Pr(Philip knows exactly what he's talking about) and Pr(Philip is an outright charlatan) and the various intermediate possibilities. Perhaps you mean: what evidence would lead to a large upward change for the "better" possibilities? I'm not sure that any single smallish-sized piece of evidence would do that. But how about: some reasonably precise statements explaining key bits of your model, together with some non-anecdotal and publicly available evidence for their correctness.
I think that perhaps the problem here is that we are trying to treat you as a colleague whereas you prefer to treat us as clients. We say "your theories sound interesting; please tell us more about them, and provide some evidence"; you say "well, I want you to do such-and-such, and you have to do exactly what I tell you to". This is unhelpful because (1) it doesn't actually answer the question and (2) it is liable to feel patronizing, and people seldom react well to being patronized.
(By "we" it is possible that I really mean "I", but it looks to me as if there are others who feel the same way.)
There are two modes of thinking. One directly makes you do things, the other one can only do so indirectly. One is based on non-verbal concrete sensory information, the other on verbal and mathematical abstractions.
Verbal abstractions can comment on themselves or on sensory experience, or they can induce sensory experience through the process of self-suggestion -- e.g. priming and reading stories are both examples of translating verbal information to the sensory system, to produce emotional responses and/or actions.
More specifically, we make decisions and take action by reference to "feelings" (in the technical definition of physical awareness of the body/mind changes produced by an emotional response).
Feelings (or more precisely, the emotions that generate the feelings) occur in response to predictions made by our brain, using past sensory experience. But because the sensory system does not "understand", only predict, many of these predictions are based on limited observation, confirmation bias, etc.
When our behavior is not as we expect -- when we experience being "blocked" -- it is because our conscious verbal/abstract assessment or prediction does not match our sensory-level prediction. We "know" there is no ghost, but run away anyway.
Surfacing the actual sensory prediction allows it to be modified, by comparing it to contradicting sensory evidence, whether real or imagined.
This is the bulk of the portion of my model that relates to treating chronic procrastination, though most of it has further applications.
You'll need to define "evidence". But the parts of what I said above that aren't part of the experimentally-backed near/far model and the "somatic marker hypothesis" can be investigated in personal experience. And here's a paper supporting the memory-prediction-emotion-action cycle of my model.
Actually, it does. I'm trying to tell you how to experience the particular types of experience that demonstrate practical applications of the model given above. Not following instructions won't produce that result, because you'll still be using the verbal thinking mode and commenting on your own comments instead of noticing your sensory experience.
My goal is not to define a "true" model of the brain; my goals are about doing useful things with the brain. The model I have exists to serve the results, not the other way around. I already had the model before I heard of "near/far", "somatic marker hypothesis", or the "feeling/emotion" model in that paper, so they are merely supporting/confirming results, not what I used to generate the model to start with. I was interested in them because they added interesting or useful details to the model.
Actually, I'm handling folks with kid gloves, compared to my students. If Jim were an actual client, there are things he said that I would have cut him off in the middle of, and said, "okay, that's great, but how about: [repeat question here] Just ask the question, and wait for an answer."
I usually give people more leeway towards the beginning of a session, and let them finish their ramblings before going on, but I cut it off more and more quickly as the session proceeds... especially if there's an audience, and they're thus wasting everyone's time, not just mine, their own, and the money they're spending.
I also wouldn't have bothered to refer Jim to my well-formedness guidelines until after I first got the desired result: i.e., a change to his automatic thought process. Once I had a verified success, only then would be the time to reiterate the different modes of thought and point back to how different statements he made did or did not conform to the guidelines.
Since my goal here was to provide information rather than training services -- and because this is a public, rather than private forum -- I tilted my responses accordingly. This is not me doing my impression of Eliezer or Jeffreysai; it's me bending over backwards to be nice, possibly at the expense of conveying quality information.
The real conflict that I see is that for me, "quality information" means "information you can apply". Whereas, it seems the prevailing standard on LW (at least for the most-vocal commenters) is that "quality" equals some abstraction about "truth", that progressively retreats. It's not enough to be true for one person, it must be true for lots of people. No, all people. No, it has to be all people, even if they don't follow instructions. No, it has to have had experiments in a journal. No, the experiments can't just be in support of the NLP model, the paper has to say it's about NLP, because we can't be bothered to look at where NLP said the same things 20-30 years ago.
Frankly, I'm beginning to forget why I bothered trying to share any information here in the first place.
I think the problem here is that the internet is great when you want to share information with people but is not a consistently good venue for convincing people of something, particularly when the initially least convinced people are self-selecting for interaction with you. Pick your battles, I'd say.
I didn't say "nasty", I said "patronizing".
If someone tells you that by praying in a particular way anyone can achieve spiritual union with the creator of the universe, and you ask for evidence, it is Not Helpful if they tell you "just try it and see". (Especially if they add that actually, on past experience, the chances are that if you try it you won't see because you won't really be doing it right; and that to do it right you have to suspend your disbelief in what they're telling you and agree to obey all their instructions. But that's a separate can of worms.) Because (1) you won't know for sure whether you really have achieved spiritual union with the creator of the universe (it might just feel that way), and (2) you'll have discovered scarcely anything about how it works for anyone else. You might be more impressed if they can point to some sort of statistical evidence that shows (say) that people who pray in their preferred way are particularly good at discovering new laws of physics, which they attribute to their intimate connection to the creator of the universe.
More briefly: If someone asks for evidence, then "if you do exactly what I tell you to and suspend disbelief, then you might feel what I say you will" is not answering their question.
I haven't observed this progressive retreat (it looks more to me like a progressive realisation on your part of what the fussier denizens of LW had wanted all along). But I do have a comment on the last step you described -- "the paper has to say it's about NLP". For anyone who isn't a professional psychologist, neurologist, cognitive scientist, or whatever, determining whether (and how far) a paper like Damasio's supports your claims is a decidedly nontrivial business. (It's easy to verify that some similar words crop up in somewhat-similar contexts, but that's not the same.) Whereas, if a paper says "Our findings provide strong confirmation for the wibbling hypothesis of NLP" and what you're saying is "I accept the wibbling hypothesis as described in NLP texts", that makes it rather easier to get a handle on how much evidence the research actually gives for your claims.
(In the present case, unfortunately but quite reasonably Google Books only lets me read bits of Damasio's paper. I have basically no idea to what extent it confirms your underlying model of human cognition, and even less of whether it offers any support for the conclusions you draw from it about how to improve one's own mind.)
What, because one or two people haven't found what you've said useful, and have said so? That seems a bit extreme.
Just to check, you agree that to be useful any model of the brain has to correspond to how the brain actually works? To that extent, you are seeking a true model. However, if I understand you correctly, your model is a highly compressed representation of how the mind works, so it might not superficially resemble a more detailed model. If this is correct, I can empathize with your position here: any practically useful model of the brain has to be highly compressed, but at this high level of compression, accurate models are mostly indistinguishable from bullshit at first glance.
I am still very unsure about the accuracy of what you are propounding, but anecdotally your comments here have been useful to me.