Robyn Dawes, author of one of the original papers from Judgment Under Uncertainty and of the book Rational Choice in an Uncertain World—one of the few who tries really hard to import the results to real life—is also the author of House of Cards: Psychology and Psychotherapy Built on Myth.

    From House of Cards, chapter 1:

    The ability of these professionals has been subjected to empirical scrutiny—for example, their effectiveness as therapists (Chapter 2), their insight about people (Chapter 3), and the relationship between how well they function and the amount of experience they have had in their field (Chapter 4).  Virtually all the research—and this book will reference more than three hundred empirical investigations and summaries of investigations—has found that these professionals' claims to superior intuitive insight, understanding, and skill as therapists are simply invalid...

    Remember Rorschach ink-blot tests?  It's such an appealing argument: the patient looks at the ink-blot and says what they see, and the psychotherapist interprets their psychological state based on this.  There've been hundreds of experiments looking for some evidence that it actually works.  Since you're reading this, you can guess the answer is simply "No."  Yet the Rorschach is still in use.  It's just such a good story that psychotherapists can't bring themselves to believe the vast mounds of experimental evidence saying it doesn't work—

    —which tells you what sort of field we're dealing with here.

    And the experimental results on the field as a whole are commensurate.  Yes, patients who see psychotherapists have been known to get better faster than patients who simply do nothing.  But there is no statistically discernible difference between the many schools of psychotherapy.  There is no discernible gain from years of expertise.

    And there's also no discernible difference between seeing a psychotherapist and spending the same amount of time talking to a randomly selected college professor from another field.  It's just talking to anyone that helps you get better, apparently.

    In the entire absence of the slightest experimental evidence for their effectiveness, psychotherapists became licensed by states, their testimony accepted in court, their teaching schools accredited, and their bills paid by health insurance.

    And there was also a huge proliferation of "schools", of traditions of practice, in psychotherapy; despite—or perhaps because of—the lack of any experiments showing that one school was better than another...

    I should really post more some other time on all the sad things this says about our world; about how the essence of medicine, as recognized by society and the courts, is not a repertoire of procedures with statistical evidence for their healing effectiveness; but, rather, the right air of authority.

    But the subject today is the proliferation of traditions in psychotherapy.  So far as I can discern, this was the way you picked up prestige in the field—not by discovering an amazing new technique whose effectiveness could be experimentally verified and adopted by all; but, rather, by splitting off your own "school", supported by your charisma as founder, and by the good stories you told about all the reasons your techniques should work.

    This was probably, to no small extent, responsible for the existence and continuation of psychotherapy in the first place—the promise of making yourself a Master, like Freud who'd done it first (also without the slightest scrap of experimental evidence).  That's the brass ring of success to chase—the prospect of being a guru and having your own adherents.  It's the struggle for adherents that keeps the clergy vital.

    That's what happens to a field when it unbinds itself from the experimental evidence—though there were other factors that also placed psychotherapists at risk, such as the deference shown them by their patients, the wish of society to believe that mental healing was possible, and, of course, the general dangers of telling people how to think.

    The field of hedonic psychology (happiness studies) began, to some extent, with the realization that you could measure happiness—that there was a family of measures that by golly did validate well against each other.

    The act of creating a new measurement creates new science; if it's a good measurement, you get good science.

    If you're going to create an organized practice of anything, you really do need some way of telling how well you're doing, and a practice of doing serious testing—that means a control group, an experimental group, and statistics—on plausible-sounding techniques that people come up with.  You really need it.
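    To make "a control group, an experimental group, and statistics" concrete, here is a minimal sketch in Python of one of the simplest such tests: a permutation test on made-up outcome scores for a hypothetical control group and treatment group. The numbers and variable names are purely illustrative, not from any study the post cites.

```python
import random

# Hypothetical outcome scores (e.g., symptom improvement); illustrative data only.
control = [2.1, 1.8, 2.5, 1.9, 2.2, 2.0, 1.7, 2.4]
treatment = [2.9, 2.6, 3.1, 2.4, 2.8, 3.0, 2.7, 2.5]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(treatment) - mean(control)

# Permutation test: shuffle the group labels many times and see how often
# a difference at least this large arises by chance alone.
pooled = control + treatment
n_treat = len(treatment)
n_permutations = 10_000
extreme = 0
rng = random.Random(0)

for _ in range(n_permutations):
    rng.shuffle(pooled)
    perm_diff = mean(pooled[:n_treat]) - mean(pooled[n_treat:])
    if perm_diff >= observed_diff:
        extreme += 1

p_value = extreme / n_permutations
print(f"observed difference: {observed_diff:.2f}, one-sided p = {p_value:.3f}")
```

    Without the control group, any improvement in the treated patients would be indistinguishable from spontaneous recovery or regression to the mean, which is exactly the failure mode the post is complaining about.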

    Added:  Dawes wrote in the 80s and I know that the Rorschach was still in use as recently as the 90s, but it's possible matters have improved since then (as one commenter states).  I do remember hearing that there was positive evidence for the greater effectiveness of cognitive-behavioral therapy.


    "That's what happens to a field when it unbinds itself from the experimental evidence" - so the million dollar question for Less Wrong is: what experimental evidence can this community bind itself to, to avoid the same outcome?

    Well, yes, that is the fairly heavy subtext here.

    The most obvious things to bind it to are A) whether people who post on LessWrong keep coming back, and B) how big of assholes LessWrong community members have a reputation for being. 

    I would say we're doing great on A but SBF has made B go very poorly for the moment.

    NLP is an interesting parallel example. Its founders believed in testing, but the students, not so much. For example, Bandler, in "Using Your Brain For A Change", wrote:

    A mathematician doesn't just get an answer and say, "OK, I'm done." He tests his answers carefully, because if he doesn't, other mathematicians will! That kind of rigor has always been missing from therapy and education. People try something and then do a two-year follow-up study to find out if it worked or not. If you test rigorously, you can find out what a technique works for and what it doesn't work for, and you can find out right away. And where you find out that it doesn't work, you need to try some other technology.

    It's a transcript from a lecture demonstration where he's just shown how to extinguish someone's urge to smoke, and is testing the result. The volunteer had just claimed she no longer wished to smoke, but Bandler insists that she actually take a cigarette from him, hold it in her hands, and play around with it.

    He says:

    When you do change work, don't back away from testing it; push it. Events in the world are going to push it, so you may as well do it so you can find out right away. That way you can do something about it. Observing your client's nonverbal responses will give you much more information than the verbal answers to your questions.

    He then points out the changed facial expression on the volunteer -- it appears that smelling the cigarette has restored her desire for one. He gives her some modified instructions, to repeat the technique being taught. Afterwards, they verify that the smell no longer acts as a compulsion trigger.

    Now, the crazy thing is, Bandler's been teaching this stuff for 20 years, but hardly anybody "gets it"... about testing, or damn near anything else.

    Few NLP practitioners do much testing; few NLP books even mention it. Even Bandler's own books don't say that much about it. The formal NLP trainings emphasize "outcome frame" (defining in advance what result you're trying to get), but not so much the process of testing that you've achieved that outcome.

    I suspect that this is simply because Bandler is a lousy teacher in some ways. In my personal experience, the most important parts of nearly every Bandler video, audio, or book are in what seem like almost offhand remarks... that happen to reveal volumes if you already happen to be close to figuring out the same thing for yourself. Perhaps this is why he's so insistent that an NLP certification is worthless if it doesn't come from him.

    You are describing old clinical psychology. It's gotten so much better. Rorschach tests are now only a very marginal measure within psychoanalytic psychology. Psychoanalytic/psychodynamic psychologists are themselves outcasts from mainstream clinical psychology, which is increasingly centered around evidence-based practice. For example, behaviorists are using systematic desensitization in novel and effective ways (for treating things like panic disorder), and cognitive-behavioral therapy is quite effective in treating depression: significantly more so than antidepressants.

    The important thing to remember is that patients often get the treatment they want. If you're a self-absorbed neurotic, and you want to spend an hour a week for years talking about yourself, you can find someone who will take your money. If you want effective treatment, you can find that too. Most patients don't want to get better, they want to feel like they are doing something, and especially they want to talk about themselves.

    The important thing to remember is that patients often get the treatment they want. If you're a self-absorbed neurotic, and you want to spend an hour a week for years talking about yourself, you can find someone who will take your money. [...] Most patients don't want to get better [...] they want to talk about themselves.

    Perhaps unwittingly, this comment suggests wherein the value of such psychotherapy lies. There's a social taboo against talking about oneself in this fashion, and a place that is "safe" from this (and other) conversational taboos may well be worth paying for.

    Well, Dawes wrote in the 80s and I remember being Rorschach'd in the 90s, but I can imagine that things have gotten better in the 00s. Still, I have to ask - have there been any experimental studies showing the improvement?

    (I do remember hearing that there was positive experimental validation for cognitive-behavioral therapy doing systematically better than other forms of psychotherapy.)

    Yes, cognitive-behavioral therapy has come out ahead of other methods in a number of randomized clinical trials.

    http://en.wikipedia.org/wiki/Cognitive_behavioral_therapy


    Carl, following that link to its source brought me here: http://www.nice.org.uk/guidance/index.jsp?action=download&o=33188, where several randomized trials are mentioned. But I see no meta-analysis, so I still worry about publication selection biases, etc. Anyone know of a meta-analysis of this lit?

    I know I'm not teaching Robin anything, but it should be noted that meta-analyses often fail to overcome publication selection biases.
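    For readers who haven't seen one, here is a minimal sketch of what a fixed-effect (inverse-variance) meta-analysis computes. The effect sizes below are made up for illustration, not real CBT trial data, and the caveat from the comment above is noted at the end.

```python
import math

# Hypothetical per-study effect sizes (standardized mean differences)
# and their standard errors; illustrative only, not real trial data.
studies = [
    (0.45, 0.15),
    (0.30, 0.20),
    (0.60, 0.25),
    (0.10, 0.30),
]

# Fixed-effect meta-analysis: weight each study by the inverse of its variance.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
print(f"pooled effect: {pooled:.2f}  95% CI: ({ci_low:.2f}, {ci_high:.2f})")

# Caveat from the comment above: if journals mostly publish positive results,
# the studies entering this calculation are a biased sample, and the pooled
# estimate inherits that bias no matter how carefully it is computed.
```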

    I remember being Rorschach'd in the 90s

    Were you too young for this to have led to an awesome story?

    Psychoanalytic/psychodynamic psychologists are themselves outcasts from mainstream clinical psychology

    I don't know where you live, but in Germany or Austria this is not the case. This is part of a more general problem in Europe, which is under the terrible reign of continental philosophy.

    In my limited experience, the "hard problems" in philosophy are the problems which are either poorly defined and so people keep arguing about definitions without admitting it, or poorly analyzed, so people keep mixing decision theory with cognitive science, for example. While traditional philosophy is good at asking (meta-)questions and noticing broad similarities, it is nearly useless at solving them. When a philosopher tries to honestly analyze a deep question, it usually stops being philosophy and becomes logic, linguistics, decision theory, computer science, physics or something else that qualifies as science. Hence Pearl and Kahneman and Russell, some Wittgenstein, Popper...

    In my limited experience, the "hard problems" in philosophy are the problems which are either poorly defined and so people keep arguing about definitions without admitting it, or poorly analyzed, so people keep mixing decision theory with cognitive science, for example.

    See also how many of the comments in this thread amounted to “if by sound you mean ‘acoustic wave’ it does, if you mean ‘auditory sensation’ it doesn't”.

    There's little evidence of anything else being better at solving them, so that is largely a nirvana fallacy.

    Wait, what? There's little evidence of anything better than philosophy at solving problems? How about physics, cognitive science, computer science, mathematics, etc.?

    When a branch of philosophy becomes useful at solving problems, people give it a new name and no longer consider it part of philosophy.

    Then what is philosophy supposed to be? Just a field for asking questions (but not answering them)?

    Presumably it's the place where questions that can't readily be answered (or even formulated, or that may not even really be questions) live. A sin bin. The only realistic alternative is sweeping them under the carpet, since the idea of all questions automagically being answerable is a nirvana.

    Philosophy is the best thing there is at being philosophy. It's worse at answering its own questions than other fields are at answering their own questions, but its questions are harder. It isn't broken in the sense that there is any easy way of fixing it, or a comparable alternative doing the same job.

    It is very important for rationality to notice the differences between

    1 Inferior compared to a real, comparable thing.

    2 Inferior compared to unimplemented but realistic alternatives.

    3 Inferior compared to nirvanas.

    The system for generating new fields of research? After all, if it generates other areas that are no longer philosophy reasonably regularly, then that actually creates value.

    Does it (still) do so, though? I'm aware that most of what is now science used to be called "natural philosophy", but nowadays it doesn't really seem like there's anything left.

    The system for generating new fields of research?

    Is it a system for generating new fields of research, or is it just a catch-all bin where all the nebulous, hazy, and vague things are kept until they firm up enough to become fields of research?

    Them="the hard problems in philosophy", not "problems"

    How about physics, cognitive science, computer science, mathematics, etc.?

    How about philosophy of physics, philosophy of mathematics? Why do they exist?

    How about philosophy of physics, philosophy of mathematics?

    Do these things solve problems in physics or in mathematics? If so, do they solve them better than the actual fields do? If not, what problems do they solve?

    Do these things solve problems in physics or in mathematics?

    Are those the topic of the discussion? No.

    If not, what problems do they solve?

    Philosophical problems arising from the non philosophical fields mentioned.

    Note the double whammy. Physics can't solve the average philosophical problem, and also can't solve the problems arising from physics.

    Philosophical problems arising from the non philosophical fields mentioned.

    Like?

    EDIT: Also, why should I care about these so-called philosophical problems?

    also can't solve the problems arising from physics

    Citation needed.

    Philosophical problems arising from the non philosophical fields mentioned.

    Like?

    See here and here.

    Also, why should I care about these so-called philosophical problems?

    "Philosophy isnt doing well enough at solving its problems compared to a realistic alternative" doesn't follow from "I don't care about philosophical problems".

    also can't solve the [philosophical] problems arising from physics

    Citation needed.

    Well, those disciplines exist. Maybe some evidence that they don't need to is required.

    This shouldn't be surprising. Medicine has a longer history than empirical science. For thousands of years it flourished without a second thought for outcome. Clearly whatever medicine is, socially speaking, it isn't reliant on the effectiveness of its methods for its survival. The same is true of education. Schools and universities existed long before there was anything to teach. Whatever social role they may play, imparting skill is a recent development, and clearly not the most central concern.

    I also have the same kind of unease about economics, especially macroeconomics. It's damn hard to run an experiment to test a hypothesis, and you also have people running around claiming to be Austrian economists, Keynesian economists, Chicago school economists, etc.

    For example: does raising the minimum wage by a few dollars an hour really increase the percentage of people who don't have a job (the sum of the unemployed and those not in the labor force)? Can you prove it?

    I think it's a no-brainer that we should have taken some of the trillions of dollars we're spending on economic stimulus to study whether economic stimulus works. It would be a good investment to create a National Institute of Economics.


    The problem here: if you have experimental results that you have no story to explain, is that really any better than having a story with no experimental results to back it up?

    I'm sure there are at least as many successful faith healers as there are instances where raising the minimum wage raised overall employment. The problem is that those results just don't make sense, so I feel perfectly justified in refusing to accept those results without a valid model to explain them.

    Today I learned that this psychotherapy criticism apparently has a name: http://en.wikipedia.org/wiki/Dodo_bird_verdict

    Cognitive behavioral therapy, at least, has repeatedly been shown to be effective. I'll cite a ref from Wikipedia: Cooper, Mick (2008). Essential Research Findings in Counselling and Psychotherapy: The Facts are Friendly. SAGE Publications. ISBN 9781847870421. But I haven't read that reference.

    The Rorschach test is still in use, and still being taught to students. There are studies claiming to find statistical significance in its results. They don't propose a mechanism; but they do have statistically-significant repeatable correlations between patient responses and their clinical problems. If you want to claim otherwise, get some of those papers and refute their methodology. Again, I haven't been interested enough to read the primary sources myself; but I've been told about them by someone I respect. The key questions are whether they erred by not doing something like a Bonferroni adjustment for the large number of possible hypotheses; and whether the statistically-significant correlations are clinically significant. You would have to read the papers to find out.
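    As an aside for readers unfamiliar with it, the Bonferroni adjustment mentioned here just divides the significance threshold by the number of hypotheses tested. Here is a minimal sketch with made-up p-values (not from any Rorschach study):

```python
# Hypothetical p-values from testing many candidate correlations
# (e.g., between inkblot responses and clinical diagnoses); illustrative only.
p_values = [0.004, 0.03, 0.20, 0.01, 0.45, 0.07, 0.002, 0.60, 0.04, 0.15]
alpha = 0.05

# Bonferroni: to keep the chance of even one false positive below alpha,
# each individual test must clear alpha divided by the number of tests.
m = len(p_values)
threshold = alpha / m

significant = [i for i, p in enumerate(p_values) if p < threshold]
print(f"uncorrected 'hits' at alpha={alpha}: "
      f"{sum(p < alpha for p in p_values)} of {m}")
print(f"Bonferroni threshold: {threshold:.4f}; surviving hits: {significant}")
```

    With ten tests, five of the made-up p-values clear the naive 0.05 threshold, but only two survive the corrected threshold of 0.005, which is the sort of check the comment is asking about.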

    Another example of schools proliferating without evidence: philosophy. Consider all the different schools of ethics which have sprung up: there's utilitarian ethics, deontological ethics, and virtue ethics, with vast numbers of sub categorizations under each school.

    Philosophers are more susceptible to this failure mode because on many important philosophical questions, a standard if not unanimous approach is to argue that the question cannot be answered by evidence. Modal logicians trying to do metaphysics, for example.

    Consider all the different schools of ethics which have sprung up

    A few thousand years, and we've managed to come up with about three possible answers to the question 'what, in general, does one have most reason to do or want?'. Is your complaint that this is too many to have considered, or that the question isn't completely settled yet?

    a standard if not unanimous approach is to argue that the question cannot be answered by evidence

    I know many philosophers who would be surprised by this assertion - I was under the impression the Empiricists pretty much won. In Ethics, particularly, moral observation is now a standard piece of the toolkit.

    Of course, the grain of truth here is that due to the fractured nature of philosophical schools, there are large communities of philosophers who don't realize other large communities of philosophers even exist. In a sense, nobody knows what philosophy doesn't know, even philosophers.

    Is your complaint that this is too many to have considered, or that the question isn't completely settled yet?

    My complaint is that little progress has been made over many years. There are three general ways to answer the question, sure. But each general answer is really an umbrella term covering a large number of answers. Some sects are similar to others, but they are still different sects.

    I know many philosophers who would be surprised by this assertion - I was under the impression the Empiricists pretty much won. In Ethics, particularly, moral observation is now a standard piece of the toolkit.

    In retrospect, my experience is probably colored by the small school I go to. From what I can tell, there are still rather large, if minority, groups of philosophers who disagree with the settled answer on many questions.

    My obligatory Edwin Jaynes quote on philosophers, quoting a colleague:

    "Philosophers are free to do whatever they please, because they don't have to do anything right."

    Probability Theory http://books.google.com/books?id=tTN4HuUNXjgC&pg=PA144

    Is any of that avoidable?

    Please provide proof. Please don't point, yet again, to the highly debatable "solution" to FW.

    What kind of proof would you accept?

    "Philosophers are more susceptible to this failure mode because on many important philosophical questions, a standard if not unanimous approach is argue that the question cannot be answered by evidence."

    No, philosophers are more susceptible because most of them can't recognize that "cannot be answered by evidence" means an answer can't be obtained at all.

    To such individuals, reason is merely a passing fad coequal with every other way of asserting something, a fleeting hiccup that they're far too fashionable to consider important.

    No, philosophers are more susceptible because most of them can't recognize that "cannot be answered by evidence" means an answer can't be obtained at all.

    I would say both. Some things that philosophers think can't be answered by evidence are in fact answered by evidence, such as whether 2 + 2 = 4.

    I have a master's degree in psychology. 

    Other than cognitive-behavioural therapies, many forms of brief psychotherapy have been developed in recent years to tackle specific problems, with scientific literature showing their efficacy.

    At the moment, measuring results and testing what works and why is an element of mental-health intervention that's receiving more and more attention, and it's being required in many "schools" or approaches.

    From what I've seen, most of the resistance to this is coming from the "long" forms of psychotherapy, arguing that it's too difficult. 

     

    But yeah, as a field, psychotherapy is still struggling with the task of getting rid of a bunch of stuff that's about as evidence based as voodoo, and the standards required to operate in mental-health are way too lax. 

    Many books written to train psychotherapists will tell you that most of the healing power comes from the relationship between the psychotherapist and the patient, and that you have to rely a lot on your personal experience and intuition, with no mention at all of attempts to do better and to improve the methodologies.

     

    Talking to a professional is still your best shot if you need to take care of your mental health, but you might want to find out which school or approach the professional you are considering uses, and whether they measure results, operate on scientific evidence, and so on.

     

    Edit:

    This was probably, to no small extent, responsible for the existence and continuation of psychotherapy in the first place—the promise of making yourself a Master, like Freud who'd done it first (also without the slightest scrap of experimental evidence).

    I agree wholeheartedly; still, Freud deserves credit in that the field was such a disaster when he started that talking to people and playing it by ear was still a huge improvement. He wanted to help people and didn't understand the scientific approach well enough to use it in such a confusing and unexplored field as mental health, or just wanted to help people now and not 50 years later.

    His biggest success seems to be the radical intuition that talking to people with mental health issues yields better results than torturing them and locking them up. It was still an improvement, and I think scientific psychology would have been born a lot later without the impact his ideas had on popular culture, so I'd avoid picking on him too much.  

    Lest we be too, too hard on these folks, I should add that, as Eli mentions, there is some value in talking to someone about your problems. I have seen a therapist on occasion who mainly just listens and then tries to advise based on human experience. No lying on the couch, no ink blots, no hypnosis. Just a neutral party to talk to who will keep what I say in confidence. Sort of like Catholic confession, but without all the Hail Marys.

    Also, just in case anyone gets confused here (I'm sure someone will), psychotherapists are not the same thing as psychiatrists: the latter are actual doctors who can prescribe therapies, be they drug or other. They may not all be great either, but at least one that I've seen really knows his stuff.

    It's my understanding that a good amount of social science does the same--I was recently in a class on organization theory that made few if any testable predictions and ended up being a bunch of just-so stories about how people in organizations think.

    One of the side effects of prediction markets of the Hansonian variety is that people would rapidly see who is doing good science and who isn't: wealth would be rapidly transferred from those who don't know how to make good evidence-centric predictions to those who do.

    Remember Rorschach ink-blot tests? It's such an appealing argument: the patient looks at the ink-blot and says what they see, and the psychotherapist interprets their psychological state based on this. There've been hundreds of experiments looking for some evidence that it actually works. Since you're reading this, you can guess the answer is simply "No."

    I checked. It doesn't say "No".

    I checked. It doesn't say "No".

    That depends on what Eliezer / you mean by "it." From my reading of the evidence, the claim that the psychotherapist can interpret the patient's psychological state by use of the Rorschach ink-blots has been refuted; any knowledge they get is probably from cold reading, and they fail to notice real evidence they're not looking for. This is an empirical statement about the population of psychotherapists, rather than the best application of the test, though.

    Cold reading sounds pretty negative. Cold reading is a technique used by mentalists, psychics, fortune-tellers, mediums and illusionists to dupe their marks. If you want to go with a negative comparison, perhaps consider Tasseography ;-)

    Cold reading sounds pretty negative. Cold reading is a technique used by mentalists, psychics, fortune-tellers, mediums and illusionists to dupe their marks. If you want to go with a negative comparison, perhaps consider Tasseography ;-)

    I chose that phrase for its precision, not its emotional valence; were there a more neutral yet readily understandable term, I would have picked it instead.

    I looked at the original 10 inkblots and saw pelvic bones in 9 of them. I wonder what that says about me...

    they are in fact more likely than anyone to assume that a dispute is a dispute about definitions.

    I have to say, speaking from my experiences while I was working on a philosophy major, both reading the work of and holding discussions with professional philosophers, and comparing that to my experiences here, that this is overwhelmingly not the case.

    And there's also no discernible difference between seeing a psychotherapist and spending the same amount of time talking to a randomly selected college professor from another field. It's just talking to anyone that helps you get better, apparently.

    See also http://lesswrong.com/lw/94t/meta_analysis_of_writing_therapy/

    Excellent analysis. I think your historical explanation is important: not the totality of the situation, but important. Problems with psychotherapy: 1) What are the goals? and 2) "I like what is happening" is a valid reason to consider it successful. Having to answer 1) above is not that unusual for a scientific endeavor. The utterly subjective nature of 2) above makes it very unusual for a scientific study.

    And there's also no discernible difference between seeing a psychotherapist and spending the same amount of time talking to a randomly selected college professor from another field. It's just talking to anyone that helps you get better, apparently.

    Unless this has been tested for random people other than just college professors, there's a stronger case for saying that talking to a person of a certain intelligence and education level helps you get better. And I suspect that it doesn't generalise to "talking to anyone that helps you get better" but I haven't looked into it.

    (I'm sure there are other factors, but I'm just going by what was said about college professors.)
