Illustrative of a common failure of rationality, with instrumental consequences such as buying sugar at ridiculous markups.
Which LWers, or rationalists in general, actually do this?
Better yet, if you're criticizing an experimental design, pin the criticism to a specific mechanism: regression to the mean, the natural course of the disease, measurement error, expectancy effects, and so on.
How often can one pinpoint this? Is it really helpful to insist that the shorthand be expanded on to even more speculative criticisms, or are we just letting the perfect be the enemy of the better here?
(This is a serious question. I speculated a great deal about why poorly controlled dual n-back experiments showed a large effect, but it wasn't until a dozen studies & 4 years later that Redick et al. surveyed the subjects and enabled me to say 'ah, so part of it was expectancy effect!')
Part of the point I hoped to make was that "raising the sanity waterline" would be well served by better awareness of the processes of scientific inference - statistics, experimental design and so on. More people should know about regression to the mean, confounding, biases, unblinding, file drawer effects - specific criticisms.
As a specific example, take the Blackwell study w...
TL;DR: I align with the minority position that "there is a lot less to the so-called placebo effect than people tend to think there is (and the name is horribly misleading)", a strong opinion weakly held.
The following post is an off-the-cuff reply to a G+ post of gwern's, but I've been thinking about this on and off for quite a while. Were I to expand this for posting to Main, I would: a) go into more detail about the published research, b) introduce a second fallacy of reification for comparison, the so-called "10X variance in programmer productivity".
My agenda is to have this join my short series of articles on "software engineering as a diseased discipline", which I view as my modest attempt at "using Less Wrong ideas in your secret identity" and is covered at greater length in my book-in-progress.
I would therefore appreciate your feedback and probing at weak points.
Most of the time, talk of placebo effects (or worse, of "the" placebo effect) falls victim to the reification fallacy.
My position is roughly "there is a lot less to the so-called placebo effect than people think there is (and the name is horribly misleading)".
More precisely: the term "placebo" in the context of "placebo controlled trial" has some usefulness, when used to mean a particular way of distinguishing between the null and test hypotheses in a trial: namely, that the test and control group receive exactly the same treatment, except that you substitute, in the control group, an inert substance (or inoperative procedure) for the putatively active substance being tested.
Whatever outcome measures are used, they will generally improve somewhat even in the control group: this can be due to many things, including regression to the mean, the disease running its course, increased compliance with medical instructions due to being in a study, and expectancy effects leading to biased verbal self-reports.
None of these is properly speaking an "effect" causally linked to the inert substance (the "placebo pill"). The reification fallacy consists of thinking that because we give something a name ("the placebo effect") then there must be a corresponding reality. The false inference is "the people who improved in the control group were healed by the power of the placebo effect".
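One of the mechanisms listed above, regression to the mean, is easy to demonstrate with a toy simulation (a sketch with made-up numbers, purely illustrative): patients enroll in a trial only when their measured symptom score is high, so on average they were measured on a bad day, and a later re-measurement looks like "improvement" even though nothing was done to them.

```python
import random

random.seed(0)

# Each "patient" has a stable true symptom level plus day-to-day noise.
def measure(true_level):
    return true_level + random.gauss(0, 10)  # noisy measurement

# A population of true symptom levels (illustrative parameters).
population = [random.gauss(60, 5) for _ in range(100_000)]

baseline, followup = [], []
for true_level in population:
    score = measure(true_level)
    if score >= 70:  # enrollment criterion: symptomatic at baseline
        baseline.append(score)
        followup.append(measure(true_level))  # no treatment at all

print(f"mean score at enrollment: {sum(baseline) / len(baseline):.1f}")
print(f"mean score at follow-up:  {sum(followup) / len(followup):.1f}")
# Follow-up scores are lower on average: apparent "improvement"
# with no intervention whatsoever -- regression to the mean.
```

No sugar pill appears anywhere in this model, yet the enrolled group "gets better"; attributing that change to whatever inert thing was administered in the meantime is exactly the reification at issue.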
The further false inference is "there are ailments of which I could be cured by ingesting small sugar pills appropriately labeled". Some of my friends actually leverage this into justification for buying sugar in pharmacies at a ridiculous markup. I confess to being aghast whenever this happens in my presence.
A better name has been suggested: the "control response". This is experiment-specific, and encompasses all of the various mechanisms which make it look like "the control group improves when given a sugar pill / saline solution / sham treatment". Moreover it avoids hinting at mysterious healing powers of the mind.
Meta-analyses of those few studies that were designed to find an actual "placebo effect" (i.e. studies with a non-treatment arm, or studies comparing objective outcome measures for different placebos) have not confirmed it; the few individual studies that find a positive effect are inconclusive for a variety of reasons.
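The logic of such study designs can be sketched with a toy null model (illustrative numbers, not real data): if "the placebo effect" were a real causal effect, a placebo arm should improve more than a no-treatment arm; under a model in which only regression to the mean operates, both arms improve by the same amount, and the arm difference, the only quantity that could honestly be called a placebo effect, is near zero.

```python
import random

random.seed(1)

def measure(level):
    return level + random.gauss(0, 10)  # noisy symptom measurement

def run_arm(n):
    # Enroll symptomatic patients; give either a sugar pill or nothing.
    # In this null model the pill has no causal pathway, so both arms
    # share the same data-generating process.
    improvements, enrolled = [], 0
    while enrolled < n:
        true_level = random.gauss(60, 5)
        baseline = measure(true_level)
        if baseline >= 70:  # enrollment criterion
            improvements.append(baseline - measure(true_level))
            enrolled += 1
    return sum(improvements) / n

placebo_arm = run_arm(20_000)
no_treatment_arm = run_arm(20_000)

print(f"placebo arm mean improvement:      {placebo_arm:.1f}")
print(f"no-treatment arm mean improvement: {no_treatment_arm:.1f}")
print(f"estimated 'placebo effect':        {placebo_arm - no_treatment_arm:.1f}")
# Both arms show sizable "improvement"; their difference is ~0.
```

A trial without the no-treatment arm sees only the first number, which is why the control response is so easily mistaken for a healing power of the pill.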
Doubting the existence of the placebo effect will expose you to immediate contradiction from your educated peers. One explanation seems to be that the "placebo effect" is a necessary argumentative prop in the arsenal of two opposed "camps". On the one hand, proponents of CAM (Complementary and Alternative Medicine) will argue that "even if a herbal remedy is a placebo, who cares, as long as it actually works", and must therefore assume that the placebo effect is real. On the other hand, opponents of CAM will say "homeopathy or herbal remedies only seem to work because of the placebo effect, so we can dismiss all positive reports from people treating themselves with such remedies".
I don't have a proper list of references yet, but see the following:
http://www.sciencebasedmedicine.org/index.php/behold-the-spin-what-a-new-survey-of-of-placebo-prescribing-really-tells-us/
http://www.sciencebasedmedicine.org/index.php/the-placebo-myth/
http://www.skeptic.com/eskeptic/09-05-20/
http://www.skepdic.com/placebo.html
http://content.onlinejacc.org/article.aspx?articleid=1188659
http://www.ncbi.nlm.nih.gov/pubmed/15257721
http://www.ncbi.nlm.nih.gov/pubmed/9449934