inklesspen comments on Med Patient Social Networks Are Better Scientific Institutions - Less Wrong

37 Post author: Liron 19 February 2010 08:11AM




Comment author: inklesspen 20 February 2010 12:46:10AM 1 point

According to the article, they lack crucial features such as double-blinding. Most social networks lack the openness and data retention critical for effective peer review. It is possible to learn something from a network like the one described, but I would hesitate to call it science.

Comment author: gwern 13 May 2012 08:00:06PM 1 point

Lack of double-blinding ought to increase the false positive rate, right? But the result presented in the OP (the lithium) was a finding of a negative.

Comment author: PhilosophyTutor 13 May 2012 08:45:15PM 0 points

No. Lack of double-blinding will also increase the false negative rate if the patients, doctors, or examiners believe that something shouldn't work or should be actively harmful. If you test a bunch of people who believe that aspartame gives them headaches, or that wifi gives them nausea, without blinding them, you'll get garbage out just as surely as if you test homeopathic remedies unblinded on a bunch of people who think homeopathic remedies cure all ills.
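(Editorial aside: the mechanism here can be sketched with a toy simulation. All the numbers below, including the size of the expectation bias, are made-up illustrations, not estimates from any real trial; the point is only that a subtractive reporting bias in the treated arm can cancel a genuine effect and produce a false negative.)

```python
import random

random.seed(0)

def unblinded_trial(n, true_effect, expectation_bias):
    """Toy model: each treated subject's reported improvement is the
    true effect plus noise, minus a bias term because they know they
    got the treatment and expect it to be useless or harmful."""
    treated = [true_effect + random.gauss(0, 1) - expectation_bias
               for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

# A genuinely helpful treatment (true_effect = 1.0) looks like a
# null result once subjects expecting harm discount it by as much:
print(round(unblinded_trial(10_000, 1.0, 1.0), 2))  # near 0
```

With the bias term set to zero the same code recovers the true effect, which is the whole point of blinding: it keeps the bias term out of the measurement rather than trying to model it afterwards.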

In this particular case I think it's likely the system worked because it's relatively hard to kid yourself about progressing ALS symptoms, and even with a hole in the blinding sometimes more data is just better. This is about as easy as medical problems get.

Generalising from this to the management of chronic problems seems like a major mistake. There's far, far more scope to fool oneself with placebo effects, wishful thinking, failure to compensate for regression to the mean, attachment to a hypothesis and other cognitive errors with a chronic problem.

Comment author: gwern 14 May 2012 12:53:45AM 1 point

Fair enough. I don't think the biases are symmetrical, though: these people have a real and life-threatening disease, so they approach any intervention strongly hoping that it will work; hence we should expect them to yield more false positives than false negatives compared to whatever an equivalent medical trial would yield. On the other hand, when we're looking at the chatrooms of hypochondriacs & aspartame sufferers, I think we can expect the bias to be reversed: if even crazy people find nothing to take offense to in something, that something may well be harmless.

This yields the useful advice that when looking at any results, we should look at whether the participants have an objectively (or at least, third-party) validated problem. If they do, we should pay attention to their nulls but less attention to their claims about what helps. And vice versa. (Can we then apply this to self-experimentation? I think so, but there we already have selection bias telling us to pay little attention to exciting news like 'morning faces help my bipolar', and more attention to boring nulls like 'this did nothing for me'.)

Kind of a moot point I guess, because the fakes do not seem to be well-organized at all.

Comment author: PhilosophyTutor 14 May 2012 01:35:58AM 0 points

I think you're probably right in general, but I wouldn't discount the possibility that, for example, a rumour could get around the ALS community that lithium was bad, and be believed by enough people for the lack of blinding to have an effect. There was plenty of paranoia in the gay community about AZT, for example, despite the fact that they had a real and life-threatening disease. So it doesn't always follow that people with real and life-threatening diseases are universally reliable as personal judges of effective interventions.

Similarly, if the wi-fi "allergy" crowd claimed that anti-allergy meds from a big, evil pharmaceutical company did not help them, that could be a finding that would hold up under blinding; then again, it might not.

I do worry that some naive Bayesians take personal anecdotes to be evidence far too quickly, without properly thinking through the odds that they would hear such anecdotes in worlds where the anecdotes were false. People are such terrible judges of medical effectiveness that in many cases I don't think the odds get far off 50% either way.