by a363

I'm confused...

Suppose a group of people has believed, since prehistory, that having X will make you feel Y, and that belief affects how they feel: when they get X, they feel Y, like a placebo (some who get X do not feel Y, but they are the minority). Then when scientific studies into the effect of X on Y are conducted by rational members of the same group, they should find that X is strongly correlated with Y (there is no control group, everyone is exposed to the same belief and social pressure that reinforces that belief).

What will a rational member of that group conclude about the effects of X on Y? How can you rationally correct for bias if there is no control group?

The goal here is to distinguish between these three causal graphs:

(A) Pure placebo

    X --> belief --> Effect

(B) Direct effect

    X --> belief
    X --> Effect

(C) Both direct and placebo effects

    X --> belief --> Effect
    X --> Effect

Unfortunately, you cannot possibly distinguish these if you only measure X and the effect, without either observing or controlling people's beliefs (such as by giving them placebos so they all believe they're getting X, or by asking them survey questions about whether they think X will work). There are some theorems about which causal graphs can and can't be distinguished from which types of observations in Pearl's book Causality.
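
One way to see this concretely (a minimal simulation added for illustration; the probabilities are made up, and "belief" is modelled as being switched on exactly when someone gets X) is to generate data from each graph and then look only at X and the effect:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    def simulate(graph):
        """Return (rate of effect with X, rate of effect without X) under the given graph."""
        x = rng.random(n) < 0.5      # who gets X
        belief = x.copy()            # getting X switches on the expectation
        if graph == "A":             # pure placebo: only belief matters
            p = np.where(belief, 0.8, 0.2)
        elif graph == "B":           # direct effect: only X matters
            p = np.where(x, 0.8, 0.2)
        else:                        # both contribute
            p = 0.2 + 0.3 * x + 0.3 * belief
        effect = rng.random(n) < p
        return effect[x].mean(), effect[~x].mean()

    for g in "ABC":
        print(g, simulate(g))

All three graphs print roughly the same pair of rates (about 0.8 with X, 0.2 without), so the joint distribution of X and the effect alone cannot tell them apart; you need to observe or intervene on belief.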

[anonymous]

(there is no control group, everyone is exposed to the same belief and social pressure that reinforces that belief)

That does not mean there is no control group. What distinguishes the treatment arm from the control arm in a placebo-controlled study is that the treatment arm receives real X, and the control arm receives mock-X, which (ideally) is indistinguishable from X. The two arms of the study are not created by finding people who have different beliefs about whether X causes Y, but by giving them X or mock-X (which is the placebo) while preventing them from knowing which one they are getting.
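
As a toy illustration of why that works (a sketch with made-up numbers, not a description of any real study): because both arms are blinded, everyone believes they received X, so the belief-driven component is the same in both arms and the between-arm difference estimates the direct effect alone.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    # Randomize who gets real X; everyone is blinded, so everyone
    # believes they got X and carries the same placebo component.
    real_x = rng.random(n) < 0.5

    baseline, placebo_part, direct_part = 0.1, 0.3, 0.2   # illustrative numbers
    p_effect = baseline + placebo_part + direct_part * real_x
    effect = rng.random(n) < p_effect

    # Difference between arms recovers only the direct component (~0.2).
    print(effect[real_x].mean() - effect[~real_x].mean())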

It seems that the OP is rather speaking about situations where the effect is purely psychological anyway, but wants to distinguish whether it is "real" or "biased". As with "having a dog will make you happy because interaction with dogs satisfies human inherent desires" vs. "having a dog will make you happy because you expect it to be the case". Even if you managed to create a mock-dog capable of fooling the subjects into thinking that it was real, it would miss the point.

[a363]

Right. Or from another angle: people who do not have dogs are considered pariahs, so the dogless are getting a nocebo all the time. So when they take the placebo (the dog), their increase in well-being would come mostly from eliminating the nocebo effect.

[anonymous]

I see. Well, a placebo (belief-based) effect is part of the whole psychological effect (including interaction). It would probably even interact with the interaction - e.g. a belief that dogs are good for you would probably encourage the kind of interaction with the dog that makes the dog good for you, in which case the belief is an integral part of the overall mechanism.

So this is really a specific case of trying to tease apart different components of the overall psychological mechanism. In this case I don't think there's anything special about the "placebo" component of the overall mechanism that we especially need to tease apart from the other components. Sure, the placebo component is contingent on the belief that dogs are good for you and at some point in the future the tribe may lose that belief, and that's something to worry about. But the placebo component may not be the only component that is contingent on something that the tribe may lose over time. For example, maybe the beneficial psychological effect of dogs is contingent on the lifestyle of the tribe, which can affect how the tribe interacts with the dogs.

Still, a scientist might want to tease apart all the psychological effects experimentally. That's probably not doable: there's only so much you can discover without opening the black box (e.g. running experiments on the tribe that would be unethical).

Agreed.

I don't quite get the confusion.

Observing that people who get X are likelier to experience Y than people who don't get X (which, incidentally, is more than just "X is strongly correlated with Y", a weaker claim) is evidence that X reliably entails Y.

It is not evidence for any particular mechanism underlying that entailment (since it's equally compatible with a great many mechanisms).

So, based on a novel study that showed that, I'd be justified in increasing my estimate of the probability of Y given X, though I shouldn't increase my estimate of the probability of any particular mechanism.
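
In Bayesian terms (a small illustration added here, not part of the original comment): if the observed data D is equally likely under a pure-placebo mechanism and a direct-effect mechanism, the likelihood ratio is 1, so the posterior odds between the mechanisms equal the prior odds, even though D can still raise my estimate of P(Y | X):

    \frac{P(M_{\text{placebo}} \mid D)}{P(M_{\text{direct}} \mid D)}
      = \frac{P(D \mid M_{\text{placebo}})}{P(D \mid M_{\text{direct}})}
        \cdot \frac{P(M_{\text{placebo}})}{P(M_{\text{direct}})}
      = 1 \cdot \frac{P(M_{\text{placebo}})}{P(M_{\text{direct}})}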

The fact that the mechanism in this case is a bias pervasive within the group doesn't change any of that.

What are the exact hypotheses? It is difficult to say anything at this level of generality. If the hypothesis is that there is a correlation between having X and feeling Y, then once you find the correlation, you are done. If you want to test whether people feel Y only because they believe that X causes Y, and for no other reason, you have to find somebody who doesn't believe that X causes Y. I can hardly imagine a situation where that is impossible, but if it really is impossible, the hypotheses

  1. People with X feel Y because they expect to feel Y when having X
  2. People with X feel Y for some other unspecified reason

may well be experimentally indistinguishable. One can do better if the second hypothesis is more specific, or if the mechanism of the placebo effect is better understood. But, as I have said, you should give an example where you can't find even a few people who don't share the placebo-causing belief.

There is research on this question; the keywords to look for are "lay theories" or "naive theories".

One approach is to measure people's naive theories (e.g., by asking people whether they believe that X causes Y) and see if those beliefs are correlated with the outcome Y when X is present. Some people hold the belief that X causes Y more strongly than others, and this approach takes advantage of that variability to test whether the strength of the belief is associated with the outcome.
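
A minimal sketch of this measurement-based approach (all variable names, the 1-7 belief scale, and the data-generating numbers here are hypothetical, and it assumes a simple linear moderation model):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 500

    # Hypothetical survey: has_x = whether the person has X,
    # belief_strength = how strongly they endorse "X causes Y" (1-7),
    # outcome = their reported level of Y.
    df = pd.DataFrame({
        "has_x": rng.integers(0, 2, n),
        "belief_strength": rng.integers(1, 8, n),
    })
    # Illustrative data-generating process: the benefit of X scales with belief.
    df["outcome"] = 0.5 * df["has_x"] * df["belief_strength"] + rng.normal(0, 1, n)

    # A reliably positive has_x:belief_strength interaction suggests the
    # strength of the lay theory moderates the apparent effect of X.
    model = smf.ols("outcome ~ has_x * belief_strength", data=df).fit()
    print(model.summary().tables[1])

This only shows that belief strength is associated with the outcome among people who have X; it doesn't by itself establish that the belief is doing the causal work.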

Another approach is to manipulate people's beliefs (e.g. by telling them that scientific research has shown that X is unrelated to Y) and see if that changes the outcome Y (when X occurs).

In some cases, it's possible to make people believe that X occurred when it really did not; in that case researchers can use the same methodology as a placebo control study.

This reminds me of Epistemic Luck.

Maybe you could predict how large a pure-placebo effect would be, and see how much the observed effect deviates from this.