Motte and bailey is a technique by which one protects an interesting but hard-to-defend view by making it similar to a less interesting but more defensible position. Whenever the more interesting position (the bailey) is attacked, one retreats to the more defensible one (the motte); when the attackers are gone, one expands again into the bailey.
In that case, one and the same person switches between two interpretations of the original claim. Here, I instead want to focus on situations where different people make different interpretations of the original claim. The originator of the claim adds a number of caveats and hedges, which makes it more defensible but less striking and sometimes also less interesting.* When others refer to the same claim, however, the caveats and hedges gradually disappear, making it more and more bailey-like.
A salient example of this is that scientific claims (particularly in messy fields like psychology and economics) often come with a number of caveats and hedges, which tend to get lost when the claims are retold. This is especially so when the media write about these claims, but even other scientists often fail to properly transmit all the hedges and caveats that come with them.
Since this happens over and over again, people probably do expect their hedges to drift to some extent. Indeed, it would not surprise me if some people actually want hedge drift to occur. Such a strategy effectively amounts to a more effective, because less observable, version of the motte-and-bailey-strategy. Rather than switching back and forth between the motte and the bailey - something which is at least moderately observable, and also usually relies on some amount of vagueness, which is undesirable - you let others spread the bailey version of your claim, whilst you sit safe in the motte. This way, you get what you want - the spread of the bailey version - in a much safer way.
Even when people don't use this strategy intentionally, you could argue that they should expect hedge drift, and that failing to take action against it is, if not outright intellectually dishonest, then at least approaching that. This argument would rest on the consequentialist notion that if you have strong reasons to believe that some negative event will occur, and you could prevent it from happening by fairly simple means, then you have an obligation to do so. I certainly do think that scientists should do more to prevent their views from being garbled via hedge drift.
Another way of expressing all this is by saying that when including hedging or caveats, scientists often seem to seek plausible deniability ("I included these hedges; it's not my fault if they were misinterpreted"). They don't actually try to prevent their claims from being misunderstood.
What concrete steps could one then take to prevent hedge drift? Here are some suggestions; I am sure there are many more.
- Many authors use eye-catching, hedge-free titles and/or abstracts, and only include hedges in the paper itself. This is a recipe for hedge drift and should be avoided.
- Make abundantly clear, preferably in the abstract, just how dependent the conclusions are on key assumptions. Say this not in a way that merely gives you plausible deniability in case someone misinterprets you, but in a way that actually reduces the risk of hedge drift as much as possible.
- Explicitly caution against hedge drift, using that term or a similar one, in the abstract of the paper.
* Edited 2/5 2016. By hedges and caveats I mean terms like "somewhat" ("x reduces y somewhat"), "slightly", etc., as well as modelling assumptions without which the conclusions don't follow, and qualifications regarding domains in which the thesis doesn't hold.
Asking scientists to keep their paper titles hedge-drift-resistant means (1) asking each individual scientist to do something that will reduce the visibility of their work relative to others', for the sake of a global benefit -- a class of policy that for obvious reasons doesn't have a great track record -- and (2) asking them to give their papers titles that are boring and wordy.
I agree that the world might be a better place if scientists consistently did this. But it doesn't seem very likely to happen.
(Also, here's what might happen if they almost consistently did this: the better, more conscientious scientists all write carefully hedged articles with carefully hedged titles, and journalists ignore all of them because they all sound like "Correlational analysis of OCEAN traits weakly suggests slight association between conscientiousness and Y-chromosome haplogroup O3". A few less careful scientists write lower-quality papers that, among other things, have titles like "The Chinese work harder: correlational analysis of OCEAN traits and genotype", and those are the ones the journalists pick up. These are also the ones without the careful hedging in the actual analysis, without serious attempts to correct for multiple comparisons, etc. So we end up with worse stuff in the press.)
I think conscientious scientists already try doing roughly that today; that's why I'm saying this isn't hypothetical. Having a stronger norm might lead to more of them doing it; some defectors would always remain, of course, but maybe they would at least be regarded less well within their own community. Forget journalists: right now we have a problem even with academic journals being biased towards catchy positive results.
It also depends on what we're talking about. Hones...