Epistemic status: I’m moderately confident in the positions I endorse here, and this series is the product of several months’ research, but it only skims the surface of the literature on these questions.

Bookkeeping: This post is part of a short series reviewing and commenting on papers in epistemology and philosophy of science concerning research norms. You can find the other posts in this series here and here. The sources used for these posts were suggested to me by my professor as a somewhat representative sample of the work on these subjects. The summaries and views expressed are my own unless otherwise stated. I have read the papers in the bibliography; I have not read the papers in the “See Also” section, but they are relevant to the discussion, and I encourage anyone interested to give them a shot. Many of the papers mentioned in this series are publicly available on philpapers.org.


Introduction

By 1804, the phlogiston theory was dead. Thirty years earlier, the same theory had been favored by almost every chemist in Europe. If the chemical revolution was resolved on the basis of reason and evidence, then it appears that there must have been some moment between 1774 and 1804 when the balance of evidence finally tipped against the phlogiston theory and in favor of Antoine-Laurent Lavoisier's "new chemistry." 

Imagine that the objective degree of confirmation of the phlogiston theory just prior to noon on April 23, 1787, was 0.51, that of the new chemistry 0.49. At noon, Lavoisier performed an important experiment, and the degrees of confirmation shifted to 0.49 and 0.51, respectively. Allowing for a time lag in the dissemination of the critical information, we can envisage that there was a relatively short interval after noon on April 23, 1787, before which all rational chemists were phlogistonians, and after which all were followers of Lavoisier. (Kitcher, p. 5).

Throughout "The Division of Cognitive Labor," philosopher Philip Kitcher envisions scenarios like the above, in which the evidence for some theory A begins to slightly outweigh that for some competing theory B (after a period in which B's evidence outweighed A's). Kitcher contends that while any given scientist would be rational to believe A over B at this point (and to investigate narrower hypotheses on the basis of that belief), if we were to allocate the research efforts of the community as a whole "so as to promote the eventual attainment of truth by the community," we would want a better strategy than all-or-nothing investment in whichever theory happens to hold a slight edge at the time.

Notably, Kitcher stipulates that it's fairly clear to most scientists which theory the evidence (slightly) favors. He also explains that a rational truth-seeking scientist will pursue whichever theory they think is better supported by the evidence, since "pursuing a doctrine that is likely to be false is likely to breed more falsehood." Thus, he argues, we can't escape the problem by suggesting that some scientists believe one theory while pursuing a competing one, trying to be correct in both regards (p. 8).

This is the basis for his distinction between "individual rationality," the abilities and strategies involved in the accurate assessment of evidence, and "collective (or community) rationality," the actions and strategies involved in bringing a community toward the truth (p. 6). In the cases Kitcher describes, the default version of events is that scientists maximize individual rationality, and that this leads to the all-or-nothing strategy he considers collectively irrational.

The main argument

As an alternative, he suggests that the scientific community would benefit from a culture in which scientists seek to maximize personal gain in the form of credit and recognition for discoveries. His argument is that, by seeking personal reward, scientists in this scenario would choose to study neglected-yet-plausible hypotheses more often than purely truth-seeking scientists would: the former stand to gain greater recognition for being pioneers, while the latter are stuck with whichever theory has more support from the evidence, since each is studying whatever seems true to them.

The community effects of greedy scientists as Kitcher envisions them are analogous to market effects: by investing their individual cognitive labor in ways they think will yield results, scientists lead the community to value ideas in proportion to their evidential support and plausibility (p. 18). Much of the paper is devoted to testing how these strategies compare mathematically, and it concludes that personal-reward-seeking scientists divide cognitive labor more rationally as a group than individually truth-seeking scientists do.
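To give a feel for the kind of comparison Kitcher runs, here is a toy model in Python. The numbers and functional forms are my own illustrative assumptions, not Kitcher's actual equations: theory A is true with probability 0.51 and theory B with probability 0.49, a program pays off only if its theory is true, each additional worker adds diminishing returns, and credit-seekers value a program by their expected share of the prize.

```python
# Toy model of dividing 20 scientists between two rival programs.
# Assumptions are mine, for illustration; this is not Kitcher's exact
# model. A program pays off only if its theory is true, and with n
# workers it produces the decisive result with probability
# 1 - (1 - S)**n, so the marginal return on each extra worker shrinks.

N = 20    # scientists to allocate
S = 0.2   # hypothetical per-scientist chance of clinching the result

def program_success(prior, n):
    """Probability this program delivers the truth, given n workers."""
    return prior * (1 - (1 - S) ** n)

def community_success(n_a):
    """Probability the community finds the truth with n_a workers on A."""
    return program_success(0.51, n_a) + program_success(0.49, N - n_a)

# Kitcher's default truth-seekers: everyone piles onto the slightly
# better-confirmed theory A.
all_or_nothing = community_success(N)

# The best possible division of labor for the community as a whole.
best = max(range(N + 1), key=community_success)

# Credit-seekers choose one at a time, each joining whichever program
# offers the larger expected share of the prize (success probability
# divided evenly among that program's workers).
n_a = n_b = 0
for _ in range(N):
    credit_a = program_success(0.51, n_a + 1) / (n_a + 1)
    credit_b = program_success(0.49, n_b + 1) / (n_b + 1)
    if credit_a >= credit_b:
        n_a += 1
    else:
        n_b += 1

print(f"all on A:          P(truth) = {all_or_nothing:.3f}")              # ~0.504
print(f"community optimum: {best} on A, P = {community_success(best):.3f}")  # 10 on A, ~0.893
print(f"credit-seekers:    {n_a} on A, P = {community_success(n_a):.3f}")    # 10 on A, ~0.893
```

With these made-up parameters, the all-or-nothing allocation finds the truth about half the time, while both the community optimum and the credit-driven allocation spread workers roughly evenly across the two programs and succeed almost ninety percent of the time. The exact figures depend entirely on my assumptions, but the qualitative gap is Kitcher's point.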

Before I move on to my own objection to the plausibility of Kitcher's cases as a model for how scientists actually judge hypotheses and distribute labor, I should address an apparent flaw in the greedy scientist strategy. Namely, it may seem that scientists seeking glory over truth would be willing, even likely, to publish fraudulent results, either to publish more easily (and become more prolific) or to produce more provocative findings.

Unless I missed something, Kitcher doesn't say much in response to this possibility beyond noting that "vices from greed to fraud [may] play a constructive role" in the rational division of cognitive labor (p. 18), but I believe he would argue that fraud has a high potential to ruin a scientist's career and reputation, so rational greedy scientists are incentivized not to lie. Whether or not that argument convinces, Liam Kofi Bright makes a reasonably strong case in his paper "On fraud" that truth-seeking scientists may promote more fraud than the alternative. If Bright is correct, then our desire to avoid fraud may be another point in favor of the collective rationality of the greedy strategy over the individually truth-seeking one.

My Objection

The kind of scenario Kitcher builds his paper around seems to omit an important option for scientists: suspension of judgement. If the "objective degree of confirmation" of A is 0.51 and that of B is 0.49, I can only imagine that rational scientists would suspend judgement about which theory is correct. There is so little difference between the degrees of confirmation that, even if the scientists have perfect awareness of them (and thus no error bars), any new evidence could easily upset the balance, and there is no clear lasting winner. Such early commitment strikes me as similar to deciding who won a competitive election while half of the votes are still being counted: one candidate may be ahead by a small margin, but there is still so much to learn, and so much that could happen, that it doesn't make sense to declare even a temporary winner.

Given this uncertainty, I think the purely truth-seeking scientists would much sooner choose to explore both A and B than pile onto whichever theory has the edge at the moment. 

Maybe this would be harder to avoid in cases where the evidence is much more decisively in favor of one avenue of exploration. Kitcher discusses some cases like this later in the paper, but it seems to me that a truth-seeking scientist would be drawn to a neglected research area simply because of the untapped opportunity for unique discoveries (i.e., for the sake of the possible additional knowledge itself). My hypothetical scientist would be in the same situation as Kitcher's "Hobbesian" (greedy) scientists, who have a higher relative probability of receiving a prize when working on a method that few others are working on; the difference is that for the truth-seeking scientist, the prize is a better picture of the truth.
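To put rough numbers on that incentive, here is the same kind of toy calculation as above (again, my illustrative assumptions, not Kitcher's). The even split of the prize among a program's workers stands in crudely for the fact that a discovery many others are about to make anyway is worth less to the individual discoverer, whether the prize is priority credit or a better picture of the truth.

```python
# Hypothetical comparison for one scientist deciding between a crowded,
# slightly better-confirmed program and a neglected one. Payoff = the
# chance the program delivers the truth, divided evenly among workers.

def expected_payoff(prior, workers, s=0.2):
    success = prior * (1 - (1 - s) ** workers)
    return success / workers

crowded = expected_payoff(0.51, workers=15)   # popular program
neglected = expected_payoff(0.49, workers=2)  # nearly empty program

print(f"crowded:   {crowded:.3f}")    # ~0.033
print(f"neglected: {neglected:.3f}")  # ~0.088
```

On these numbers, joining the neglected program offers the higher expected return despite its weaker evidential standing.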

This runs against Kitcher's argument that researching something without believing it would carry too high a risk of propagating falsehoods for truth-seeking scientists. While I can feel the pull of the idea that it would be contradictory for a truth-seeking scientist to intentionally operate under assumptions they think are exceedingly unlikely, it seems no more contradictory to me than a greedy scientist investing their cognitive labor in the same unlikely bet because the prize, of whatever kind, is sufficiently large.

Bibliography

Bright, Liam Kofi. “On fraud.” Philosophical Studies, vol. 174, no. 2, Feb. 2017, pp. 291–310. DOI.org (Crossref), doi:10.1007/s11098-016-0682-7.

Kitcher, Philip. “The Division of Cognitive Labor.” The Journal of Philosophy, vol. 87, no. 1, Jan. 1990, pp. 5–22. DOI.org (Crossref), doi:10.2307/2026796.

O’Connor, Cailin, and Justin Bruner. “Dynamics and Diversity in Epistemic Communities.” Erkenntnis, vol. 84, no. 1, Feb. 2019, pp. 101–19. DOI.org (Crossref), doi:10.1007/s10670-017-9950-y.

See Also

Bruner, Justin P. “Policing Epistemic Communities.” Episteme, vol. 10, no. 4, Dec. 2013, pp. 403–16. DOI.org (Crossref), doi:10.1017/epi.2013.34.

Zollman, Kevin J. S. “The Credit Economy and the Economic Rationality of Science.” The Journal of Philosophy, vol. 115, no. 1, 2018, pp. 5–33. DOI.org (Crossref), doi:10.5840/jphil201811511.

Comments
Jay:

Actually, when these theories are in competition, researching phlogiston looks exactly like researching the new chemistry. What I mean is that even scientists holding on to the phlogiston theory will be aware of the results that favor the new chemistry and will design experiments specifically so that the results expected under one theory are easily distinguishable from the predictions of the other. As evidence piles up, both theories will be modified by their adherents to explain the experimental results; the worse theory will require more modification, but the better one probably wasn't perfect either. Eventually someone writes a big review paper that summarizes the work in the field and comes out strongly in favor of oxygen-based theories; if there's no serious further debate, the writers of future textbooks will refer to the big review paper.

I'm not sure whether this is true of chemistry, but the research process you describe certainly sounds plausible. As you say, there may be many cases in which the distribution of labor doesn't matter, because researching different theories looks the same.  One area in which researching different theories looks different is research into what killed the dinosaurs. Producing geological evidence relevant to the hypothesis that volcanoes killed the dinosaurs means digging at different sites from those you would investigate for evidence about whether a meteor killed the dinosaurs. There might be sites that contain evidence relevant to both, but for the most part, research is planned by using the data we already have to look at sites where we expect to find volcanic ash or meteorite craters. 

If everyone were satisfied with the answer that the meteor killed the dinosaurs, we'd miss out on the ways that volcanic activity contributed to the extinction event.

Jay:

Fair enough.  I'm a chemist by training, so I described what I know.