In a recent post I posed the question: is the common good served by directing research efforts towards theoretical problems which are interesting to researchers?
komponisto defends interesting problems, arguing that researchers' perceptions of interestingness often predict future usefulness better than anyone deliberately trying to determine what will be useful. This is a plausible claim (although I disagree), and I have encountered it a number of times in the last couple of days. It was advanced as a defense of the status quo, but if we really believe it then we should certainly try to understand all of its consequences.
When setting out to predict the usefulness of a research program (as I suggest we should), we are not required to do it via deductive arguments which estimate the likelihood of certain applications. We can use all of the data available, including how interesting the problem seems: to us, to other researchers, to laypeople, and so on. If intelligent observers' notions of interestingness are substantially correlated with future usefulness, potentially in unpredictable ways, then we would be wise to take this information into account. This is precisely what komponisto and others argue, and they conclude that we should support work on the problems an investigator finds most interesting. I claim this is an example of motivated stopping: the argument was thought through just far enough to support changing nothing.
We have access to many, many indicators of interestingness for any candidate research problem. A problem can seem interesting only to a single person who understands the background in great depth; it can seem interesting to a small group of researchers in related fields; it can seem interesting to mathematicians broadly; it can seem interesting to computer scientists, to physicists, to biologists, to engineers, to laypeople. It can seem particularly interesting to professional mathematicians, or to novices with new ideas. It can evoke feelings of immediacy, of needing to know the answer; it can simply be fun to work on. Particular countries or cultures or time periods or subfields may have objectively better or worse aesthetics.
If our aim is to use interestingness as a predictor of potential usefulness, then all of this variability is an asset. We have a historical record to scour and patterns to evaluate. Understanding these patterns is of critical importance to the quality of our predictions and the efficiency of our research institutions. If the historical record is too opaque, we should at least establish a culture of transparency: make records not only of what work is done, but why it is done. Who did it seem interesting to? How did they feel about the research program? Why were they really working on it? In the long term, we can hope to discover whose intuitions were valuable and whose were not; we can understand which aesthetics lead to useful work and which do not.
Over time (if not immediately), we can hope to develop a common understanding of the link between interestingness and future usefulness, and develop institutions which exploit this understanding to produce valuable research.
As I also noted below, I think you're fine in terms of meeting posting standards. And I regularly make propositions while only having, e.g., 70-80% certainty, sometimes as low as 40-50%; I find it's a good way to expose possible weak points in my argument.
So just to make sure I understand your argument now, is it essentially this?
"The current standards of the scientific community, while possibly imperfect, are good enough that most things that are accepted as legitimate research will be useful. However, if a researcher is uninterested in a topic, even if the topic is highly legitimate, they are unlikely to do a very good job, the end result being that their output will be mostly useless, no matter how well-conceived the original program was. Therefore, researchers should not force themselves to work on problems that are uninteresting."
Let me know if the above is an accurate representation of your views. I believe that I myself agree with the above paragraph, but that this argument, while correct, does not absolve researchers of their social responsibility to try to optimize the usefulness of their research programs (for reasons that I can explain if you do not think this is true).
Also, I just realized that I attributed the conclusion "researchers do not have a social responsibility to optimize the usefulness of their research programs" to your original argument, even though you gave no indication that this was intended. So I should apologize for that.
I think the main disagreement I have with your translation is that I don't think "normatively good research" is the same as "research that the scientific community approves of". I believe that the standards of the scientific community can and should be criticized on rational grounds. I anticipate you might ask, given the above, what is meant by "normatively good research" then; I guess I just mean that which corresponds with intellectual and epistemic virtue. My use of "normative" isn't my own innovation...