In a recent post I posed the question: is the common good served by directing research efforts towards theoretical problems which are interesting to researchers?

komponisto defends interesting problems, arguing that researchers' perceptions of interestingness are often better able to predict future usefulness than anyone trying deliberately to determine what will be useful. This is a plausible claim (although I disagree), and I have encountered it a number of times in the last couple of days. This claim was advanced as a defense of the status quo, but if we really believe it then we should certainly try to understand all of its consequences.

When setting out to predict the usefulness of a research program (as I suggest we should), we are not required to do it via deductive arguments which estimate the likelihood of certain applications. We can use all of the data available, including how interesting the problem seems---to us, to other researchers, to lay people, etc. If intelligent observers' notions of interestingness are substantially correlated with future usefulness, potentially in unpredictable ways, then we would be wise to take this information into account. This is precisely what komponisto and others argue, and they conclude that we should support work on the problems an investigator finds most interesting. I claim this is an example of motivated stopping: the argument was thought through just far enough to support changing nothing.

We have access to many, many indicators of interestingness for any candidate research problem. A problem can seem interesting only to a single person who understands the background in great depth; it can seem interesting to a small group of researchers in related fields; it can seem interesting to mathematicians broadly; it can seem interesting to computer scientists, to physicists, to biologists, to engineers, to laypeople. It can seem particularly interesting to professional mathematicians, or to novices with new ideas. It can evoke feelings of immediacy, of needing to know the answer; it can simply be fun to work on. Particular countries or cultures or time periods or subfields may have objectively better or worse aesthetics.

If our aim is to use interestingness as a predictor of potential usefulness then all of this variability is an asset. We have a historical record to be scoured; patterns to be evaluated. Understanding these patterns is of critical importance to the quality of our predictions and the efficiency of our research institutions. If the historical record is too opaque, we should at least establish a culture of transparency: make records not only of what work is done, but why it is done. Who did it seem interesting to? How did they feel about the research program; why were they really working on it? In the long term, we can hope to discover whose intuitions were valuable and whose were not; we can understand which aesthetics lead to useful work and which do not.

Over time (if not immediately), we can hope to develop a common understanding of the link between interestingness and future usefulness, and develop institutions which exploit this understanding to produce valuable research.

41 comments

I think you're forgetting the problem of incentives. Whatever standard procedures for evaluating/predicting usefulness you come up with, if they're actually used to allocate resources and status in practice, people will have the incentive to hack them by designing and presenting their own work to come off as better than it really is. And since people who do research are usually very smart, you'll be faced with a host of extremely smart people trying to outsmart and cheat your metrics, many of whom will surely succeed. Goodhart's law, and all that.

This, of course, is not even considering whether the influential people whom you'd have to win over to establish such practices have the incentive to submit their past and present work to such evaluation. Unfortunately, although the problems you point out are very real, there is no straightforward solution for them; almost any attempt at fixing institutions is likely to run into difficult and unpredictable problems with perverse incentives.

My goal is not to convince the research community to switch focus or to prompt sweeping institutional changes.

I know a small number of extremely intelligent and otherwise altruistic people who do pure research (and if my life had gone slightly differently it seems like I might have become one). My goal is to convince such people to think seriously about what they are doing with their time.

This could be alleviated by making the standards sufficiently retrospective, e.g. evaluate the usefulness of current work in 100 years (which would probably make it more effective anyway).

We could also test these predictions on historical data, although it might be slightly trickier.

My hypothesis is that what we find interesting is largely the result of subconscious, heuristic (i.e., intuitive) optimization for status, and so interesting problems are useful to the extent that such intuitions are accurate, and to the extent that we as a society reward the discoverers of useful ideas with status.

From an individual perspective, part of this problem is an instance of the more general problem of how to improve our intuitions and when to override (or combine) them with explicit reasoning, which I think is a very interesting (and useful!) one. (The post takes a more social perspective, which of course is also very interesting.)

However, one way of getting status is by proving that one has enough to spare to put resources into apparently useless activities.

I don't have a good theory for why some types of uselessness are more likely to lead to status than others, though perhaps it has something to do with the production of supernormal stimuli.

At least in math, one common indication of interestingness is how much something is connected to other things or shows up in different forms. Thus, for example, groups are very interesting in the abstract because they show up in many different mathematical contexts. This is also a good metric for determining usefulness, since the more things something is connected to in the abstract, the more likely it is that one of those connections will be practical.

This is also a good metric for determining usefulness

I am advocating testing assertions like this, as well as possible. Like you say, groups are interesting in the abstract because of their many connections to other branches of mathematics. I am not convinced that research in abstract group theory is useful. I suspect that the basic notions and (very basic) results of group theory are useful, but that further work motivated only by its connection to group theory (as opposed to some other domain where the language of groups can be applied) has little value.

It sounds like a good first step is to develop a reliable way of recognizing in retrospect what was in fact useful, and classifying research projects/publications that way.

Of course, this works better for short-term projects. The usefulness of a fifty-year longitudinal study on heritability of disease risk factors is hard to measure in less than fifty years.

It sounds like a good first step is to develop a reliable way of recognizing in retrospect what was in fact useful, and classifying research projects/publications that way.

This seems like a remarkably sound insight to me. I'm not sure why it hasn't been upvoted more.

Possible second step: Set up a prediction market.
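To make the suggestion a bit more concrete, here is a minimal sketch of how such a market could be priced, using Hanson's logarithmic market scoring rule. The question, liquidity parameter, and trade sizes below are purely illustrative, not a proposal for a real system.

```python
import math

B = 100.0  # liquidity parameter: larger B means prices move more slowly

def cost(q_yes, q_no):
    """LMSR cost function C(q) = B * ln(e^(q_yes/B) + e^(q_no/B))."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes, q_no):
    """Current market probability that the question resolves YES."""
    e_yes = math.exp(q_yes / B)
    return e_yes / (e_yes + math.exp(q_no / B))

def buy_yes(q_yes, q_no, shares):
    """Buy `shares` YES shares; return the new state and the amount paid."""
    paid = cost(q_yes + shares, q_no) - cost(q_yes, q_no)
    return q_yes + shares, q_no, paid

# Hypothetical question: "Will this research program be judged useful in 30 years?"
q_yes, q_no = 0.0, 0.0
print(price_yes(q_yes, q_no))        # 0.5 before any trades
q_yes, q_no, paid = buy_yes(q_yes, q_no, 50)
print(price_yes(q_yes, q_no), paid)  # price rises above 0.5; the trader pays ~28
```

The long resolution horizon is of course the hard part; the market mechanism itself is the easy part.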

Two properties of interesting problems seemingly overlooked by the above:

1) Interesting problems are Fun

2) Interesting problems motivate, which is closer to the topic of this post. I would not underestimate this motivation, having read one of the participants in the Manhattan Project say something to the effect of "we had to work on the bomb because it was so cool"

I am responding to the particular claim that interesting problems are important because they are likely to be useful in unpredictable ways.

It seems plausible to me that funness could be a mechanism driving usefulness, simply because fun work is more motivating and hence more likely to get done well.

Yes, I was thinking the same. In cases where there are no order-of-magnitude differences in utility, funness might be a good heuristic, since the fun things are more likely to get done.

I think in reality the difference in utility ranges across many orders of magnitude, whereas funness does not change nearly as much. I used to be convinced that research outside of pure math/CS would be incredibly boring, but it turns out that work in basically any field that is trying to solve problems we don't know how to solve is quite fun. Do I think that machine learning is more interesting than synthetic biology? Yes, but not by so much that I wouldn't switch if Paul convinced me that synthetic biology research was more useful at the margin.

In the biological sciences, one often finds claims of interestingness or usefulness in the abstracts, introductions, and conclusion sections of research papers. Research may claim to overthrow existing paradigms, for example, or lead to disease cures. Presumably one also finds these claims or promises in research proposals. But I'm not sure how one evaluates research papers and programs for how much interestingness and usefulness they actually deliver.

Some kind of citation metric, presumably. But how do we distinguish between being cited for being interesting vs being cited for being useful?

A citation metric seems like a bad way of evaluating usefulness, but a good measure of another type of interestingness (are papers cited often in the next year likely to contain useful insights?)

To determine usefulness we need to look at something other than publications. We can hope to estimate how the state of modern theory affects modern practice---what ideas or modes of thinking are important, what techniques are used in practice, etc. Looking back, we then have some leverage to understand what research programs helped advance our understanding in a relevant way, or were indirectly necessary for the development of practically important techniques.

We probably want something automatable, though. Maybe look at the flow of key words and phrases (ones that grouped papers tend to share with each other and not with other papers) from the literature of pure science to engineering and industry?
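As a rough illustration of what that might look like, here is a small sketch: score terms that are over-represented in a cluster of pure-science abstracts relative to a background corpus, then track what fraction of later applied abstracts mention them. The corpora, the scoring rule, and the crude substring matching are all stand-ins for illustration, not a tested method.

```python
from collections import Counter

def distinctive_terms(cluster_docs, background_docs, top_k=20):
    """Terms over-represented in the cluster relative to the background corpus."""
    cluster = Counter(w for d in cluster_docs for w in d.lower().split())
    background = Counter(w for d in background_docs for w in d.lower().split())
    def score(word):
        return cluster[word] / (1 + background[word])
    return sorted(cluster, key=score, reverse=True)[:top_k]

def flow(terms, applied_docs_by_year):
    """Fraction of applied abstracts per year mentioning any cluster term."""
    terms = set(terms)
    return {
        year: sum(any(t in d.lower() for t in terms) for d in docs) / len(docs)
        for year, docs in applied_docs_by_year.items()
    }

# Stand-in data, just to show the shape of the computation.
pure = ["spectral gap of random graphs", "expander graphs and eigenvalues"]
other_pure = ["class field theory overview", "etale cohomology of schemes"]
applied = {2005: ["routing networks using expander graphs"],
           2010: ["error correcting codes from expander constructions"]}

terms = distinctive_terms(pure, other_pure)
print(flow(terms, applied))  # e.g. {2005: 1.0, 2010: 1.0} on this toy data
```

A real version would want phrase extraction, field labels from publication venues, and some control for terms that are common everywhere, but the basic shape is just this: find cluster-specific vocabulary, then watch where and when it reappears.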

I'm somewhat skeptical of the claim that researchers' level of interest in a given subject is very strongly correlated with the maximum possible benefit to society that could come out of an intensive study of the topic. It may be true, but I think that the primary motivation is just that the problem presents an interesting challenge.

That being said, I do think that scientists and researchers work harder and more efficiently when they enjoy what they're doing. If, as I suspect, interestingness is uncorrelated with potential benefits, we should still ensure that interesting projects are funded.

Imagine, in a hypothetical scenario, that there are two different areas of study that, if researched, would produce innovations of equal value to society. This fact, however, is not known to the individual running the research budget. Let's suppose (just for the sake of numbers) that a scientist who is actively curious about a problem will work 20% faster on that problem than an uninterested scientist. In this case, funding the interesting project will be more efficient than funding the uninteresting one. Obviously, we should pursue lots of areas of research, because the uninteresting problem will give us benefits too. However, we should make sure that we don't defund interesting projects in order to pursue lines of inquiry that will not yield results as efficiently.

If, as I suspect, interestingness is uncorrelated with potential benefits, we should still ensure that interesting projects are funded.

This is not the correct conclusion of your argument. You are entitled to: "We should not fund uninteresting projects."

If I tend to find useless math problems interesting and useful ones tedious, I can stop doing math and try to find another field (among many alternatives). If I am considering a funding proposal for a problem which is predictably useless, I can simply not award funding.

Depends.

I happen to enjoy riddles and am happy whenever one actually has a real-life application. Others only like real-life problems. Which might make teaching a bit more complicated.

A big problem I see is when important problems are ignored because they are not particularly interesting, or worse, stigmatized. Norman Borlaug probably did not enjoy his research too much, but it was necessary. A related problem arises when especially gifted people systematically end up in one field of research and not in another.

[anonymous]

Whether I find a problem "interesting" has a lot to do with whether I think I can solve it, or make and measure progress towards solving it. Another way to put it: for a problem to be interesting it has to be easy. Academia is good at training people to find certain classes of problems "interesting" in this sense, and moreover to supply them with a steady source of such problems. That definitely can look like, or maybe it just is, a big intellectual circle-jerk.

But even stipulating that all that brain power going to solve easy made-up problems is pointless, I don't have any sense of how high the opportunity cost is. What are you planning to do with your time instead of math research, and how scalable do you think those plans are? There are probably more than 1000 new PhDs per year in each of physics, math, computer science, and economics. (Gently avoiding discussion of how many wind up with tenure.) Assuming you don't mean to take them out of "thinking work" altogether, how many of them do you think can be put to useful work?

If researchers are more likely to work on problems they find interesting, we will automatically find that most of the useful research was done by people who found it interesting. We will also find that there is a lot of useless research being done.

I think this is obviously the case, and the data therefore does not provide evidence for the hypothesis that interestingness is a good heuristic for usefulness.

we will automatically find that most of the useful research was done by people who found it interesting.

There is still a distinction between problems that were worked on exclusively because they were interesting, and researchers who worked on important problems that they found interesting. For example, presumably Cayley found systems of linear equations interesting. At the same time, he could provide strong practical justification for understanding the solutions of such systems; he did not have to appeal to interestingness to justify his work.

This is a different situation from the one in which someone decides to study matrix algebra because matrices seem like intrinsically interesting objects (which is also something that frequently happens in mathematics).

You're right, and I think my observation strengthens your original thesis that we should explicitly look for useful problems to research.

[Gray]

I think good research is useful, and bad research is non-useful. Research that the researcher doesn't find interesting is unlikely to be good. Therefore, it is unlikely that research that the researcher doesn't find interesting is going to be useful.

I don't think that interesting research is likely to be useful because researchers are more likely to be interested in useful things. Most of what mathematicians study, for instance, has little obvious practical value.

I'm confused. Are you claiming that anything labeled as research is intrinsically useful? I assume you are not but then I would like to know what your actual claim is (or clarify that this is indeed your claim).

It also seems that for your argument to hold you would have to argue further that the variance in utility is small enough that small/moderate variations in interestingness outweigh any possible variation in utility. Or else argue that interestingness varies much more than utility.

[Gray]

As to your first point, my answer is no. What I mean by "good research" is just research that meets general normative scientific criteria for knowledge; I guess we assume that such criteria tend towards truth. I guess in the back of my mind I'm supposing that true statements are more useful than false ones.

As to your second point, I think it is based on the misunderstanding in your first point. Or else, I've misunderstood your second point. :) But I think my first proposition needs some work. For instance, you could imagine a continuum between good research and bad research, and a continuum between useful research and non-useful research. In this case, it wasn't my intent to really specify how much usefulness average quality research has, but only that bad research has very little utility. The difference in utility between good and average research I really don't know.

Also, this is my second post on the site, so hopefully I'm meeting the standards for posting here. If not, I'll just lurk more. I also don't know how much certainty you guys expect when someone makes a proposition here.

As I also noted below, I think you're fine in terms of meeting posting standards. And I regularly make propositions while only having e.g. 70-80% certainty, sometimes as low as 40-50%. I find it's a good way to find possible weak points in my argument.

So just to make sure I understand your argument now, is it essentially this?

"The current standards of the scientific community, while possibly imperfect, are good enough that most things that are accepted as legitimate research will be useful. However, if a researcher is uninterested in a topic, even if the topic is highly legitimate, they are unlikely to do a very good job, the end result being that their output will be mostly useless, no matter how well-conceived the original program was. Therefore, researchers should not force themselves to work on problems that are uninteresting."

Let me know if the above is an accurate representation of your views. I believe that I myself agree with the above paragraph, but that this argument, while correct, does not alleviate the social responsibility of researchers to try to optimize the usefulness of their research programs (for reasons that I can explain if you do not think this is true).

Also, I just realized that I attributed the conclusion "researchers do not have a social responsibility to optimize the usefulness of their research programs" to your original argument, even though you gave no indication that this was intended. So I should apologize for that.

[Gray]

I think the main disagreement I have with your translation is that I don't think that "normatively good research" is the same as "research that the scientific community approves of". I believe that the standards of the scientific community can and should be criticized on rational grounds. I anticipate you might ask, given the above, what is meant by "normatively good research" then; I guess I just mean that which corresponds with intellectual and epistemic virtue. My use of "normative" isn't my own innovation though, it is the same sense in which logic is a normative science. Logic doesn't describe how people actually think, but how people /should/ think; but this normativity shouldn't be understood exclusively in moral terms. Normativity refers to how well the subject matter, in this case thinking, relates to its end. Normatively good research, then, refers to research that best satisfies the purpose or goal of research.

I guess, to be fully clear, I should clarify what the purpose or goal of research is. You could say that the goal is usefulness, in which case my proposition would be a tautology ("Research that is good at being useful is useful and research that is bad at being useful is not useful."), but I don't think that's the answer. Maybe I'm being an idealist, but I think the primary purpose of research is to satisfy curiosity, without disregarding any other ulterior aims and purposes that actual researchers might have. I think curiosity is one of the few actual drives that people have that points to truth for its own sake; it represents a person's "will to know"*.

Is what satisfies curiosity also useful? I think if my argument is wrong, this is where it is weak and potentially vulnerable. But I don't think it is obviously wrong. Your concern seems to be that researchers have a responsibility to prefer research that is useful over research that isn't useful, even if it doesn't satisfy the researcher's interest as much. But I doubt that it is that easy to determine whether research will be useful /a priori/. C.S. Peirce uses the example of conic sections being useful for Kepler's astronomy, which in turn was useful for Newtonian physics. We're talking about research that had been worked on for generations, and it's hard to imagine any of these men "optimizing the usefulness of their research programs". Yet it is hard to imagine work that has had a greater positive effect on our standard of living than theirs.

* I don't mean to say that curiosity is the drive to learn about anything; we know that different people are interested in different things. But I think that whatever a person is interested in, curiosity wouldn't be satisfied by learning false things. I don't mean to imply that curiosity is satisfied by learning anything if it happens to be true.

So you believe that the pursuit of knowledge is inherently virtuous, and you endorse research on those grounds? I.e. research is good to the extent that it reveals truth, and bad to the extent that it reveals falsehoods?

Can you clarify if you also believe that usefulness should be a non-negligible factor in evaluating the virtuousness of a given piece of research (irrespective of other factors which might make it impossible to care about usefulness directly)?

[Gray]

Well, first, note that there is a difference between intellectual and moral virtue. When you say "inherently virtuous", I have the awful feeling that you're talking about "moral virtue". I would say that "intellectual virtue" and "rationality" are near-synonyms, at least in the way that "rational" is used on this site. They both seem to me to be a sort of meta-cognition, where you are thinking about thinking, and you're trying to determine what sort of thinking will best take you closer to a given end.

But I don't think that usefulness is an aspect of the normative end of research. When research is useful, it just happens to be that way; in the same way that conic sections just happened to end up being a useful science, but if they had been studied or not studied on the basis of expected utility, the subject probably would have been passed over. Generally, I agree with C.S. Peirce about the need to separate theory and practice. Practice can make use of theory, but trying to engage in both at the same time introduces prejudices into the theory, and makes the practice more difficult than necessary.

Is this correct then? You believe that developing theory based on practical considerations will lead to bad theory, which in the long term will be bad even from a practical standpoint (because in 200 years we won't have developed the theory we would have needed to in order to efficiently tackle the problems we will be facing in 200 years).

[Gray]

I'm still trying to understand the standards of this site; while that's not a bad thing, it will take me some time to get used to. Your question "Is this correct then?" I think is asking me how much certainty my proposition deserves. While I believe it is true, mainly because I trust C.S. Peirce's judgment, I think it fails as a proposition. To me, a proposition is a social thing: I'm not just reporting my own belief, but in a proposition I am suggesting that others reading it /should/ also believe it, and that's why propositions need to be justified. Given that, I should retract my proposition. On other sites the standard was to try out ideas to find out which ones are better, so this is different.

Oh I was actually just trying to understand your argument. By "Is this correct" I meant "Do I correctly understand your position". In general what it means to be justified is fairly unclear, I think you have provided a fair amount of justification for your position, assuming I understand it correctly.

This is probably not typical, but whenever I look at an argument and think "that clearly makes no sense" (which was my reaction to your original post, for reasons already explained), my assumption is that I don't understand your position correctly, and I then spend as much time as necessary to be certain that I understand your position before continuing the discussion.

Since I am fairly confident from your response that I now understand your position, I will note that I disagree with it. Based on this and other threads, though, your position is shared by plenty of other people on this site. So in the interests of time I'm not going to get into a lengthy discussion of why I disagree right now, instead I'll write up a short discussion post later that will reach a larger audience. I will note that your assertion is exactly what paulfchristiano is advocating testing: it seems that there is a large divide between people who think theory should be developed with practice in mind, and people who think theory should be developed in the way advocated by C.S. Peirce. Since this is such an important question, we should try to test this proposition; the fact that there has been little effort to test it so far means that (1) we should suspect ourselves of motivated stopping and (2) there is a large marginal benefit to performing the analysis.

[Gray]

Oh :) In that case, I think you've summed up my position well. I guess in my mind I have the idea of a researcher trying to "obey two masters rather than one", that is utility and truth. It seems to me that being weighed down by utility concerns would cause someone to ignore certain perfectly rational possibilities because they aren't productive.

Testing the proposition, I think, would be through a historical survey, don't you think? I'll see about summarizing C.S. Peirce's thoughts on this matter for the site.

I'll see about summarizing C.S. Peirce's thoughts on this matter for the site.

I think that would be really interesting!

I downvoted your post because I believe the flaw in your argument, as pointed out by jsteinhardt, is pretty obvious.

[Gray]

I think you guys are attacking a strawman. I said "good research" and somehow jsteinhardt translated that as "anything labeled as research".

I agree that I was attacking a strawman. That is why I asked you to clarify your argument.

To answer your question about whether you are meeting the posting standards, I believe that the answer is yes; in particular, your arguments are written clearly, so that it is easy to understand what you are saying, even if I think it is wrong. Think of downvotes as a learning opportunity (these other people think I'm clearly wrong, I should try to understand why). I'm not sure if this is a particularly good metric, though, since I have been heavily downvoted in the past on issues where I am still completely convinced that I am correct.

In this instance I will try to explain what SimonF and I found objectionable about your original post. Essentially, you used extremely broad words ("good", "bad", "research") to describe your stance, in such a way that your claim was obviously false without adding additional qualifiers. I believe that you agree with this, since you said that I was attacking a strawman.

The problem is that you did not provide the necessary qualifiers to clarify what you meant. It was probably obvious to you what you meant (you were quick to clarify when asked), but it was not obvious to me. From your perspective it probably feels like I was just trying to be difficult, but I promise, I actually did not know what your intended argument was, and was honestly trying to determine it.

This problem leads to two separate issues --- first, it slows down the speed of discourse, and can lead to confusion if instead of asking for clarification people add their own (incorrect) qualifiers to your argument. Second, vague statements tend to convey too few bits of information to back up most claims, so they are a bad way of trying to support a conclusion.

I apologize if that was long-winded, but hopefully someone will find it helpful.

[Gray]

I appreciate your post, and I think you've been very hospitable, and I appreciate that as well. While I do have the habit of writing concisely, and believe in writing that way, I also have to admit that my original post was shorter because I was uncertain of how I would be received. By the way, I didn't think that in your post you were being dishonest; in fact, rather the opposite. You realized that there was probably some miscommunication and wanted to resolve it, and that's exactly how dialogue should proceed. I felt that my post was interpreted more simply than I had intended because SimonF thought that my argument was obviously wrong. While it may be unsound, I didn't think it was trivially unsound. I guess "strawman" is often an accusation of dishonesty; I just didn't know what better term to use for "a simpler version of my argument is being counterargued than the argument I wanted to present".

But I also want to thank you, again, for being hospitable and welcoming to your community. I'm still uncertain how this will go, I mainly joined because I want to learn more about induction, which is something that I shouldn't have ignored before. I also read through many of the posts in the sequences, and this site comes closest to the "ideal" I've had before about my own life, and what I understand to be intellectual virtue.