I don't mean to be dismissive, but it sounds like you're simply not familiar with the research in the areas you're discussing.
Also, on your latter point, doing research is not as simple as 'exploring possibilities, finding a hit, then exploring similar possibilities.' The problem is that research-space is an extremely high-dimensional, bumpy space. Simply finding new avenues of research that would produce results - null or otherwise - is often a non-trivial problem that requires a lot of intelligence and wisdom. For one thing, it would be a waste if the avenue had already been considered before. And when you find some new possibility, actually evaluating that possibility is itself a non-trivial problem that frequently requires a lot of money and time to do. And then when you get the result, it's common for the result to be only slightly significant, and come with a lot of caveats and special conditions, requiring further study just to make sure it's not a fluke.
From the outside, it can seem like science consists of a bunch of 'clueless' workers and the occasional genius who makes an astounding discovery. In reality, everyone is essentially clueless in the grand scheme of things, and most 'major discoveries' are only deemed so years after the fact.
This is a fascinating and complex topic. To make the question tractable, I suggest first clarifying "you". Are we discussing a graduate student selecting a research topic for his/her PhD program? Are we discussing a professor who already has tenure? Are we discussing someone performing R&D in a corporate environment? Are we discussing a 'citizen scientist'? The four individuals I've identified here face very different situations.
I'm not an economist, but my understanding is that there exists a small subset of economists who are challenging the notion that productivity maximization is the proper goal of economics, considering that already realized past economic growth may be seriously damaging this planet's capability to sustain life. This just goes to show that trying to dictate a general goal for any field of study is likely to encourage the emergence of contrarians.
I think it might be interesting to look at how scientists actually choose their fields of research, and how they navigate between what they can get funding to do and what they think is worth doing.
Root-Bernstein says that scientists have done some of their best work when, after doing conventional science for a while, they're tasked with solving a practical problem from a little outside their field.
I reckon you're wrong about political science and, well, most of your post. Next time I recommend getting familiar with what academic work already exists to answer your question of interest. Political science can be descriptive, which is useful too. It doesn't have to be normative.
Things are basically organised like that: description and prediction.
Etc etc. The question of boundaries between these fields is a matter for the history and philosophy of science. Historiographers have looked into it, but it's largely academic now.
Research reviews are enjoyable in that they inform you in broad strokes about the current state of a field and give clues as to what future research might be highly impactful. Going through many of these while being scope sensitive would be an excellent project and the results could help lots of people. 80k hours might be interested in a guest blog post if the results were written up well.
Thomas Kuhn argues in his book that one of the reasons physics is a more productive scientific field than social science is that social scientists try to orient their research toward solving society's problems, while physicists simply try to make physics progress without much thought about the application of physics to helping society progress.
Lots of physics innovations are harmful, and lots are helpful. However, social scientific research is, for the most part, helpful.
I can think of examples of physics that are harmful, and examples that are helpful.
I can think of examples of social science which are helpful, but none which are harmful (e.g. yesterday I read a paper using survival analysis to predict Boko Haram's terrorist behaviour, which is useful for averting terrorist casualties and for enhancing understanding).
On the other hand, there's MRI machines (helpful), but also torture devices (harmful)
Would you want to bet that there's no social science research that can be harnessed by repressive governments to keep their subjects in line?
Possible example: the Chinese government is proposing to introduce a "social credit" scheme that gives each citizen a credit rating based on all kinds of things, sometimes (I don't know how credibly) alleged to include reducing your creditworthiness if your friends post politically-disapproved-of things on social media. (The idea being to make people police one another's behaviour.) It's not perfectly clear whether they're actually going to do that, nor where they got the idea from if so -- but it seems like exactly the kind of thing that social science will help them judge the likely effectiveness of. (Is this relevant research? It's hard to tell given the low quality of the text there, which I assume is the result of automatic translation.)
It seems pretty bad to me, and everyone else I've heard talk about the idea has found it pretty chilling. One more way for an already quite totalitarian state to exercise control over its citizens. Of course there may be some selection bias -- maybe the people who talk about it tend to be the people who find it scary.
but also torture devices (harmful)
You don't need much physics knowledge to torture someone. You could say that electric-shock-based torture needs some physics knowledge about electricity, but I'm not sure that it's worse torture than various treatments done in the Middle Ages.
That's a very interesting essay indeed. Some thoughts:
You see again and again, that it is more than one thing from a good person. Once in a while a person does only one thing in his whole life, and we'll talk about that later, but a lot of times there is repetition. I claim that luck will not cover everything.
I agree, but I'm reminded of the story of the mathematician on great generals. That's not necessarily the case here, but it is something to think about.
Most of you in this room probably have more than enough brains to do first-class work. But great work is something else than mere brains.
Indeed, I'm reminded of Scott Alexander's comments on the Good Judgment Project: although there is a correlation with intelligence, the people who forecast correctly aren't necessarily the smartest people.
Once you get your courage up and believe that you can do important problems, then you can.
Courage is a non-technical term which I hate. Based on his wording, I think he means disagreeableness. I'd agree that excessive agreeableness is probably more detrimental than excessive disagreeableness, but it seems like maximal disagreeableness would also be bad.
Very clearly they are not because people are often most productive when working conditions are bad. One of the better times of the Cambridge Physical Laboratories was when they had practically shacks - they did some of the best physics ever.
So less funding is better? (Yes, sentence fragment, get over it) That is an interesting idea. Most people, including a couple of commenters on this post, would probably disagree with that. The drugmonkey blog spends many words complaining about funding issues. I'm wondering about possible confounds, assuming this is a real phenomenon. Is it the lack of funding, the lack of respect, or the isolation of having fewer researchers in one area (avoiding a too-many-cooks problem)?
What Bode was saying was this: "Knowledge and productivity are like compound interest." Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former.
I'm not sure how someone could've figured this out. Maybe this is owing to my lack of experience; maybe it's something you can only discover in your 50s or 60s, after several decades of watching people work on difficult problems. I'm not sure I trust this.
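For what it's worth, the arithmetic behind the compound-interest framing is simple enough, under the (strong, entirely assumed) model that output grows exponentially at a rate proportional to effort:

$$\frac{e^{1.1rt}}{e^{rt}} = e^{0.1rt} > 2 \quad\Longleftrightarrow\quad rt > \frac{\ln 2}{0.1} \approx 6.9$$

so a 10% effort edge does eventually compound past a factor of two; at roughly 17% compounding per year that takes about a 40-year career. Whether knowledge actually compounds like that is the part that would need evidence.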
On this matter of drive Edison says, "Genius is 99% perspiration and 1% inspiration." He may have been exaggerating, but the idea is that solid work, steadily applied, gets you surprisingly far.
I strongly doubt this is true. If the majority of possibilities are wrong, then improving search algorithms should have a bigger payoff than increasing productivity, although there is a proper balance that needs to be found.
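To put toy numbers on that (my framing, not Hamming's): if expected results scale roughly like (experiments per year) × (hit rate), then working 10% harder only buys a factor of 1.1 on the first term, while better targeting has far more headroom on the second; moving from a 1-in-1,000 hit rate to 1-in-100 is a factor of 10. That's the sense in which I'd expect search improvements to dominate, at least while the hit rate stays tiny.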
They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory.
These seem like orthogonal ideas, not necessarily opposed ones as he suggests.
And after some more time I came in one day and said, "If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?" I wasn't welcomed after that; I had to find somebody else to eat with! … The average scientist, so far as I can make out, spends almost all his time working on problems which he believes will not be important and he also doesn't believe that they will lead to important problems.
Oh! That's why you said I should read this.
I'm way off-task, so I'll come back to this after I finish some work. It is a fascinating read though. Thank you.
Courage is a non-technical term which I hate. Based on his wording, I think he means disagreeableness. I'd agree that excessive agreeableness is probably more detrimental than excessive disagreeableness, but it seems like maximal disagreeableness would also be bad.
Courage is an emotion. Emotions matter. Don't try to eliminate them from the equation just because you don't like them.
To clarify, that's a talk ("You and Your Research") given by Richard Hamming, a computing pioneer at Bell Labs. Paul Graham is just hosting it.
"Never tackle a problem of which you can be pretty sure that (now or in the near future) it will be tackled by others who are, in relation to that problem, at least as competent and well-equipped as you." - Edsger W. Dijkstra
I think pluralism is an important goal. It doesn't make sense for everyone in a field to try to achieve the same goal. It's good if different goals get pursued.
A lot of scientific breakthroughs don't happen because people planned to have a breakthrough. Looking at the scientific revolutions that Kuhn describes, they often led to asking new questions that weren't even available to ask beforehand.
I've been thinking lately about what is the optimal way to organize scientific research, both for individuals and for groups. My first idea: research should have a long-term goal. If you don't have a long-term goal, you will end up wasting a lot of time on useless pursuits. For instance, my rough sense is that the goal of economics should be “how do we maximize the productive output of society and distribute it in an equitable manner without preventing the individual from being unproductive if they so choose?”, the goal of political science should be “how do we maximize the government's ability to provide the resources we want while allowing individuals the freedom to pursue their goals without constraint toward other individuals?”, and the goal of psychology should be “how do we maximize the ability of individuals to make the decisions they would choose if their understanding of the problems they encounter was perfect?” These are rough, as I said, but I think they go further than the way most researchers seem to think about such problems.
Political science seems to do the worst in this area, in my opinion. Very little research seems to have anything to do with what causes governments to make correct decisions, and when researchers do tackle this, their evaluation of correct decision-making is often based on a very poor metric such as corruption. I think this is a major contributor to why governments are so awful, and yet very few political scientists seem to have well-developed theories, grounded in empirical research, on ways to significantly improve government. Yes, they have ideas on how to improve government, but they're frequently not grounded in robust scientific evidence.
Another area I've been considering is the search parameters for moving through research topics. An assumption I have is that the overwhelming majority of possible theories are wrong, such that only a minority of areas of research will result in something other than a null outcome. Another assumption is that correct theories are generally clustered: if you get a correct result in one place, there will be a lot more correct results in a related area than around any randomly chosen theory. There seem to be two major methods for searching through the landscape of possibilities. One method is to choose an area where you have strong reason to believe there might be a cluster nearby that fits with your research goals, then randomly test isolated points in that area until you get a major breakthrough, then go through the various permutations of that breakthrough until you have a complete understanding of that particular cluster of knowledge. Another method would be to take large chunks of research possibilities and just throw the book at them, basically. If you come back with nothing, then you can conclude that the entire section is empty. If you get a hit, you can then isolate the many subcomponents and figure out what exactly is going on. Technically I believe the chunking approach should be slightly faster than the random approach, but only by a slight amount, unless the random approach is overly isolated. If the cluster of most important ideas sits at the 10^-10 level of isolation and you isolate variables down at the 10^-100 level, then time will be wasted going back up to the correct level. You have to guess what level of isolation will result in the most important insights.
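Here's a minimal toy simulation of the two methods, under assumptions I'm inventing purely for illustration (a one-dimensional space of possibilities, a single contiguous cluster of true results, and a "chunk" experiment that costs the same as testing a single possibility). It's a sketch of the intuition, not a model of real research:

```python
import random

# Toy model (all parameters below are my own assumptions):
# the research space is N discrete possibilities, the true results form
# one contiguous cluster, and one "experiment" can test either a single
# possibility or (for the chunking method) a whole block at once.

def make_space(n=10_000, cluster_size=20, seed=0):
    rng = random.Random(seed)
    start = rng.randrange(n - cluster_size)
    return n, set(range(start, start + cluster_size))

def isolated_search(n, hits, rng):
    """Randomly test isolated possibilities until a hit, then scan outward."""
    cost = 0
    while True:
        cost += 1
        found = rng.randrange(n)
        if found in hits:
            break
    # map out the rest of the cluster by walking in both directions
    for step in (1, -1):
        j = found + step
        while 0 <= j < n:
            cost += 1
            if j not in hits:
                break
            j += step
    return cost

def chunked_search(n, hits, chunk=100):
    """Test whole blocks; only isolate within blocks that contain a hit."""
    cost = 0
    for lo in range(0, n, chunk):
        cost += 1                          # one experiment covers the block
        block = range(lo, min(lo + chunk, n))
        if any(i in hits for i in block):  # block came back non-null
            cost += len(block)             # isolate each member of the block
    return cost

if __name__ == "__main__":
    n, hits = make_space()
    rng = random.Random(1)
    trials = 200
    avg_isolated = sum(isolated_search(n, hits, rng) for _ in range(trials)) / trials
    print("isolated search, avg experiments:", round(avg_isolated))
    print("chunked search, experiments:     ", chunked_search(n, hits))
```

In this toy setup the chunked search usually comes out cheaper, but the margin depends entirely on the chunk size and on how expensive a test-the-whole-block experiment really is, which is basically the "what level of isolation" question above.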
One mistake, I think, is to isolate variables and then proceed through the universe of possibilities systematically, one at a time. If you get a null result in one place, it's likely that very similar research will also produce a null result. Another mistake I often see is researchers not bothering to isolate after they get a hit. You'll sometimes see thousands of studies on the exact same thing without any application of reductionism, e.g. the finding that people who eat breakfast are generally healthier. Clinical and business researchers seem to make this mistake of forgetting reductionism most frequently.
I'm also thinking through what types of research are most critical, but haven't gotten too far in that vein yet. It seems like long-term research (40+ years until major breakthrough) should be centered around the singularity, but what about more immediate research?