By the way, I never understood why it's supposed to be such a "trick" question. To "Why aren't you working on them?" the obvious answer is: diminishing returns. If a lot of people (or a lot of "total IQ") already go into problem X, then adding more to problem X might be less useful than adding more to problem Y, which is less important but also more neglected.
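As a minimal sketch of that diminishing-returns point (the logarithmic-returns assumption and all the numbers below are purely illustrative, not a model anyone here has endorsed):

```python
import math

def marginal_value(importance, current_researchers):
    """Value of adding one more researcher, assuming for illustration that
    total value scales logarithmically with the number of researchers."""
    return importance * (math.log(current_researchers + 1) - math.log(current_researchers))

# Hypothetical figures: X is 10x more important than Y but 100x more crowded.
x = marginal_value(importance=100, current_researchers=1000)
y = marginal_value(importance=10, current_researchers=10)

print(f"Problem X (important, crowded):        marginal value ~ {x:.2f}")
print(f"Problem Y (less important, neglected): marginal value ~ {y:.2f}")
# With these made-up numbers, the neglected problem Y wins at the margin.
```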
In the context of our community, people might interpret it as something like "why aren't more people working on mitigating X-risk instead of studying academic questions with no known applications", which is a good question, but it's not the same one. The key here is the meaning of "important". For most academics, "important" means "acknowledged as important in academia", or at best "intrinsically interesting". On the other hand, for EA-minded people "important" means "has an actual positive influence on the world". This difference in the meaning of "important" seems much more significant than blaming people for not choosing the most important question on a scale they already accept.
In The Structure of Scientific Revolutions, Thomas Kuhn makes the point that fields like theoretical physics, where scientists pursue issues "acknowledged as important in academia", often make much more progress than fields like economics, nutrition science, or social science, where researchers pursue topics whose answers are of high practical use.
In medicine I think it's great for EA reasons when researchers do basic work that moves our understanding of the human body forward even when that doesn't have direct applications.
For one thing, this observation is strongly confounded by other characteristics that differ between those fields. For another, yes, I know that often something that was studied just for the love of knowledge has tremendous applications later. And yet I feel that, if your goal is improving the world, then there is room for more analysis than "does it seem interesting to study that?". Also, what I consider "practical" is not necessarily what is normally viewed as "practical". For example, I consider it "practical" to research a question because it may have some profound consequences many decades down the line, even if that's only backed by broad considerations rather than some concrete application.
Relatedly, rereading this post was what prompted me to write this stub post:
I'm fairly concerned with the practice of telling people who "really care about AI safety" to go into AI capabilities research, unless they are very junior researchers who are using general AI research as a place to improve their skills until they're able to contribute to AI safety later. (See Leveraging Academia).
The reason is not a fear that they will contribute to AI capabilities advancement in some manner that will be marginally detrimental to the future. It's also not a fear that they'll fail to change the company's culture in the ways they'd hope, and end up feeling discouraged. What I'm afraid of is that they'll feel pressure to start pretending to themselves, or to others, that their work is "relevant to safety". Then what we end up with are companies and departments filled with people who are "concerned about safety", creating a false sense of security that something relevant is being done, when all we have are a bunch of simmering concerns and concomitant rationalizations.
This fear of mine requires some context from my background as a researcher. I see this problem with environmentalists who "really care about climate change", who tell themselves they're "working on it" by studying the roots of a fairly arbitrary species of tree in a fairly arbitrary ecosystem that won't generalize to anything likely to help with climate change.
My assessment that their work won't generalize is mostly not from my own outside view; it comes from asking the researcher how their work is likely to have an impact, and getting a response that either says nothing more than "I'm not sure, but it seems relevant somehow", or amounts to an argument with a lot of caveats like "X might help with Y, which might help with Z, which might help with climate change, but we really can't be sure, and it's not my job to defend the relevance of my work. It's intrinsically interesting to me, and you never know if something could turn out to be useful that seemed useless at first."
At the same time, I know other climate scientists who seem to have actually done an explicit or implicit Fermi estimate for the probability that they will personally soon discover a species of bacteria that could safely scrub the Earth's atmosphere of excess carbon. That's much better.
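(A minimal sketch of what such a Fermi estimate could look like; every factor below is invented purely for illustration, not a claim about any actual research program.)

```python
# Hypothetical Fermi estimate: probability that a given research program yields a
# safe, scalable carbon-scrubbing organism within the researcher's own career.
p_organism_possible  = 0.1    # such an organism is biologically possible at all
p_this_program_finds = 0.01   # this particular lab's approach is the one that finds it
p_safe_and_scalable  = 0.1    # it can be deployed safely at climate-relevant scale

p_direct_impact = p_organism_possible * p_this_program_finds * p_safe_and_scalable
print(f"Rough probability of direct impact: {p_direct_impact:.0e}")  # ~1e-4 with these numbers
```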
I haven't spoken about "love of knowledge". Nutrition scientists who want to know what they should eat are also seeking knowledge that they might love. I spoke about research which advances the field.
As far as I understand the research, the ability of people who judge grants to predict how much influence a result will have decades later is very poor. Even estimating effects at the time a paper is published is very hard.
Most published papers turn out to have no practical application. Most nutrition papers try to answer practical questions for which the field currently doesn't have the ability to provide good answers.
In Feynman's cargo cult speech, he talked about how Mr. Young did research on how to run psychology experiments on rats in a way that yields much better results. His research community unfortunately ignored him, to the point that it's no longer possible to locate his paper, but if he had been heard, his research would have had effects much larger than another random rat experiment with dubious methodology.
For the field of nutrition science to progress, we would likely need a lot of people like Mr. Young who think about how to actually make progress in learning about the field, even if that is, at the beginning, far from practical application.
What is the difference between love of knowledge and "advancing the field"? Most researchers seem to focus on questions that are some combination of (i) personally interesting to them, (ii) likely to bring them fame, and (iii) likely to bring them grants. It would be awfully convenient for them if that were literally the best estimate you could make of what research will ultimately be useful, but I doubt it is the case. Some research that "advances the field" is actively harmful (e.g. advancing AI capabilities without advancing understanding, improving the ability to create synthetic pandemics, creating other technologies that are easy to weaponize by bad actors, creating technology that shifts economic incentives towards massive environmental damage...)
Love of knowledge can drive you to engage with questions that aren't addressable with the current tools of a field in a way that brings the field forward.
Work that's advancing the field is work on which other scientists can build.
In physics, scientists use significance thresholds that are much more stringent than 5%. If you told nutrition researchers that they could only publish findings at 5 sigma, they would be forced to run very differently structured studies. Those studies would provide answers that are a lot less interesting, but to the extent that researchers manage to produce findings, those findings would be reliable and would allow other researchers to build on them.
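As a rough illustration of how much more stringent that bar is (a minimal sketch; the scipy dependency and the one-sided convention are my assumptions here):

```python
from scipy.stats import norm

p_5sigma = norm.sf(5)   # one-sided tail probability beyond 5 standard deviations, ~2.9e-7
p_conventional = 0.05   # the conventional p < 0.05 threshold

print(f"5-sigma threshold:      p ~ {p_5sigma:.1e}")
print(f"Conventional threshold: p = {p_conventional}")
print(f"The 5-sigma bar is roughly {p_conventional / p_5sigma:,.0f} times harder to cross by chance")
```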
I'm not saying that this is the only way for nutrition science to move forward, but I think the field would need to think much harder about how progress can be made than by running the kind of studies it currently runs.
Safety concerns are valid, and an increase in capability in certain fields like AI might not be desirable for its own sake.
I think we probably use the phrase "love of knowledge" differently. The way I see it, if you love knowledge then you must engage with questions addressable with the current tools in a way that brings the field forward, otherwise you are not gaining any knowledge, you are just wasting your time or fooling yourself and others. If certain scientists get spurious results because of poor methodology, there is no love of knowledge in it. I also don't think they use poor methodology because of desire for knowledge at all: rather, they probably do it because of the pressure to publish and because of the osmosis of some unhealthy culture in their field.
Or, "it's too hard". Or, "I don't think I am good enough". Or plenty of other excuses that are not necessarily a good reason for not doing the thing.
The point is not to have an answer, but to ask the question and to check.
You are not smarter for having the answer, you are smarter for asking the question.
I agree with the general principle; it's just that my impression is that most scientists have asked themselves this question and made more or less reasonable decisions regarding it, with respect to the scale of importance prevalent in academia. From my (moderate amount of) experience, most scientists would love to crack the biggest problem in their field if they think they have a good shot at it.
So, I'm not actually sure. I'm taking at face value that there *was* a guy who went around asking the question, and that it was fairly unusual and provoked weird enough reactions to become somewhat mythological. (Although I wouldn't be that surprised if the mythology turned out to be false.)
But it's not that surprising to me that many people would end up working on some random thing because it was expedient, or without having reflected much on what they should be working on at all. This seems to be the way people are by default.
That's my understanding too, I just wouldn't be that surprised if that story went through a few games of telephone before it reached us.
I think the first version that reached me through the rationality sphere had Hamming asking all the questions on the same day.
A bit later there was a local rationalist who got a different version of the story through a family connection and a Bell Labs source. In that story, Hamming asked "What's the most important question?" in week 1, "What are you working on?" in week 2, and "Why isn't that the same?" in week 3.
I have seen the “Hamming question” concept applied to domains other than science (example 1, example 2, example 3, example 4, example 5, example 6, example 7).
I think that’s a mistake.
First, generally, it’s a mistake for terminology-dilution reasons: if you significantly broaden the scope of a term, you obscure differences between the concept or thing the term originally referred to, and other (variously similar) things; and you integrate assumptions about proper categories, and about similarities between their (alleged) members, into your thinking and writing, without justifying those assumptions or even making them explicit. This degrades the quality of your thought and your communication.
Second, specifically, it’s a mistake because science (i.e., academic or quasi-academic [e.g., corporate] research) differs from other domains (such as those discussed in my examples) in several important ways:
In science, if you’re trained in a field, then there’s no particular reason (other than—in principle, contingent and changeable—practical limitations such as funding) why you can’t work on just about any problem in that field. This is not the case in many other domains.
In science, there is generally no urgency to any particular problems or research areas; if everyone in the field works on one (ostensibly important) problem, but neglects another (ostensibly less important) problem, well, so what? It’ll keep. But in most other domains, if everyone works on one thing and neglects everything else, that’s bad, because all that “everything else” is often necessary; someone has got to keep doing it, even if one particular other thing is, in some sense, “more important”.
In science, you’re (generally) already doing inquiry; the fruit of your work is knowledge, understanding, etc. So it makes sense to ask what the “most important” problem is: presumably, it’s the problem (of those problems which we can currently define in a meaningful way) that, if solved, would yield the most knowledge, the greatest understanding, the most potential for further advancement, etc. But in other fields, where the goals of your efforts are not knowledge but something more concrete, it’s not clear that “most important” has a meaning, because for any goal we identify as “important”, there are always “convergent instrumental goals” as far as the eye can see, explore/exploit tradeoffs, incommensurable values, “goals” which are essentially homeostatic or otherwise have dynamics as their referents, etc., etc.
So while I can see the value of the Hamming question in science (modulo the response linked in my other comment), I should very much like to see an explicit defense and elaboration of applying the concept of the “Hamming question” to other domains.
I actually do basically agree with your first point. I made this stub because this is a concept frequently tossed around that I wanted people to be able to look up on LW easily... rather than because the jargon is optimal-according-to-me. In the most recent CFAR handbook the question is phrased:
"At any given time in our lives, it’s possible (though not always easy!) to answer the question, “What is the most important problem here, and what are the things that are keeping me from working on it?” We refer to this as “asking the Hamming question,” as a nod to mathematician Richard Hamming."
And I think this is, fairly importantly, a different question than the one Hamming was asking. Moreover, the rationality community will actually need the original Hamming Question from time to time, referring specifically to scientific fields in which you have extensive training. (Or, at least, if we didn't need the Actual Science Hamming Question that'd be quite a bad sign). So yeah, I think the terminology dilution is pretty important.
Moreover, the rationality community will actually need the original Hamming Question from time to time, referring specifically to scientific fields in which you have extensive training. (Or, at least, if we didn't need the Actual Science Hamming Question that'd be quite a bad sign).
This seems plausible. Has this happened so far?
It happens pretty frequently in the x-risk community, and I think in the non-x-risk EA community as well, although I don't keep as close tabs on it.
(I think the question is asked both in terms of literal research done, and infrastructure-that-needs-building. The infrastructure is a bit different from the research Hamming was pointing at, but I think it fits more closely within Hamming's original paradigm than the personal-development CFAR framing does. I think it is fair to generalize the Hamming Question to "in my field of expertise, where I can reasonably expect myself to have a deep understanding of the situation, what are the most important things that need doing, and should I be working on them?")
(My estimate is that there is something on the order of 10-50 people asking the question in the literal research sense in EA space. That estimate is based on a few people literally saying "I asked myself what the most important problems were and how I could work on them", and some reading between the lines of how other people seem to be approaching problems and talking about them)
Based on the other comments, I feel like it is worthwhile to point out that Hamming is talking about how to be a successful scientist, as measured by things like promotions, publications, and reputation.
He is not talking about the impact of the problems themselves. From the quoted section, emphasis mine:
It's not the consequence that makes a problem important, it is that you have a reasonable attack. That is what makes a problem important. When I say that most scientists don't work on important problems, I mean it in that sense.
So it looks like we're trying to apply the question one entire step before where Hamming did. For example, there weren't - and if I read Hamming right, still aren't - reasonable attacks on the alignment problem. The prospective consequences are just so great that we had to consider what is reasonable in a relative sense, and try anyway.
It feels like rationality largely boils down to the search for a generative rule for reasonable attacks.
This doesn't quite feel right to me. From another section:
Age is another factor which the physicists particularly worry about. They always are saying that you have got to do it when you are young or you will never do it. Einstein did things very early, and all the quantum mechanic fellows were disgustingly young when they did their best work. Most mathematicians, theoretical physicists, and astrophysicists do what we consider their best work when they are young. It is not that they don't do good work in their old age but what we value most is often what they did early. On the other hand, in music, politics and literature, often what we consider their best work was done late. I don't know how whatever field you are in fits this scale, but age has some effect.
But let me say why age seems to have the effect it does. In the first place if you do some good work you will find yourself on all kinds of committees and unable to do any more work. You may find yourself as I saw Brattain when he got a Nobel Prize. The day the prize was announced we all assembled in Arnold Auditorium; all three winners got up and made speeches. The third one, Brattain, practically with tears in his eyes, said, "I know about this Nobel-Prize effect and I am not going to let it affect me; I am going to remain good old Walter Brattain." Well I said to myself, "That is nice." But in a few weeks I saw it was affecting him. Now he could only work on great problems.
So this is clearly not about professional success, because he points to professional success as a thing that kills the kind of greatness he's trying to cultivate in people.
My impression is that he was genuinely pointing at "important" meaning "things that will have an impact", just that tractability matters as much as importance-if-you-solve-the-problem, which is why "teleportation" isn't a good project.
I read this section completely differently.
He points to thinking about the important problems as causing success. When people change what they are doing, then they don't continue to have it:
In the first place if you do some good work you will find yourself on all kinds of committees and unable to do any more work.
Carrying on from the end of your section:
When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn't the way things go. So that is another reason why you find that when you get early recognition it seems to sterilize you. In fact I will give you my favorite quotation of many years. The Institute for Advanced Study in Princeton, in my opinion, has ruined more good scientists than any institution has created, judged by what they did before they came and judged by what they did after. Not that they weren't good afterwards, but they were superb before they got there and were only good afterwards.
The talk is about things that cause people to do great work. When those causal factors change, the work output also changes. He goes on to cover other things which are about professional success:
Lastly, he is pretty specific about his motivations (emphasis mine):
I think it is very definitely worth the struggle to try and do first-class work because the truth is, the value is in the struggle more than it is in the result. The struggle to make something of yourself seems to be worthwhile in itself. The success and fame are sort of dividends, in my opinion.
So he is specifically talking about professional success in science. But - things like the rationality project and EA are good candidates for other fields to which the advice could be applied, especially in light of how important science is to them.
I agree that Hamming is talking about how to be a successful scientist, but I think "as measured by things like promotions, publications, and reputation" gives the wrong impression: that Hamming's talking about how to optimize for personal success as opposed to overall impact. But the "have a reasonable attack" criterion is necessary for optimizing impact on the world, too, and I don't think Hamming would have changed his advice if he'd been convinced that (e.g.) the way to maximize promotions, publications, and reputation is to get better at self-promotion or to falsify your results or something.
I think that personal success is the correct impression:
I noticed a couple of months later he was made the head of the department. I noticed the other day he was a Member of the National Academy of Engineering. I noticed he has succeeded. I have never heard the names of any of the other fellows at that table mentioned in science and scientific circles.
Notice he doesn't talk about all the amazing things that were solved; he talks about lab positions and Nobel Prizes and getting equations named after himself.
I expect that Hamming would view having an impact on the world as being a good reason to choose going into science instead of law or finance, but once that choice is made being great at science is the reasonable thing to do.
To be clear, I don't think he viewed reputations and promotions as the goal; I believe he viewed them as reasonable metrics indicating that he was on the right track toward doing great science.
Rereading the original text, I think he is talking about all three of (1) doing something that has a substantial impact on the world, (2) doing something that brings you major career success, and (3) doing something that turns you into a better scientist and a better person. (The last of those is mostly not very apparent in what he says, but there's this: "I think it is very definitely worth the struggle to try and do first-class work because the truth is, the value is in the struggle more than it is in the result. The struggle to make something of yourself seems to be worthwhile in itself. The success and fame are sort of dividends, in my opinion.")
Yes, the comparative advantage answer is a compelling one, when it's not an excuse based on motivated cognition.
Quoted from the 2016 CFAR handbook:
Richard Hamming was a mathematician at Bell Labs from the 1940’s through the 1970’s who liked to sit down with strangers in the company cafeteria and ask them about their fields of expertise. At first, he would ask mainly about their day-to-day work, but eventually, he would turn the conversation toward the big, open questions—what were the most important unsolved problems in their profession? Why did those problems matter?
What kinds of things would change when someone in the field finally broke through? What new potential would that unlock? After he’d gotten them excited and talking passionately, he would ask one final question: “So, why aren’t you working on that?”
Hamming didn’t make very many friends with this strategy, but he did inspire some of his colleagues to make major shifts in focus, rededicating their careers to the problems they felt actually mattered.
[Hamming] did inspire some of his colleagues to make major shifts in focus, rededicating their careers to the problems they felt actually mattered.
Do you have more info on this? I’d be very curious to hear about some specific examples!
Yesterday I read through Hamming's talk, "You and Your Research", which explores his overall philosophy. This anecdote, I think, is most relevant (I'm probably going to edit this into the main post):
Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, "Do you mind if I join you?" They can't say no, so I started eating with them for a while. And I started asking, "What are the important problems of your field?" And after a week or so, "What important problems are you working on?" And after some more time I came in one day and said, "If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?" I wasn't welcomed after that; I had to find somebody else to eat with! That was in the spring.
In the fall, Dave McCall stopped me in the hall and said, "Hamming, that remark of yours got underneath my skin. I thought about it all summer, i.e. what were the important problems in my field. I haven't changed my research," he says, "but I think it was well worthwhile." And I said, "Thank you Dave," and went on. I noticed a couple of months later he was made the head of the department. I noticed the other day he was a Member of the National Academy of Engineering. I noticed he has succeeded. I have never heard the names of any of the other fellows at that table mentioned in science and scientific circles. They were unable to ask themselves, "What are the important problems in my field?"
If you do not work on an important problem, it's unlikely you'll do important work. It's perfectly obvious. Great scientists have thought through, in a careful way, a number of important problems in their field, and they keep an eye on wondering how to attack them. Let me warn you, "important problem" must be phrased carefully. The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It's not the consequence that makes a problem important, it is that you have a reasonable attack. That is what makes a problem important. When I say that most scientists don't work on important problems, I mean it in that sense. The average scientist, so far as I can make out, spends almost all his time working on problems which he believes will not be important and he also doesn't believe that they will lead to important problems.
That seems like a startlingly weak anecdote (especially so given that it’s the only one we’ve seen). From this quote, it seems like Hamming—contrary to the claim Elo quoted—in fact inspired none of his colleagues to “make major shifts in focus” or to “rededicat[e] their careers to the problems they felt actually mattered”.
The one colleague who was, allegedly, inspired by Hamming’s questions in some way, explicitly said (we are told) that he did not shift his research focus! He ended up being successful… which Hamming attributes to his own influence, for… some reason. (The anecdotal evidence provided for this causal sort-of-claim is almost textbook poor; it’s literally nothing more than post hoc, ergo propter hoc…)
Do we have any solid evidence, at all, that there is any concrete, demonstrable benefit, or even consequence, to asking the “Hamming question”? Any case studies (with much more detail, and more evidential support, than the anecdote quoted above)? So far, it seems to me that the significance attached to this “Hamming question” concept has been far, far out of proportion to its verified usefulness…
Edit: Corrected wording to make it clear Elo was quoting a source.
[for clarity, we were both quoting other sources]
My opinion, from trying the exercise several times over the course of the last few years, is that it's a valuable tool to help me see what I'm ignoring or what I need to deal with.
[for clarity, we were both quoting other sources]
Indeed, my apologies—I read hastily, and didn’t spot the quoting without the quotation styling. I’ve corrected the wording in the grandparent.
We encourage participants to occasionally ask “the Hamming question.”
Checking in on the match between your beliefs and your actions is a reasonable thing to do a few times a year. It can lead to increased motivation, positive shifts to better strategies, and a clearer sense of where your deepest priorities lie.
Sometimes the most important question has less importance (say, 20 percent of the total) than the sum of less important questions (say, 8 smaller problems at 10 percent each, adding up to 80 percent). For example, if everybody works on AI safety, some smaller x-risks could be completely neglected.
The commons effect of existential risks may complicate that example. (Shorter-term existential risks make longer-term existential risks less impactful until the shorter-term ones are solved.)
This is a stub post, mostly existing so people can easily link to a post explaining what the Hamming question is. If you would like to write a real version of this post, ping me and I'll arrange to give you edit rights to this stub.
For now, I am stealing the words from Jacobian's event post:
A transcript of Hamming's extensive 1986 talk, "You and Your Research", touches upon several elements of Hamming's philosophy, and includes this anecdote about the canonical "Hamming Question":
Vika Krakovna wrote up a report about how CFAR applies the technique in some of their workshops: