A quick example of how paper reading works in my research:
2017: CycleGAN comes out and produces cool pictures of zebras and horses. I skim the paper because it seems cool and file away the concept, but don't make an effort to replicate the results because, in my experience, GANs are obnoxious to train.
2018: "Which Training Methods for GANs do actually Converge?" comes out. Even though it contains the crucial insight that makes GANs trainable, I don't read it because it's not very popular - I never see it.
2019: StyleGAN comes out and cites "Which Training Methods for GANs do actually Converge?". I read both papers, mostly forget StyleGAN because it seems like a "we have big gpu do good science" paper, but am very impressed with "Which Training Methods for GANs do actually Converge?" and take a day or two to replicate it.
2020?: Around this time I also read all of gwern's anime GAN training exploits, and update my priors towards "maybe large GANs are actually trainable."
2022: I need to convert unlabeled DXA images into matching radiographs as part of a larger project. I'm generally of the opinion that GANs aren't actually useful, but the problem matches the one solved by CycleGAN exactly, and I'm out of options. I initially try the open-source CycleGAN codebase, but as expected it's wildly unstable and miserable. I recall that "Which Training Methods for GANs do actually Converge?" had pretty strong theory backing up gradient penalties on the discriminator, and that I was able to replicate its experiments, so I dust off my replication code, verify that it still works, add a cycle consistency loss, and am able to translate my images. Image translator in hand, I slog back into the larger problem.
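For the curious: the gradient penalty in question (the R1 regularizer from "Which Training Methods for GANs do actually Converge?") is only a few lines. Here's a minimal sketch in PyTorch - the function name, `gamma` value, and discriminator interface are illustrative, not taken from any of these papers' actual code:

```python
# Sketch of the R1 gradient penalty (Mescheder et al., 2018):
# penalize the squared gradient norm of the discriminator on real samples.
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    """Return 0.5 * gamma * E[ ||grad_x D(x)||^2 ] over real samples x."""
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    # Gradient of the summed discriminator scores w.r.t. the real inputs;
    # create_graph=True lets the penalty itself be backpropagated through.
    (grad,) = torch.autograd.grad(
        outputs=scores.sum(), inputs=real_images, create_graph=True
    )
    return 0.5 * gamma * grad.pow(2).flatten(1).sum(1).mean()
```

In training, this term is simply added to the discriminator's loss on real batches; the cycle consistency loss I mention is the usual CycleGAN L1 reconstruction term and is independent of this penalty.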
--
What does this have to do with the paper reading cargo cult?
- Papers that you can replicate by downloading a codebase are useful, but papers that you can replicate from the text alone, without seeing code, are solid gold. If there are any paper reading clubs out there that ask the presenter to replicate the results without looking at the authors' code, I would love to join - not just because the replication itself is valuable, but because it would narrow down the kinds of papers presented in a useful way.
- Reading all the most hyped GAN papers, which is basically what I did, would probably not get me an awesome research result in the field of GANs. However, it served me pretty well as a researcher in an adjacent field. In particular, the obscure but golden insight eventually filtered its way into the citations of a hyped, fluffy flagship paper. For alignment research, hanging out in a few paper reading groups that are distantly related to alignment should be useful, even if an alignment reading group itself isn't.
- I had to read so many papers to come across three useful ones for this problem. However, I retain the papers that haven't been useful yet - there's a decent chance I've already read the paper I'll need to overcome the next hurdle.
- This type of paper reading, where I gather tools to engineer with, initially seems less relevant for fundamental concepts research like alignment. However, your general relativity example suggests that Einstein also had a tool gathering phase leading up to relativity, so ¯\_(ツ)_/¯
> If there are any paper reading clubs out there that ask the presenter to replicate the results without looking at the author's code, I would love to join
This is something that I would be interested in as well. I've been attempting to reproduce "MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention" from scratch, but I'm finding it difficult, partly due to my present lack of experience with reproducing DL papers. The code for MQTransformer is not available, at least to my knowledge. There are also several other papers that use LSTM or Transformer architectures for forecasting that I hope to reproduce and/or employ on Metaculus API data in the coming few months. If reproducing ML papers from scratch and replicating their results (especially DL for forecasting) sounds interesting to anyone (perhaps I could publish these reproductions with additional tests in ReScience C), please DM me, as I would be willing to collaborate.
Hi there - one of the authors of MQTransformer here. Feel free to send us an email and we can help you with this! (Our emails should be on the paper - if you can't find them, let us know here and we'll add them.)
This is great; thank you! I will send an email in the coming month. Also, a quick clarification: what's the relation between "MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention" and "MQTransformer: Multi-Horizon Forecasts with Context Dependent Attention and Optimal Bregman Volatility"?
Looking forward to it!
There's no difference in the actual model (or its architecture) - but we realized that the "trades" (this can be made more precise if you'd like) that MQT would be a martingale against encompass a large class of volatility definitions, so we gave an example of a novel volatility measure (or trade) that isn't the classical definition and showed MQT works well against it (Theorem 8.1 and Eqn. 14).
> This type of paper reading, where I gather tools to engineer with, initially seems less relevant for fundamental concepts research like alignment. However, your general relativity example suggests that Einstein also had a tool gathering phase leading up to relativity, so *shrugs*.
An advisor of mine used to remark that working on applications can lead to directions related to more fundamental research. It can happen something like this: 1. Try to apply a method to a domain; 2. Realize the method's shortcomings; 3. Find and attempt solutions to address the shortcoming; 4. If the shortcoming isn't well addressed, or still has room for improvement despite step 3, then you _might_ have a fundamental problem on your hands. Note that while this provides direction, it doesn't guarantee that the direction is one that is solvable in the next t months.
Excellent comment. Not everyone needs to push the envelope of the field whose papers they read. Applications are just as important as the foundational theory (collectively even more so!), and replication work is already the biggest step/hurdle towards an application, even a toy one, in a more applied field or problem.
I wouldn't mind that kind of reading club, either :)
In support of this, I remember Geoff Hinton saying at his Turing award lecture that he strongly advised new grad students not to read the literature before trying, for months, to solve the problem themselves.
Two interesting consequences of the "unique combination of facts" model of invention:
I agree with these. Relatedly, if you work in a team, I think it is far more important to read papers that no one else on your team has read than to read papers that everyone on the team has read. Put that way it is obvious, but many research groups welcome new members with a well-meaning folder of 30+ PDFs which they claim will be useful.
I used to think this, but if you're a well-calibrated Bayesian, then updating on new papers shouldn't cause you to produce worse research insights, because you shouldn't overconfidently buy into falsities that are holding the field back.
I've found by and large that knowing what ground others have already trodden is very helpful. I've tried not knowing what people work on, and usually I end up reinventing the wheel poorly in slow motion instead of finding any new insights. When I critically analyze the literature I get much more mileage.
I have come across various people (including my past self) who meet up regularly to study, e.g., Alignment Forum posts and discuss them. This helps people bond over their common beliefs, fears, and interests, which I think is good, but in no way is this ever going to lead anyone to a solution to the alignment problem. In this post I'll reason about why this doesn't help, and what I think we should do instead.
The cult
Reading good papers can be fun. You learn something interesting and, if the topic is hard but well presented by the authors, you get a kick from finally understanding something complicated. But is what you learned actually useful for the problem at hand? What is the question that drove you to read this paper in such detail?
Yes, you need to regularly skim papers for fun, so you get an idea of what's out there and where to look when you need something. You also need to absorb terminology and good writing practice, so you can communicate your own research. Yet, I believe that fun-reading should only occupy a tiny fraction of your time, as you have more important things to do (see next section).
Despite the relative unimportance of fun-reading, paper reading groups tend to focus heavily on it. They are more of a social gathering than a mechanism to boost progress. Here is a typical pattern of Cargo Cult paper reading:
Step 3 has some merit (if done right) but everything else is Cargo Cult. At best, it helps you stay up to date with the most mainstream papers in your field. But to do this, you don't need to read all of these papers in detail.
The democratic paper selection process is in fact the opposite of what you should do. Most of the time, it will promote whatever is most mainstream and, consequently, promote directions that most other people are already working on. This is a way to imitate research - not to do it. Hence, a Cargo Cult.
Actual science
To drive scientific progress means to do something that nobody else has ever done before[1]. This means that your idea or line of research tends to seem strange to others (at first sight). At the same time, it also tends to seem obvious to you - it's just the natural next step when you take seriously what you've learnt so far.
Before I properly reconcile "strange" and "obvious" here, let me warn you of a trap: It is very easy to have an idea that seems obvious to you, but strange to others, when you are delusional. Especially when you are good at arguing, you can easily make yourself believe that you are right and everybody else is just not seeing it. Beware that trap.
The reason why your discovery/invention/insight will appear obvious to you is because it is a (short sequence of) small inferential step(s) from what you already know, and most of what you already know is widely accepted by the community. Ideally you also have more than one line of inference from common knowledge to your new discovery.
The reason why your discovery/invention/insight often appears strange to others is because you start reasoning from a particular subset of common knowledge. That is, while each fact that you start from is known by some people, only you know this particular combination of facts, and (critically!) you have dense knowledge about them[2].
Some examples:
While all of this progress was made possible by building on other people's insights, the actual step of progression comes from people pursuing odd perspectives and sticking to the facts. What we see as breakthroughs now were all gradual, small steps that didn't follow the mainstream.
So instead of reading papers for fun, start at the beginning. What is it that you know right now? How would you frame the alignment problem? You probably don't see a clear path from wherever you are to a solution (if you do, please tell!). But you can work towards it. Even if you don't know the shape of the maze, you can at least try to walk towards the center where the trophy is kept.
The point is not that this way you will succeed. The point is that this way somebody has a chance to succeed. We cover a lot more territory when each of us works towards the goal from their current starting position, instead of most of us working towards the starting positions of a select few and then trying to continue from there.
If you don't immediately see a direction that might be fruitful, think about it for five minutes. After that, if all directions that you can think of still seem equally improbable to lead to any useful result, pick a simpler problem and practice.
As my former supervisor liked to say: Where does the thing want to go? Play with it, and you'll learn.
Caveats
Please keep gathering
Having said all this, I want to highlight that gathering and discussing alignment research is generally good. Not only are social gatherings essential for our mental health, but you also still have that third step in the paper reading process, where we help each other out to understand something. We may also inspire each other to creative thought, or tear down bad ideas.
Sometimes, you do need to read a lot
Sometimes you find that a whole branch of, e.g., mathematics exists that seems very useful for what you are doing. For example (if I remember correctly), Einstein realised that differential geometry was something he needed to understand so he could formulate his intuitions mathematically. Hence, he learned it, practiced it, and made it his own. In such situations it is very useful to work through a good book.
Note that this is different from fun-reading. Here, you already have a research direction of your own. This direction requires you to expand or solidify your knowledge on a specific subject, and somebody happened to have written a book/paper about it.
This is not new
I think nearly everything that I've written here has appeared in one way or another in the Sequences. I just can't find it, and it might be useful for some to have it framed with the paper reading group phenomenon as an example.
One could argue that I should add "and published it" here. But then, dangerous things should not be published, and one can also view things as "doing research for yourself". I am avoiding this complication here, as it is essentially off topic.
I use the term "fact" here because it is easy to visualise sets and subsets when you talk about concrete things like "facts". But dense knowledge is not a collection of facts.
Technically, you have to apply the curl operator to transform a subset of the equations into wave form. Doing this was much harder back then, because vector notation and the curl operator had not been invented yet.
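In modern notation, the curl trick is short. A sketch of the standard vacuum derivation (for the electric field; the magnetic case is symmetric):

```latex
% In vacuum (\rho = 0, \mathbf{J} = 0), take the curl of Faraday's law:
\nabla \times (\nabla \times \mathbf{E})
  = -\frac{\partial}{\partial t} (\nabla \times \mathbf{B})
% Use the identity
%   \nabla \times (\nabla \times \mathbf{E})
%     = \nabla(\nabla \cdot \mathbf{E}) - \nabla^2 \mathbf{E},
% Gauss's law \nabla \cdot \mathbf{E} = 0, and Ampere's law
% \nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \, \partial \mathbf{E}/\partial t:
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}
```

This is a wave equation with propagation speed $1/\sqrt{\mu_0 \varepsilon_0} = c$.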