It's not clear what you mean by "for AI alignment". "What leads to quicker and deeper insights?" is still not a good enough question, because there may be different purposes that you wish to apply these insights to. Some possible options: 1) find flaws in models apparently held by people at large labs (OpenAI etc.) and convince them to change their course of action; 2) demonstrate your insights during interviews to be hired by OpenAI/DeepMind/etc.; 3) start an AI alignment startup around a specific idea; 4) start an alignment startup without a specific alignment idea (though you still need to be able to distinguish good from bad ideas when selecting projects, hiring, etc.); 5) work on AI governance, policy design, or a startup which is not overtly about AI alignment but attaches to x-risk models in important ways; etc. These different pragmatic goals lead to different optimal ratios of "explore vs. exploit" and different blends of topics and disciplines to study.
I suspect that you are closest to goal 1), but I've recently become convinced that this is a very ineffectual goal, because it's close to impossible to convince large labs (where billions of dollars are already involved, which notoriously makes changing people's minds much more difficult) of anything from the outside. So I don't even want to discuss the optimal ratio for this goal.
For goal 2), you should find out which deep base-knowledge models the hiring manager cherishes and learn those. E.g., if the hiring manager likes ethics, epistemology, or philosophy of science, you had better learn some of those fields and demonstrate your knowledge of these models during the interview. But if the hiring manager is not very knowledgeable about these fields themselves, this deep knowledge will be to no avail.
Then, if you are already at a large lab, it's too context-dependent: organisational politics, tactical goals such as completing a certain team project, and the models that your teammates already possess all play a role in deciding what, when, and how you should learn at an organisation.
For goal 3), the bias towards "greater depth first" should definitely be higher than for 4). But for 4), you should have some other exceptional skills or resources to offer (which is off-topic for this question, though).
For 5), pretty clearly you should mostly backchain.
Thank you, this makes sense!
(Right now I'm on Pearl's Causality)