Short AI takeoff timelines seem to leave no time for some lines of alignment research to become impactful. But any research rebalances the mix of currently legible research directions that could be handed off to AI-assisted alignment researchers or early autonomous AI researchers whenever they show up. So even hopelessly incomplete research agendas could still be used to prompt future capable AIs to focus on them, whereas without such incomplete agendas we'd have to rely more fully on the AIs' own judgment. This doesn't crucially depend on assigning significant probability to long AI takeoff timelines, or on expected value in those scenarios driving the priorities.
The potential for AI to take up the torch makes it reasonable to keep prioritizing things that have no hope at all of becoming practical for decades (with human effort alone). How well AIs can be directed to advance a line of research further (and the quality of choices about where to direct them) depends on how well that line of research is already understood. Thus it can be important to make as much partial progress as possible in developing (and deconfusing) such lines of research in the years (or months!) before AI takeoff. This notably seems to apply to agent foundations / decision theory, in contrast with LLM interpretability or AI control, which more plausibly have short-term applications.
In this sense current human research, however far from practical usefulness, provides the data for aligning the early AI-assisted or AI-driven alignment research efforts. The judgment of the human alignment researchers working now makes it possible to formulate more knowably useful prompts for future AIs (possibly in the run-up to takeoff), prompts that nudge them toward actually developing even preliminary theory into practical alignment techniques.
That seems correct, but all of those still seem useful to investigate with AI, despite the relatively higher bar.