Some options I can think of:
It's also possible that some of this work can be sped up by putting researchers working on these problems in contact with each other. My understanding is that it's generally more effective for people to bounce ideas off other smart people than to work alone.
My guess is that forming small teams, each consisting of a skilled mathematician, a skilled programmer, a skilled ML theorist, and a skilled manager, would be a good way to make progress. Form a hundred or a thousand such teams, on the assumption that maybe 1% of them will pay off.
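To spell out the arithmetic behind that last suggestion (the 1% figure and the independence assumption are just illustrative guesses, not established numbers): if each team independently has probability $p = 0.01$ of paying off, then the chance that at least one of $N$ teams pays off is

$$P(\text{at least one success}) = 1 - (1 - p)^N,$$

which comes to roughly 63% for $N = 100$ and over 99.99% for $N = 1000$. Team outcomes won't really be independent (they'd share methods, literature, and blind spots), so treat these as rough best-case numbers.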
I think this is a good idea, but it doesn't quite feel like an answer to the question (at least as I understood it). That is, it amounts to "get a bunch of serial researchers working in parallel and hope one of them succeeds", a concern I think So8res articulated in AI alignment researchers don't (seem to) stack.
I do think small teams with a few different skill sets working together are probably a good way to go in many cases. Your comment reminds me of Wentworth's team structure in MATS Models, although that one only had 3 people.
Things this question is assuming, for the sake of discussion:

- The hardest parts of AI alignment are theoretical.
- Those parts will be critical for getting AI alignment right.
- The biggest bottlenecks to theoretical AI alignment are "serial" work, as described in this Nate Soares post. For quick reference: serial work is the kind that seems to require "some researcher retreat to a mountain lair for a handful of years" in a row.
Examples Soares gives are "Einstein's theory of general relativity, [and] Grothendieck's simplification of algebraic geometry".
The question: How can AI alignment researchers parallelize this work?
I've asked a version of this question before, without realizing at the time that serial work is a core part of it.
This thread is for brainstorming, collecting, and discussing techniques for taking the "inherently" serial work of deep mathematical and theoretical mastery... and making it parallelizable.
I am aware this could seem impossible, but seemingly-impossible things are sometimes worth brainstorming about, just in case, as long as (as is true here) we don't actually know they're impossible.
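One loose way to formalize the serial-vs-parallel framing (borrowing Amdahl's law from parallel computing; this analogy is mine, not from Soares's post): if only a fraction $f$ of the total work can be parallelized, then the maximum speedup from $N$ researchers working in parallel is

$$S(N) = \frac{1}{(1 - f) + f/N} \;\xrightarrow{\;N \to \infty\;}\; \frac{1}{1 - f}.$$

On this framing, adding researchers can never buy more than a factor of $1/(1-f)$, so the question is really asking: what techniques increase $f$, i.e., convert work that currently looks serial into work that parallelizes?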