Things this question is assuming, for the sake of discussion:

  • The hardest parts of AI alignment are theoretical.
  • Those parts will be critical for getting AI alignment right.
  • The biggest bottlenecks to theoretical AI alignment are "serial" work, as described in this Nate Soares post. For quick reference: serial work is the kind that seems to require "some researcher retreat to a mountain lair for a handful of years" in a row.

Examples Soares gives are "Einstein's theory of general relativity, [and] Grothendieck's simplification of algebraic geometry".

The question: How can AI alignment researchers parallelize this work?

I've asked a version of this question before, without realizing that this is a core part of it.

This thread is for brainstorming, collecting, and discussing techniques for taking the "inherently" serial work of deep mathematical and theoretical mastery... and making it parallelizable.

I am aware this could seem impossible, but sometimes seemingly-impossible things are worth brainstorming about just in case, at least when (as is true here) we don't know they're impossible.


2 Answers

Brendan Long


Some options I can think of:

  • Optimize your researcher's single-threaded performance by offloading as many unnecessary tasks as possible to different workers. For example, mow the researcher's lawn for them, cook for them, etc.
  • Speed up any learning parts by providing experts (i.e. if you want to know "Why is X?", find an expert on X to answer questions about it instead of needing your researcher to track down the answer from less-personalized sources). Or have assistants fetch and collate personalized explanations even if they're not experts. (A literal code sketch of this offloading pattern follows the list.)
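To make the computing analogy concrete, here's a minimal sketch in Python (my illustration; `chore` and `serial_theory_step` are placeholder names, and the sleep timings are stand-ins):

```python
# A literal rendering of the analogy above: keep the "researcher" on the
# main thread, and offload chores to a pool of worker threads.
from concurrent.futures import ThreadPoolExecutor
import time

def chore(name: str) -> str:
    time.sleep(0.1)  # stand-in for mowing the lawn, cooking, etc.
    return f"{name} done"

def serial_theory_step() -> str:
    time.sleep(0.5)  # stand-in for the hard-to-parallelize research itself
    return "one step of serial theory work done"

with ThreadPoolExecutor(max_workers=4) as pool:
    # Chores run concurrently on worker threads...
    chores = [pool.submit(chore, c) for c in ("lawn", "cooking", "errands")]
    # ...while the main thread stays focused on the serial work.
    print(serial_theory_step())
    for fut in chores:
        print(fut.result())
```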

It's also possible that some of this work can be sped up by putting researchers working on the problems in contact with each other. My understanding is that it's generally more effective for people to bounce their ideas off of other smart people than to work alone.

I definitely wonder about the relative effectiveness of, e.g., paying on-call tutors in specific areas of higher math and CS to help promising (or established) alignment researchers.

Nathan Helm-Burger


My guess is that making small teams consisting of a skilled mathematician, a skilled programmer, a skilled ML theorist, and a skilled manager would be a good way to make progress. Make a hundred or a thousand such groups, on the assumption that maybe 1% of them will pay off.
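To put rough numbers on that: under the simplifying assumption (mine, not necessarily the answer's) that each team independently has a 1% chance of paying off, a hundred teams already gives decent odds of at least one success, and a thousand makes it nearly certain:

```python
# Odds that at least one of N independent teams pays off, assuming
# (a simplification) each team has a fixed 1% chance of success.

def p_at_least_one_success(n_teams: int, p_success: float = 0.01) -> float:
    """Probability that at least one of n independent teams succeeds."""
    return 1 - (1 - p_success) ** n_teams

for n in (100, 1000):
    print(f"{n} teams -> P(at least one success) = {p_at_least_one_success(n):.4f}")
# 100 teams  -> ~0.634
# 1000 teams -> ~1.0000 (more precisely, about 0.99996)
```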

I think this is a good idea, but it doesn't quite feel like an answer to the question (at least as I understood it), i.e. it amounts to "get a bunch of serial researchers working in parallel, and hope one of them succeeds", which I think So8res articulated in AI alignment researchers don't (seem to) stack.

I do think small teams with a few different skillsets working together is probably a good way to go in many cases. Your comment here reminds me of Wentworth's team structure in MATS Models, although that only had 3 people.

Nathan Helm-Burger
Yeah, so, my experience from working in academia says to me that the odds of finding two researchers with a similar frame on a novel problem and good enough social chemistry that they add to each other's productivity are something like between 1/200 and 1/1000, even after filtering for 'competent researchers interested in the general topic'. So I'm not at all surprised that getting about 10 new researchers working on alignment has not yet produced a match. From my experience working in industry, I think a big failing of the attempts I've seen at organizing research groups is undervaluing a good manager. Having someone 'people-oriented' to coach and coordinate is important both for preventing burnout and for keeping several 'research-oriented' people focused on working together on a given task instead of wandering off in different directions.
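Those pairwise odds also make the lack of matches unsurprising on arithmetic grounds alone. A rough check, assuming (my simplification) that each of the C(10, 2) = 45 possible pairs matches independently:

```python
from math import comb

# Expected number of productive pairings among ~10 new researchers,
# assuming (a simplification) each pair independently "matches" with
# probability p in the 1/200 to 1/1000 range quoted above.
n_researchers = 10
n_pairs = comb(n_researchers, 2)  # 45 possible pairs

for p in (1 / 200, 1 / 1000):
    expected_matches = n_pairs * p
    p_no_match = (1 - p) ** n_pairs
    print(f"p = {p:.4f}: expected matches = {expected_matches:.3f}, "
          f"P(no match at all) = {p_no_match:.2f}")
# p = 0.0050: expected matches = 0.225, P(no match at all) = 0.80
# p = 0.0010: expected matches = 0.045, P(no match at all) = 0.96
```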
Nathan Helm-Burger
Also, I'm hopeful that a separate approach of deliberately 'cyborg'-ing researchers, by getting them proficient with the latest SoTA models and by fine-tuning SoTA models specifically for the purpose of assisting research, could help speed up individual researchers. Maybe an AI able to do the research all on its own would already be too dangerous, but I don't think that holds for 'useful enough to be a good tool'.
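As one concrete (and purely illustrative) version of the fine-tuning half of this: the training data could just be instruction/response pairs drawn from real research workflows. A minimal sketch, with hypothetical field names and a generic JSONL output format:

```python
import json

# Hypothetical examples of instruction/response pairs for tuning a model
# toward research assistance. Field names and schema are illustrative;
# any real fine-tuning pipeline defines its own format.
examples = [
    {
        "instruction": "Summarize the key assumptions behind this proof sketch "
                       "and flag any steps that look under-justified.",
        "response": "Assumption 1: ... Step 3 relies on an unstated "
                    "compactness argument; worth checking explicitly.",
    },
    {
        "instruction": "Given this informal alignment desideratum, propose "
                       "three candidate formalizations and a failure mode for each.",
        "response": "Formalization A: ... fails when ...",
    },
]

# One JSON object per line (the common JSONL convention).
with open("research_assistant_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```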