Give me feedback! :)
I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:
I expect mech interp to be particularly easy to automate at scale. If mech interp has capabilities externalities (e.g., uncovering useful learned algorithms or "retargeting the search"), this could facilitate rapid performance improvements.
It seems plausible to me that if AGI progress becomes strongly bottlenecked on architecture design or hyperparameter search, a more "genetic algorithm"-like approach will follow. Automated AI researchers could run and evaluate many small experiments in parallel, covering a vast hyperparameter space. If small experiments are generally predictive of larger experiments (and they seem to be, a la scaling laws) and model inference costs are cheap enough, this parallelized approach might be 1) computationally affordable and 2) successful at overcoming the architecture bottleneck; a rough sketch of what such a loop could look like is below.
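To make the picture concrete, here is a minimal toy sketch of that kind of parallelized, "genetic algorithm"-like search: evaluate many cheap small-scale runs in parallel, extrapolate each to large scale with an assumed scaling-law fit, and mutate the best candidates. Every function, constant, and config field here (e.g., `small_scale_loss`, the scaling exponent `alpha`) is a hypothetical placeholder, not a real training pipeline or anyone's actual method.

```python
# Toy sketch: parallel small-scale experiments + scaling-law extrapolation + mutation.
# All quantities are made up for illustration.
import random
from concurrent.futures import ProcessPoolExecutor

def small_scale_loss(config: dict) -> float:
    """Stand-in for training a small model with `config` and returning its val loss."""
    lr_penalty = abs(config["log_lr"] + 3.0)                      # best around lr = 1e-3
    size_bonus = 0.01 * (config["depth"] + config["width"] // 64)  # bigger helps a bit
    return 2.0 + lr_penalty - size_bonus + random.gauss(0, 0.02)   # plus training noise

def extrapolate_to_large_scale(small_loss: float, scale_factor: float = 100.0) -> float:
    """Crude scaling-law-style extrapolation: assume loss falls as a power law in scale."""
    alpha = 0.05  # assumed scaling exponent
    return small_loss * scale_factor ** (-alpha)

def random_config() -> dict:
    return {"log_lr": random.uniform(-5, -1),
            "depth": random.randint(2, 12),
            "width": random.choice([64, 128, 256])}

def mutate(config: dict) -> dict:
    """Perturb one hyperparameter to produce a child config."""
    child = dict(config)
    key = random.choice(list(child))
    if key == "log_lr":
        child[key] += random.gauss(0, 0.3)
    else:
        child[key] = max(1, child[key] + random.choice([-1, 1]))
    return child

if __name__ == "__main__":
    population = [random_config() for _ in range(32)]
    for generation in range(5):
        # Run and evaluate many small experiments in parallel.
        with ProcessPoolExecutor() as pool:
            small_losses = list(pool.map(small_scale_loss, population))
        predicted = [extrapolate_to_large_scale(l) for l in small_losses]
        # Keep the best quarter, refill the population with mutations of the survivors.
        ranked = [c for _, c in sorted(zip(predicted, population), key=lambda p: p[0])]
        survivors = ranked[: len(ranked) // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(len(ranked) - len(survivors))]
        print(f"gen {generation}: best predicted large-scale loss = {min(predicted):.3f}")
```

The point of the sketch is just the shape of the loop: the expensive step (large-scale training) never appears, and the whole budget goes into cheap, parallel, small-scale evaluations whose results are extrapolated upward.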
Apr 18, 11:59 pm PT :)
Hi! Yes, MATS is always open to newbies, though our bar has risen significantly over the last few years. AISF is great, but I would also recommend completing ARENA or ML4Good if you are pursuing a technical project, or completing an AI gov research project if you are pursuing governance.
LISA's current leadership team consists of an Operations Director (Mike Brozowski) and a Research Director (James Fox). LISA is hiring for a new CEO role; there has never been a LISA CEO.
How fast should the field of AI safety grow? An attempt at grounding this question in some predictions.
Ah, that's a mistake. Our bad.
Why does the AI safety community need help founding projects?