I was aware of a couple of these, but most are new to me. Obviously, published papers (even if this is comprehensive) represent only a fraction of what is happening and, likely, are somewhat behind the curve.
And it's still fairly surprising how much of this there is.
Also, from MIT CSAIL and Meta: "Gradient Descent: The Ultimate Optimizer"
Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as its step size. Recent work has shown how the step size can itself be optimized alongside the model parameters by manually deriving expressions for "hypergradients" ahead of time.
We show how to automatically compute hypergradients with a simple and elegant modification to backpropagation. This allows us to easily apply the method to other optimizers and hyperparameters (e.g. momentum coefficients). We can even recursively apply the method to its own hyper-hyperparameters, and so on ad infinitum. As these towers of optimizers grow taller, they become less sensitive to the initial choice of hyperparameters. We present experiments validating this for MLPs, CNNs, and RNNs.
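To make the idea concrete, here is a minimal sketch (not the paper's code) of the basic trick for plain SGD: keep the most recent parameter update in the autograd graph, so the next loss's backward pass also yields a gradient with respect to the step size, which is then used to update the step size itself. The toy regression problem, variable names, and the fixed hyper-step-size are illustrative assumptions.

```python
import torch

# Toy problem and hyper-step-size are illustrative assumptions, not from the paper.
torch.manual_seed(0)
X = torch.randn(100, 5)
y = X @ torch.randn(5) + 0.1 * torch.randn(100)

def loss_fn(w):
    return ((X @ w - y) ** 2).mean()

lr = torch.tensor(0.01, requires_grad=True)  # step size, treated as a trainable hyperparameter
hyper_lr = 1e-4                              # fixed step size for updating the step size

w = torch.zeros(5, requires_grad=True)

for step in range(300):
    loss = loss_fn(w)

    # One backward pass yields both the ordinary gradient and the hypergradient:
    # the previous update "w = w_prev - lr * grad" was kept in the graph, so the
    # current loss depends on lr. (At step 0, w does not yet depend on lr.)
    grad_w, grad_lr = torch.autograd.grad(loss, (w, lr), allow_unused=True)

    with torch.no_grad():
        if grad_lr is not None:
            lr -= hyper_lr * grad_lr         # hypergradient step on the step size

    # Differentiable update: detach w and its gradient so the graph stays only one
    # step deep, but leave lr attached so the next loss produces its hypergradient.
    w = w.detach() - lr * grad_w.detach()

    if step % 100 == 0:
        print(f"step {step:3d}  loss {loss.item():.4f}  lr {lr.item():.4f}")
```

Applying the same trick to hyper_lr (making it a differentiable tensor and updating it from the gradient through lr's update) would give one more level of the "hyper-hyperparameter" tower the abstract describes.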
Thanks for putting this together, Thomas. Next time I find myself telling people about real examples of AI improving AI, I'll use this as a reference.
In July, I made a post about AI being used to increase AI progress, along with this spreadsheet, which I've been updating throughout the year. Since then, I have run across more examples and had others submit them (some of which were published before the date of my original post).
2022 has included a number of instances of AI increasing AI progress. Here is the list. In each entry I also credit the person who originally submitted the paper to my list.
I'm writing this fairly quickly, so I'm not going to add extensive commentary beyond what I said in my last post, but I'll point out two things here:
Did I miss any? You can submit more here.