> Re: "Extrapolating GPT-N performance" and "Revisiting ‘Is AI Progress Impossible To Predict?’" sections of google doc
Read Section 6, "The Limit of the Predictability of Scaling Behavior," of the "Broken Neural Scaling Laws" paper:
https://arxiv.org/abs/2210.14891
Have y’all ever considered doing a robust form of strategic foresight in order to have plans for scenarios that fit into a “cone of plausibility”? I think we can learn a lot from forecasting, but I also think the foresight approach is underrated when done well. The main point is to think through as many plausible scenarios as possible, including how everything interacts (technical, socio-economic, geo-political, etc.), and to have plans for a variety of them. Even if none of the scenarios happens exactly as envisioned, I expect you will learn a lot and be much more ready for the one that does.
I’m not sure if this is something you’ve looked into, but if you haven’t, here’s a blog post I wrote that gives a brief overview: https://medium.com/@thibo.jacques/helping-organizations-survive-disasters-and-potentially-avoid-them-altogether-df9a4e835a90
I don’t really do that type of work anymore, but I’d be happy to chat if you have questions.
Overview