Thanks Neel, we agree that we misinterpreted this. We've removed the claim.
For anyone who'd like to see questions of this type on Metaculus as well, there's this thread. For certain topics (alignment very much included), we'll often do the legwork of operationalizing suggested questions and posting them on the platform.
Side note: we're working on spinning up what is essentially an AI forecasting research program; part of that will involve predicting the level of resources allocated to, and the impact of, different approaches to alignment. I'd be very glad to hear ideas from alignment researchers on how best to go about this, and how we can make its outputs as useful as possible. John, if you'd like to chat about this, please DM me and we can set up a call.
Nice work. A few comments/questions:
We'd probably try something along the lines you're suggesting, but there are some interesting technical challenges to think through.
For example, we'd want to train the model to be good at predicting the future, not just at recalling what has already happened. Under a naive implementation, weight updates would probably go partly towards better judgment and forecasting ability, but also partly towards simply knowing how the world played out after the initial training cutoff.
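To make the concern concrete, here's a minimal sketch of how the fine-tuning examples might be assembled, assuming a simple supervised setup on resolved binary questions. The `Source`/`Question` classes and field names are illustrative, not any particular dataset's schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Source:
    published: datetime
    text: str

@dataclass
class Question:
    text: str
    close_time: datetime   # when forecasts were frozen
    resolution: float      # resolved outcome for a binary question (0 or 1)

def build_example(question: Question, sources: list[Source], knowledge_cutoff: datetime):
    """Assemble a fine-tuning example whose prompt only contains evidence the
    model could legitimately have used at forecast time: sources published
    before the question closed. Questions that closed before the base model's
    knowledge cutoff are dropped, since their outcomes may already be memorised."""
    if question.close_time <= knowledge_cutoff:
        return None  # outcome plausibly already in the pretraining data
    evidence = [s.text for s in sources if s.published < question.close_time]
    prompt = question.text + "\n\nEvidence:\n" + "\n".join(evidence)
    target = f"{question.resolution:.2f}"  # train towards the resolved outcome
    return {"prompt": prompt, "completion": target}
```

Note that this only constrains what the prompt contains; the gradient on the resolved outcome still pushes knowledge of how things turned out into the weights, which is exactly the entanglement described above.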
There are also questions around information retrieval (IR); it seems likely that models will need external retrieval mechanisms to forecast well for the next few years at least, and we'd want to train something that's natively good at using retrieval tools to forecast, rather than relying purely on its crystallised knowledge.
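As a rough sketch of what "natively good at using retrieval tools" might look like at inference time (the `retrieve` and `generate` functions below are placeholders for an external search API and a model call, not real interfaces):

```python
from datetime import datetime

def retrieve(query: str, before: datetime, k: int = 5) -> list[str]:
    """Placeholder for an external retrieval tool (e.g. a news/search API)
    restricted to documents published before `before`, so the answer isn't leaked."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder for a call to the forecasting model."""
    raise NotImplementedError

def forecast(question: str, as_of: datetime) -> float:
    """Have the model draft its own search queries, read what comes back,
    and only then commit to a probability, rather than answering purely
    from its parametric knowledge."""
    queries = generate(
        f"List search queries that would help forecast:\n{question}"
    ).splitlines()
    docs = [doc for q in queries for doc in retrieve(q, before=as_of)]
    prompt = (
        f"Question: {question}\n"
        "Retrieved evidence:\n" + "\n".join(docs) +
        "\nGive a probability between 0 and 1."
    )
    return float(generate(prompt))
```

Training the model so that this query-read-forecast loop is the thing it's actually good at, rather than a bolt-on, is the part that seems non-trivial.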