I'll state the various facts as best I can recall and let you and others decide how bad/deceptive the time horizon prediction graph was.
I'm kind of split about this critique, since the forecast did end up as good propaganda if nothing else. But I do now feel that the marketing around it was kind of misleading, and we probably care about maintaining good epistemics here or something.
I'm interested in you expanding on which parts of the marketing were misleading. Here are some quick more specific thoughts:
Not-very-charitably put, my impression now is that all the technical details in the forecast were free parameters fine-tuned to support the authors' intuitions, when they weren't outright ignored. Now, I also gather that those intuitions were themselves supported by playing around with said technical models, and there's something to be said about doing the math, then burning the math and going with your gut. I'm not saying the forecast should be completely dismissed because of that.
I tried not to just fine-tune the parameters to support my existing beliefs, though I of course probably implicitly did to some extent. I agree that the level of free parameters is a reason to distrust our forecasts.
FWIW, my and Daniel's timelines beliefs have both shifted some as a result of our modeling. Mine initially got shorter, then got a bit longer due to the most recent update; Daniel moved his timelines longer, to 2028, in significant part because of our timelines model.
... But "the authors, who are smart people with a good track record of making AI-related predictions, intuitively feel that this is sort of right, and they were able to come up with functions whose graphs fit those intuitions" is a completely different kind of evidence compared to "here's a bunch of straightforward extrapolations of existing trends, with non-epsilon empirical support, that the competent authors intuitively think are going to continue".
Mostly agree. I would say we have more than non-epsilon empirical support, though, because of METR's time horizons work and RE-Bench. But I agree that a bunch of the estimated parameters don't have much empirical support to rely on.
But if I did interpret the forecast as being based on intuitively chosen but non-tampered straightforward extrapolations of existing trends, I think I would be pretty disappointed right now.
I don't agree with the connotation of "non-tampered," but otherwise agree re: relying on straightforward extrapolations. I don't think it's feasible to only rely on straightforward extrapolations when predicting AGI timelines.
You should've maybe put a "these graphs are for illustrative purposes only" footnote somewhere, like this one did.
I think "illustrative purposes only" would be too strong. The graphs are the result of an actual model that I think is reasonable to give substantial weight to in one's timelines estimates (if you're only referring to the specific graph that I've apologized for, then I agree we should have moved more in that direction re: more clear labeling).
I don't feel that "this is the least-bad forecast that exists" is a good defence. Whether an analysis is technical or vibes-based is a spectrum, but it isn't graded on a curve.
I'm not sure exactly how to respond to this. I agree that the absolute level of usefulness of the timelines forecast also matters, and I probably think that our timelines model is more useful than you do. But I also think that the relative usefulness matters quite a bit for the decision of whether to release and publicize the model. I think maybe this critique is primarily coupled with your points about communication issues.
[Unlike the top-level comment, Daniel hasn't endorsed this; this is just Eli.]
Thanks titotal for taking the time to dig deep into our model and write up your thoughts, it's much appreciated. This comment speaks for Daniel Kokotajlo and me, not necessarily any of the other authors on the timelines forecast or AI 2027. It addresses most but not all of titotal’s post.
Overall view: titotal pointed out a few mistakes and communication issues which we will mostly fix. We are therefore going to give titotal a $500 bounty to represent our appreciation. However, we continue to disagree on the core points regarding whether the model’s takeaways are valid and whether it was reasonable to publish a model with this level of polish. We think titotal’s critiques aren’t strong enough to overturn the core conclusion that superhuman coders by 2027 are a serious possibility, nor to significantly move our overall median (edit: I now think it's plausible that changes made as a result of titotal's critique will move our median significantly). Moreover, we continue to think that AI 2027’s timelines forecast is (unfortunately) the world’s state-of-the-art, and challenge others to do better. If instead of surpassing us, people simply want to offer us critiques, that’s helpful too; we hope to surpass ourselves every year in part by incorporating and responding to such critiques.
Clarification regarding the updated model
My apologies for quietly updating the timelines forecast without announcing it; we are aiming to announce the update soon. I'm glad that titotal was able to see it.
A few clarifications:
Most important disagreements
I'll let titotal correct us if we misrepresent them on any of this.
Other disagreements
Mistakes that titotal pointed out
In accordance with our bounties program, we will award $500 to titotal for pointing these out.
Communication issues
titotal pointed out several communication issues which we agree should be clarified, and we will do so. These issues arose from lack of polish rather than malice. Two of the most important ones:
Relatedly, titotal thinks that we made our model too complicated, while I think it's important to make our best guess for how each relevant factor affects our forecast.
Sorry for the late reply.
If we divide the inventing-ASI task into (A) “thinking about and writing algorithms” versus (B) “testing algorithms”, in the world of today there’s a clean division of labor where the humans do (A) and the computers do (B). But in your imagined October 2027 world, there’s fungibility between how much compute is being used on (A) versus (B). I guess I should interpret your “330K superhuman AI researcher copies thinking at 57x human speed” as what would happen if the compute hypothetically all went towards (A), none towards (B)? And really there’s gonna be some division of compute between (A) and (B), such that the amount of (A) is less than I claimed? …Or how are you thinking about that?
I'm not 100% sure what you mean, but my guess is that you mean (B) to represent the compute used for experiments? We do project a split here and the copies/speed numbers are just for (A). You can see our projections for the split in our compute forecast (we are not confident that they are roughly right).
Re: the rest of your comment, makes sense. Perhaps the place I most disagree is that if LLMs will be the thing discovering the new paradigm, they will probably also be useful for things like automating alignment research, epistemics, etc. Also if they are misaligned they could sabotage the research involved in the paradigm shift.
Oh, I misunderstood you, sorry. I think the form should have post-2023; I'm not sure about the website, because it adds complexity and I'm skeptical that it's common for people to be importantly confused by it as is.
Whew, a critique that our takeoff should be faster for a change, as opposed to slower.
Fun fact: AI-2027 estimates that getting to ASI might take the equivalent of a 100-person team of top human AI research talent working for tens of thousands of years.
(Calculation details: For example, in October 2027 of the AI-2027 modal scenario, they have “330K superhuman AI researcher copies thinking at 57x human speed”, which is 1.6 million person-years of research in that month alone. And that’s mostly going towards inventing ASI, I think. Did I get that right?)
This depends on how large you think the penalty is for parallelized labor as opposed to serial. If 330k parallel researchers are more like the equivalent of 100 researchers at 50x speed than of 100 researchers at 3,300x speed, then it's more like a team of 100 researchers working for (50*57)/12 ≈ 250 years.
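To make the arithmetic concrete, here is a minimal sketch of both accountings. The 100-person team size and the 50x / 3,300x effective speeds are the hypothetical figures from the exchange above, not outputs of the timelines model:

```python
# Rough sketch of the person-year accounting discussed above (October 2027 only).
# The 100-person team and the 50x / 3,300x effective speeds are hypothetical
# figures from this exchange, not outputs of the timelines model.

copies = 330_000   # superhuman AI researcher copies (from the scenario)
speedup = 57       # thinking speed relative to a human researcher
months = 1         # just October

# Naive accounting: every copy counts fully in parallel.
naive_person_years = copies * speedup * months / 12
print(f"naive: {naive_person_years:,.0f} person-years")  # ~1.6 million

# With a parallelization penalty: treat the 330k copies as equivalent to a
# 100-person team running at some effective parallel speed multiplier.
team_size = 100
for effective_speed in (50, 3_300):  # heavy penalty vs. no penalty at all
    serial_years = effective_speed * speedup * months / 12
    print(f"equivalent to {team_size} researchers at {effective_speed}x "
          f"for ~{serial_years:,.0f} years")
```

With the heavy penalty this gives roughly 240 serial years for the 100-person team in that month, i.e. the ~250-year figure above; with no penalty it gives roughly 15,700 years, which is where the "tens of thousands of years" framing comes from once you add further months.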
Also of course to the extent you think compute will be an important input, during October they still just have a month's worth of total compute even though they're working for 250-25,000 subjective years.
I’m curious why ASI would take so much work. What exactly is the R&D labor supposed to be doing each day, that adds up to so much effort? I’m curious how people are thinking about that, if they buy into this kind of picture. Thanks :)
I'm imagining a mix: tons of effort going into optimizing experiment ideas and into implementing and interpreting every experiment quickly, as well as tons of effort going into more conceptual agendas given the compute shortage, some of which bear fruit but also involve lots of "wasted" effort exploring possible routes, and most of which end up needing significant experimentation as well to get working.
(My own opinion, stated without justification, is that LLMs are not a paradigm that can scale to ASI, but after some future AI paradigm shift, there will be very very little R&D separating “this type of AI can do anything importantly useful at all” and “full-blown superintelligence”. Like maybe dozens or hundreds of person-years, or whatever, as opposed to millions. More on this in a (hopefully) forthcoming post.)
I don't share this intuition regarding the gap between the first importantly useful AI and ASI. If it were right, that would imply extremely fast takeoff, correct? Like on the order of days from AI that can do important things to full-blown superintelligence?
Currently there are hundreds or perhaps low thousands of person-years of relevant research effort going into frontier AI each year. The gap between importantly useful AI and ASI seems larger than a year of current AI progress (though I'm not >90% confident in that, especially if timelines are <2 years). Then we also need to take into account diminishing returns, compute bottlenecks, and parallelization penalties, so my guess is that the required person-years should be at minimum in the thousands and likely much more. Overall, the scenario you're describing is maybe (roughly) my 95th percentile for speed?
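Spelling out that lower bound as a rough sketch; the specific numbers (1,000 person-years/year, a gap of exactly one year of current progress, and the 5x adjustment) are placeholder assumptions for illustration, not figures from the model:

```python
# Illustrative lower bound on the person-years needed to cross the gap from
# "importantly useful AI" to ASI. All numbers are placeholder assumptions.

relevant_effort_per_year = 1_000  # person-years/year of relevant frontier research ("hundreds or low thousands")
gap_in_years_of_progress = 1      # the gap seems larger than one year of current progress

naive_lower_bound = relevant_effort_per_year * gap_in_years_of_progress
print(f"naive lower bound: {naive_lower_bound:,} person-years")

# Diminishing returns, compute bottlenecks, and parallelization penalties all push
# the required effort upward; e.g. a 5x combined adjustment (a pure guess) gives:
print(f"with a 5x adjustment: {naive_lower_bound * 5:,} person-years")
```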
I'm curious about your definition of importantly useful AI, actually. Under some interpretations, I feel like current AI should cross that bar.
I'm uncertain about the LLMs thing but would lean toward pretty large shifts by the time of ASI; I think it's more likely LLMs scale to superhuman coders than to ASI.
I think it's not worth getting into this too much more as I don't feel strongly about the exact 1.05x, but I feel compelled to note a few quick things:
Also, one more thing I'd like to pre-register: people who fill out the survey who aren't frontier AI researchers will generally report higher speedups, because their work is generally less compute-loaded and sometimes more greenfieldy or requires less expertise; but we should give by far the most weight to frontier AI researchers.
Yup feel free to make that change, sounds good
The timelines model didn't get nearly as many reviews as the scenario. We shared the timelines writeup with all of the people who we shared the later drafts of the scenario with, but I think almost none of them looked at the timelines writeup.
We also asked a few people to specifically review the timelines forecasts, most notably a few FutureSearch forecasters who we then added as a final author. However, we mainly wanted them to estimate the parameter values and didn't specifically ask them for feedback on the underlying modeling choices (though they did form some opinions; for example, they liked benchmarks and gaps much more than time horizon extension; also, btw, the superexponential plays a much smaller role in benchmarks and gaps). No one brought up the criticisms that titotal did.
In general the timelines model certainly got way less effort than the scenario, probably about 5% as much effort. Our main focus was the scenario as we think that it's a much higher value add.
I've been pretty surprised by how much quality-weighted criticism has focused on the timelines model relative to the scenario, and wish that it were more tilted toward the scenario (and also toward the takeoff model, which IMO is more important than the timelines model but has gotten much less attention). To be clear, I'm still very glad that these critiques exist if the alternative is that they didn't exist and nothing replaced them.