Failures in technology forecasting? A reply to Ord and Yudkowsky
In *The Precipice*, Toby Ord writes:

> we need to remember how quickly new technologies can be upon us, and to be wary of assertions that they are either impossible or so distant in time that we have no cause for concern. Confident denouncements by eminent scientists should certainly give us reason to be sceptical of a technology, but not to bet our lives against it - their track record just isn’t good enough for that.

I strongly agree with those claims, think they’re very important in relation to estimating existential risk,[1] and appreciate the nuanced way in which they’re stated. (There’s also a lot more nuance around this passage which I haven’t quoted.) I also largely agree with similar claims made in Eliezer Yudkowsky’s earlier essay There's No Fire Alarm for Artificial General Intelligence.

But both Ord and Yudkowsky provide the same set of three specific historical cases as evidence of the poor track record of such “confident denouncements”. And I think those cases provide less clear evidence than those authors seem to suggest. So in this post, I’ll:

* Quote Ord and/or Yudkowsky’s descriptions of those three cases, as well as one case mentioned by Yudkowsky but not Ord
* Highlight ways in which those cases may be murkier than Ord and Yudkowsky suggest
* Discuss how much we could conclude about technology forecasting in general from such a small and likely unrepresentative sample of cases, even if those cases weren’t murky

I should note that I don’t think these historical cases are necessary to support claims like those Ord and Yudkowsky make. And I suspect there might be better evidence for those claims out there. But those cases were the main evidence Ord provided, and among the main evidence Yudkowsky provided. So those cases are being used as key planks supporting beliefs that are important to many EAs and longtermists. Thus, it seems healthy to prod at each suspicious plank on its own terms, and update incrementally.

Case: Rutherford and atomic energy
Glad to hear that!
I do feel excited about this being used as a sort of "201-level" overview of AI strategy and of what work it might be useful to do. And I'm aware of the report being included in the reading lists / curricula for two training programs for people getting into AI governance or related work, which was gratifying.
Unfortunately, we ran this survey before ChatGPT and various other events that have since majorly changed the landscape of AI governance work to be done, e.g. by opening various policy windows. So I imagine people reading the report today may feel it has some odd omissions / vibes. But I still think it serves as a good 201-level overview despite that. Perhaps we'll run a follow-up in a year or two to provide an updated version.