This is a linkpost for a new paper called Preparing for the Intelligence Explosion, by Will MacAskill and Fin Moorhouse. It sets the high-level agenda for the sort of work that Forethought is likely to focus on.
Some of the areas in the paper that we expect to be of most interest to EA Forum or LessWrong readers are:
- Section 3 finds that even without a software feedback loop (i.e. “recursive self-improvement”), even if the scaling of compute stops entirely in the near term, and even if the rate of algorithmic efficiency improvement slows, we should still expect very rapid technological development — e.g. a century’s worth of progress in a decade — once AI meaningfully substitutes for human researchers.
- A presentation, in section 4, of the sheer range of challenges that an intelligence explosion would pose, going well beyond the “standard” focuses of AI takeover risk and biorisk.
- Discussion, in section 5, of when we can and can’t use the strategy of just waiting until we have aligned superintelligence and relying on it to solve some problem.
- An overview, in section 6, of what we can do, today, to prepare for this range of challenges.
Here’s the abstract:
AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will raise consequential and hard-to-reverse decisions, in rapid succession. We call these developments grand challenges.
These challenges include new weapons of mass destruction, AI-enabled autocracies, races to grab offworld resources, and digital beings worthy of moral consideration, as well as opportunities to dramatically improve quality of life and collective decision-making.
We argue that these challenges cannot always be delegated to future AI systems, and suggest things we can do today to meaningfully improve our prospects. AGI preparedness is therefore not just about ensuring that advanced AI systems are aligned: we should be preparing, now, for the disorienting range of developments an intelligence explosion would bring.
I think it's something of a trend relating to a mix of 'tools for thought' and imitation of some websites (LW2, Read The Sequences, Asterisk, Works in Progress & Gwern.net in particular), and also a STEM meta-trend arriving in this area: you saw this with security vulnerabilities, where for a while every major vuln would get its own standalone domain + single-page website + logo + short catchy name (e.g. Shellshock, Heartbleed). It is good marketing, which helps you stand out in a crowded, ever-shorter-attention-span world.
I also think it partly reflects a continued decline of PDFs as the preferred 'serious' document format, in favor of Internet-native formats with mobile support. (Adobe has, in theory, been working on 'reflowable' PDFs and other fixes, but I've seen little evidence of that anywhere.)
Most of these things would once have been released as giant doorstop whitepaper-book PDFs. (And you can see that some things do poorly because they exist only as PDFs - the annual Stanford AI report would probably be much more widely read if it had a better HTML story. AFAIK it exists only as giant PDFs that everyone intends to read but never gets around to, and so everyone only sees a few graphs copied out of it and put in media articles or social-media squibs.) Situational Awareness, for example, would a few years ago definitely have been a PDF of some sort. But PDFs suck on mobile, and now everyone is on mobile.
If you release something as a PDF rather than a semi-competent responsive website which is readable on mobile without opening a separate app & rotating my phone & constantly thumbing up & down a two-column layout designed when cellphones required a car to be attached to, you cut your readership at least in half. I wish I didn't have to support mobile or dark-mode, but I can see in my analytics that mobile is at least half my readers, and I notice that almost every time someone screenshots Gwern.net on social media, it is the mobile version (and as often as not, dark-mode too). Nor are these trash readers - many of them are elite readers, especially of the sort who create virality or reference it or create downstream readers in various ways. (Ivanka Trump was tweeting SA; do you think she and everyone else connected to the Trump Administration are sitting down at their desktop PCs and getting in a few hours of solid in-depth reading? Probably not...) People will even exclusively use the arXiv HTML versions of papers, despite the fact that the LaTeX->HTML pipeline has huge problems, like routinely and silently deleting large fractions of papers (so many problems that I gave up filing bug reports on it a while ago).
Having a specialized website can be a PITA in the long run, of course, but if you design it right, it should be largely fire-and-forget, and in any case, in many of these releases (policy advocacy, security vulns), the long run is not important.
(I don't think reasoning/coding models have yet had much to do with this trend, as these sites tend to be either off-the-shelf or completely bespoke. They are not what I would consider 'high-effort': the difference between something like SA and Gwern.net is truly vast; the former is actually quite simple, and any 'fancy' appearance is more just its clean minimalist design and avoidance of web clutter. At best, as tireless, patient, superhumanly knowledgeable consultants, LLMs might remove some friction for people unsure whether they can make a whole website on their own, and thus cause a few more sites at the margin. But many of these predate coding LLMs entirely, and I'm fairly sure Leopold didn't need much, if any, LLM assistance for the SA website, as he is a bright guy who is good at coding, and the website is simple.)