Quick book review of "If Anyone Builds It, Everyone Dies" (cross-post from X/twitter & bluesky):
Just read the new book If Anyone Builds It, Everyone Dies. Upshot: Recommended! I ~90% agree with it.
The authors argue that people are trying to build ASI (superintelligent AI), and we should expect them to succeed sooner or later, even if they obviously haven’t succeeded YET. I agree. (I lean “later” more than the authors, but that’s a minor disagreement.)
(It sounds like sci-fi, but remember that every technology is sci-fi until it’s invented!)
They further argue that we should expect people to accidentally make misaligned ASI, utterly indifferent to whether humans live or die, including its own creators. They have a 3-part disjunctive argument:
* (A) Nobody today has a plausible plan to make ASI that is not egregiously misaligned. It’s an inherently hard technical problem. Current approaches are not on track.
* (B) Even if (A) were not true, there are things about the structure of the problem that make it unlikely we would solve it, e.g.:
* (B1) Like space probes, you can’t do perfectly realistic tests in advance. No test environment is exactly like outer space. And many problems are unfixable from the ground. Likewise, if ASI has an opportunity to escape control, that’s a new situation, and there’s no do-over.
* (B2) Like nuclear reactors, building ASI will involve fast-moving dynamics, narrow margins for error, and self-amplification, but in a much more complicated and hard-to-model system.
* (B3) Like computer security, there can be adversarial dynamics where the ASI is trying to escape constraints, act deceptively, cover its tracks, and find and exploit edge cases.
* (C) EVEN IF (A) & (B) were not issues, we’re still on track to fail because AI companies & researchers are not treating this as a serious problem with billions of lives on the line. For example, in the online supplement, the authors compare AI culture to other endeavors with lives at stake.