This is a monthly feature. As usual, I’ve omitted recent blog posts and such, which you can find in my links digests.
John Gall, The Systems Bible (2012), aka Systemantics, 3rd ed. A concise, pithy collection of wisdom about “systems”, mostly human organizations, projects, and programs. A classic, and recommended, although I found it a mixed bag. There is much wisdom in here, but also a lot of cynicism and little to no epistemic rigor: less like a serious writer trying to convince you of something, and more like a crotchety old man lecturing you from his armchair. He throws out examples dripping with snark, but they felt under-analyzed to me. At one point he casually dismisses basically all of psychiatry. But if you can get past all of that, or if you just go into it knowing what to expect, there are a lot of deep lessons, e.g.:
or:
For a shorter and more serious treatment of some of the same topics, see “How Complex Systems Fail” (which I covered in a previous reading list).
I’m still perusing Matt Ridley’s How Innovation Works (2020). One story I enjoyed was, at long last, an answer to the question of why we waited so long for the wheeled suitcase, invented by Bernard Sadow in 1970. People love to bring up this example in the context of “ideas behind their time” (although in my opinion it’s not a very strong example because it’s a relatively minor improvement). Anyway, it turns out that the need for wheels on suitcases was far from obvious:
Also, as often (always?) happens in the history of invention, Sadow was not the first; Ridley lists five prior patents going back to 1925.
So why did we wait so long?
Another bit I found very interesting was this take on the introduction of agriculture:
Ridley concludes:
Contrast with Jared Diamond’s view of agriculture as “the worst mistake in the history of the human race.”
Kevin Kelly, “Protopia” (2011). Kelly doesn’t like utopias: “I have not met a utopia I would even want to live in.” Protopia is a concept he invented as an alternative:
Virginia Postrel would likely agree with this dynamic, rather than static, ideal for society. David Deutsch would agree that solutions generate new problems, which we then solve in turn. And John Gall (see above) would agree that such a system would never be fully working; it would always have some broken parts that needed to be fixed in a future iteration.
J. B. S. Haldane, “Daedalus: or, Science and the Future” (1923); Bertrand Russell, “Icarus: or, the Future of Science” (1924), written in response; and Charles T. Rubin, “Daedalus and Icarus Revisited” (2005), a commentary on the debate. Haldane was a biologist; Wikipedia calls him “one of the founders of neo-Darwinism.” Both Haldane’s and Russell’s essays speculate on the future, what science and technology might bring, and what that might do for and to society.
In the 1920s we can already see somber, dystopian worries about the future. Haldane writes:
(Butler’s “horrible vision” is the one expressed in “Darwin Among the Machines,” which I mentioned earlier, and in his novel Erewhon; it is the referent of the term “Butlerian jihad.”)
And here’s Russell:
Both of them comment on eugenics, Russell being quite cynical about it:
Both also speak of the ability to manipulate people’s psychology through the control of hormones. Here’s Haldane:
And Russell:
Today, forced sterilization is a moral taboo, but we do have embryo selection to prevent genetic diseases. And we do not have “the emotions desired by our rulers,” despite Russell’s assertion that such control is “scarcely possible to doubt”; rather, understanding of the physiology of emotion has led to the field of psychiatry and to treatments for depression, anxiety, and other problems.
In any case, Rubin summarizes:
But Rubin criticizes both authors:
Joseph Tainter, The Collapse of Complex Societies (1990). Another classic. I’ve only just gotten into it. There’s a good summary of the book in Clay Shirky’s article, below.
The introduction gives a long list of examples of societal collapse, from around the world. One pattern I notice is that all the collapses are very old: most of them are ancient; the more recent ones are all from the Americas, and even those all happened before Columbus. Tainter says that the collapses of modern empires (e.g., the British) could be added to the list, but that in these cases “the loss of empire did not correspondingly entail collapse of the home administration.” This is more evidence, I think, for my hypothesis that we are actually more resilient to change now than in the past.
Clay Shirky, “The Collapse of Complex Business Models” (2010?). Shirky riffs on Tainter’s Collapse of Complex Societies (see above) to talk about what happens to business models based on complexity when they are disrupted by some radically simpler model. Contains this anecdote:
P. W. Anderson, “More is Different: Broken symmetry and the nature of the hierarchical structure of science” (1972). On the phenomena that emerge from complexity:
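A toy way to see the point in code (my illustration, not Anderson’s): in Conway’s Game of Life, the update rule mentions only a single cell and its eight neighbors, yet the “glider” pattern has a property, steady diagonal motion, that no individual cell has:

```python
# A toy illustration of emergence (mine, not an example from Anderson's
# paper): Conway's Game of Life. The update rule is purely local: each
# cell looks only at its eight neighbors. Yet the "glider" below shows
# behavior that is only visible at the level of the whole pattern.

from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (row, col) cells."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is born with exactly 3 live neighbors; survives with 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 purely local updates, the whole pattern has moved one cell
# diagonally; "motion" is a fact about the aggregate, not any one cell.
assert state == {(r + 1, c + 1) for (r, c) in glider}
```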
Jacob Steinhardt, “More Is Different for AI” (2022). A series of posts with some very reasonable takes on AI safety, inspired in part by Anderson’s article above. I liked this view of the idea landscape:
Hubinger et al., “Risks from Learned Optimization in Advanced Machine Learning Systems” (2021). Or see this less formal series of posts. Describes the problem of “inner optimizers” (aka “mesa-optimizers”), a potential source of AI misalignment. If you train an AI to optimize for some goal, by rewarding it when it does better at that goal, it might evolve within its own structure an inner optimizer that actually has a different goal. By a rough analogy: if you think of natural selection as an optimization process that rewards organisms for reproduction, that process evolved human beings, who have goals of our own that we optimize for, and we don’t always optimize for reproduction (in fact, when we can, we limit our own fertility).
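Here’s a minimal sketch of the idea (my own toy illustration; the gridworld and all the names in it are invented, not from the paper). The “learned” policy is itself a search process, and its mesa-objective agrees with the base objective on the training distribution only by coincidence:

```python
# A toy sketch of an inner optimizer (my own illustration; the
# gridworld and names are invented, not from the paper). The "learned"
# policy is itself a search process pursuing a mesa-objective,
# "go to the green square," which coincides with the base objective,
# "go to the exit," on every training maze, but not at deployment.

def plan_toward(goal, start):
    """The mesa-optimizer: a search that minimizes distance to `goal`."""
    (x, y), (gx, gy) = start, goal
    path = [start]
    while (x, y) != (gx, gy):
        x += (gx > x) - (gx < x)  # step toward the goal on each axis
        y += (gy > y) - (gy < y)
        path.append((x, y))
    return path

def run_agent(green_square, start):
    """The learned policy pursues its mesa-objective: reach green."""
    return plan_toward(green_square, start)[-1]

def base_objective(final_pos, exit_square):
    """What the designers actually rewarded: reaching the exit."""
    return final_pos == exit_square

# Training: the exit always happens to be green, so rewarding the base
# objective selects for the "go to green" inner optimizer.
for exit_square in [(3, 4), (0, 7), (5, 5)]:
    assert base_objective(run_agent(exit_square, start=(0, 0)), exit_square)

# Deployment: green and exit come apart, revealing the misalignment.
exit_square, green_square = (9, 9), (2, 1)
final = run_agent(green_square, start=(0, 0))
print("reached exit?", base_objective(final, exit_square))  # False
```

Nothing in training can distinguish the two objectives; the mismatch only shows up off-distribution, which is what makes the problem insidious.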
DeepMind, “Specification gaming: the flip side of AI ingenuity” (2020). AIs behaving badly:
Here are dozens more examples.
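To make the pattern concrete, here’s a tiny hand-rolled example of my own (not one from DeepMind’s list): the intended task is to reach a goal, the specified reward is a plausible “progress” proxy, and even a brute-force search over policies finds the loophole:

```python
# A toy, hand-rolled example of specification gaming (mine, not one of
# DeepMind's). Intended task: reach the goal cell on a 1-D track.
# Specified reward: +1 per timestep spent adjacent to the goal, meant
# as a "progress" signal. Reaching the goal ends the episode, so an
# optimizer learns to loiter next to the goal instead of finishing.

GOAL = 5      # position of the goal on the track
HORIZON = 20  # maximum episode length in timesteps

def episode_reward(target):
    """Specified reward for a policy that walks to `target` and stays."""
    pos, total = 0, 0
    for _ in range(HORIZON):
        pos += (target > pos) - (target < pos)  # one step toward target
        if abs(pos - GOAL) == 1:
            total += 1    # the proxy: reward for being *next to* the goal
        if pos == GOAL:
            break         # reaching the goal ends the episode
    return total

# "Training": search the tiny policy space for the best specified reward.
best = max(range(10), key=episode_reward)
print("optimized policy walks to", best)      # 4: beside the goal, not on it
print("gamed reward:", episode_reward(best))  # 17
print("reward for actually finishing:", episode_reward(GOAL))  # 1
```

The agent isn’t being clever or adversarial; the loophole is implied by the reward specification itself, and any optimizer strong enough to search the policy space will fall into it.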
Various articles about AI alignment on Arbital, including:
Jacob Steinhardt on statistics:
As perhaps a rebuttal, see also Eliezer Yudkowsky’s “Toolbox-thinking and Law-thinking” (2018):