Over the last few years, effective altruism has gone through a rise-and-fall story arc worthy of any dramatic tragedy.
The pandemic made the movement look prescient for warning about global catastrophic risks, including biological ones. A masterful book launch put EA on the cover of TIME. But then the arc reversed. The trouble started with FTX, whose founder Sam Bankman-Fried claimed to be acting on EA principles and had begun to fund major EA efforts; its collapse tarnished the community by association with fraud. It was bad for EA if SBF was false in his beliefs; it was worse if he was sincere. Now we’ve just watched a major governance battle over OpenAI that seems to have been driven by concerns about AI safety of exactly the kind long promoted by EA.
SBF was willing to make repeated double-or-nothing wagers until FTX exploded; Helen Toner was apparently willing to let OpenAI be destroyed because of a general feeling that the organization was moving too fast or commercializing too much. Between the two of them, a philosophy that aims to prevent catastrophic risk in the future seems to be creating its own catastrophes in the present. Even Jaan Tallinn is “now questioning the merits of running companies based on the philosophy.”
On top of that, there is just the general sense of doom. All forms of altruism gravitate towards a focus on negatives. EA’s priorities are the relief of suffering and the prevention of disaster. While the community sees the potential of, and earnestly hopes for, a glorious abundant technological future, it is mostly focused not on what we can build but on what might go wrong. The overriding concern is literally the risk of extinction for the human race. Frankly, it’s exhausting.
So I totally understand why there has been a backlash. At some point, I gather, someone said, hey, we don’t want effective altruism, we want “effective accelerationism”—abbreviated “e/acc” (since of course we can’t just call it “EA”). This meme has been frequent in my social feeds lately.
I call it a meme and not a philosophy because… well, as far as I can tell, there isn’t much more to it than memes and vibes. And hey, I love the vibe! It is bold and ambitious. It is terrapunk. It is a vision of a glorious abundant technological future. It is about growth and progress. It is a vibe for the builder, the creator, the discoverer, the inventor.
But… it also makes me worried. Because to build the glorious abundant technological future, we’re going to need more than vibes. We’re going to need ideas. A framework. A philosophy. And we’re going to need just a bit of nuance.
We’re going to need a philosophy because there are hard questions to answer: about risk, about safety, about governance. We need good answers to those questions in part because mainstream culture is so steeped in fears about technology that the world will never accept a cavalier approach. But more importantly, we need good answers because one of the best features of the glorious abundant technological future is not dying, and humanity not being subject to random catastrophes, either natural or of our own making. In other words, safety is a part of progress, not something opposed to it. Safety is an achievement, something actively created through a combination of engineering excellence and sound governance. Our approach can’t just be blind, complacent optimism: “pedal to the metal” or “damn the torpedoes, full speed ahead.” It needs to be one of solutionism: “problems are real but we can solve them.”
You will not find a bigger proponent of science, technology, industry, growth, and progress than me. But I am here to tell you that we can’t yolo our way into it. We need a serious approach, led by serious people.
The good news is that the intellectual and technological leaders of this movement are already here. If you are looking for serious defenders and promoters of progress, we have Eli Dourado in policy, Bret Kugelmass or Casey Handmer in energy, Ben Reinhardt investing in nanotechnology, Raiany Romanni advocating for longevity, and many many more, including the rest of the Roots of Progress fellows.
I urge anyone who values progress to take the epistemic high road. Let’s make the best possible case for progress that we can, based on the deepest research, the most thorough reasoning, and the most intellectually honest consideration of counterarguments. Let’s put forth an unassailable argument based on evidence and logic. The glorious abundant technological future is waiting. Let’s muster the best within ourselves—the best of our courage and the best of our rationality—and go build it.
Follow-up thoughts based on feedback
- Many people focused on the criticism of EA in the intro, but this essay is not a case against EA or against x-risk concerns. I only gestured at EA criticism in order to acknowledge the motivation for a backlash against it. This is really about e/acc. (My actual criticism of EA is longer and more nuanced, and I have not yet written it up.)
- Some people suggested that my reading of the OpenAI situation is wrong. That is quite possible. It is my best reading based on the evidence I've seen, but there are other interpretations and outsiders don't really know. If so, it doesn't change my points about e/acc.
- The quote from the Semafor article may not accurately represent Jaan Tallinn's views. A more careful reading suggests that Tallinn was criticizing self-governance schemes, rather than criticizing EA as a philosophy underlying governance.
Thanks all.
Ok, so your belief is that however low the odds are, it's the only hope. And the odds are pretty low. I thought of a "dumbest possible frequentist algorithm" to estimate the odds.
The dumbest algorithm is simply to count how many times outcome A happens versus outcome B. For example, if the question is "how likely is a Green Party candidate to be elected president?", it has never happened, and there have been 10 presidential elections since the party's founding, then this crude bound puts the odds under 10 percent. Obviously the actual odds are much lower than that; a two-party, winner-take-all system drives them toward zero. But even a Green Party supporter has to admit that, on this evidence, it isn't likely.
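A minimal Python sketch of that estimator; `crude_odds` is my own naming, and the 1/trials fallback for zero occurrences is just the crude bound described above, not a proper confidence interval:

```python
def crude_odds(occurrences: int, trials: int) -> float:
    """Dumbest-possible frequentist estimate of a per-trial probability.

    With zero occurrences the raw frequency is 0, so fall back to
    1/trials as a crude upper bound; otherwise return the raw frequency.
    """
    if trials <= 0:
        raise ValueError("need at least one trial")
    if occurrences == 0:
        return 1.0 / trials  # a bound, not a point estimate
    return occurrences / trials

# Green Party example from the text: zero wins in 10 elections.
print(crude_odds(0, 10))  # 0.1 -> "the odds are under 10 percent"
```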
And "humans failed to build useful weapons technology for concern about its long term effects". Well, as far as I know, bioweapons research was mostly done for non replicating bioweapons. Anthrax can't spread from person to person. The replicating ones would affect everyone, they aren't specific enough. Like developing a suicide nuke as a weapon.
So it's happened one time? And how many major weapons have humans developed in history? Even going by category and counting only the major ones, there are bronze-age weapons, iron-age weapons, spear-throwers, Roman phalanxes, horse archers, cannon, muskets, castles, battleships, submarines, aircraft, aircraft carriers, machine guns, artillery, nukes, ballistic missiles, cruise missiles, SAMs, stealth aircraft, tanks... that's roughly 21 categories, and I am bored.
To steelman your argument, would you say the odds are under 5 percent? Because AI isn't just a weapon; it lets you make better medicine, mass-produce housing and consumer goods, find criminals, and so on. Frankly, there is almost no category of human endeavor AI won't help with, unlike a tank, which you can't use for anything but war.
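That is the same crude bound again: one instance of restraint across 21 weapon categories. A one-line check, equivalent to `crude_odds(1, 21)` from the sketch above:

```python
print(1 / 21)  # ~0.0476 -> the "under 5 percent" figure
```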
So would you say, in your model, it works out to:
- A 5 percent chance of a multilateral AI slowdown. What are the odds of survival in those futures? If 50 percent, that's 2.5 percent survival via this branch.
- A 95 percent chance of an arms race, in which you think humans survive in only 2 percent of futures. That's another 1.9 percent survival.
Is this how you see it?
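If so, the overall number is just an expected value over the two branches. A quick sketch of that arithmetic, using the purely illustrative probabilities above (none of these are real estimates):

```python
# Illustrative numbers from the comment above, not real estimates.
p_slowdown, p_survive_given_slowdown = 0.05, 0.50  # multilateral slowdown branch
p_race, p_survive_given_race = 0.95, 0.02          # arms-race branch

total_survival = (p_slowdown * p_survive_given_slowdown
                  + p_race * p_survive_given_race)
print(total_survival)  # 0.044 -> about 4.4 percent overall survival
```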
Nukes have x-risk, but humans couldn't help but build them.
In today's world, some powers are in a weaker position, and AI offers them an opportunity to move toward dominance.
AI development has tremendous inherent resilience, much more than physical-world tech. Clusters of AI accelerators are interchangeable, and model checkpoints can be stored in many geographic locations. The Gemini model card mentions developing full determinism in training. That means someone could put a bomb on a TPU v5 cluster and Google's sysadmins could resume training from the last checkpoint, possibly autonomously.
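To make that concrete, here is a toy sketch of the checkpoint-and-deterministic-resume pattern the resilience claim relies on. This is plain Python, not any real training stack; the file path, save interval, and `train_step` are all invented for illustration:

```python
import os
import pickle
import random

CKPT = "ckpt.pkl"    # in practice, replicated across many locations
SAVE_EVERY = 100     # steps between durable checkpoints

def train_step(weights, rng):
    # Toy deterministic update standing in for a real training step.
    return [w + rng.random() * 1e-3 for w in weights]

def train(total_steps=1000):
    if os.path.exists(CKPT):
        # Resume from the newest surviving checkpoint.
        with open(CKPT, "rb") as f:
            step, rng_state, weights = pickle.load(f)
        rng = random.Random()
        rng.setstate(rng_state)  # with full determinism, the resumed run
                                 # matches an uninterrupted one exactly
    else:
        step, rng, weights = 0, random.Random(0), [0.0] * 4

    while step < total_steps:
        weights = train_step(weights, rng)
        step += 1
        if step % SAVE_EVERY == 0:
            with open(CKPT, "wb") as f:
                pickle.dump((step, rng.getstate(), weights), f)
    return weights

print(train()[:2])
```

The point of the sketch: destroying one cluster loses at most SAVE_EVERY steps of work, since any other cluster can pick up from the last replicated checkpoint.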
The bottlenecks are in chip-fabrication tooling.