“There are a thousand hacking at the branches of evil to one who is striking at the root”
Henry David Thoreau, Walden
The world is full of problems: pain, poverty, illness, war, pollution, to pick a few among thousands. Many of us feel like we need to Do Something about these problems. There’s just one problem (sorry): there are so many problems that focusing on one specific problem feels wrong. Why choose that one? How dare you ignore all the other problems? Do those suffering people not matter to you? Can’t you hear their screams?
One response is the effective altruist’s shut up and multiply: loosely, find a framework that lets you rank problems against each other, and then work on the most important problems. Yes, war is hell, but solving it is not tractable or neglected. So you buy anti-malarial bednets and try your best to ignore the screams.
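To make the “shut up and multiply” move concrete, here is a toy sketch in Python of the importance-tractability-neglectedness style of scoring that effective altruists often use to rank causes. The cause names and all the numbers below are invented placeholders, not real estimates; the point is only the shape of the calculation.

```python
# Toy illustration of "shut up and multiply": score causes on importance,
# tractability, and neglectedness, then rank by the product.
# All scores here are made-up placeholders for illustration only.

causes = {
    "ending war":            {"importance": 10, "tractability": 1, "neglectedness": 2},
    "anti-malarial bednets": {"importance": 6,  "tractability": 8, "neglectedness": 5},
    "pandemic preparedness": {"importance": 8,  "tractability": 5, "neglectedness": 6},
}

def score(c):
    # Multiply the three factors; higher product = higher priority.
    return c["importance"] * c["tractability"] * c["neglectedness"]

for name, c in sorted(causes.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(c)}")
```

With these invented numbers, bednets come out on top and ending war near the bottom, which is exactly the uncomfortable conclusion the framework forces you to sit with.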
Another is to try to identify the root cause of all the problems, and work on that. Some popular candidates include metascience, “healing our collective trauma”, “awakening from the meaning crisis”, “solving intelligence and using it to solve everything else”.
Sometimes you can justify this on effective altruist grounds, but it’s also very neat psychologically: you don’t have to ignore the screams. Whenever a new problem comes to your attention, you can say “yes, my work is helping with this also”. You can channel that empathy into whatever you were already doing. And afterwards, you can sleep easy, knowing that you helped.
This is awfully convenient, and you might be tempted to dismiss it as cope, as a way to feel good about yourself without ever doing any real, object-level, devil-is-in-the-details kind of work. And yet… this meta-strategy has worked at least once before, and worked so well as to dwarf all the object-level charity ever wrought by human hands and hearts. I’m referring to Francis Bacon’s invention of science.
Quoting the historian and novelist Ada Palmer’s beautiful essay On Progress and Historical Change:
Then in the early seventeenth century, Francis Bacon invented progress.
If we work together — said he — if we observe the world around us, study, share our findings, collaborate, uncover as a human team the secret causes of things hidden in nature, we can base new inventions on our new knowledge which will, in small ways, little by little, make human life just a little easier, just a little better, warm us in winter, shield us in storm, make our crops fail a little less, give us some way to heal the child on his bed. We can make every generation’s experience on this Earth a little better than our own. […] Let us found a new method — the Scientific Method — and with it dedicate ourselves to the advancement of knowledge of the secret causes of things, and the expansion of the bounds of human empire to the achievement of all things possible.
There are many caveats: the Great Man theory of history is flawed, Bacon was just a spokesman for a broader community of British proto-scientists, science would have probably emerged anyway at some point, most progress comes from tinkerers and engineers rather than scientists proper. Nonetheless… has anyone ever been more right about anything than Francis Bacon was about the importance of the scientific method?
We now have an existence proof of a successful meta-intervention. Francis Bacon identified the root cause of most 17th-century problems (our collective ignorance about the natural world) and worked to address it by developing and evangelizing the scientific method. What is the equivalent today?
Some say: still ignorance about the natural world! We need to double down on Science! If we just understood biology better we could cure all diseases, develop ever more nutritious foods, and find ways to arrest ecosystem collapse and environmental degradation. If we just understood physics better we could unlock new abundant sources of energy, solve poverty and climate change, and travel to the stars.
I don’t buy it. Science has already taught us how to be healthy: eat whole foods, exercise a lot, spend time outside, live in community. Yet we do less and less of all these things. Physics has already given us nuclear power, an incredibly cheap, reliable, safe, low-emission source of energy. We don’t use it because it’s politically unpopular. Our biologists and ecologists already know quite well how to protect ecosystems, as our national parks show. We just don’t care enough to do it.
Maybe the bottleneck is in distributing our scientific understanding more broadly? This is the position of those who see lack of education as the root of all problems. I don’t buy this one either. Our best analyses of education suggest that schools are ineffective at disseminating scientific knowledge. Even the knowledge that does get disseminated isn’t acted on: how many people at this point don’t know that regular exercise is the number one health intervention in the developed world?
All these examples suggest the bottleneck has something to do with people and their interactions. Perhaps we’ve reached diminishing returns on natural science, but human and social sciences are still incredibly valuable. If we could only understand psychology and sociology and economics well enough, we could design interventions that convince people to act in accordance with their best interests, and those of society as a whole.
This feels compelling, but I can’t help but notice we’ve sunk hundreds of billions of dollars and millions of our brightest minds into this research over the last century, and yet… are we actually better at organizing our social life in accordance with human flourishing than the Victorians, the Qing, or the pre-conquest Lakota? In some ways yes, in some ways no. If we ignore the differences in material prosperity (largely due to our better science and engineering, not social science), I’d call it a wash at best. Admittedly, natural science took a long time to pay off: Bacon wrote Novum Organum in 1620 and the Royal Society was founded in 1660; the British economy started growing rapidly only around 1820. Perhaps all this psychology and social science research will pay off in the end. But I’m not holding my breath.
A final meta-intervention I’ll bring up to dismiss is the one exemplified by DeepMind’s founding mission statement: “solve intelligence. use it to solve everything else”. This has been the ideology of Silicon Valley for the last year or so, and seems poised to become the ideology of financial capitalism and the military-industrial complex as a whole. Instead of solving object-level problems, we are funneling all our surplus capital, our best analytical minds, and (soon) all our surplus energy towards increasing the supply of raw silicon-based intelligence in the hopes that it will solve our problems for us. I’ll address this more fully in a future essay, but briefly, I have the same reaction to it as Fouché had to the murder of the Duc d’Enghien: “It is worse than a crime; it is a mistake”. Intelligence is merely an accelerant; it will amplify both the best and the worst trends in society. More intelligence helps only if you believe we are on a good trajectory on net, and that our main problem is that we’re not getting there fast enough.
Enough criticism - what do I think the real root of all evil is? As you might have guessed from the above, I believe it’s our inability to understand and cooperate with each other at scale. There are different words for the thing we need more of: trust. social fabric. asabiyyah. attunement. love. But all these words are deceptively cozy, suggesting we just need to Retvrn to our tribal and relational instincts and all will be okay. This is a dangerous illusion. Our old social technologies for cooperation did not scale to the complexity and enormity of the modern world, and were replaced by global capital markets and massive state and corporate bureaucracies. We need to find the principles underlying these cozy words, and find a way to make them scale. Much like Bacon and Boyle built a Knowledge Machine that takes in irrational, argumentative academics and produces scientific knowledge, and much like Hinton and Schmidhuber and Sutskever built a Learning Machine that takes in oceans of data and computation and produces raw intelligence, we need to build a Cooperation Machine that takes in atomized people and raw intelligence and produces mutual understanding and harmonious collective action.
Thanks to Richard Ngo for his inimitable combination of encouragement and skepticism that helped sharpen the ideas in this essay.
It seems to me that the next level will be one of the following, or some combination of them: AI, mutants, cyborgs, eugenics, ems, niceness, prediction markets.
In some sense, these are all just different strategies for "becoming smarter", the main difference is between creating individuals that are smarter (AI, mutants, cyborgs), creating more of the smart individuals (eugenics, ems), or improving cooperation between existing smart individuals (niceness, prediction markets).
In the current situation, I see the problem as follows:
The first point is obvious. Maybe not if you live in the Bay Area, but I assume that everywhere else the lack of smart and sane people is visible and painful. I have no idea how to build a nicer society with stupid and insane people. Democracy selects for ideas that appeal to many, i.e. to the stupid and insane majority. Dictatorship selects for people who are not nice. My best guess would be to bet on creating a smart and sane subculture, and hope that it will inspire other people. But outside of the Bay Area we probably don't have enough people to start it, and within the Bay Area most people are on drugs or otherwise crazy.
The second point... I wish there were a short explanation I could point at, but the concept is approximately in the direction of "you are not hiring the top 1%" and "the asshole filter". It's a combination of "less desirable people circulate more (because they have to)" and "naive defenses against bad people are actually more likely to discourage good ones (because they respect your intent to be left alone) and less likely to discourage bad ones (because predators have a motive to get to you)". As a result, our interactions with random people are more likely to be unpleasant than the statistics of the population would suggest.
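A minimal simulation sketch of that selection effect, with all parameters invented for illustration: even if only a small fraction of the population is unpleasant, if those people initiate many more contacts with strangers, the share of unpleasant interactions you experience is far larger than their share of the population.

```python
# Minimal sketch: 5% of people are unpleasant, but they initiate 10x more
# contacts with strangers ("less desirable people circulate more").
# All numbers are invented for illustration.

import random

POPULATION = 10_000
UNPLEASANT_SHARE = 0.05
CONTACTS_PLEASANT = 1      # contacts initiated per pleasant person
CONTACTS_UNPLEASANT = 10   # contacts initiated per unpleasant person

people = ["unpleasant" if random.random() < UNPLEASANT_SHARE else "pleasant"
          for _ in range(POPULATION)]

# Each person contributes as many interactions as they initiate.
interactions = []
for kind in people:
    n = CONTACTS_UNPLEASANT if kind == "unpleasant" else CONTACTS_PLEASANT
    interactions.extend([kind] * n)

share = interactions.count("unpleasant") / len(interactions)
print(f"Unpleasant share of population:   {UNPLEASANT_SHARE:.0%}")
print(f"Unpleasant share of interactions: {share:.1%}")  # roughly 34% with these numbers
```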
The fourth point... I often feel like there should be some mechanism for "people reviewing people", where we could get information about other people in ways more efficient and reliable than gossip. Otherwise, trust can't scale. But when I imagine such a review system, there are many obvious problems with no obvious solutions. For starters, people lie. If I say "avoid him, he punched me for no reason", nothing prevents the other person from writing exactly the same thing about me, even if that did not happen. A naive solution might be to expect that people will reciprocate both good and bad reviews, and thus treat all relations as directionless; we cannot know whether X or Y is the bad guy, but we can know that X and Y don't like each other. Then look at the rest of the graph, and if X has conflicts with a dozen different people, and Y only has a conflict with X, then probably X is the bad guy. Except, nope, maybe Y just asked all his friends to falsely accuse X. Also, maybe Y has a lot of money and threatens to sue everyone who gives him a bad rating in the system.
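For concreteness, here is a minimal sketch of that naive "directionless conflict graph" heuristic in Python. The names, reports, and threshold are all made up, and, as noted above, the heuristic is easy to game with coordinated false accusations or legal threats, so this illustrates the idea rather than a workable design.

```python
# Naive "directionless conflict graph": treat every reported conflict as an
# undirected edge, then flag anyone in conflict with many distinct people.
# Names, reports, and the threshold are hypothetical.

from collections import defaultdict

# Each report is a pair (reporter, reported); direction is ignored.
reports = [
    ("Y", "X"), ("A", "X"), ("B", "X"), ("C", "X"),
    ("D", "X"), ("E", "X"), ("F", "X"),
    ("X", "Y"),  # X reports Y back; as an undirected edge this adds nothing new
]

conflicts = defaultdict(set)
for a, b in reports:
    conflicts[a].add(b)
    conflicts[b].add(a)

THRESHOLD = 3  # arbitrary cutoff for "conflicts with many distinct people"
for person, others in sorted(conflicts.items()):
    flag = "  <- suspicious" if len(others) >= THRESHOLD else ""
    print(f"{person}: {len(others)} distinct conflicts{flag}")
```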
It's bad that getting power is positively correlated with wanting power, rather than with being competent, nice, and sane. But that's the natural outcome; there would have to be some force acting in the opposite direction to get a different outcome.
People instinctively hate those who have power, but there are a few problems with the instinct. First, the world is complicated -- if someone magically offered me a position to rule something, I would be quite aware that I am not competent enough to take it. (If the options were either me, or a person who ...