A big part of understanding the culture of futility is understanding how traumatic it is when the bad guys win. When SBF, the Luke Skywalker of crypto, and CZ, the Darth Vader of crypto, go head to head, CZ emerges victorious. Then CZ says, "Ha! Serves you right for being an idiotic do-gooder," and everyone cheers.
Didn't we actually learn that they were both bad guys? I find this example confusing.
If we're about to get a trevorpost about how SBF was actually good and we only think otherwise due to narrative manipulation and toxic ingroup signalling dynamics I'm here for it
Upvoted (I did not downvote). For some reason my most appreciated posts were the ones where I just transcribed a bunch of scribbled notes and ordered them at the last minute. I'm not sure why, but it's the kind of thing that happens with a genetically diverse intelligent species, so I just roll with it.
I defer to the EA people on SBF stuff (this was just a suboptimal illustrative example). I don't defer to EA adjacent people about modern persuasion technology, because they're clueless idiot disaster monkeys who unconditionally surrender to whatever infamously manipulative hypercomputer they see their friends spending 5 hours a day looking at. Which is the kind of thing that happens with a primate species that barely evolved enough general intelligence to build civilization and then stopped, so I just roll with it.
Do you think you could explain your thesis in a way that would make sense to someone who had never heard of "the EA, rationalist, and AI safety communities"? ("Moloch"? "Dath ilan"? Am I supposed to know who these people are?) You allude to "knowledge of decision theory or economics", but it's not clear what the specific claim or proposal is here.
Seems to me that many people do not want to coordinate on things -- this may be a cultural thing, with everyone exposed to memes like "today they want you to coordinate on singing a song together, but tomorrow they will try to make you join a mass suicide... better resist the coordination while you still can".
But even without this baggage... the problem is, why should people coordinate on the thing you want, rather than e.g. on the very opposite of it? Coordination itself is just a tool, not a goal. If people start coordinating better on e.g. violently spreading their religion, you probably won't be happy. So maybe this pushback against coordination is actually a public good -- most people are stupid, they would probably coordinate on stupid things, the fewer of those the better.
There are still some ways to coordinate people, for example you can pay them, but those are more difficult.
EDIT:
Uhm, this was too extreme. I actually believe that coordinating on small things is good (such as neighbors deciding to build a playground together for their kids) and is even kind of necessary for a healthy democracy. It is just the mass movements, especially mass movements of idiots coordinated online, that I am afraid of.
Miscellaneous thoughts:
If I were focused on furthering coordination, I'd take a step back, actually try to further coordination, and see what issues I face. I'd try to build a small research team focused on a research project and see what irrational behavior and incentives I notice, and try to figure out systemic fixes. I'd try to create simple game-theoretic models of interactions between people working towards making something happen and see what issues may arise.
I think CFAR was recently funding projects focused on furthering group rationality. You should contact CFAR and talk to some people thinking about this.
Strong upvoted. I read this lightly as I am currently pulling an all-nighter, and will read it more deeply and give a proper response in 24-36 hours.
Some of the first people to try to get together and have a really big movement to enlighten and reform the world were part of the Counter Culture movement starting in the 1960s
The first? Like, in the history of the world?
Potential piece of a coordination takeoff:
An easy to use app which allows people to negotiate contracts in a transparently fair way, by using an LDT solution to the Ultimatum Game (probably the proposed solution in that link is good-enough, despite being unlikely to be fully-optimal).
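The linked proposal can be sketched concretely. A minimal sketch, assuming a $10 pie and a 50/50 fairness norm (the function names and the `epsilon` parameter are mine, not from the link): accept fair-or-better offers outright, and accept unfair offers with probability just under the break-even point, so a proposer expects to lose by low-balling.

```python
import random

def accept_probability(offer, total=10, fair_share=5, epsilon=0.01):
    """Probability that the responder accepts `offer` (their share of `total`).

    Fair-or-better offers are always accepted. For an unfair offer, the
    proposer would keep (total - offer); we accept with probability just
    under fair_share / (total - offer), so the proposer's expected take
    is slightly below the fair split; greed never pays in expectation.
    """
    if offer >= fair_share:
        return 1.0
    return max(0.0, fair_share / (total - offer) - epsilon)

def respond(offer, rng=random.random, **kwargs):
    """Return True to accept, False to reject (on rejection both get nothing)."""
    return rng() < accept_probability(offer, **kwargs)

# Proposer's expected payoff from offering only $1:
# (10 - 1) * accept_probability(1) = 9 * (5/9 - 0.01) = 4.91 < 5
```

The point of the probabilistic rejection is that it makes "offer the fair split" the proposer's best strategy without the responder ever having to bluff; the policy is transparent and still works if the proposer knows it.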
Part of the problem here is not just the implementation, but of making it credible to people who don't/can't understand the math. I tried to solve a similar problem with my website bayescalc.io where a large part of the goal was not just making use of Bayes' theorem accessible, but to make it credible by visually showing what it's doing as much as possible in an easy to understand way (not sure how well I succeeded, unfortunately).
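For reference, the math being visualized is just the discrete Bayes update. A minimal sketch (the variable names are mine, not bayescalc.io's):

```python
def bayes_update(priors, likelihoods):
    """Posterior over hypotheses after observing one piece of evidence.

    priors:      P(H_i) for each hypothesis, summing to 1.
    likelihoods: P(evidence | H_i) for each hypothesis.
    """
    joint = [p * l for p, l in zip(priors, likelihoods)]
    evidence = sum(joint)  # P(evidence), the normalizing constant
    return [j / evidence for j in joint]

# Example: a test that is 90% sensitive and 95% specific,
# applied to a condition with a 1% base rate.
posterior = bayes_update(priors=[0.01, 0.99], likelihoods=[0.90, 0.05])
# posterior[0] is about 0.154: a positive result is still
# more likely a false positive than a true one.
```

The base-rate example above is exactly the kind of counterintuitive result that the visual approach is meant to make credible to people who won't check the algebra.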
Another important factor is ease of use and frictionless design. I believe Manifold Markets succeeded because this turns out to be more important than even having proper financial incentives.
An easy to use app which allows people to negotiate contracts in a transparently fair way, by using an LDT solution to the Ultimatum Game (probably the proposed solution in that link is good-enough, despite being unlikely to be fully-optimal)
Writing up the contracts (especially around all the caveats they might not have noticed) seems like it would be harder than just reading contracts (I'm an exception; I write faster than I read). Have you thought of integrating GPT/Claude as assistants? I don't know about current tech, but like many other technologies, that integration will scale well in the contingency scenario where publicly available LLMs keep advancing.
Part of the problem here is not just the implementation, but of making it credible to people who don't/can't understand the math. I tried to solve a similar problem with my website bayescalc.io where a large part of the goal was not just making use of Bayes' theorem accessible, but to make it credible by visually showing what it's doing as much as possible in an easy to understand way (not sure how well I succeeded, unfortunately).
I think this can be done with a website, but not the current one. Have you tried reading Yudkowsky's Project Lawful? The main character's math lessons gave me the impression of something that actually succeeds at demonstrating, to business-school types (maybe not politicians), why math and Bayesianism are things that work for them.
Another important factor is ease of use and frictionless design. I believe Manifold Markets succeeded because this turns out to be more important than even having proper financial incentives.
This is a really interesting thing: it's not just about making each button intuitive, it's about making the whole enchilada intuitive for a wide variety of neurotypes. Now that I think about it, Manifold really was a feat of engineering here, although I don't know how well it would work for people who, unlike me, don't know what getting ahead of markets is like. But generally, it's just a lot of optimization power, and it's probably way more time-effective to reach out to them and ask how they did it (e.g. what books they read) than to try to find ease-of-use resources (e.g. books) with a Google search.
Writing up the contracts (especially around all the caveats they might not have noticed) seems like it would be harder than just reading contracts (I'm an exception; I write faster than I read). Have you thought of integrating GPT/Claude as assistants? I don't know about current tech, but like many other technologies, that integration will scale well in the contingency scenario where publicly available LLMs keep advancing.
I'd consider the success of Manifold Markets over Metaculus to be mild evidence against this.
And to be clear, I do not currently intend to build the idea I'm suggesting here myself (could potentially be persuaded, but I'd be much happier to see someone else with better design and marketing skills make it).
I think this can be done with a website, but not the current one. Have you tried reading Yudkowsky's Project Lawful? The main character's math lessons gave me the impression of something that actually succeeds at demonstrating, to business-school types (maybe not politicians), why math and Bayesianism are things that work for them.
Heh, that scene was the direct inspiration for my website. I'm curious what specific things you think can be done better.
I feel like it's not very clear here what type of coordination is needed.
How strong does coordination need to become before we can start reaching take off levels? And how material does that coordination need to be?
Strong coordination, as I'm defining it here, is about how powerfully the coordination constrains certain actions.
Material coordination, as I'm defining it here, is about what level the coordination "software" is running on. Is it running on your self (i.e., it's some kind of information coded into the algorithm that runs on your brain, examples being the trained beliefs in nihilism you refer to, or decision theories)? Is it running on your brain (i.e., Neuralink, some kind of BCI)? Is it running on your body, or your official/digital identity? Is it running on a decentralized crypto protocol, or as contracts witnessed by a governing body?
The difficult part of coordination is taking action; deciding what to do is mostly solved through prediction markets, research, and good voting theory.
Decision theory didn't take off because it's "law thinking", but better decision-making in practice needs "rule thinking". And the early mathematical formalisms actually weren't very complete or meaningful?
There were and are market-economics-knowing people who tried very hard to get the world to a better place. They're called developmental economists. Turns out that stuff is actually pretty hard, but people are making progress.
People strongly prefer the good guys in charge,
Most people in fact just want their bad guys in charge instead, so they can do unto others.
Your central point, that relatively little work has gone into academic study of coordination, seems really important.
I hope that reading Dath Ilan isn't necessary, because that's a hell of an entry cost. Shouldn't there be an easier way to describe the possibilities and payoffs of better coordination? Surely there's some existing work out there.
As I see it, the arc of history bends toward better coordination. But it does so sporadically and slowly on average.
I'd have little fear for the future if it wasn't for AGI x-risk. That's a hard coordination problem, and the main one I worry about.
I have not properly read that "Moloch" essay, but I think I get the message. The world ruled by Moloch is one in which negative-sum games prevail, causing essential human values to be neglected or sacrificed. Nonetheless, one does not get to rule without at least espousing the values of one's civilization or one's generation. The public abandonment of human values therefore has to be justified in terms of necessary evils - most commonly, because there are amoral enemies, within and without.
The other form of abandonment of value that corrupts the world mostly boils down to the Machiavellian pursuit of self-interest: the self-interest of an individual, a clique, a class. To explain this, you don't even need to suppose that society is trapped in a malign negative-sum equilibrium. You just need to remember that the pursuit of self-interest is actually a natural thing, because subjective goods are experienced by individuals. Humans do also have a natural attraction to certain intersubjective goods, but "omnisubjective" goods like universal love, or perpetual peace among all nations, are radical utopian ideas that aren't even conceivable without prior cultural groundwork. But that groundwork has already existed for thousands of years:
It's important to remember that the culture we grew up in is deeply nihilistic at its core...
The pursuit of a better world is as old as history. Think of the "Axial Age" in which several world religions - which include universal moralities - came into being. Every civilization has a notion of good. Every modern political philosophy involves some kind of ideal. Every significant movement and institution had people in it thinking of how to do good or minimize harm. Even cynical egoistical cliques that wield power, must generally claim to be doing so, for the sake of something greater than themselves.
I'm pretty sure that the entire 20th century came and went with nearly none of them spending an hour a week thinking about solving the coordination problems facing the human race, so that the world could be better for them and their children.
You appear to be talking about game theorists and economists, saying they were captured by military and financial elites respectively, and led to use their knowledge solely in the interest of those elites? This seems to me profoundly wrong. After World War 2, the whole world was seeking peace, justice, freedom, prosperity. The economists and game theorists, of the West at least, were proposing pathways to those outcomes, within the framework of western ideology, and in the context of decolonization and the cold war. The main rival to the West was Communism, which of course had its own concept of how to make a better world; and then you had all the nonaligned postcolonial nationalisms, for whom having the sovereign freedom to decide their own destinies was something new, that they pursued in a spirit of pragmatic solidarity.
What I'm objecting to is the idea that ideals have counted for nothing in the governance of the world, except to camouflage the self-interest of ruling cliques. Metaphorically, I don't believe that the world is ruled by a single evil god, Moloch. While there is no shortage of cold or depraved individuals in the circles of power, the fact is that power usually requires a social base of some kind, and sometimes it is achieved by standing for what that base thinks is right. Also, one can lose power by being too evil... Moloch has to share power with other "gods", some of them actually mean well, and their relative share of power waxes and wanes.
I think a far more profound critique of "Moloch theory" could be written, emphasizing its incompleteness and lopsidedness when it's treated as a theory of everything.
As for new powers of coordination, I would just say that completely shutting Moloch out of the boardroom and the war room, is not a panacea. It is possible to coordinate on a mistaken goal. And hypercoordination itself could even become Moloch 2.0.
I think the most likely source for a coordination singularity is crypto, not prediction markets.
Prediction markets will not get you out of bad Nash equilibria.
Interesting perspective!
I would be interested in hearing answers to "what can we do about this?". Sinclair has a couple of concrete ideas - surely there are more.
Let me also suggest that improving coordination benefits from coordination. Perhaps there is little a single person can do, but is there something a group of half a dozen people could do? Or two dozens? "Create a great prediction market platform" falls into this category, what else?
Concrete steps towards removing language barriers:
- promote the idea that letting languages die is good, actually
- improve translation speed, offline-capability, and UI
- create great products that take advantage of auto-translating non-english internets, social media, or traditional media
- accelerate capabilities of LLMs
Concrete steps towards free banking:
- Fintech startup that issues VISA cards backed by your liquid investment portfolio, that autosells to pay for things
- Write code for crypto projects
More pie in the sky:
- Design new social media that is fun and meaningful rather than divisive or draining
- Create the one true religion
- Stop tipping
Plausible theory.
In the scenario where a breakthrough leads to a coordination takeoff, what implications do you think that would have for alignment/AI safety research?
It's important to remember that the culture we grew up in is deeply nihilistic at its core. People expect Moloch, assume Moloch as a given, even defer to Moloch. If you read enough about business and international affairs (not news articles, those don't count, not for international affairs at least, I don't know about business), and then read about dath ilan, it becomes clear that our world is ruled by Moloch cultists who nihilistically optimized for career advancement.
Humans are primates; we instinctively take important concepts and turn them into dominance/status games, including that concept itself; resulting in many people believing that important concepts do not exist at all.
So it makes sense that Moloch would be an intensely prevalent part of our civilization, even ~a century after decision theory took off and ~4 centuries after mass literacy took off.
Some of the first people to try to get together and have a really big movement to enlighten and reform the world were part of the Counter Culture movement starting in the 1960s, which overlapped with the Vietnam Antiwar movement and the Civil Rights movement.
The Counter Culture movement failed because its members were mainly a bunch of inept teens and 20-somethings: not just lacking knowledge of decision theory or economics or Sequences-level understanding of heuristics and biases, but also living in a world where social psychology and thinking-about-society were still in their infancy. Like the European Enlightenment and the French Revolution before them, they started out profoundly confused about the direction to aim for and the correct moves to make (see Anna Salamon's Humans are not Automatically Strategic).
The Antiwar movement permanently damaged the draft-based American military apparatus and permanently made Western culture substantially more cosmopolitan than the conformist 1950s, but their ignorance, ineptitude, and blunders were so immense that they shrank the Overton window on people coming together and choosing to change the world for the better.
As soon as lots of people acquired an incredibly primitive version of the understandings now held by the EA, rationalist, and AI safety communities, those people started the Counter Culture movement of the 1960s in order to raise the sanity waterline above the deranged passivity of 1950s conformist culture. And they botched it so hard, in so many ways, that everyone now cringes at the memory; the Overton window on changing the world was fouled up, perhaps intractably. Major governments and militaries also became predisposed to nip similar movements in the bud, such as by using AI technology to psychologically disrupt groups of highly motivated people.
Since then, there hasn't been a critical mass behind counterculture or societal reform, other than Black Lives Matter, the Women's March, Occupy Wall Street, and the Jan 6th riots, which only drew that many people by heavily optimizing for memetic spread among the masses via excessively simple messages, by prevailing on already-popular sentiment such as post-2008 anger at banking institutions, and likely due to the emergence of the social media paradigm (which governments are incentivized to hijack).
Game theory didn't take off until the 1950s, when it was basically absorbed by the US military, just like how economics was absorbed by the contemporary equivalent of Wall Street (and remains absorbed to this day). I'm pretty sure that the entire 20th century came and went with nearly none of them spending an hour a week thinking about solving the coordination problems facing the human race, so that the world could be better for them and their children. Even though virtually all of them would prefer to live in a world where all of the decision theorists, economists, and mathematicians spent an hour a week thinking about practical ways to solve humanity's coordination problems and kill off Moloch one piece at a time. Everyone prefers to live in a world not dominated by nihilism and nihilists.
I think that's never been tried for real, and therefore it can happen in the 21st century. I think that gets things like prediction markets invented from scratch, and I don't know how many discoveries on the level of prediction markets are required for a coordination takeoff to become feasible (it's plausible that prediction markets alone could be enough once the wheat-to-chaff ratio passes a specific critical mass).
I think that could get us to a world like dath ilan, not exactly the same obviously, but a world where people found galaxy-brained ways to facilitate enough elite cooperation to back a world they prefer (e.g. for their kids). It's a question of how many discoveries are needed for a critical mass, or how far each discovery needs to be implemented, e.g. a sufficient wheat-to-chaff ratio for prediction markets to pass a critical mass specific to prediction markets, or for readers of the Sequences to write the CFAR handbook and for the readers of the CFAR handbook to write the next iteration and so on.
A big part of understanding the culture of futility is understanding how traumatic it is when the bad guys win. When SBF, the Luke Skywalker of crypto, and CZ, the Darth Vader of crypto, go head to head, CZ emerges victorious. Then CZ says, "Ha! Serves you right for being an idiotic do-gooder," and everyone cheers.
And this is what happens every time. The bad guys are predisposed to victory. This has happened to everyone, every time, because our civilization is Moloch-dominated, even though nobody wants it to be that way (by definition of Moloch). This kind of thing shapes people's identities in extraordinarily powerful ways.
The point is that it feels futile to everyone in these intense ways, but it's not, because the 21st century will be transformative, and the pace and outcome of each transformation are each their own technical details (like prediction markets reaching a critical mass in their wheat-to-chaff ratio, or LLM chatbot therapists being sufficiently competent at advising and empowering people in the ways described in Anna Salamon's Humans are not Automatically Strategic).
People strongly prefer the good guys in charge; it's just that they've been clueless about how to even begin to make this possible, and this clueless idiocy has been consistent. And the triumph of nihilism has also been consistent, until now.
There are probably fewer than 500 people on Earth currently prepared to think about a coordination takeoff at a level similar to us, who read stuff on the same level as the Sequences, FDT/UDT, tuning cognitive strategies and feedback-loop-first rationality, CFAR's stuff, and dath ilan stuff in order to even begin to get a sense of what is possible, and that number of people can go up surprisingly fast.
I wouldn't call this optimism; instead, I would call it doubting that there is a >90% chance that the culture of nihilism will maintain its grip on elite culture throughout the 2020s.
It's important to note that people have already thought about this. Maybe as early as Nietzsche and Marx, although, like Sigmund Freud, Nietzsche and Marx started out without the shoulders of giants to stand on, let alone a dying planet where the correct policies are fairly obvious (e.g. universal cryonics, so that everyone everywhere realizes they have skin in the game, instead of mandatory guaranteed death right now, where they shrug and say "everything sucks lol" because there's nothing else to say) and it's just a strategic question of getting people to pass those obviously-correct policies.
Decision theory really did take off in the 1950s; if aliens were watching during the 1950s and were following this logic, they might have predicted good odds that the 1950s decision theory takeoff would have been enough for the monkeys to start getting good at working together. If not that, then they might have predicted it for when the collective action problem started getting taught to every single social science major. Sufficiently intelligent aliens will know what it looks like when a critical mass of agents, in any species of genetically diverse intelligent life, looks evil in the face and rejects it. If they were observing, they would immediately be able to see that our world revolves around the fact that such a thing hasn't happened yet (and might never).
However, a coordination takeoff is all about acceleration (the acceleration of humans). It makes sense to think that reading dath ilan, a world where Moloch was vanquished, is needed to provide a frame of reference for a firm understanding of the intensity of Moloch in our civilization. Are there even a dozen people on a single college campus anywhere (e.g. Berkeley) who have read about dath ilan? Is a single one of those dozen seriously thinking about what a coordination takeoff would look like, instead of skilling up for alignment research?
If the answer to both of those questions is no, then reaching the critical mass for a coordination takeoff might be low hanging fruit.