The spiritual world is rife with bad communities and I've picked up a trick for navigating them. Many of the things named in this post could broadly be construed under the heading of "weird power dynamics." Isolation creates weird power dynamics, poor optionality creates weird power dynamics, and so do drugs, skewed gender ratios, and so on.
When I spot a weird power dynamic I name it out loud to the group. A lot of bad groups will helpfully kick me out themselves. I naturally somewhat shy away from such actions of course, but an action that reliably loses me status points with exactly the people I don't want to be around is great.
It's the emperor's clothes principle: That which can be destroyed by being described by a beginner should be. And the parable illustrates something important about how it works. It needs to be sincere, not snark or criticism.
This seems helpful for something else, but I don't think it super accurately predicts which environments will give rise to more extremist behavior.
Like, I am confident that the above strategy would not work very well if you pointed out the "weird power dynamics" of any of the world's largest religious communities, or any of the big corporations, or much of academia. Those places have tons of "weird power dynamics", but they don't give rise to extremist behavior. I expect all of those places to react very defensively, and they might kick you out if you point out all the weird power dynamics; but those power dynamics, while "weird", will also still have been selected heavily to produce a stable configuration, and generally don't cause people to go and do radical things.
(Asking to "taboo X" is a common request on LessWrong and the in-person rationality community, requesting to replace the specific word with an equivalent but usually more mechanistic definition for the rest of the conversation. See also: Rationalist Taboo)
My default model before reading this post was: some people are very predisposed to craziness spirals. They're behaviorally well-described as "looking for something to go crazy about", not necessarily in a reflectively-endorsed sense, but in the sense that whenever they stumble across something about which one could go crazy (like e.g. lots of woo-stuff), they'll tend to go into a spiral around it.
"AI is likely to kill us all" is definitely a thing in response to which one can fall into a spiral-of-craziness, so we naturally end up "attracting" a bunch of people who are behaviorally well-described as "looking for something to go crazy about". (In terms of pattern matching, the most extreme examples tend to be the sorts of people who also get into quantum suicide, various flavors of woo, poorly executed anthropic arguments, poorly executed acausal trade arguments, etc.)
Other people will respond to basically-the-same stimuli by just... choosing to not go crazy (to borrow a phrase from Nate). They'll see the same "AI is likely to kill us all" argument and respond by doing something useful, or just ignoring it, or doing something useless but symbolic and not thinking too hard about it. ...
My default model had been "a large cluster of the people who are able to use their reasoning to actually get involved in the plot of humanity, have overridden many Schelling fences and absurdity heuristics and similar, and so are using their reasoning to make momentous choices, and just weren't strong enough not to get some of it terribly wrong". Similar to the model from Reason as Memetic Immune Disorder.
I also think this fits the FTX situation quite well. My current best model of what happened at an individual psychological level was many people being attracted to FTX/Alameda because of the potential resources, then many rounds of evaporative cooling as anyone who was not extremely hardcore according to the group standard was kicked out, with there being a constant sense of insecurity for everyone involved that came from the frequent purges of people who seemed to not be on board with the group standard.
While a lot of this post fits with my model of the world (the threat of exile is something I can viscerally feel change what my beliefs are), the FTX part as-written is sufficiently non-concrete to me that I can't tell if it fits or doesn't fit with reality.
Things I currently believe about FTX/Alameda (including from off-the-internet information):
Yeah, FTX seems like a totally ordinary financial crime. You don't need utilitarianism or risk neutrality to steal customer money or take massive risks.
LaSota and Leverage said that they had high standards and were doing difficult things, whereas SBF said that he was doing the obvious things a little faster, a little more devoted to EV.
I suggest a more straightforward model: taking ideas seriously isn't healthy. Most of the attempts to paint SBF as not really an EA seem like weird reputational saving throws when he was around very early on and had rather deep conviction in things like the St. Petersburg Paradox...which seems like a large part of what destroyed FTX. And Ziz seemed to be one of the few people to take the decision theoretical "you should always act as if you're being simulated to see what sort of decision agent you are" idea seriously...and followed that to their downfall. I read the Sequences, get convinced by the arguments within, donate a six figure sum to MIRI...and have basically nothing to show for it at pretty serious opportunity costs. (And that's before considering Ziz's pretty interesting claims about how MIRI spent donor money.)
In all of these cases, the problem was individual confidence in ideas, not social effects.
My model is instead that the sort of people who are there to fit in aren't the people who go crazy; there are plenty of people in the pews who are there for the church but not the religion. The MOPs and Sociopaths seem to be much, much saner than the Geeks. If that's right, ra...
What made Charles Manson's cult crazy in the eyes of the rest of society was not that they (allegedly) believed that a race war was inevitable, and that white people needed to prepare for it & be the ones that struck first. Many people throughout history who we tend to think of as "sane" have evangelized similar doctrines or agitated in favor of them. What made them "crazy" was how nonsensical their actions were even granted their premises, i.e. the decision to kill a bunch of prominent white people as a "false flag".
Likewise, you can see how LaSota's "surface" doctrine sort of makes sense, I guess. It would be terrible if we made an AI that only cared about humans and not animals or aliens, and that led to astronomical suffering. The Nuremberg trials were a good idea, probably for reasons that have their roots in acausally blackmailing people not to commit genocide. If the only things I knew about the Zizcult were that they believed we should punish evildoers, and that factory farms were evil, I wouldn't call them crazy. But then they go and (allegedly) waste Jamie Zajko's parents in a manner that doesn't further their stated goals at all and makes no tactical sense to any...
But then they go and (allegedly) waste Jamie Zajko's parents in a manner that doesn't further their stated goals at all and makes no tactical sense to anyone thinking coherently about their situation.
And yet that seems entirely in line with the "Collapse the Timeline" line of thinking that Ziz advocated.
Ditto for FTX, which, when one business failed, decided to commit multi-billion dollar fraud via their other, actually successful business, instead of just shutting down Alameda and hoping that the lenders wouldn't be able to repo too much of the exchange.
And yet, that seems like the correct action if you sufficiently bullet-bite expected value and the St. Petersburg Paradox, which SBF did repeatedly in interviews.
And yet, that seems like the correct action if you sufficiently bullet-bite expected value and the St. Petersburg Paradox, which SBF did repeatedly in interviews.
I am not making an argument that the crime was +EV but SBF was dealt a bad hand. Turning your entire business into the second largest Ponzi scheme ever in order to save the smaller half is pretty obviously stupid on EV grounds, and it ran an overwhelming chance of failure. There is no EV calculus where the SBF decision is a good one except maybe one in which he ignores externalities to EA and is simply trying to support his status, and even then I hardly understand it.
And yet that seems entirely in line with the "Collapse the Timeline" line of thinking that Ziz advocated.
Right, it is possible that something like this was what they told themselves, but it's bananas. Imagine you're Ziz. You believe the entire lightcone is at risk of becoming a torture zone for animals at the behest of Sam Altman and Demis Hassabis. This threat is foundational to your worldview and is the premier casus belli for action. Instead of doing anything about that, you completely ignore this problem to go on the side quest of enacting retributive j...
My understanding of your point is that Manson was crazy because his plans didn't follow from his premise and had nothing to do with his core ideas. I agree, but I do not think that's relevant.
I am pushing back because, if you are St. Petersburg Paradox-pilled like SBF and make public statements that actually you should keep taking double or nothing bets, perhaps you are more likely to make tragic betting decisions, and that's because you're taking certain ideas seriously. If you have galaxy-brained the idea of the St. Petersburg Paradox, it seems like Alameda-style fraud is +EV (see the toy sketch at the end of this comment).
I am pushing back because, if you believe that you are constantly being simulated to see what sort of decision agent you are, you are going to react extremely to every slight and that's because you're taking certain ideas seriously. If you have galaxy brained the idea that you're being simulated to see how you react, killing Jamie's parents isn't even really killing Jamie's parents, it's showing what sort of decision agent you are to your simulators.
In both cases, "they did X because they believed Y, which implies X" seems like a more parsimonious explanation for their behaviour.
(To be clear: I endorse neither of these ideas, even if I was previously positive on MIRI style decision theory research.)
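As a concrete illustration of the double-or-nothing logic described above (a toy sketch with numbers I made up, not anything SBF actually computed or said): if you keep going all-in on a bet that doubles your wealth with probability 0.55, a pure expected-value maximizer endorses every single bet, even though the probability of eventual ruin goes to one.

```python
# Toy model of "keep taking double-or-nothing bets" reasoning.
# Illustrative numbers only: each all-in bet doubles your wealth with
# probability 0.55, otherwise you lose everything.
p_win = 0.55

def expected_wealth(n_bets: int, start: float = 1.0) -> float:
    # Expected wealth after n all-in bets: (2 * p_win)^n * start.
    return (2 * p_win) ** n_bets * start

def survival_probability(n_bets: int) -> float:
    # Probability of never going bust across n all-in bets: p_win^n.
    return p_win ** n_bets

for n in (10, 50, 100):
    print(f"n={n}: E[wealth]={expected_wealth(n):.3g}, P(not ruined)={survival_probability(n):.3g}")

# Approximate output:
# n=10:  E[wealth]=2.59,     P(not ruined)=0.00253
# n=50:  E[wealth]=117,      P(not ruined)=1.04e-13
# n=100: E[wealth]=1.38e+04, P(not ruined)=1.09e-26
```

Expected wealth keeps growing while the chance of still being solvent collapses; any sublinear utility function or survival constraint breaks the argument immediately, which is why the bullet-biting on raw EV is doing all the work here.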
I am pushing back because, if you are St. Petersburg Paradox-pilled like SBF and make public statements that actually you should keep taking double or nothing bets, perhaps you are more likely to make tragic betting decisions, and that's because you're taking certain ideas seriously. If you have galaxy-brained the idea of the St. Petersburg Paradox, it seems like Alameda-style fraud is +EV.
This is conceding a big part of your argument. You’re basically saying, yes, SBF’s decision was -EV according to any normal analysis, but according to a particular incorrect (“galaxy-brained”) analysis, it was +EV.
(Aside: what was actually the galaxy-brained analysis that’s supposed to have led to SBF’s conclusion, according to you? I don’t think I’ve seen it described, and I suspect this lack of a description is not a coincidence; see below.)
There are many reasons someone might make an error of judgement—but when the error in question stems (allegedly) from an incorrect application of a particular theory or idea, it makes no sense to attribute responsibility for the error to the theory. And as the mistake in question grows more and more outlandish (and more and more disconnected from any re...
If people inevitably sometimes make mistakes when interpreting theories, and theory-driven mistakes are more likely to be catastrophic than the mistakes people make when acting according to "atheoretical" learning from experience and imitation, then unusually theory-driven people are more likely to make catastrophic mistakes. In the absence of a way to prevent people from sometimes making mistakes when interpreting theories, this seems like a pretty strong argument in favor of atheoretical learning from experience and imitation!
This is particularly pertinent if, in a lot of cases where more sober theorists tend to say, "Well, the true theory wouldn't have recommended that," the reason the sober theorists believe that is because they expect true theories to not wildly contradict the wisdom of atheoretical learning from experience and imitation, rather than because they've personally pinpointed the error in the interpretation.
("But I don't need to know the answer. I just recite to myself, over and over, until I can choose sleep: It all adds up to normality.")
And that's even if there is an error. A reckless financier who accepts an 89% chance of losing it all for an 11% chance of dectupling their empire would be rational if they truly had linear utility for money. (Even while sober people with sublinear utility functions shake their heads at the allegedly foolish spectacle of the bankruptcy in 89% of possible worlds.)
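Spelling out that arithmetic (treating "losing it all" as wealth going to zero; the numbers are just the ones from the paragraph above):

```latex
% The gamble from the comment: 11% chance of 10x-ing, 89% chance of losing everything.
% Linear utility in wealth: the bet is better than standing still.
\[ \mathbb{E}[W] = 0.11 \cdot 10\,W_0 + 0.89 \cdot 0 = 1.1\,W_0 > W_0 \]
% Sublinear (log) utility: the same bet is infinitely bad, because total ruin
% sends \log W to negative infinity.
\[ \mathbb{E}[\log W] = 0.11 \cdot \log(10\,W_0) + 0.89 \cdot \log(0^{+}) = -\infty \]
```

Under linear utility the gamble clears the bar; under any utility function that treats total ruin as unboundedly bad, it never does.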
I think the causality runs the other way though; people who are crazy and grandiose are likely to come up with spurious theories to justify actions they wanted to take anyway. Experience and imitation shows us that non-crazy people successfully use theories to do non-crazy things all the time, so much so that you probably take it for granted.
But if group members are insecure enough, or if there is some limited pool of resources to divide up that each member really wants for themselves, then each member experiences a strong pressure to signal their devotion harder and harder, often burning substantial personal resources.
To add to this: if the group leaders seem anxious or distressed, then one of the ways in which people may signal devotion is by also being anxious and distressed. This will then make everything worse - if you're anxious, you're likely to think poorly and fixate on what you think is wrong without necessarily being able to do any real problem-solving around it. It also causes motivated reasoning about how bad everything is, so that one could maintain that feeling of distress.
In various communities there's often a (sometimes implicit, sometimes explicit) notion of "if you're not freaked out by what's happening, you're not taking things seriously enough". E.g. to take an example from EA/rationalist circles, this lukeprog post, while not quite explicitly saying that, reads to me as coming close (I believe that Luke only meant to say that it's good for people to take action, but the way it's phrased, it implie...
I feel like consequentialists are more likely to go crazy due to not being grounded in deontological or virtue-ethical norms of proper behavior. It's easy to think that if you're on track to saving the world, you should be able to do whatever is necessary, however heinous, to achieve that goal. I didn't learn to stop seeing people as objects until I leaned away from consequentialism and toward the anarchist principle of unity of means and ends (which is probably related to the categorical imperative). E.g. I want to live in a world where people are respected as individuals, so I have to respect them as individuals - whereas maximizing individual-respect might lead me to do all sorts of weird things to people now in return for some vague notion of helping lots more future people.
I think drugs and non-standard lifestyle choices are a contributing factor. Messing with one's biology / ignoring the default lifestyle in your country to do something very non-standard is riskier and less likely to turn out well than many imagine.
Everyone, also read the comments on the EA website; the top one makes a great point:
while EA/rationalism is not a cult, it contains enough ingredients of a cult that it’s relatively easy for someone to go off and make their own.
To avoid derailing the debate towards the definition of cult etc., let me paraphrase it as:
EA/rationalism is not an evil project, but it is relatively easy for someone to start an evil project by recruiting within the EA/rationalist ecosystem. (As opposed to starting an evil project somewhere else.)
This is how:
EA/rationalism in general [...] lacks enforced conformity and control by a leader. [...]
However, what seems to have happened is that multiple people have taken these base ingredients and just added in the conformity and charismatic leader parts. You put these ingredients in a small company or a group house, put an unethical or mentally unwell leader in charge, and you have everything you need for an abusive [...] environment. [...] This seems to have happened multiple times already.
I didn't spend much time thinking about it, but I suspect that the lack of "conformity and control" in EA/rationalism may actually be a weakness, from this perspective. Wh...
People sometimes say that cult members tend to have conflicts that lead to them joining the cult. Recently I've been wondering if this is an underrated aspect of cultishness.
Let's take the LaSota crew as an example. As I understand, they were militant vegans.
And if I understand correctly, vegans are concerned about the dynamic where, in order to obtain animal flesh to eat, people usually hire people to raise lots of animals in tiny indoor spaces, sometimes cutting off body parts such as beaks if they are too irritable. Never letting them out to live freely. Taking their children away shortly after they give birth. Breeding them to grow rapidly but also often having genetic disorders that cause a lot of pain and disability. And basically just having them live like that until they get killed and butchered.
And from what I understand, society often tries to obscure this. People get uncomfortable and try to change the subject when you talk about it. People might make laws to make it hard to film and share what's going on. People come up with convoluted denials of animals having feelings. And so on.
(I am not super certain about the above two paragraphs because I haven't investigated it m...
[note: I'm not particularly EA, beyond the motte of caring about others and wanting my activities to be effective. ]
I think this is basically correct. EA tends to attract outliers who are susceptible to claims of aggrandizement - telling themselves and being told they're the heroes in the story. It reinforces this with contrarian-ness, especially on dimensions with clever, math-sounding legible arguments behind them. And then reinforces that "effective" is really about the biggest numbers you can plausibly multiply your wild guesses out to.
Until recently, it was all circulating in a pile of free money driven by the related insanity of crypto and tech investment, which seemed to have completely forgotten that zero interest rates were unlikely to continue forever, and that actually producing stuff would eventually be important.
[ epistemic status for next section: provocative devil's advocate argument ]
The interesting question is "sure, it's crazy, but is it wrong?" I suspect it is wrong - the multiplicative factors into the future are extremely tenuous. But in the event that this level of commitment and intensity DOES cause alignment to be solved in time, it's arguable that all the insanity is worth it. If your advice makes the efforts less individually harmful, but also a little bit less effective, it could be a net harm to the universe.
I'm focusing on the aspects specific to rationalism and effective altruism that could lead to people who nominally are part of the rationality community being crazy at a higher rate than one would expect. From your post, I got the following list:
I may be missing some but these are all the aspects that stood out to me. From my perspective, the #1 most important cause of the craziness that sometimes occurs in nominally rationalist communities is that rationalists reject tradition. This kind of falls under the fewer norms category but I don't think 'fewer norms' really captures it.
A lot of people will naturally do crazy things without the strict social rules and guidelines that humans have operated with for hundreds of years. The same rules that have been slowly eroding since 1900. And nominally rationalist communities are kind of at the forefront of eroding those social rules. Rationalists accept as normal ideas like polyamory, group homes (in adulthood as a long term situation), drug use, atheism, mysticism, brain-hacking, transgenderism, sadomasochism, and a whole slew of other...
It’s not clear to me what “crazy” means in this post & how it relates to something like raising the sanity waterline. A clearer idea of what you mean by crazy would, I think, dissolve the question.
The first one seems like it would describe most people, e.g. many, many people repeatedly drink enough alcohol to predictably acutely regret it later.
The second would seem to exclude incurable cases, and I don’t see how to repair that defect without including ordinary religious people.
The third would also seem to include ordinary religious people.
I think these problems are also problems with the OP’s frame. If taken literally, the OP is asking about a currently ubiquitous or at least very common aspect of the human condition, while assuming that it is rare, intersubjectively verified by most, and pathological.
My steelman of the OP’s concern would be something like “why do people sometimes suddenly, maladaptively, and incoherently deviate from the norm?”, and I think a good answer would take into account ways in which the norm is already maladaptive and incoherent, such that people might legitimately be sufficiently desperate to accept that sort of deviance as better for them in expectation than whatever else was happening, instead of starting from the assumption that the deviance itself is a mistake.
If it’s hard to see how apparently maladaptive deviance might not be a mistake, consider a North Korean Communist asking about attempted defectors - who observably often fail, end up much worse off, and express regret afterwards - “why do our people sometimes turn crazy?”. From our perspective out here it’s easy to see what the people asking this question are missing.
This still leaves me confused about why these people made such terrible mistakes. Many people can look at their society and realize how it is cognitively distorting and tricking them into evil behavior. It seems aggressively dumb to then decide that personally murdering people you think are evil is straightforwardly fine and a good strategy, or that you have psychic powers and should lock people in rooms.[1] I think there are more modest proposals, like seasteading or building internet communities or legalizing prediction markets, that have a strong shot of fixing a chunk of the insanity of your civilization without leaving you entirely out in the wilderness, having to rederive everything for yourself and leading you to shooting yourself in the foot quite so quickly.
I expect all North Korean defectors will get labeled evil and psychotic by the state. Like a sheeple, I don't think all such ones will be labeled this way by everyone in my personal society, though I straightforwardly acknowledge that a substantial fraction will. I think there were other options here that were less... wantonly dysfunctional.
Or stealing billions of dollars from people. But to be honest, that one...
I think part of what happens in these events is that they reveal how much disorganized or paranoid thought went into someone's normal persona. You need to have a lot of trust in the people around you to end up with a plan like seasteading or prediction markets - and I notice that those ideas have been around for a long time without visibly generating a much saner & lower-conflict society, so it does not seem like that level of trust is justified.
A lot of people seem to navigate life as though constantly under acute threat and surveillance (without a clear causal theory of how the threat and surveillance are paid for), expecting to be acutely punished the moment they fail to pass as normal - so things they report believing are experienced as part of the act, not the base reality informing their true sense of threat and opportunity. So it's no wonder that if such people get suddenly jailbroken without adequate guidance or space for reflection, they might behave like a cornered animal and suddenly turn on their captors seemingly at random.
For a compelling depiction of how this might feel from the inside, I strongly recommend John Carpenter's movie They Live (1988), whi...
I have different hypotheses / framings. I will offer them. If you wish to discuss any of them in more detail, please reach out to me via email or PM. Happy to converse!
//
Mythical/Archetypal take:
There are large-scale, old, and powerful egregores fighting over the minds of individuals and collectives. They are not always very friendly to human interests or values. In some cases, they are downright evil. (I'd claim the Marxist egregore is a pretty destructive one.)
The damage done by these egregores is multigenerational. It didn't start wi...
fwiw I think stealing money from mostly-rich-people in order to donate it isn't obviously crazy. Decouple this claim from anything FTX did in particular, since I know next to nothing about the details of what happened there. From my perspective, it could be they were definite villains or super-ethical risk-takers (low prior).
Thought I'd say it because I definitely feel reluctance to say so. I don't like this feeling, and it seems like good anti-bandwagon policy to say a thing when one feels even slight social pressure to shut up.
I personally know more than one person who had the majority of their life savings stolen from them, who put it into FTX in part because of the trust Sam had in the EA ecosystem. I think there's a pretty strong Schelling line (supported and enforced by the law) against theft, such that even if it is worth it on naive utilitarian terms I am strongly in favor of punishing and imprisoning anyone who does so, so that people can work together safe in the knowledge that all the resources they've worked hard to earn won't be straightforwardly taken from them.
(In this comment I'm more trying to say "massive theft should be harshly punished regardless of intention" than say "I know the psychology behind why SBF, Caroline Ellison, and others, stole everyone's money".)
My honest opinion is that Ziz got several friends of mine killed. So I don't exactly have a high opinion of her. But I have never heard of Ziz referring to themselves as LaSota. It's honestly toxic not to use people's preferred names. It's especially toxic if they are trans, but the issue isn't restricted to trans people. So I'd strongly prefer people refer to Ziz as Ziz.
I think this position has some merit, though I disagree. I think Ziz is a name that is hard to Google and get context on, and also feels like it's chosen with intimidation in mind. "LaSota" is me trying to actively be neutral and not choose a name that they have actively disendorsed, but while also making it a more unique identifier, not misgendering them (like their full legal name would), and not contributing to more bad dynamics by having a "cool name for the community villain", which I really don't think has good consequences.
I think when it comes to people who get people killed, it's justified to reveal all the names they go by in the interest of public safety, even if they don't like it.
Probably be grounded in more than one social group. Even being part of two different high-intensity groups seems like it should reduce the dynamics here a lot.
Worked well for me!
Eric Chisholm likes to phrase this principle as "the secret to cults is to be in at least two of them".
- Don’t put yourself into positions of insecurity. […]
This seems like it points in the wrong direction to me. I'd instead say something like "look for your own insecurities and then look closely at the ones you find". But the current thing you've said sounds like "avoid wherever your insecurities might manifest (because they're fixed)".
[How to resolve insecurities? Coherence Therapy.]
I think there's a commonly-held belief that a feeling of belonging is something that we can get from other people, but I think this is a misconception. Stable confidence doesn't co...
Good breakdown of one of the aspects in all this. The insecurity/desperation topic is a really hard one to navigate well, but I agree it's really important.
Hard because when someone feels like an outsider, a group of other likeminded outsiders will naturally want to help them and welcome them, and it can be an uncomplicated good to do so. Important because if someone has only one source to supply support, resources, social needs, etc, they are far more likely to turn desperate or do desperate things to maintain their place in the community.
Does this mea...
Hot take: to the extent that EAs and rationalists turn crazy, part of the problem involves that some of their focuses include existential risk + having very low discount rates for the future.
To explain more, I think that utilitarianism is maybe a part of the problem, but it's broader than that. The bigger problem is that once you fundamentally believe that we will all die of something, and that your group can control the chances of extinction, that's a fast road to craziness, given that most of these existential risks probably wouldn't materialize anyway, and imp...
Seems like the forces that turn people crazy are the same ones that lead people to do anything good and interesting at all. At least for EA, a core function of orgs/elites/high status community members is to make the kind of signaling you describe highly correlated with actually doing good. Of course it seems impossible to make them correlate perfectly, and that’s why settings with super high social optimization pressure (like FTX) are gonna be bad regardless.
But (again for EA specifically) I suspect the forces you describe would actually be good to increas...
...To any individual in a group, it can easily be the case that they think the group standard seems dumb, but in a situation of risk aversion, the important part is that you do things that look to everyone like the kind of thing that others would think is part of the standard. In practice this boils down to a very limited kind of reasoning where you do things that look vaguely associated with whatever you think the standard is, often without that standard being grounded in much of any robust internal logic. And doing things that are inconsistent with the actu
Most social groups will naturally implement an "in-group / out-group" identifier of some kind and associated mechanisms to apply this identifier on their members. There are a few dynamics at play here:
I think I might have a promising and better intervention for preventing individual EAs and Rationalists from “turning crazy”. What would you want to do with it?
Epistemic status: This is a pretty detailed hypothesis that I think overall doesn’t add up to more than 50% of my probability mass on explaining datapoints like FTX, Leverage Research, the LaSota crew etc., but is still my leading guess for what is going on. I might also be really confused about the whole topic.
Since the FTX explosion, I’ve been thinking a lot about what caused FTX and, relatedly, what caused other similarly crazy- or immoral-seeming groups of people in connection with the EA/Rationality/X-risk communities.
I think there is a common thread between a lot of the people behaving in crazy or reckless ways, that it can be explained, and that understanding what is going on there might be of enormous importance in modeling the future impact of the extended LW/EA social network.
The central thesis: "People want to fit in"
I think the vast majority of the variance in whether people turn crazy (and ironically also whether people end up aggressively “normal”) is dependent on their desire to fit into their social environment. The forces of conformity are enormous and strong, and most people are willing to quite drastically change how they relate to themselves, and what they are willing to do, based on relatively weak social forces, especially in the context of a bunch of social hyperstimulus (lovebombing is one central example of social hyperstimulus, but also twitter-mobs and social-justice cancelling behaviors seem similar to me in that they evoke extraordinarily strong reactions in people).
My current model of this kind of motivation in people is quite path-dependent and myopic. Even if someone could leave a social context that seems kind of crazy or abusive to them and find a different social context that is better, with often only a few weeks of effort, they rarely do this (they won't necessarily find a great social context, since social relationships do take quite a while to form, but at least when I've observed abusive dynamics, it wouldn't take them very long to find one that is better than the bad situation they are currently in). Instead people are very attached, much more than I think rational choice theory would generally predict, to the social context that they end up in, with people very rarely even considering the option of leaving and joining another one.
This means that I currently think that the vast majority of people (around 90% of the population or so) are totally capable of being pressured into adopting extreme beliefs, being moved to extreme violence, or participating in highly immoral behavior, if you just put them into a social context where the incentives push in the right direction (see also Milgram and the effectiveness of military drafts).
In this model, the primary reason for why people are not crazy is because social institutions and groups that drive people to extreme action tend to be short lived. The argument here is an argument from selection, not planning. Cults that drive people to extreme action die out quite quickly since they make enemies, or engage in various types of self-destructive behavior. Moderate religions that include some crazy stuff, but mostly cause people to care for themselves and not go crazy, survive through the ages and become the primary social context for a large fraction of the population.
There is still a question of how you end up with groups of people who do take pretty crazy beliefs extremely seriously. I think there are a lot of different attractors that cause groups to end up with more of the crazy kind of social pressure. Sometimes people who are more straightforwardly crazy, who have really quite atypical brains, end up in positions of power and set a bunch of bad incentives. Sometimes it’s lead poisoning. Sometimes it’s sexual competition. But my current best guess for what explains the majority of the variance here is virtue-signaling races combined with evaporative cooling.
Eliezer has already talked a bunch about this in his essays on cults, but here is my current short story for how groups of people end up having some really strong social forces towards crazy behavior.
I think the central driver in this story is the same central driver that causes most people to be boring, which is the desire to fit in. Same force, but if you set up the conditions a bit differently, and add a few additional things to the mix, you get pretty crazy results.
Applying this model to EA and Rationality
I think the primary way the EA/Rationality community creates crazy stuff is by the mechanism above. I think a lot of this is just that we aren’t very conventional and so we tend to develop novel standards and social structures, and those aren’t selected for not-exploding, and so things we do explode more frequently. But I do also think we have a bunch of conditions that make the above dynamics more likely to happen, and also make the consequences of the above dynamics worse.
But before I go into the details of the consequences, I want to talk a bit more about the evidence I have for this being a good model.
Now, I think a bunch of EA and Rationality stuff tends to make the dynamics here worse:
Now one might think that because we have a lot of smart people, we might be able to avoid the worst outcomes here, by just not enforcing extreme standards that seem pretty crazy. And indeed I think this does help! However, I also think it’s not enough because:
Social miasma is much dumber than the average member of a group
I think a key point to pay attention to in what is going on in these kind of runaway signaling dynamics is: “how does a person know what the group standard is?”.
And the short answer to that is “well, the group standard is what everyone else believes the group standard is”. And this is the exact context in which social miasma dynamics come into play. To any individual in a group, it can easily be the case that they think the group standard seems dumb, but in a situation of risk aversion, the important part is that you do things that look to everyone like the kind of thing that others would think is part of the standard. In practice this boils down to a very limited kind of reasoning where you do things that look vaguely associated with whatever you think the standard is, often without that standard being grounded in much of any robust internal logic. And things that are inconsistent with the actual standard upon substantial reflection do not actually get punished, as long as they look like the kind of behavior that looks like it was generated by someone trying to follow the standard.
(Duncan gives a bunch more gears and details on this in his “Common Knowledge and Social Miasma” post: https://medium.com/@ThingMaker/common-knowledge-and-miasma-20d0076f9c8e)
How do people avoid turning crazy?
Despite me thinking the dynamics above are real and common, there are definitely things that both individuals and groups can do to make this kind of craziness less likely, and less bad when it happens.
First of all, there are some obvious things this theory predicts:
There are a lot of other dynamics that I think are relevant here, and I think there are a lot more things one can do to fight against these dynamics, and there are also a ton of other factors that I haven’t talked about (willingness to do crazy mental experiments, contrarianism causing active distaste for certain forms of common sense, some people using a bunch of drugs, high price of Bay Area housing, messed up gender-ratio and some associated dynamics, and many more things). This is definitely not a comprehensive treatment, but it feels like currently one of the most important pieces for understanding what is going on when people in the extended EA/Rationality/X-Risk social network turn crazy in scary ways.