Epistemic status: This is a pretty detailed hypothesis that I think overall doesn’t add up to more than 50% of my probability mass on explaining datapoints like FTX, Leverage Research, the LaSota crew etc., but is still my leading guess for what is going on. I might also be really confused about the whole topic.

Since the FTX explosion, I’ve been thinking a lot about what caused FTX and, relatedly, what caused other similarly crazy- or immoral-seeming groups of people in connection with the EA/Rationality/X-risk communities. 

I think there is a common thread between a lot of the people behaving in crazy or reckless ways, that it can be explained, and that understanding what is going on there might be of enormous importance in modeling the future impact of the extended LW/EA social network.

The central thesis: "People want to fit in"

I think the vast majority of the variance in whether people turn crazy (and, ironically, also in whether people end up aggressively “normal”) comes down to their desire to fit into their social environment. The forces of conformity are enormous and strong, and most people are willing to quite drastically change how they relate to themselves, and what they are willing to do, on the basis of relatively weak social forces, especially in the context of social hyperstimulus (lovebombing is one central example, but Twitter mobs and social-justice cancelling behaviors seem similar to me, in that they evoke extraordinarily strong reactions in people).

My current model of this kind of motivation is quite path-dependent and myopic. Even when someone could leave a social context that seems crazy or abusive and find a better one, often with only a few weeks of effort, they rarely do. (They won't necessarily find a great social context, since social relationships take quite a while to form, but at least in the abusive dynamics I've observed, it wouldn't have taken people long to find something better than the bad situation they were in.) Instead, people are very attached to the social context they end up in, much more than I think rational choice theory would generally predict, and very rarely even consider the option of leaving and joining another one.

This means that I currently think that the vast majority of people (around 90% of the population or so) are totally capable of being pressured into adopting extreme beliefs, being moved to extreme violence, or participating in highly immoral behavior, if you just put them into a social context where the incentives push in the right direction (see also Milgram and the effectiveness of military drafts). 

In this model, the primary reason why people are not crazy is that social institutions and groups that drive people to extreme action tend to be short-lived. The argument here is an argument from selection, not planning. Cults that drive people to extreme action die out quite quickly, since they make enemies or engage in various types of self-destructive behavior. Moderate religions that include some crazy stuff, but mostly cause people to care for themselves and not go crazy, survive through the ages and become the primary social context for a large fraction of the population.

There is still a question of how you end up with groups of people who do take pretty crazy beliefs extremely seriously. I think there are a lot of different attractors that cause groups to end up with more of the crazy kind of social pressure. Sometimes people who are more straightforwardly crazy, who have really quite atypical brains, end up in positions of power and set a bunch of bad incentives. Sometimes it’s lead poisoning. Sometimes it’s sexual competition. But my current best guess for what explains the majority of the variance here is virtue-signaling races combined with evaporative cooling.

Eliezer has already talked a bunch about this in his essays on cults, but here is my current short story for how groups of people end up having some really strong social forces towards crazy behavior. 

  1. There is a relatively normal social group.
  2. There is a demanding standard that the group is oriented around, which is external to any specific group member. This can be something like “devotion to god” or it can be something like the EA narrative of trying to help as many people as possible. 
  3. When individuals signal that they are living their life according to the demanding standard, they get status and respect. The inclusion criterion in the group is whether someone is sufficiently living up to the demanding standard, according to vague social consensus.
  4. At the beginning this looks pretty benign and like a bunch of people coming together to be good improv theater actors or something, or to have a local rationality meetup. 
  5. But if group members are insecure enough, or if there is some limited pool of resources to divide up that each member really wants for themselves, then each member experiences a strong pressure to signal their devotion harder and harder, often burning substantial personal resources.
  6. People who don’t want to live up to the demanding standard leave. This causes evaporative cooling, which raises the standard for the people who remain. Frequently this also causes the group to lose critical mass.
  7. The preceding steps cause a runaway signaling race in which people increasingly devote their resources to living up to the group's extreme standard, and profess more and more extreme beliefs in order to signal that they are living up to that standard.
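The runaway dynamic in steps 5-7 can be sketched as a toy simulation (the "devotion" abstraction and all parameters here are made up purely for illustration, not a claim about any real group):

```python
import random

random.seed(0)

# Toy model of evaporative cooling in a signaling race. Each member has
# a fixed "devotion" level; the group standard is the mean devotion of
# the current members. Members who lag too far below the standard
# leave, which raises the mean, which pushes out the next tier, etc.
members = [random.gauss(0.5, 0.15) for _ in range(1000)]
tolerance = 0.1  # how far below the standard you can lag before leaving

history = []
while True:
    standard = sum(members) / len(members)
    history.append((len(members), standard))
    remaining = [m for m in members if m >= standard - tolerance]
    if len(remaining) == len(members):
        break  # fixed point: nobody else is pushed out
    members = remaining

print("initial (members, standard):", history[0])
print("final   (members, standard):", history[-1])
```

Depending on how tight the tolerance is relative to the spread of devotion, the group either settles quickly or sheds most of its members while the standard ratchets upward, which matches the "lose critical mass" failure mode in step 6.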

I think the central driver in this story is the same central driver that causes most people to be boring, which is the desire to fit in. Same force, but if you set up the conditions a bit differently, and add a few additional things to the mix, you get pretty crazy results. 

Applying this model to EA and Rationality

I think the primary way the EA/Rationality community creates crazy stuff is by the mechanism above. I think a lot of this is just that we aren’t very conventional and so we tend to develop novel standards and social structures, and those aren’t selected for not-exploding, and so things we do explode more frequently. But I do also think we have a bunch of conditions that make the above dynamics more likely to happen, and also make the consequences of the above dynamics worse. 

But before I go into the details of the consequences, I want to talk a bit more about the evidence I have for this being a good model. 

  1. Eliezer wrote about something quite close to this 10+ years ago and derived it from a bunch of observations of other cults, before our community had really shown much of any of these dynamics, so it wins some “non-hindsight bias” points.
  2. I think this fits the LaSota crew situation in a lot of detail. A bunch of insecure people who really want a place to belong find the LaSota crew, which offers them a place to belong, but comes with (pretty crazy) high standards. People go crazy trying to demonstrate devotion to the crazy standard.
  3. I also think this fits the FTX situation quite well. My current best model of what happened at an individual psychological level was many people being attracted to FTX/Alameda because of the potential resources, then many rounds of evaporative cooling as anyone who was not extremely hardcore according to the group standard was kicked out, with there being a constant sense of insecurity for everyone involved that came from the frequent purges of people who seemed to not be on board with the group standard.
  4. This also fits my independent evidence from researching cults and other more extreme social groups, and what the dynamics there tend to be. One concrete prediction of this model is that the people who feel most insecure tend to be driven to the most extreme actions, which is borne out in a bunch of cult situations. 

Now, I think a bunch of EA and Rationality stuff tends to make the dynamics here worse: 

  1. We tend to attract people who are unwelcome in other parts of the world. This includes a lot of autistic people, trans people, atheists from religious communities, etc.
  2. The standards that we have in our groups, especially within EA, have signaling spirals that pass through a bunch of possibilities that sure seem really scary, like terrorism or fraud (unlike e.g. a group of monks, who might have signaling spirals that cause them to meditate all day, which can be individually destructive but does not have a ton of externalities). Indeed, many of our standards directly encourage *doing big things* and *thinking worldscale*.
  3. We are generally quite isolationist, which means that there are fewer norms that we share with more long-lived groups which might act as antibodies for the most destructive kinds of ideas (importantly, I think these memes are not optimized for not causing collateral damage in other ways; indeed, many stability-memes make many forms of innovation or growth or thinking a bunch harder, and I am very glad we don’t have them).
  4. We attract a lot of people who are deeply ambitious (and also our standards encourage ambition), which means even periods of relative plenty can induce strong insecurities because people’s goals are unbounded, they are never satisfied, and marginal resources are always useful.

Now one might think that because we have a lot of smart people, we might be able to avoid the worst outcomes here, by just not enforcing extreme standards that seem pretty crazy. And indeed I think this does help! However, I also think it’s not enough because: 

Social miasma is much dumber than the average member of a group

I think a key point to pay attention to in these kinds of runaway signaling dynamics is: “how does a person know what the group standard is?”

And the short answer to that is “well, the group standard is what everyone else believes the group standard is”. This is the exact context in which social miasma dynamics come into play. Any individual in a group can easily think the group standard seems dumb, but in a situation of risk aversion, the important part is that you do things that look to everyone like the kind of thing others would think is part of the standard. In practice this boils down to a very limited kind of reasoning where you do things that look vaguely associated with whatever you think the standard is, often without that standard being grounded in much of any robust internal logic. And things that are, on substantial reflection, inconsistent with the actual standard do not actually get punished, as long as they look like the kind of behavior generated by someone trying to follow the standard.

(Duncan gives a bunch more gears and details on this in his “Common Knowledge and Social Miasma” post: https://medium.com/@ThingMaker/common-knowledge-and-miasma-20d0076f9c8e)

How do people avoid turning crazy? 

While I think the dynamics above are real and common, there are definitely things that both individuals and groups can do to make this kind of craziness less likely, and less bad when it happens.

First of all, there are some obvious things this theory predicts: 

  1. Don’t put yourself into positions of insecurity. This is particularly hard if you do indeed have world-scale ambitions. Have warning flags against desperation, especially when that desperation is related to things that your in-group wants to signal. Also, be willing to meditate on not achieving your world-scale goals, because if you are too desperate to achieve them you will probably go insane (for this kind of reason, and also some others).
  2. Avoid groups with strong evaporative cooling dynamics. As part of that, avoid very steep status gradients within (or on the boundary of) a group. Smooth social gradients are better than strict in-and-out dynamics.
  3. Probably be grounded in more than one social group. Even being part of two different high-intensity groups seems like it should reduce the dynamics here a lot. 
  4. To some degree, avoid attracting people who have few other options, since it makes the already high switching and exit costs even higher.
  5. Confidentiality and obscurity feel like they worsen the relevant dynamics a lot, since they prevent other people from sanity-checking your takes (though this is also much more broadly applicable). For example, being involved in crimes makes it much harder to get outside feedback on your decisions, since telling people what decisions you are facing now exposes you to the risk of them outing you. Or working on dangerous technologies that you can't tell anyone about makes it harder to get feedback on whether you are making the right tradeoffs (since doing so would usually involve leaking some of the details behind the dangerous technology). 
  6. Combat general social miasma dynamics (e.g. by running surveys or otherwise collapsing a bunch of the weird social uncertainty that makes things insane). Public conversations seem like they should help a bunch, though my sense is that if the conversation ends up being less about the object-level and more about persecuting people (or trying to police what people think) this can make things worse. 

There are a lot of other dynamics that I think are relevant here, a lot more things one can do to fight against these dynamics, and also a ton of other factors that I haven’t talked about (willingness to do crazy mental experiments, contrarianism causing active distaste for certain forms of common sense, some people using a bunch of drugs, the high price of Bay Area housing, a messed-up gender ratio and some associated dynamics, and many more things). This is definitely not a comprehensive treatment, but it feels like currently one of the most important pieces for understanding what is going on when people in the extended EA/Rationality/X-Risk social network turn crazy in scary ways.

109 comments

The spiritual world is rife with bad communities, and I've picked up a trick for navigating them. Many of the things named in this post could broadly be construed under the heading of "weird power dynamics." Isolation creates weird power dynamics, poor optionality creates weird power dynamics, and so do drugs, skewed gender ratios, and so on.

When I spot a weird power dynamic I name it out loud to the group. A lot of bad groups will helpfully kick me out themselves. I naturally somewhat shy away from such actions of course, but an action that reliably loses me status points with exactly the people I don't want to be around is great.

It's the emperor's clothes principle: that which can be destroyed by being described by a beginner should be. And the parable illustrates something important about how it works: it needs to be sincere, not snark or criticism.

I feel like this seems helpful for something else, but I don't think it super accurately predicts which environments will give rise to more extremist behavior. 

Like, I am confident that the above strategy would not work very well if you point out the "weird power dynamics" of any of the world's largest religious communities, or any of the big corporations, or much of academia. Those places have tons of "weird power dynamics", but they don't give rise to extremist behavior. I expect all of those places to react very defensively and might kick you out if you point out all the weird power dynamics, but also, those power dynamics, while being "weird" will also still have been selected heavily to produce a stable configuration, and generally not cause people to go and do radical things.

Aleksi Liimatainen
Seems to me that those weird power dynamics have deleterious effects even if countervailing forces prevent the group from outright imploding. It's a tradeoff to engage with such institutions on their own terms and these days a nontrivial number of people seem to choose not to.
romeostevensit
I agree this does not carve the same shape as your post, I thought it was worth mentioning in this context and am curious what other techniques people might have stumbled upon.
ChristianKl
One key difference is the living arrangement. In all three of the groups you mentioned in the OP, living together and working together went hand in hand.
habryka
It's pretty normal for religious communities and universities to live and work in the same place, so this correlation doesn't feel super strong to me.
M. Y. Zuo
It seems there might be some confusion around what counts as 'weird power dynamics' between you and the parent. I would say that regardless of how weird the dynamics may appear from the outside, if the organization persists generation after generation, and even grows in influence, then it cannot be that weird in actuality.
Matt Goldenberg
Can we taboo weird here? What are you trying to say about power dynamics that last a long time?

[Mod edit by Raemon: I've locked a downstream thread, but copied Matt's last comment back up to this comment, which seemed to be trying to restate his question and get the conversation back on track.]
M. Y. Zuo
I cannot taboo another LW user's word choices? To clarify, if you are confused: I'm not 'habryka', nor am I a mod, nor has that user made any arrangements with me.

(Asking to "taboo X" is a common request on LessWrong and the in-person rationality community, requesting to replace the specific word with an equivalent but usually more mechanistic definition for the rest of the conversation. See also: Rationalist Taboo)

A moderator has deactivated replies on this comment
M. Y. Zuo
Which would make sense if this was my conversation, if I had first mentioned the word and then you responded. But it doesn't make sense to ask me when it's the other way around. I think 'Matt Goldenberg' must have gotten confused into thinking I was someone else.
Matt Goldenberg
No, I was specifically confused about your use of it, and your understanding of the OP.
M. Y. Zuo
To 'taboo' a word implies intentionally avoiding its use in the subsequent replies. It doesn't make sense to ask me to prevent 'habryka' from using a certain word in the future, because I don't possess the authority to force 'habryka' to do anything or not do anything. Are you confused about what 'taboo' means?
gjm
In this context I don't think it does mean "prevent it being used in subsequent replies", it means "please rephrase that thing you just said but without using that specific word".

You said (I paraphrase): if an organization prospers in the longish term, then its power dynamics can't really be very weird even if they look like it. Matt doesn't see how that follows and suspects that either he isn't understanding what you mean by "weird" or else you're using it in a confused way somehow. He thinks that if either of those is true, it'll be helpful if you try to be more explicit about exactly what property of an organization you're saying is inconsistent with its prospering for generations.

None of that requires you to stop other people using the word "weird" -- it's enough if you stop using it -- though if you make the effort Matt's suggesting and it seems helpful then habryka and/or romeostevensit might choose to follow suit, since you've suggested that they might be miscommunicating because of different unstated meanings of "weird". (I am to some extent guessing what Matt thinks and wants, but at the very least the foregoing is a possible thing he might be saying, that makes sense of his request that you taboo "weird" without any implication that you're supposed to stop other people using it.)

My default model before reading this post was: some people are very predisposed to craziness spirals. They're behaviorally well-described as "looking for something to go crazy about", not necessarily in a reflectively-endorsed sense, but in the sense that whenever they stumble across something about which one could go crazy (like e.g. lots of woo-stuff), they'll tend to go into a spiral around it.

"AI is likely to kill us all" is definitely a thing in response to which one can fall into a spiral-of-craziness, so we naturally end up "attracting" a bunch of people who are behaviorally well-described as "looking for something to go crazy about". (In terms of pattern matching, the most extreme examples tend to be the sorts of people who also get into quantum suicide, various flavors of woo, poorly executed anthropic arguments, poorly executed acausal trade arguments, etc.)

Other people will respond to basically-the-same stimuli by just... choosing to not go crazy (to borrow a phrase from Nate). They'll see the same "AI is likely to kill us all" argument and respond by doing something useful, or just ignoring it, or doing something useless but symbolic and not thinking too hard about it. ...

My default model had been "a large cluster of the people who are able to use their reasoning to actually get involved in the plot of humanity have overridden many Schelling fences and absurdity heuristics and similar, and so are using their reasoning to make momentous choices, and just weren't strong enough not to get some of it terribly wrong". Similar to the model from Reason as Memetic Immune Disorder.

habryka
I don't think Sam believed that AI was likely to kill that many people, or, if it did, that this would be that bad (since the AI might also have conscious experiences that are just as valuable as the human ones). I also think Leverage didn't really have much of an AI component. I think the LaSota crew maybe has a bit more of that, but I also feel like none of their beliefs are very load-bearing on AI, so I feel like this model doesn't predict reality super well.
evhub
I think he at least pretended to believe this, no? I heard him say approximately this when I attended a talk/Q&A with him once.
habryka
Huh, I remember talking to him about this, and my sense was that he thought the counterfactual of unaligned AI, compared to the counterfactual of whatever humanity would do instead, was relatively small (compared to someone with a utilitarian mindset deciding on the future), though also of course that there were some broader game-theoretic considerations that make it valuable to coordinate with humanity more broadly.

Separately, his probability on AI Risk seemed relatively low, though I don't remember any specific probability. Looking at the Future Fund worldview prize, I do see 15% as the position that at least the Future Fund endorsed, conditional on AI happening by 2070 (which I think Sam thought was plausible but not that likely), which is a good amount, so I think I must be misremembering at least something here.

I also think this fits the FTX situation quite well. My current best model of what happened at an individual psychological level was many people being attracted to FTX/Alameda because of the potential resources, then many rounds of evaporative cooling as anyone who was not extremely hardcore according to the group standard was kicked out, with there being a constant sense of insecurity for everyone involved that came from the frequent purges of people who seemed to not be on board with the group standard.

While a lot of this post fits with my model of the world (the threat of exile is something I can viscerally feel change what my beliefs are), the FTX part as-written is sufficiently non-concrete to me that I can't tell if it fits or doesn't fit with reality.

Things I currently believe about FTX/Alameda (including from off-the-internet information):

  • There was a fair amount of lying to investors from the start.
  • From the start it was very chaotic, with terrible security practices and most employees not knowing the net balance of the company within like a factor of 4x, or whether net worth was increasing or decreasing from week to week.
  • Massive amounts of legal risk constantly being taken
...

Yeah, FTX seems like a totally ordinary financial crime. You don't need utilitarianism or risk neutrality to steal customer money or take massive risks.

LaSota and Leverage said that they had high standards and were doing difficult things, whereas SBF said that he was doing the obvious things a little faster, a little more devoted to EV.

Jonas V
I think SBF rarely ever fired anyone, so "kicked out" seems wrong, but I heard that people who weren't behaving in the way SBF liked (e.g., recklessly risk-taking) got sidelined and often left on their own because their jobs became unpleasant or they had ethical qualms, which would be consistent with evaporative cooling.
habryka
Huh, this doesn't match with stories that I heard. Maybe there wasn't much formal firing, but my sense is many people definitely felt like they were fired, or pushed out of the group.

Separately from the firing, the consistent thing that I have heard is that at FTX there was a small inner circle consisting of between 5 and 15 people. It was usually pretty clear who was in there, though there were always 2-3 people who were kind of ambiguously entering it or being pushed out, and being out of the inner circle would mean you lost most of the power over the associated organization and ecosystem.
iceman

I suggest a more straightforward model: taking ideas seriously isn't healthy. Most of the attempts to paint SBF as not really an EA seem like weird reputational saving throws when he was around very early on and had rather deep conviction in things like the St. Petersburg Paradox...which seems like a large part of what destroyed FTX. And Ziz seemed to be one of the few people to take the decision theoretical "you should always act as if you're being simulated to see what sort of decision agent you are" idea seriously...and followed that to their downfall. I read the Sequences, get convinced by the arguments within, donate a six figure sum to MIRI...and have basically nothing to show for it at pretty serious opportunity costs. (And that's before considering Ziz's pretty interesting claims about how MIRI spent donor money.)

In all of these cases, the problem was individual confidence in ideas, not social effects.

My model is instead that the sort of people who are there to fit in aren't the people who go crazy; there are plenty of people in the pews who are there for the church but not the religion. The MOPs and Sociopaths seem to be much, much saner than the Geeks. If that's right, ra...

lc

What made Charles Manson's cult crazy in the eyes of the rest of society was not that they (allegedly) believed that a race war was inevitable, and that white people needed to prepare for it & be the ones that struck first. Many people throughout history who we tend to think of as "sane" have evangelized similar doctrines or agitated in favor of them. What made them "crazy" was how nonsensical their actions were even granted their premises, i.e. the decision to kill a bunch of prominent white people as a "false flag".

Likewise, you can see how LaSota's "surface" doctrine sort of makes sense, I guess. It would be terrible if we made an AI that only cared about humans and not animals or aliens, and that led to astronomical suffering. The Nuremberg trials were a good idea, probably for reasons that have their roots in acausally blackmailing people not to commit genocide. If the only things I knew about the Zizcult were that they believed we should punish evildoers, and that factory farms were evil, I wouldn't call them crazy. But then they go and (allegedly) waste Jamie Zajko's parents in a manner that doesn't further their stated goals at all and makes no tactical sense to any...

But then they go and (allegedly) waste Jamie Zajko's parents in a manner that doesn't further their stated goals at all and makes no tactical sense to anyone thinking coherently about their situation.

And yet that seems entirely in line with the "Collapse the Timeline" line of thinking that Ziz advocated.

Ditto for FTX, which, when one business failed, decided to commit multi-billion dollar fraud via their other, actually successful, business, instead of just shutting down Alameda and hoping that the lenders wouldn't be able to repo too much of the exchange.

And yet, that seems like the correct action if you sufficiently bullet-bite expected value and the St. Petersburg Paradox, which SBF did repeatedly in interviews.
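To make the bullet-biting concrete, here is a toy calculation (the numbers are purely illustrative, not a model of FTX's actual books): suppose each round offers a 51% chance to double your bankroll and a 49% chance of losing everything. Every individual bet is positive expected value, yet always betting everything almost surely ends at zero:

```python
# Toy St. Petersburg-style bullet-biting (illustrative numbers only).
# Each round: bet the entire bankroll on a 51% chance to double it,
# 49% chance to lose it all. Every single bet is +EV.
p_win = 0.51
rounds = 20

# After n all-in rounds you hold 2**n with probability p_win**n, else 0.
ev_multiplier = (2 * p_win) ** rounds  # expected bankroll multiplier
p_survival = p_win ** rounds           # chance you haven't gone bust

print(f"EV multiplier after {rounds} rounds: {ev_multiplier:.2f}")
print(f"P(still solvent): {p_survival:.2e}")
```

After 20 rounds the expected multiplier is about 1.49x, while the chance of still being solvent is about 1.4 in a million: "maximize EV each round" and "almost surely end up at zero" are entirely compatible, which is the sense in which biting this bullet is ruinous.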

lc

And yet, that seems like the correct action if you sufficiently bullet-bite expected value and the St. Petersburg Paradox, which SBF did repeatedly in interviews.

I am not making an argument that the crime was +EV but SBF was dealt a bad hand. Turning your entire business into the second-largest Ponzi scheme ever in order to save the smaller half is pretty apparently stupid, and ran an overwhelming chance of failure. There is no EV calculus where the SBF decision is a good one, except maybe one in which he ignores externalities to EA and is simply trying to support his status, and even then I hardly understand it.

And yet that seems entirely in line with the "Collapse the Timeline" line of thinking that Ziz advocated.

Right, it is possible that something like this was what they told themselves, but it's bananas. Imagine you're Ziz. You believe the entire lightcone is at risk of becoming a torture zone for animals at the behest of Sam Altman and Demis Hassabis. This threat is foundational to your worldview and is the premier casus belli for action. Instead of doing anything about that, you completely ignore this problem to go on the side quest of enacting retributive j...

My understanding of your point is that Manson was crazy because his plans didn't follow from his premises and had nothing to do with his core ideas. I agree, but I do not think that's relevant.

I am pushing back because, if you are St. Petersburg Paradox-pilled like SBF and make public statements that actually you should keep taking double-or-nothing bets, perhaps you are more likely to make tragic betting decisions, and that's because you're taking certain ideas seriously. If you have galaxy-brained the idea of the St. Petersburg Paradox, it seems like Alameda-style fraud is +EV.

I am pushing back because, if you believe that you are constantly being simulated to see what sort of decision agent you are, you are going to react extremely to every slight and that's because you're taking certain ideas seriously. If you have galaxy brained the idea that you're being simulated to see how you react, killing Jamie's parents isn't even really killing Jamie's parents, it's showing what sort of decision agent you are to your simulators.

In both cases, “they did X because they believe Y, which implies X” seems like a more parsimonious explanation for their behaviour.

(To be clear: I endorse neither of these ideas, even if I was previously positive on MIRI style decision theory research.)

dxu

I am pushing back because, if you are St. Petersburg Paradox-pilled like SBF and make public statements that actually you should keep taking double-or-nothing bets, perhaps you are more likely to make tragic betting decisions, and that's because you're taking certain ideas seriously. If you have galaxy-brained the idea of the St. Petersburg Paradox, it seems like Alameda-style fraud is +EV.

This is conceding a big part of your argument. You’re basically saying, yes, SBF’s decision was -EV according to any normal analysis, but according to a particular incorrect (“galaxy-brained”) analysis, it was +EV.

(Aside: what was actually the galaxy-brained analysis that’s supposed to have led to SBF’s conclusion, according to you? I don’t think I’ve seen it described, and I suspect this lack of a description is not a coincidence; see below.)

There are many reasons someone might make an error of judgement—but when the error in question stems (allegedly) from an incorrect application of a particular theory or idea, it makes no sense to attribute responsibility for the error to the theory. And as the mistake in question grows more and more outlandish (and more and more disconnected from any re...

If people inevitably sometimes make mistakes when interpreting theories, and theory-driven mistakes are more likely to be catastrophic than the mistakes people make when acting according to "atheoretical" learning from experience and imitation, then unusually theory-driven people are more likely to make catastrophic mistakes. In the absence of a way to prevent people from sometimes making mistakes when interpreting theories, this seems like a pretty strong argument in favor of atheoretical learning from experience and imitation!

This is particularly pertinent if, in a lot of cases where more sober theorists tend to say, "Well, the true theory wouldn't have recommended that," the reason the sober theorists believe that is because they expect true theories to not wildly contradict the wisdom of atheoretical learning from experience and imitation, rather than because they've personally pinpointed the error in the interpretation.

("But I don't need to know the answer. I just recite to myself, over and over, until I can choose sleep: It all adds up to normality.")

And that's even if there is an error. A reckless financier who accepts an 89% chance of losing it all for an 11% chance of dectupling their empire would be rational if they truly had linear utility for money. (Even while sober people with sublinear utility functions shake their heads at the allegedly foolish spectacle of the bankruptcy in 89% of possible worlds.)
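To make that arithmetic concrete, here's a small illustrative sketch (the 11%/10x figures are just the ones from the example above, not anyone's actual financial model):

```python
import random

random.seed(0)

# The bet from the example above: 11% chance of multiplying the stake
# tenfold, 89% chance of losing everything.
p_win, multiplier = 0.11, 10.0

# Single-bet expected value per unit staked, under linear utility for money:
ev = p_win * multiplier  # 0.11 * 10 = 1.1 > 1, so a linear-utility agent bets
print(ev)

# But an agent who re-stakes everything on ten consecutive such bets
# survives only with probability 0.11**10 (roughly 1 in 4 billion):
trials = 100_000
survivors = sum(
    all(random.random() < p_win for _ in range(10))
    for _ in range(trials)
)
print(survivors, "of", trials, "runs avoided bankruptcy")  # almost always 0
```

Each individual bet is +EV under linear utility, even though nearly every possible world ends in ruin; that's the gap the sublinear-utility onlookers are pointing at.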

I think the causality runs the other way though; people who are crazy and grandiose are likely to come up with spurious theories to justify actions they wanted to take anyway. Experience and imitation shows us that non-crazy people successfully use theories to do non-crazy things all the time, so much so that you probably take it for granted.

6philh
But of course no human financier has a utility function, let alone one that can be expressed only in terms of money, let alone one that's linear in money. So in this hypothetical, yes, there is an error. (SBF said his utility was linear in money. I think he probably wasn't confused enough to think that was literally true, but I do think he was confused about the math.)
1Noosphere89
This is related to a very important point: without more assumptions, there is no way to distinguish via outcomes between the following two cases: irrationality while pursuing your values, and being rational but having very different or strange values. (Also, I dislike the implication that it all adds up to normality, unless something else is meant or it's trivial, since you can't define normality without a context.)
3Noosphere89
Eh, I'm a little concerned in general, because this, without restrictions, could be used to redirect blame away from the theory even in cases where the implementation of a theory is evidence against the theory. The best example is historical non-capitalist societies, especially communist ones: when responding to criticism, communists roughly claimed that those societies weren't truly communist, and thus that communism could still work if it were truly implemented. That's the clearest example, but I'm sure there are others of this phenomenon.
2Lukas_Gloor
I don't think so. At the very least, it seems debatable. Biting the bullet in the St Petersburg paradox doesn't mean taking negative-EV bets. House-of-cards stuff ~never turns out well in the long run, and the fallout from an implosion also grows as you double down. Everything that's coming to light about FTX indicates it was a total house of cards. It seems really unlikely to me that most of these bets were positive even on fanatically risk-neutral, act-utilitarian grounds.

Maybe I'm biased because it's convenient to believe what I believe (that the instrumentally rational action is almost never "do something shady according to common sense morality"). Let's say it's defensible to see things otherwise. Even then, I find it weird that, because Sam had these views on St Petersburg stuff, people speak as though this explains everything about FTX epistemics: "That was excellent instrumental rationality we were seeing on display by FTX leadership, granted that they don't care about common sense morality and bite the bullet on St Petersburg."

At the very least, we should name and consider the other hypothesis, on which the St Petersburg views were more incidental (though admittedly still "characteristic"). On that other hypothesis, there's a specific type of psychology that makes people think they're invincible, which leads to them taking negative bets on any defensible interpretation of decision-making under uncertainty.
1Noosphere89
Who were you responding to? I didn't make the argument that you were responding to.
2Lukas_Gloor
Oh, I was replying to Iceman – mostly this part that I quoted:   (I think I've seen similar takes by other posters in the past.) I should have mentioned that I'm not replying to you.  I think I took such a long break from LW that I forgot that you can make subthreads rather than just continue piling on at the end of a thread.  
9ChristianKl
It sounds to me like they thought that Jamie would inherit a significant amount of money if they did that. They might have done it not only for reasons of retributive justice but to fund their whole operation.

But if group members are insecure enough, or if there is some limited pool of resources to divide up that each member really wants for themselves, then each member experiences a strong pressure to signal their devotion harder and harder, often burning substantial personal resources.

To add to this: if the group leaders seem anxious or distressed, then one of the ways in which people may signal devotion is by also being anxious and distressed. This will then make everything worse - if you're anxious, you're likely to think poorly and fixate on what you think is wrong without necessarily being able to do any real problem-solving around it. It also causes motivated reasoning about how bad everything is, so that one could maintain that feeling of distress.

In various communities there's often a (sometimes implicit, sometimes explicit) notion of "if you're not freaked out by what's happening, you're not taking things seriously enough". E.g. to take an example from EA/rationalist circles, this lukeprog post, while not quite explicitly saying that, reads to me as coming close (I believe that Luke only meant to say that it's good for people to take action, but the way it's phrased, it implie...

3Ben Pace
Do you have an example of this from other communities? I am not quickly thinking of other examples (I think corrupt leaders often try to give vibes of being calm and in control and powerful, not being anxious and worried).  And furthermore I basically buy the claim that if you're not freaked out by our civilization then you don't understand it. From my current vantage point I agree that people will imitate the vibe of the leadership, but I feel like you're saying "and the particular vibe of anxiousness is common for common psychological reasons" but I don't know why you think that or what psychological reasons you have in mind.
7Richard_Ngo
There's probably a version of this sentence that I'd be sympathetic to (e.g. maybe "almost everyone's emotional security relies on implicit assumptions about how competent civilization is, which are false"). But in general I am pretty opposed to claims which imply that there is one correct emotional reaction to understanding a given situation. I think it's an important component of rationality to notice when judgments smuggle in implicit standards (as per my recent post), which this is an example of. Having said that, it's also an important component of rationality to not reason your way out of ever being freaked out. If the audience reading this weren't LWers, then I probably wouldn't have bothered pushing back, since I think something like my rephrasing above is true for many people, which implies that a better understanding would make them freak out more. But I think that LWers in particular are more often making the opposite mistake, of assuming that there's one correct emotional reaction.
2Ben Pace
Your suggested sentence is basically what I had in mind.
1Noosphere89
Sorry, I'm getting confused and I don't understand this sentence. Are you literally saying that you can't reason your way out of being afraid? Because that would be a terrible guideline, for many reasons.
7Kaj_Sotala
I can't think of specific quotes offhand, but I feel like I've caught that kind of a vibe from some social justice and climate change people/conversations. E.g. I recall getting backlash for suggesting that climate change might not be an extinction risk.

I feel like consequentialists are more likely to go crazy due to not being grounded in deontological or virtue-ethical norms of proper behavior. It's easy to think that if you're on track to saving the world, you should be able to do whatever is necessary, however heinous, to achieve that goal. I didn't learn to stop seeing people as objects until I leaned away from consequentialism and toward the anarchist principle of unity of means and ends (which is probably related to the categorical imperative). E.g. I want to live in a world where people are respected as individuals, so I have to respect them as individuals - whereas maximizing individual-respect might lead me to do all sorts of weird things to people now in return for some vague notion of helping lots more future people.

2Viliam
In consequentialism, if you make a conclusion consisting of a dozen steps, and one of those steps is wrong, the entire conclusion is wrong. It does not matter whether the remaining steps are right. In theory, this could be fixed by assigning probabilities to individual steps, and then calculating the probability of the entire plan. But of course people usually don't do that. Otherwise they would notice that a plan with a dozen steps, even if they are 95% sure about each of them individually, is not very reliable.
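As a quick sanity check of those numbers (using the 95%-per-step figure from the comment above, and assuming the steps are independent and fully conjunctive):

```python
# Probability that a 12-step conjunctive plan succeeds, if each step is
# independently 95% likely to be correct:
p_step, steps = 0.95, 12
p_plan = p_step ** steps
print(round(p_plan, 3))  # 0.54: barely better than a coin flip
```

So a dozen individually quite-confident steps still leave the overall conclusion close to a toss-up, which is the point being made here.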
1Noosphere89
Only if it's a conjunctive argument. If it's disjunctive, then only 1 step has to be right for the argument to go through. As for the general conversation, I generally agree that consequentialism, especially the more extreme varieties lead to very weird consequences, but I'd argue that a lot of other ethical theories taken to an extreme would result in very bizarre consequences/conclusions.

I think drugs and non-standard lifestyle choices are a contributing factor. Messing with ones biology / ignoring the default lifestyle in your country to do something very non-standard is riskier and less likely to turn out well than many imagine. 

Everyone, read also the comments at the EA website; the top one makes a great point:

while EA/rationalism is not a cult, it contains enough ingredients of a cult that it’s relatively easy for someone to go off and make their own.

To avoid derailing the debate towards the definition of cult etc., let me paraphrase it as:

EA/rationalism is not an evil project, but it is relatively easy for someone to start an evil project by recruiting within the EA/rationalist ecosystem. (As opposed to starting an evil project somewhere else.)

This is how.

EA/rationalism in general [...] lacks enforced conformity and control by a leader. [...]

However, what seems to have happened is that multiple people have taken these base ingredients and just added in the conformity and charismatic leader parts. You put these ingredients in a small company or a group house, put an unethical or mentally unwell leader in charge, and you have everything you need for an abusive [...] environment. [...] This seems to have happened multiple times already. 

I didn't spend much time thinking about it, but I suspect that the lack of "conformity and control" in EA/rationalism may actually be a weakness, from this perspective. Wh...

6the gears to ascension
I'd suggest that what's needed is a strong immune system against conformity and control which does not itself require conformity or control: a generalized resistance to agentic domination by another being, of any kind, and, as part of it, a habit of joining up with others who are skilled at resisting pressure in response to an attempted pressure, without that joining-up creating the problems it attempts to prevent. I'm a fan of this essay on the topic, as well as other writing from that group.
1dr_s
I don't think the lack of a leadership is in itself the only issue here. I think the general framework is vulnerable to cultishness for a few more reasons. The first is that it generally encourages dallying with unconventional ideas, and eschewing the notion that if something sounds crazy to the average person then it must be wrong. In fact, there's plenty of "everyone else are the actually crazy ones" thinking, which might be necessary insofar as you want to try becoming more rational, but also means you now have fewer checks on your overall beliefs and behaviour. The other, related, reason is the focus on utilitarianism as a philosophy, which creates even stronger conditions for specific beliefs like "doing crazy thing X is actually good as long as it's justified by projected good outcomes". You mention religion, but Catholicism is pretty much the only one with such a clear leadership. It helps, but it doesn't make it the only stable one. IMO, with a strong leadership, rationalism as a whole would just be more likely to become a cult.
2Viliam
I agree it is not the only issue. I think it is a combination of the ideas being genuinely dangerous, and no one having the authority to declare: "you are using those ideas wrong". Plus the general "contrarian" attitude where the edgier opinions automatically give you higher status. So if someone hypothetically volunteered for the role of calling out wrong implementations of the dangerous ideas, they would probably be perceived as not smart/brave enough to appreciate them.

Thinking more about the analogies with religion... when people repeatedly propose something that is (from the religion's perspective) an incorrect application of its ideas, the religion gives it a name, declares it a heresy, and that makes it easier to deflect the problem in future by simply labeling it. So perhaps the rationalist/EA alternative could be to maintain an online list of "ideas that are frequently attributed to, or associated with, rationalists / effective altruists, but that we explicitly disapprove of; here is a short summary why". Probably with a shorter title, maybe "frequent bad ideas". The next step would be to repeatedly tell new members about this list.

This has the potential to backfire, by making those bad ideas more visible. So we would be trading certainty of exposition against a probability of explosion. On the conservative side, we could simply describe the things that have already happened (the ideas that were already followed, and it didn't end well).
3dr_s
Again, you seem to be focused on Catholicism. Catholicism is not the norm. Orthodox Christianity is maybe also kinda like that, but Protestant denominations are not, Islam is not, most of Buddhism is not, Hinduism is not, and don't even get me started on Shinto and other forms of animism. Most religions don't have a fixed canon; they have at most a community which might decide to strongly shun (or even straight up violently punish) anyone whose beliefs are so aberrant they might as well not belong to the same religion any more. But the boundary itself isn't well defined. If one wants to draw inspiration from religions (not that they are always the best example; Islam has given birth to plenty of violent and radical offshoots, for example, and exactly for the reason you bring up, no one is quite in a position to call them out as wrong with unique authority), then you have to look at other things - at how they achieve collective cohesion even without central leadership - because central leadership is pretty much just the specific solution the Catholics came up with.

People sometimes say that cult members tend to have conflicts that lead to them joining the cult. Recently I've been wondering if this is an underrated aspect of cultishness.

Let's take the LaSota crew as an example. As I understand, they were militant vegans.

And if I understand correctly, vegans are concerned about the dynamic where, in order to obtain animal flesh to eat, people usually hire people to raise lots of animals in tiny indoor spaces, sometimes cutting off body parts such as beaks if they are too irritable. Never letting them out to live freely. Taking their children away shortly after they give birth. Breeding them to grow rapidly but also often having genetic disorders that cause a lot of pain and disability. And basically just having them live like that until they get killed and butchered.

And from what I understand, society often tries to obscure this. People get uncomfortable and try to change the subject when you talk about it. People might make laws to make it hard to film and share what's going on. People come up with convoluted denials of animals having feelings. And so on.

(I am not super certain about the above two paragraphs because I haven't investigated it m...

5Viliam
This mostly makes sense, but the wrong part of it is the "homogeneity of the outgroup" assumption. Basically, the cult leader does the trick of dividing the world into two groups: the cult, and everyone else. The cult tells the truth about that one thing you strongly care about? Check. All people who lie about the thing you care about are in the "everyone else" group? Check. The missing part is that... many people in the "everyone else" group also tell the truth about that one thing you strongly care about. That's because the "everyone else" group literally contains billions of people, with all kinds of opinions and behaviors. But it is easy to miss this, especially when the cult leader tends to use the liars as the prototypes of the outgroup (essentially "weakmanning" the rest of humanity). As a specific example, if you strongly care about veganism, you should notice that although the majority of non-Zizians are non-vegans, the majority of vegans are non-Zizians. So you shouldn't conclude that there is no salvation outside of Zizians.
5tailcalled
To an extent, yes this is a good solution. But also, it doesn't always work. You might have multiple conflicts going on, exponentially reducing who fits the bill. Most people don't know who you are, so you might be limited to your local circles. Sometimes the conflict itself is an obscure thing that few can interact with. On its own, yes there are lots of vegans they could have had contact with. But the Zizians were also rat/EA types, which restricts their community reach heavily. Though there are lots of peaceful EA vegans, so this can't explain it all. But like - could there be any other conflicts they had? I expect there to be, though I am not sure about the details. Maybe I am wrong.
2Viliam
That sounds correct. Rat, vegan, trans... maybe one or two more things, and the selection is sufficiently narrow.

[note: I'm not particularly EA, beyond the motte of caring about others and wanting my activities to be effective. ]

I think this is basically correct.  EA tends to attract outliers who are susceptible to claims of aggrandizement - telling themselves and being told they're the heroes in the story.  It reinforces this with contrarianism, especially on dimensions with clever, math-sounding legible arguments behind them.  And then it reinforces the idea that "effective" is really about the biggest numbers you can plausibly multiply your wild guesses out to.

Until recently, it was all circulating in a pile of free money, driven by the related insanity of crypto and tech investment, which seemed to have completely forgotten that zero interest rates were unlikely to continue forever, and that actually producing stuff would eventually be important.

[ epistemic status for next section: provocative devil's advocate argument ]

The interesting question is "sure, it's crazy, but is it wrong?"  I suspect it is wrong - the multiplicative factors into the future are extremely tenuous.  But in the event that this level of commitment and intensity DOES cause alignment to be solved in time, it's arguable that all the insanity is worth it.  If your advice makes the efforts less individually harmful, but also a little bit less effective, it could be a net harm to the universe.

I'm focusing on the aspects specific to rationalism and effective altruism that could lead to people who nominally are part of the rationality community being crazy at a higher rate than one would expect. From your post, I got the following list:

  • isolationist
  • fewer norms
  • highly ambitious
  • gender ratios
  • X-risk being scary
  • encourages doing big things

I may be missing some but these are all the aspects that stood out to me. From my perspective, the #1 most important cause of the craziness that sometimes occurs in nominally rationalist communities is that rationalists reject tradition. This kind of falls under the fewer norms category but I don't think 'fewer norms' really captures it.

A lot of people will naturally do crazy things without the strict social rules and guidelines that humans have operated with for hundreds of years - the same rules that have been slowly eroding since 1900. And nominally rationalist communities are kind of at the forefront of eroding those social rules. Rationalists accept as normal ideas like polyamory, group homes (in adulthood as a long-term situation), drug use, atheism, mysticism, brain-hacking, transgenderism, sadomasochism, and a whole slew of other...

3Pee Doom
People living together in group homes (as extended families) used to be the norm? The weird thing is how isolated and individualist we've become. I would argue that group houses where individual adults join up together are preserving some aspect of traditional social arrangement where people live closely, but maybe you would argue that this is not the same as an extended family or the lifelong kinship networks of a village.

It’s not clear to me what “crazy” means in this post & how it relates to something like raising the sanity waterline. A clearer idea of what you mean by crazy would, I think, dissolve the question.

4Ben Pace
I'm not sure what exactly my answer is, but it's a good question, so here's a babble of pointers at what I think 'crazy' means, in case that helps someone else figure out a useful definition:

* Taking actions that most people can confidently know at the time that I will later on not endorse (e.g. physically assault my good friend for fun, set my house on fire, pick up a heroin habit, murder a stranger), or that I wouldn't endorse if you just gave me a bit more basic social security like money, friends, family, etc. (such as murdering someone on the street for some food/money, spending days preparing lies to tell someone in order to trick them into giving me resources, hunting and following a person until they're alone and then trying to get them to give me stuff, stalking someone because I think they've fallen in love with me, etc.).
* Believing things that most people can confidently know I don't have the evidence for and will later on not believe (e.g. demons are talking to me, I am literally Napoleon, I have psychic powers and can read anyone's mind at any time).
* Doing or believing things that I (Ben) cannot empathize with or understand why someone would do, unless they didn't really have much relationship between their words/actions and reality (e.g. constantly telling stories that are obviously lies or that aren't internally coherent).

The first one seems like it would describe most people, e.g. many, many people repeatedly drink enough alcohol to predictably acutely regret it later.

The second would seem to exclude incurable cases, and I don’t see how to repair that defect without including ordinary religious people.

The third would also seem to include ordinary religious people.

I think these problems are also problems with the OP’s frame. If taken literally, the OP is asking about a currently ubiquitous or at least very common aspect of the human condition, while assuming that it is rare, intersubjectively verified by most, and pathological.

My steelman of the OP’s concern would be something like “why do people sometimes suddenly, maladaptively, and incoherently deviate from the norm?”, and I think a good answer would take into account ways in which the norm is already maladaptive and incoherent, such that people might legitimately be sufficiently desperate to accept that sort of deviance as better for them in expectation than whatever else was happening, instead of starting from the assumption that the deviance itself is a mistake.

If it’s hard to see how apparently maladaptive deviance might not be a mistake, consider a North Korean Communist asking about attempted defectors - who observably often fail, end up much worse off, and express regret afterwards - “why do our people sometimes turn crazy?”. From our perspective out here it’s easy to see what the people asking this question are missing.

This still leaves me confused about why these people made such terrible mistakes. Many people can look at their society and realize how it is cognitively distorting and tricking them into evil behavior. It seems aggressively dumb to then decide that personally murdering people you think are evil is straightforwardly fine and a good strategy, or that you have psychic powers and should lock people in rooms.[1] I think there are more modest proposals, like seasteading or building internet communities or legalizing prediction markets, that have a strong shot of fixing a chunk of the insanity of your civilization without leaving you entirely out in the wilderness, having to rederive everything for yourself and leading you to shooting yourself in the foot quite so quickly.

I expect all North Korean defectors will get labeled evil and psychotic by the state. Like a sheeple, I don't think all such ones will be labeled this way by everyone in my personal society, though I straightforwardly acknowledge that a substantial fraction will. I think there were other options here that were less... wantonly dysfunctional.

  1. ^

    Or stealing billions of dollars from people. But to be honest, that one

...

I think part of what happens in these events is that they reveal how much disorganized or paranoid thought went into someone's normal persona.  You need to have a lot of trust in the people around you to end up with a plan like seasteading or prediction markets - and I notice that those ideas have been around for a long time without visibly generating a much saner & lower-conflict society, so it does not seem like that level of trust is justified.

A lot of people seem to navigate life as though constantly under acute threat and surveillance (without a clear causal theory of how the threat and surveillance are paid for), expecting to be acutely punished the moment they fail to pass as normal - so things they report believing are experienced as part of the act, not the base reality informing their true sense of threat and opportunity.  So it's no wonder that if such people get suddenly jailbroken without adequate guidance or space for reflection, they might behave like a cornered animal and suddenly turn on their captors seemingly at random.

For a compelling depiction of how this might feel from the inside, I strongly recommend John Carpenter's movie They Live (1988), whi...

9Ben Pace
Hmm... firstly, I hope they do not think and act like that. The world looks to me like most people aren't acting like that most of the time (most people I know have not been killed, though most have been locked in rooms to some extent). If it were true, I'm not sure I believe that it's of primary importance — just as the person in the proverbial Chinese Room does not understand Chinese, even if many in positions of authority are wantonly cruel and dominating, I still personally experience a lot of freedoms. I'd need to think about what the actual effect of their intentions is, the size of it, and how changing it or punishing certain consequent behaviors compares to the other list of problems-to-solve.

This suggestion is quite funny, just from reading your description of They Live and seeing the movie poster. On first blush it sounds quite childishly naive on my part to attempt it. But perhaps I will watch the film, think it through some more, and figure out more precisely whether I think such a strategy makes any sense or why it would fail. Initially, to ask such a person to play a longer game feels like asking them to "keep up the facade" while working on a solution that only has like a 30% chance of working. From your descriptions I anticipate that the people in They Live and Office Space would find this too hard after a while and snap (or else lose their grasp on reality).

On the other hand, I think people sometimes pull off subterfuges successfully. While we're talking about films I have not seen, from what I've heard Schindler's List sounds like one where a character noticed his society was enacting distinctly evil policies and strategically worked to combat it without snapping / doing immoral and (to me) crazy things. (Perhaps I will watch it and find out that he does!) I wonder what the key difference there is.

(I will regrettably move on to some other activities for now; construction deadlines are this Monday.)
5Benquo
Maybe this was unclear, but I meant to distinguish two questions, so that you could try to answer one somewhat independently of the other:

1. What determines various authorities' actions?
2. How should a certain sort of person, with less or different information than you, model the authorities' actions?

Specifically, I was asking you to consider a specific hypothesis as the answer to question 2 - that for a lot of people who aren't skilled social scientists, the behavior of various authorities can look capricious or malicious, even if other people have privileged information that allows them to predict those authorities' behavior better and navigate interactions with them relatively freely and safely.

To add a bit of precision here, someone who avoids getting hurt by anxiously trying to pass the test (a common strategy in the Rationalist and EA scene) is implicitly projecting quite a bit more power onto the perceived authorities than they actually have, in ways that may correspond to dangerously wrong guesses about what kinds of change in their behavior will provoke what kinds of confrontation.  For example, if you're wrong about how much violence will be applied and by whom if you stop conforming, you might mistakenly physically attack someone who was never going to hurt you, under the impression that it is a justified act of preemption.

On this model, the way in which the behavior of people who've decided to stop conforming seems bizarre and erratic to you implies that you have a lot of implicit knowledge of how the world works that they do not.  Another piece of fiction worth looking at in this context is Burroughs's Naked Lunch.  I've only seen the movie version, but I would guess the book covers the same basic content - the disordered and paranoid perspective of someone who has a vague sense that they're "under cover" vs. society, but no clear mechanistic model of the relevant systems of surveillance or deception.
9Ben Pace
Not yet answering the central question you asked, but this example is interesting to me, as this both sounds like a severe mistake I have made and also I don't quite understand how it happens. When anxiously trying to pass the test, what false assumption is the person making about the authority's power? I can try to figure it out for myself... I have tried to pass tests (literally, at university) and held passing them as the standard of a worthwhile person. I have done this in other situations, holding someone's approval as the standard to meet and presuming that there is some fair game I ought to succeed at to attain their approval. This is not a useless strategy, even while it might blind me to the ways in which (a) the test is dumb, (b) I can succeed via other mechanisms (e.g. side channels, or playing other games entirely). In these situations I have attributed to them far too much real power, and later on have felt like I have majorly wasted my time and effort caring about them and their games when they were really so powerless. But I still do not quite see the exact mistake in my cognition, where I went from a true belief to a false one about their powers. ...I think the mistake has to do with identifying their approval as the scoring function of a fair game, when it actually only approximated a fair game in certain circumstances, and outside of that may not be related whatsoever. ("may not be"! — it is of course not related to that whatsoever in a great many situations.) The problem is knowing when someone's approval is trying to approximate the scoring function of a fair (and worthwhile) game, and when it is not. But I'm still not sure why people end up getting this so wrong.
7Benquo
There’s a common fear response, as though disapproval = death or exile, not a mild diminution in opportunities for advancement. Fear is the body’s stereotyped configuration optimized to prevent or mitigate imminent bodily damage. Most such social threats do not correspond to a danger that is either imminent or severe, but are instead more like moves in a dance that trigger the same interpretive response.
8Ben Pace
Re-reading my comment, the thing that jumps to mind is that "I currently know of no alternative path to success". When I am given the option between "Go all in on this path being a fair path to success" and "I know of no path to success and will just have to give up working my way along any particular path, and am instead basically on the path to being a failure", I find it quite painful to accept the latter, and find it easier on the margin to self-deceive about how much reason I have to think the first path works. I think a few times in my life (e.g. trying to get into the most prestigious UK university, trying to be a successful student once I got in) I could think of no other path in life I could take than the one I was betting on. This made me quite desperate to believe that the current one was working out okay. I think "fear" is an accurate description of my reaction to thinking about the alternative (of failure). Freezing up, not being able to act.
9Benquo
Reality is sufficiently high-dimensional and heterogeneous that if it doesn’t seem like there’s a meaningful “explore/investigate” option with unbounded potential upside, you’re applying a VERY lossy dimensional reduction to your perception.
2Ben Pace
(I appreciate the reply, I will not get back to this thread until Monday at the earliest. Any ping to reply mid next week is very welcome.)
0Benquo
One more thing: the protagonists of The Matrix and Terry Gilliam’s Brazil (1985) are relatively similar to EAs and Rationalists so you might want to start there, especially if you’ve seen either movie.
2Ben Pace
I would say that it requires an advanced understanding of economics, incentives, and how society works, rather than trust in people. Understanding how a mechanism works reduces the requirement for trust. (They are complements in my mind.) I think one of the reasons it would be hard to get a recently jailbroken not-that-intellectual person on board with such a plan is that it would involve giving them novel understanding of how the world works that they do not have, which somehow people are rarely able to intentionally do, and it can easily fall back to an ask of "trust" that you know something the other person doesn't, rather than a successful communication of understanding. And then after some number of weeks or months or years the world will introduce enough unpredictable noise that the trust will run out and the person will go back to using the world as they understand it, where they were never going to invent a concept like prediction markets.

...but hey, perhaps I'm not giving them enough credit, and actually they would ask themselves questions like "where does all of the cool technology and inventions around me come from" and start building up a model of science and of successful groups and start figuring out which sorts of reasoning actually work and what sorts of structures in society get good things done on purpose and then start to notice which parts of society can give you more of those powers and then start to notice things like markets and personal freedoms and building mechanistic world models and more as ways to build up those forces in society.

On the one hand this path can take decades and most humans do not go down it. On the other hand the evidence required to build up a functional worldview is increasingly visible as technological progress has sped up over the centuries and so much of the world is viewable at home on a computer screen. Still, teaching anyone anything on purpose is hard in full generality, for some reason, and just as someo
3Thoth Hermes
I do like your definition of "crazy" that uses "an idea [I / the crazy person] would not endorse later." I think it dissolves a lot of the eeriness around the word that makes it kind of overly heavy-hitting when used, but also, I think that if you dissolve it in this way, it pretty much incentivizes dropping the word entirely (which I think is a good thing, though maybe not everyone would agree). If we define it to mean ideas (not the person) that the person holding them would eventually drop or update to something else, that's more like the definition of "wrong", which would apply to literally everyone at different points in their lives and to varying degrees at any time.

But then maybe this is too wide, and doesn't capture the meaning of the word implied in the OP's question, namely, "why do more people than usual go crazy within EA / Rationality?" Perhaps what is meant by the word in this context is when some people seem to hold wrong ideas that are persistent or cannot be updated later at all. For the record, I am skeptical that this form of "crazy" is really all that prevalent when defined this way.

If we define it as "wrong ideas" (things which won't be endorsed later) then it does offer a rather simple answer to the OP's question: EA / Rationality is rather ambitious about testing out new beliefs at the forefront of society, so they will by definition hold beliefs that aren't held by the majority of people, and which, by design, are ambitious and varied enough to be expected to be proven wrong many times over time. If being ambitious about having new or unusual ideas carries with it accepted risks of being wrong more often than usual, then perhaps a certain level of craziness has to be tolerated as well.

I have different hypotheses / framings. I will offer them. If you wish to discuss any of them in more detail, please reach out to me via email or PM. Happy to converse!

// 

Mythical/Archetypal take: 

There are large-scale, old, and powerful egregores fighting over the minds of individuals and collectives. They are not always very friendly to human interests or values. In some cases, they are downright evil. (I'd claim the Marxist egregore is a pretty destructive one.)

The damage done by these egregores is multigenerational. It didn't start wi... (read more)

fwiw I think stealing money from mostly-rich-people in order to donate it isn't obviously crazy. Decouple this claim from anything FTX did in particular, since I know next to nothing about the details of what happened there. From my perspective, it could be they were definite villains or super-ethical risk-takers (low prior).

Thought I'd say it because I definitely feel some reluctance to say so. I don't like this feeling, and it seems like good anti-bandwagon policy to say a thing when one feels even slight social pressure to shut up.

I personally know more than one person for whom the majority of their life savings were stolen from them, who put it into FTX in part because of the trust Sam had in the EA ecosystem. I think there's a pretty strong Schelling line (supported and enforced by the law) against theft, such that even if it is worth it on naive utilitarian terms I am strongly in favor of punishing and imprisoning anyone who does so, so that people can work together safe in the knowledge that all the resources they've worked hard to earn won't be straightforwardly taken from them.

(In this comment I'm more trying to say "massive theft should be harshly punished regardless of intention" than say "I know the psychology behind why SBF, Caroline Ellison, and others, stole everyone's money".)

My honest opinion is that Ziz got several friends of mine killed, so I don't exactly have a high opinion of her. But I have never heard of Ziz referring to themselves as LaSota. It's honestly toxic not to use people's preferred names. It's especially toxic if they are trans, but the issue isn't restricted to trans people. So I'd strongly prefer people refer to Ziz as Ziz.

I think this position has some merit, though I disagree. I think Ziz is a name that is hard to Google and get context on, and also feels like it's chosen with intimidation in mind. "LaSota" is me trying to actively be neutral and not choose a name that they have actively disendorsed, but while also making it a more unique identifier, not misgendering them (like their full legal name would), and not contributing to more bad dynamics by having a "cool name for the community villain", which I really don't think has good consequences.

3dirk
I think it's not clear that "LaSota" refers to Ziz unless you already happen to have looked up the news stories and used process of elimination to figure out which legal name goes with which online handle, which makes it ineffective for communicative purposes.

I think when it comes to people who get people killed, it's justified to reveal all the names they go by in the interest of public safety, even if they don't like it. 

8ChristianKl
What exactly do you mean by the word 'toxic'? 
0drethelin
Also, for practical purposes it's much clearer who is being referred to in the local context, especially since there's tons of writing from/about Ziz. Plus it's just a much cooler name for a community villain.

Probably be grounded in more than one social group. Even being part of two different high-intensity groups seems like it should reduce the dynamics here a lot.

Worked well for me!

Eric Chisholm likes to phrase this principle as "the secret to cults is to be in at least two of them".

4TAG
It would be great if the Yudkowskians could spend a week at the Randians, the Randians at the Deutschians, and so on.
  1. Don’t put yourself into positions of insecurity. […]

This seems like it points in the wrong direction to me. I'd instead say something like "look for your own insecurities and then look closely at the ones you find". But the current thing you've said sounds like "avoid wherever your insecurities might manifest (because they're fixed)".

[How to resolve insecurities? Coherence Therapy.]

I think there's a commonly-held belief that a feeling of belonging is something that we can get from other people, but I think this is a misconception. Stable confidence doesn't co... (read more)

4DaystarEld
Agreed in principle, though it's worth noting that more resourced people tend to have fewer insecurities in general. People who have a stable family, no economic insecurity, positive peer support, etc., end up less susceptible to cults, as well as to bad social dynamics in general. This isn't to say that people can't create stable confidence for themselves without those things, only that "dependent confidence" is also a thing people can have instead, which can act protectively or expose them to risk.

Good breakdown of one of the aspects in all this. The insecurity/desperation topic is a really hard one to navigate well, but I agree it's really important.

Hard because when someone feels like an outsider, a group of other likeminded outsiders will naturally want to help them and welcome them, and it can be an uncomplicated good to do so. Important because if someone has only one source to supply support, resources, social needs, etc., they are far more likely to turn desperate or do desperate things to maintain their place in the community.

Does this mea... (read more)

Hot take: to the extent that EAs and rationalists turn crazy, part of the problem is that their focus includes existential risk combined with very low discount rates for the future.

To explain more, I think that utilitarianism is maybe a part of the problem, but it's broader than that. The bigger problem is that once you fundamentally believe that we will all die of something, and that your group can control the chance of extinction, that's a fast road to craziness, given that most of these existential risks probably wouldn't materialize anyway, and imp... (read more)

Seems like the forces that turn people crazy are the same ones that lead people to do anything good and interesting at all. At least for EA, a core function of orgs/elites/high-status community members is to make the kind of signaling you describe highly correlated with actually doing good. Of course it seems impossible to make them correlate perfectly, and that's why settings with super high social optimization pressure (like FTX) are gonna be bad regardless.

But (again for EA specifically) I suspect the forces you describe would actually be good to increas... (read more)

To any individual in a group, it can easily be the case that they think the group standard seems dumb, but in a situation of risk aversion, the important part is that you do things that look to everyone like the kind of thing that others would think is part of the standard. In practice this boils down to a very limited kind of reasoning where you do things that look vaguely associated with whatever you think the standard is, often without that standard being grounded in much of any robust internal logic. And doing things that are inconsistent with the actu

... (read more)


Most social groups will naturally implement an "in-group / out-group" identifier of some kind and associated mechanisms to apply this identifier on their members. There are a few dynamics at play here:

  1. Before this identification mechanism has been implemented, there isn't really much of a distinction between in-group and out-group. Therefore, there will be people who self-identify as being associated with the group, but who are not part of the sub-group which begins to make the identifications. Some of these members may accordingly get labeled part of the o
... (read more)

I think I might have a promising and better intervention for preventing individuals EAs and Rationalists from “turning crazy”. What would you want to do with it?