I want to add some context I think is important to this.
Jessica was (I don't know if she still is) part of a group centered around a person named Vassar, informally dubbed "the Vassarites". Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to "jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.
Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing (I don't think he thinks they're worse than everyone else, but I think he thinks they had a chance to be better, they wasted it, and so it's especially galling that they're just as bad). Since then, he's tried to "jailbreak" a lot of people associated with MIRI and CFAR - again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combinat...
Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC
Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis. I'll try to explain some context for the record.
In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme "trans women are [psychologically] men". I experienced this while dating AM (same as mentioned above). She repeatedly brought up this point in various interactions. Since we were both trans women this was hurting us both, so I look back with more pity than concern about malice. At some point during this time I started treating this as a hidden truth that I was proud of myself for being able to see, which in retrospect I feel disgusted and complicit to have accepted. This was my state of mind when I discussed these issues with Zack, reinforcing each other's views. I believe (less certain) I also broached the topic with Michael and/or Anna at some point which probably went...
I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he's "causing psychotic breaks" and "jailbreaking people" through conversation, "that listening too much to Vassar [causes psychosis], predictably") isn't obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread are self-congratulating on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psychological objects to each other, and therefore isolating these people from their friends. Yet many in this comment thread are, literally, calling for isolating Michael Vassar from his friends on the basis of his mental influence on others.
Yes, I agree with you that all of this is very awkward.
I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.
But we have to admit at least small violations of it even to get the concept of "cult". Not just the sort of weak cults we're discussing here, but even the really strong cults like Heaven's Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven's Gate is bad for them, and leave. When we use the word "cult", we're implicitly agreeing that this doesn't always work, and we're bringing in creepier and less comprehensible ideas like "charisma" and "brainwashing" and "cognitive dissonance".
(and the same thing with the concept of "emotionally abusive relationship")
I don't want to call the Vassarites a cult because I'm sure someone will confront me with a Cult Checklist that they don't meet, but I think that it's not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it's weird that you can get tha...
It seems to me that in the case of Leverage, working 75 hours per week reduced the time they could have used to apply Reason and conclude that they were in a system that was bad for them.
That's very different from someone having a few conversations with Vassar, adopting a new belief, spending a lot of time reasoning about it alone, and the belief remaining stable without being embedded in a strong environment that makes independent thought hard by keeping people busy.
A cult is by its nature a social institution, not just a meme that someone can pass around via a few conversations.
I think "mind virus" is fair. Vassar spoke a lot about how the world as it is can't be trusted. I remember that many of the people in his circle spoke, seemingly apropos of nothing, about how bad involuntary commitment is, so that by the time someone was psychotic their relationship with psychiatry and anyone who would want to turn to psychiatry to help them was poisoned. Within the envelope of those beliefs you can keep a lot of other beliefs safe from scrutiny.
It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.
Let's consider a disjunction: 1: There isn't a big effect here, 2: There is a big effect here.
In case 1:
I agree I'm being somewhat inconsistent, I'd rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I'm trying to figure out what went on in these cases in more details and will probably want to ask you a lot of questions by email if you're open to that.
This misses the fact that people’s ability to negatively influence others might vary very widely, making it so that it is silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott’s worldview is not inconsistent.
One important implication of "cults are possible" is that many normal-seeming people are already too crazy to function as free citizens of a republic.
In other words, from a liberal perspective, someone who can't make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren't competent to make their own life decisions. They're already not free, but in the grip of whatever attractor they found first.
Personally I bite the bullet and admit that I'm not living in a society adequate to support liberal democracy, but instead something more like what Plato's Republic would call tyranny. This is very confusing because I was brought up to believe that I lived in a liberal democracy. I'd very much like to, someday.
I think there are less extreme positions here. Like "competent adults can make their own decisions, but they can't if they become too addicted to certain substances." I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.
Drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.
I really don't think this is an accurate description of what is going on in people's mind when they are experiencing drug dependencies. I've spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went through great lengths trying to tie himself to various masts to stop, and generally expressed a strong preference for somehow being able to self-modify the addiction away, but ultimately failed to do so.
Of course, things might be different for different people, but at least in the one case where I have a very large amount of specific data, this seems like it's a pretty bad model of people's preferences. Based on the private notebooks of his that I found after his death, this also seemed to be his position in purely introspective contexts without obvious social desirability biases. My sense is that he would have strongly preferred someone to somehow take control away from him, in this specific domain of his life.
Aristotle seems (though he's vague on this) to be thinking in terms of fundamental attributes, while I'm thinking in terms of present capacity, which can be reduced by external interventions such as schooling.
Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.
*As far as I know I didn't know any such people before 2020; it's very easy for members of the educated class to mistake our bubble for statistical normality.
Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.
This is very interesting to me! I'd like to hear more about how the two groups' behavior differs, and also your thoughts on what's the difference that makes the difference: what are the pieces of "being brought up to go to college" that lead to one class of reactions?
I have talked to Vassar; while he has a lot of "explicit control over conversations", which could be called charisma, I'd hypothesize that the fallout is actually from his ideas. (The charisma/intelligence makes him able to credibly argue those.)
My hypothesis is the following: I've met a lot of rationalists + adjacent people. A lot of them care very deeply about EA and AI alignment. In fact, it seems to me to be a core part of a lot of these people's identity ("I'm an EA person, thus I'm a good person doing important work"). Two anecdotes to illustrate this:
- I'd recently argued against a committed EA person. Eventually, I started feeling almost-bad about arguing (even though we're both self-declared rationalists!) because I'd realised that my line of reasoning questioned his entire life. His identity was built deeply on EA, his job was selected to maximize money to give to charity.
- I'd had a conversation with a few unemployed rationalist computer scientists. I suggested we might start a company together. One response I got: "Only if it works on the alignment problem; everything else is irrelevant to me".
Vassar very persuasively argues against EA and work done at MIRI/CFAR...
What are your or Vassar's arguments against EA or AI alignment? This is only tangential to your point, but I'd like to know about it if EA and AI alignment are not important.
The general argument is that EAs are not really doing what they say they do. One example from Vassar would be that when it comes to COVID-19 there seems to have been relatively little effective work by EAs. In contrast, Vassar considered giving prisoners access to personal protective equipment the most important intervention and organized effectively for that to happen.
EAs created at EA Global an environment where someone who wrote a good paper warning about the risks of gain-of-function research doesn't address that directly but only talks about it indirectly, focusing on meta-issues. Instead of having conflicts with people doing gain-of-function research, the EA community mostly ignored its problems and funded work that's in less conflict with the establishment. There's nearly no interest in the EA community in learning from those errors; people would rather avoid conflicts.
If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage related information.
AI alignment is important but just because one "works on AI risk" doesn't mean that the work actually decreases AI risk. Tying your personal identity to being someone who works to d...
He argued
(a) EA orgs aren't doing what they say they're doing (e.g. cost effectiveness estimates are wildly biased, reflecting bad procedures being used internally), and it's hard to get organizations to do what they say they do
(b) Utilitarianism isn't a form of ethics, it's still necessary to have principles, as in deontology or two-level consequentialism
(c) Given how hard it is to predict the effects of your actions on far-away parts of the world (e.g. international charity requiring multiple intermediaries working in a domain that isn't well-understood), focusing on helping people you have more information about makes sense unless this problem can be solved
(d) It usually makes more sense to focus on ways of helping others that also build capacities, including gathering more information, to increase long-term positive impact
If you want, for example, the criticism of GiveWell: Ben Hoffman was employed at GiveWell and had experiences suggesting that the process by which their reports are made has epistemic problems. If you want the details, talk to him.
The general model would be that between the actual intervention and the top there are a bunch of maze levels. GiveWell then hired normal corporate people, and the dynamics that the Immoral Mazes sequence describes play themselves out.
Vassar's own actions are about doing altruism more directly, by looking for the most powerless people who need help and working to help them. In the COVID case he identified prisoners and then worked on making PPE available to them.
You might say his thesis is that "effective" in EA is about adding a management layer for directing interventions, and that management layer has the problems that the Immoral Mazes sequence describes. According to Vassar, someone who wants to be altruistic shouldn't delegate to other people their judgment of what's effective and thus warrants support.
I think what you're pointing to is:
I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting)
I'm getting a bit pedantic, but I wouldn't gloss this as "CEA used legal threats to cover up Leverage related information". Partly because the original bit is vague, but also because "cover up" implies that the goal is to hide information.
For example, imagine companies A and B sue each other, which ends up with them settling and signing an NDA. Company A might accept an NDA because they want to move on from the suit and agreeing to an NDA does that most effectively. I would not describe this as company A using legal threats to cover up B-related information.
In that timeframe, CEA and Leverage were running the Pareto Fellowship together. If you read the common knowledge post, you find people saying that they were misled by CEA because the announcement didn't mention that the Pareto Fellowship was largely run by Leverage.
On their mistakes page, CEA has a section about the Pareto Fellowship, but it hides the fact that Leverage was involved, saying only: "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers."
That does look to me like hiding information about the cooperation between Leverage and CEA.
I do think that publicly presuming that people who hide information have something to hide is useful. If there's nothing to hide, I'd love to know what happened back then, or who thinks what happened should stay hidden. At a minimum, I do think that CEA withholding the information that the people who went to their programs spent their time in what now appears to be a cult is something CEA should be open about on their mistakes page.
Yep, I think CEA has in the past straightforwardly misrepresented things (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage's history with Effective Altruism. I think this was bad, and continues to be bad.
My initial thought on reading this was 'this seems obviously bad', and I assumed this was done to shield CEA from reputational risk.
Thinking about it more, I could imagine an epistemic state I'd be much more sympathetic to: 'We suspect Leverage is a dangerous cult, but we don't have enough shareable evidence to make that case convincingly to others, or we aren't sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause others to be drawn into its orbit and suffer harm. But we don't feel confident enough to feel comfortable tarring Leverage in public, or our evidence was shared in confidence and we can't say anything we expect others to find convincing. So we'll have to just steer clear of the topic for now.'
Still seems better to just not address the subject if you don't want to give a fully accurate account of it. You don't have to give talks on the history of EA!
I think the epistemic state of CEA was some mixture of something pretty close to what you list here, and something that I would put closer to something more like "Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth".
"Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth"
That has the corollary: "We don't expect EAs to care enough about the truth/being transparent for this to be a huge reputational risk for us."
It does look weird to me that CEA doesn't include this on the mistakes page when they talk about Pareto. I just sent CEA an email to ask:
Hi CEA,
On https://www.centreforeffectivealtruism.org/our-mistakes I see "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers. We realized during and after the program that senior management did not provide enough oversight of the program. For example, reports by some applicants indicate that the interview process was unprofessional and made them deeply uncomfortable."
Is there a reason that the mistakes page does not mention the involvement of Leverage in the Pareto Fellowship? [1]
Jeff
Yep, I think the situation is closer to what Jeff describes here, though I honestly don't actually know, since people tend to get cagey when the topic comes up.
Huh, that's surprising, if by that he means "no contracts between anyone currently at Leverage and anyone at CEA". I currently still think it's the case, though I also don't see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees?
Talking with Vassar feels very intellectually alive, maybe like a high density of insight porn. I imagine that the people Ben talks about wouldn't get much enjoyment out of insight porn either, so that emotional impact isn't there.
There's probably also an element where plenty of people who can normally follow an intellectual conversation can't keep up in a conversation with Vassar and come away filled with a bunch of different ideas that lack order in their minds. I imagine that sometimes there's an idea overload that prevents people from critically thinking through some of the ideas.
If you have a person who hasn't gone to college, they are used to encountering people who make intellectual arguments that go over their head and have a way to deal with that.
From meeting Vassar, I don't feel like he has the kind of charisma that someone like Valentine has (which I guess Valentine has downstream of doing a lot of bodywork stuff).
This seems mostly right; they're more likely to think "I don't understand a lot of these ideas, I'll have to think about this for a while" or "I don't understand a lot of these ideas, he must be pretty smart and that's kinda cool" than to feel invalidated by this and try to submit to him in lieu of understanding.
The people I know who weren't brought up to go to college have more experience navigating concrete threats and dangers, which can't be avoided through conformity, since the system isn't set up to take care of people like them. They have to know what's going on to survive. This results in an orientation less sensitive to subtle threats of invalidation, and that sees more concrete value in being informed by someone.
In general this means that they're much more comfortable with the kind of confrontation Vassar engages in, than high-class people are.
I talked and corresponded with Michael a lot during 2017–2020, and it seems likely that one of the psychotic breaks people are referring to is mine from February 2017? (Which Michael had nothing to do with causing, by the way.) I don't think you're being fair.
"jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself)
I'm confident this is only a Ziz-ism: I don't recall Michael using the term, and I just searched my emails for jailbreak, and there are no hits from him.
again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs [...] describing how it was a Vassar-related phenomenon
I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea giv...
I don't want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn't harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I'm suspicious), but when you had some more minor issues in 2019 I was more in the loop and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their "it's correct to be freaking out about learning your entire society is corrupt and gaslighting" shtick.
I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)
[...]
Michael is a charismatic guy who has strong views and argues forcefully for them. That's not the same thing as having mysterious mind powers to "make people paranoid" or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I'm sure he'd be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.
I more or les...
Thing 0:
Scott.
Before I actually make my point I want to wax poetic about reading SlateStarCodex.
In some post whose name I can't remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, "Huh, I need to only be convinced by true things."
This is extremely relatable to my lived experience. I am a stereotypical "high-functioning autist." I am quite gullible, formerly extremely gullible. I maintain sanity by aggressively parsing the truth values of everything I hear. I am extremely literal. I like math.
To the degree that "rationality styles" are a desirable artifact of human hardware and software limitations, I find your style of thinking to be the most compelling.
Thus I am going to state that your way of thinking about Vassar has too many fucking skulls.
Thing 1:
Imagine two world models:
I enjoyed reading this. Thanks for writing it.
One note though: I think this post (along with most of the comments) isn't treating Vassar as a fully real person with real choices. It (also) treats him like some kind of 'force in the world' or 'immovable object'. And I really want people to see him as a person who can change his mind and behavior and that it might be worth asking him to take more responsibility for his behavior and its moral impacts. I'm glad you yourself were able to "With basic rationality skills, avoid contracting the Vassar, then [heal] the damage to [your] social life."
But I am worried about people treating him like a force of nature that you make contact with and then just have to deal with whatever the effects of that are.
I think it's pretty immoral to de-stabilize people to the point of maybe-insanity, and I think he should try to avoid it, to whatever extent that's in his capacity, which I think is a lot.
"Vassar's ideas are important and many are correct. It just happens to be that he might drive you insane."
I might think this was a worthwhile tradeoff if I actually believed the 'maybe insane' part was unavoidable, and I do not believ...
I think that treating Michael Vassar as an unchangeable force of nature is the right way to go—for the purposes of discussions precisely like this one. Why? Because even if Michael himself can (and chooses to) alter his behavior in some way (regardless of whether this is good or bad or indifferent), nevertheless there will be other Michael Vassars out there—and the question remains, of how one is to deal with arbitrary Michael Vassars one encounters in life.
In other words, what we’ve got here is a vulnerability (in the security sense of the word). One day you find that you’re being exploited by a clever hacker (we decline to specify whether he is a black hat or white hat or what). The one comes to you and recommends a patch. But you say—why should we treat this specific attack as some sort of unchangeable force of nature? Rather we should contact this hacker and persuade him to cease and desist. But the vulnerability is still there…
I think you can either have a discussion that focuses on an individual and if you do it makes sense to model them with agency or you can have more general threat models.
If you however mix the two you are likely to get confused in both directions. You will project ideas from your threat model into the person and you will take random aspects of the individual into your threat model that aren't typical for the threat.
I am not sure how much 'not destabilize people' is an option that is available to Vassar.
My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.
Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of "you are expected to behave better for status reasons look at my smug language"-style theist-bashing, in the Sequences. This was actually highly effective, although it had terrible side effects.
I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, "Vassar, just only say things that you think will have a positive effect on the person." 1. He already does that. 2. That is advocating that Vassar manipulate people. See Valencia in Worth the Candle.
In the pathological case of Vassar, I think the naive strategy of "just say the thing you think is true" is still correct.
Menta...
I think that if Vassar tried not to destabilize people, it would heavily impede his general communication.
My suggestion for Vassar is not to 'try not to destabilize people' exactly.
It's to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he's interacted with about what it's like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you're speaking to as though they're a full, real human—not a pair of ears to be talked into or a mind to insert things into). When he talks theory, I often get the sense he is talking "at" rather than talking "to" or "with". The listener practically disappears or is reduced to a question-generating machine that gets him to keep saying things.
I expect this process could take a long time / run into issues along the way, and so I don't think it should be rushed. Not expecting a quick change. But claiming there's no available option seems wildly wrong to me. People aren't fixed points and generally shouldn't be treated as such.
This is actually very fair. I think he does kind of insert information into people.
I never really felt like a question-generating machine, more like a pupil at the foot of a teacher who is trying to integrate the teacher's information.
I think the passive, reactive approach you mention is actually a really good idea of how to be more evidential in personal interaction without being explicitly manipulative.
Thanks!
The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought.
“That which can be destroyed by the truth should be”—I seem to recall reading that somewhere.
And: “If my actions aren’t as positive utility as I think, then I desire to believe that my actions aren’t as positive utility as I think”.
If one has such a mental makeup that finding out that one’s actions have worse effects than one imagined causes genuine psychological collapse, then perhaps the first order of business is to do everything in one’s power to fix that (really quite severe and glaring) bug in one’s psyche—and only then to attempt any substantive projects in the service of world-saving, people-helping, or otherwise doing anything really consequential.
Specific claim: the only nontrivial obstacle in front of us is not being evil
This is false. Object-level stuff is actually very hard.
Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)
This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of more distant-to-my-group groups of people, to use a certain person's language. Ideas and institutions have the agency; they wear people like skin.
Specific claim: this is how to take over New York.
Didn't work.
I think it's a fine way of think about mathematical logic, but if you try to think this way about reality, you'll end up with views that make internal sense and are self-reinforcing but don't follow the grain of facts at all. When you hear such views from someone else, it's a good idea to see which facts they give in support. Do their facts seem scant, cherrypicked, questionable when checked? Then their big claims are probably wrong.
The people who actually know their stuff usually come off very different. Their statements are carefully delineated: "this thing about power was true in 10th century Byzantium, but not clear how much of it applies today".
Also, just to comment on this:
It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype.
I think it's somewhat changeable. Even for people like us, there are ways to make our processing more "fuzzy". Deliberately dimming some things, rounding others. That has many benefits: on the intellectual level you learn to see many aspects of a problem instead of hyperfocusing on one; emotionally you get more peaceful when thinking about things; a...
I mostly see where you're coming from, but I think the reasonable answer to "point 1 or 2 is a false dichotomy" is this classic, uh, tumblr quote (from memory):
"People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail."
This goes especially if the thing that comes after "just" is "just precommit."
My expectation is that interaction with Vassar is that the people who espouse 1 or 2 expect that the people interacting are incapable of precommitting to the required strength. I don't know if they're correct, but I'd expect them to be, because I think people are just really bad at precommitting in general. If precommitting was easy, I think we'd all be a lot more fit and get a lot more done. Also, Beeminder would be bankrupt.
This is a very good criticism! I think you are right about people not being able to "just."
My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects, or criticize popular institutions. Perhaps it is the case that people genuinely tried to make a strategy but automatically rejected my toy strategies as false. I do not think it is, based on "vibe" and on the arguments that people are making, such as "argument from cult."
I think you are actually completely correct about those strategies being bad. Instead, I failed to point out that I expect a certain level of mental robustness-to-nonsanity from people literally called "rationalists." This comes off as sarcastic but I mean it completely literally.
Precommitting isn't easy, but rationality is about solving hard problems. When I think of actual rationality, I think of practices such as "five minutes of actually trying" and alkjash's "Hammertime." Humans have a small component of behavior that is agentic, and a huge component of behavior that is non-agentic and installed by vaguely agentic processes (simple conditioning, mimicry, social...
Michael is very good at spotting people right on the verge of psychosis
...and then pushing them.
Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.
So, this seems deliberate. [EDIT: Or not. Zack makes a fair point.] He is not even hiding it, if you listen carefully.
Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.
So, this seems deliberate.
Because high-psychoticism people are the ones who are most likely to understand what he has to say.
This isn't nefarious. Anyone trying to meet new people to talk to, for any reason, is going to preferentially seek out people who are a better rather than worse match. Someone who didn't like our robot cult could make structurally the same argument about, say, efforts to market Yudkowsky's writing (like spending $28,000 distributing copies of Harry Potter and the Methods to math contest winners): why, they're preying on innocent high-IQ systematizers and filling their heads with scary stories about the coming robot apocalypse!
I mean, technically, yes. But in Yudkowsky and friends' worldview, the coming robot apocalypse is actually real, and high-IQ systematizers are the people best positioned to understand this important threat. Of course they're going to try to market their memes to that neurotype-demographic. What do you expect them to do? What do you expect Michael to do?
There's a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don't have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.
If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you'd object to that targeting strategy even though they'd be able to make an argument structurally the same as your comment.
Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it's even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky etc, but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this, e.g. people who think machine learning / deep learning are important.
In general this seems really expected and unobjectionable? "If I'm trying to convince people of X, I'm going to find people who already believe a lot of the pre-requisites for understanding X and who might already assign X a non-negligible prior". This is how pretty much all systems of ideas spread, I have trouble thinking of a counterexample.
I mean, do a significant number of people not select who they talk with based on who already agrees with them to some extent and is paying attention to similar things?
If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn't care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.
The way I can make sense of seeking high-psychoticism people being morally equivalent to seeking high-IQ systematizers is if I drain any normative valence from "psychotic" and imagine there is a spectrum from autistic to psychotic. In this spectrum the extreme autistic is exclusively focused on exactly one thing at a time, and is incapable of cognition that has to take into account context, especially context they aren't already primed to have in mind, and the extreme psychotic can only see the globally interconnected context where everything means/is connected to everything else. Obviously neither extreme state is desirable, but leaning one way or another could be very helpful in different contexts.
See also: indexicality.
On the other hand, back in my reflective beliefs, I think psychosis is a much scarier failure mode than "autism," on this scale, and I would not personally pursue any actions that pushed people toward it without, among other things, a supporting infrastructure of some kind for processing the psychotic state without losing the plot (social or cultural would work, but whatever).
Well, I don't think it's obviously objectionable, and I'd have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like "we'd all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we're talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren't generally either truth-tracking or good for them" seems plausible to me. But I think it's obviously not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which was what moved me to comment.
As I mentioned elsewhere, I was heavily involved in that incident for a couple months after it happened and I looked for causes that could help with the defense. AFAICT No drugs were taken in the days leading up to the mental health episode or arrest (or people who took drugs with him lied about it).
I, too, asked people questions after that incident and failed to locate any evidence of drugs.
As I heard this story, Eric was actively seeking mental health care on the day of the incident, and should have been committed before it happened, but several people (both inside and outside the community) screwed up. I don't think anyone is to blame for his having had a mental break in the first place.
I now got some better-sourced information from a friend who's actually in good contact with Eric. Given that, I'm also quite certain that there were no drugs involved, and that it isn't a case of any one person being mainly responsible for it happening but of multiple people making bad decisions. I'm currently hoping that Eric will tell his side himself so that there's less indirection about the information sourcing, so I'm not saying more about the details at this point in time.
Edit: The following account is a component of a broader and more complex narrative. While it played a significant role, it must be noted that there were numerous additional challenges concurrently affecting my life. Absent these complicating factors, the issues delineated in this post alone may not have precipitated such severe consequences. Additionally, I have made minor revisions to the third-to-last bullet point for clarity.
It is pertinent to provide some context to parts of my story that are relevant to the ongoing discussions.
Thanks for sharing the details of your experience. Fyi I had a trip earlier in 2017 where I had the thought "Michael Vassar is God" and told a couple people about this, it was overall a good trip, not causing paranoia afterwards etc.
If I'm trying to put my finger on a real effect here, it's related to how Michael Vassar was one of the initial people who set up the social scene (e.g. running singularity summits and being executive director of SIAI), being on the more "social/business development/management" end relative to someone like Eliezer; so if you live in the scene, which can be seen as a simulacrum, the people most involved in setting up the scene/simulacrum have the most aptitude at affecting memes related to it, like a world-simulator programmer has more aptitude at affecting the simulation than people within the simulation (though to a much lesser degree of course).
As a related example, Von Neumann was involved in setting up post-WWII US Modernism, and is also attributed extreme mental powers by modernism (e.g. extreme creativity in inventing a wide variety of fields); in creating the social system, he also has more memetic influence within that system, and could more effectively change its boundaries e.g. in creating new fields of study.
While "Vassar's group" is informal, it's more than just a cluster of friends; it's a social scene with lots of shared concepts, terminology, and outlook (although of course not every member holds every view and members sometimes disagree about the concepts, etc etc). In this way, the structure is similar to social scenes like "the AI safety community" or "wokeness" or "the startup scene" that coordinate in part on the basis of shared ideology even in the absence of institutional coordination, albeit much smaller. There is no formal institution governing the scene, and as far as I've ever heard Vassar himself has no particular authority within it beyond individual persuasion and his reputation.
Median Group is the closest thing to a "Vassarite" institution, in that its listed members are 2/3 people who I've heard/read describing the strong influence Vassar has had on their thinking and 1/3 people I don't know, but AFAIK Median Group is just a project put together by a bunch of friends with similar outlook and doesn't claim to speak for the whole scene or anything.
I feel pretty defensive reading and responding to this comment, given a previous conversation with Scott Alexander where he said his professional opinion would be that people who have had a psychotic break should be on antipsychotics for the rest of their life (to minimize risks of future psychotic breaks). This has known severe side effects like cognitive impairment and brain shrinkage and lacks evidence of causing long-term improvement. When I was on antipsychotics, my mental functioning was much lower (noted by my friends) and I gained weight rapidly. (I don't think short-term use of antipsychotics was bad, in my case)
It is in this context that I'm reading that someone talking about the possibility of mental subprocess implantation ("demons") should be "treated as a psychological emergency", when the Eric Bryulant case had already happened, and talking about the psychological processes was necessary for making sense of the situation. I feared involuntary institutionalization at the time, quite a lot, for reasons like this.
If someone expresses opinions like this, and I have reason to believe they would act on them, then I can't believe myself to have freedom of speech. That ...
I don't remember the exact words in our last conversation. If I said that, I was wrong and I apologize.
My position is that in schizophrenia (which is a specific condition and not just the same thing as psychosis), lifetime antipsychotics might be appropriate. E.g. this paper suggests continuing for twelve months after a first schizophrenic episode and then stopping and seeing how things go, which seems reasonable to me. It also says that if every time you take someone off antipsychotics they become fully and dangerously psychotic again, then lifetime antipsychotics are probably their best bet. In a case like that, I would want the patient's buy-in, i.e. if they were medicated after a psychotic episode I would advise them of the reasons why continued antipsychotic use was recommended in their case; if they said they didn't want it we would explore why, given the very high risk level, and if they still said they didn't want it then I would follow their direction.
I didn't get a chance to talk to you during your episode, so I don't know exactly what was going on. I do think that psychosis should be thought of differently than just "weird thoughts that might be true", as more of a whole-body n...
I don’t remember the exact words in our last conversation. If I said that, I was wrong and I apologize.
Ok, the opinions you've described here seem much more reasonable than what I remember, thanks for clarifying.
I do think that psychosis should be thought of differently than just “weird thoughts that might be true”, since it’s a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom.
I agree, yes. I think what I was afraid of at the time was being called crazy and possibly institutionalized for thinking somewhat weird thoughts that people would refuse to engage with, and showing some signs of anxiety/distress that were in some ways a reaction to my actual situation. By the time I was losing sleep etc, things were quite different at a physiological level and it made sense to treat the situation as a psychiatric emergency.
If you can show someone that they're making errors that correspond to symptoms of mild psychosis, then telling them that and suggesting corresponding therapies to help with the underlying problem seems pretty reasonable.
Thanks, if you meant that, when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying this is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding the treatment right away, then I 100% agree.
I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom.
If psychosis is caused by an underlying physiological/biochemical process, wouldn't that suggest that e.g. exposure to Leverage Research wouldn't be a cause of it?
If being part of Leverage is causing less reality-based thoughts and nudging someone into mild psychosis, I would expect that being part of some other group could cause more reality-based thoughts and nudge someone away from mild psychosis. Why would causation be possible in one direction but not the other?
I guess another hypothesis here is that some cases are caused by social/environmental factors and others are caused by biochemical factors. If that's true, I'd expect changing someone's environment to be more helpful for the former sort of case.
[probably old-hat [ETA: or false], but I'm still curious what you think] My (background unexamined) model of psychosis-> schizophrenia is that something, call it the "triggers", sets a person on a trajectory of less coherence / grounding; if the trajectory isn't corrected, they just go further and further. The "triggers" might be multifarious; there might be "organic" psychosis and "psychic" psychosis, where the former is like what happens from lead poisoning, and the latter is, maybe, what happens when you begin to become aware of some horrible facts. If your brain can rearrange itself quickly enough to cope with the newly known reality, your trajectory points back to the ground. If it can't, you might have a chain reaction where (1) horrible facts you were previously carefully ignoring, are revealed because you no longer have the superstructure that was ignore-coping with them; (2) your ungroundedness opens the way to unepistemic beliefs, some of which might be additionally horrifying if true; (3) you're generally stressed out because things are going wronger and wronger, which reinforces everything.
If this is true, then your statement:
. I think if someone has mild psychosis a...
If someone tells me that they can talk to their deceased parents, I'm probably not going to invest the time to test whether they can obtain novel information this way; I'm just going to assume they're delusional because I'm confident spirits don't exist.
This is failing to track ambiguity in what's being referred to. If there's something confusing happening--something that seems important or interesting, but that you don't yet have words to articulate well--then you try to say what you can (e.g. by talking about "demons"). In your scenario, you don't know exactly what you're dismissing. You can confidently dismiss, in the absence of extraordinary evidence, that (1) their parents' brains have been rotting in the ground, and (2) they are talking with their parents, in the same way you talk to a present friend; you can't confidently dismiss, for example, that they are, from their conscious perspective, gaining information by conversing with an entity that's naturally thought of as their parents (which we might later describe as: they have separate structure in them, not integrated with their "self", that encoded thought patterns from their parents, blah blah blah etc.). You can say "oh well yes of course if it's *just a metaphor* maybe I don't want to dismiss them", but the point is that from a partially pre-theoretic confusion, it's not clear what's a metaphor, and it requires further work to disambiguate what's a metaphor.
I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.
Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.
Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve.
This is really, really serious. If this happened to someone closer to me I'd be out for blood, and probably legal prosecution.
Let's not minimize how fucked up this is.
Olivia, Devi and I all talked to people other than Michael Vassar, such as Anna Salamon. We gravitated towards the Berkeley community, which was started around Eliezer's writing. None of us are calling for blame, ostracism, or cancelling of Michael. Michael helped all of us in ways no one else did. None of us have a motive to pursue a legal case against him. Ziz's sentence you quoted doesn't implicate Michael in any crimes.
The sentence is also misleading given Devi didn't detransition afaik.
Jessicata, I will be blunt here. This article you wrote was [EDIT: expletive deleted] misleading. Perhaps you didn't do it on purpose; perhaps this is what you actually believe. But from my perspective, you are an unreliable narrator.
Your story, original version:
Your story, updated version:
If you can't see how these two stories differ, then... I don't have sufficiently polite words to describe it, so let's just say that to me these two stories seem very different.
Lest you accuse me of gaslighting, let me remind you that I am not doubting any of the factual statements you made. (I actually tried to...
I could be very wrong, but the story I currently have about this myself is that Vassar himself was a different and saner person before he used too much psychedelics. :( :( :(
Do you have a timeline of when you think that shift happened? That might make it easier for other people who knew Vassar at the time to say whether their observation matched yours.
you publicly describe your suffering as a way to show people that MIRI/CFAR is evil.
Could you expand more on this? E.g. what are a couple sentences in the post that seem most trying to show this.
Because it seems like you call it bad when you attribute it to MIRI/CFAR, but when other people suggest that Vassar was responsible, then it seems a bit like no big deal, definitely not anything to blame him for.
I appreciate the thrust of your comment, including this sentence, but also this sentence seems uncharitable, like it's collapsing down stuff that shouldn't be collapsed. For example, it could be that the MIRI/CFAR/etc. social field could set up (maybe by accident, or even due to no fault of any of the "central" people) the conditions where "psychosis" is the best of the bad available options; in which case it makes sense to attribute causal fault to the social field, not to a person who e.g. makes that clear to you, and therefore more proximally causes your breakdown. (Of course there's disagreement about whether that's the state of the world, but it's not necessarily incoherent.)
I do get the sense that jessicata is relating in a funny way to Michael Vassar, e.g. by warping the narrative around him while selectively posing as "just trying to state facts" in relation to other narrative fields; but this is hard to tell, since it's also what it might look like if Michael Vassar was systematically scapegoated, and jessicata is reporting more direct/accurate (hence less bad-seeming) observations.
Where did jessicata corroborate this sentence "then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil" ?
I should note that, as an outsider, the main point I recall Eliezer making in that vein is that he used Michael Vassar as a model for the character who was called Professor Quirrell. As an outsider, I didn't see that as an unqualified endorsement - though I think your general message should be signal-boosted.
Eliezer has openly said Quirrell's cynicism is modeled after a mix of Michael Vassar and Robin Hanson.
But from my perspective, you are an unreliable narrator.
I appreciate you're telling me this given that you believe it. I definitely am in some ways, and try to improve over time.
then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil
I said in the text that (a) there were conversations about corruption in EA institutions, including about the content of Ben Hoffman's posts, (b) I was collaborating with Michael Vassar at the time, (c) Michael Vassar was commenting about social epistemology. I admit that connecting points (a) and (c) would have made the connection clearer, but it wouldn't have changed the text much.
In cases where someone was previously part of a "cult" and later says it was a "cult" and abusive in some important ways, there has to be a stage where they're thinking about how bad the social context was, and practically always, that involves conversations with other people who are encouraging them to look at the ways their social context is bad. So my having conversations where people try to convince me CFAR/MIRI are evil is expected given what el...
I'm not going to comment on drug usage in detail for legal reasons, except to note that there are psychedelics legal in some places, such as marijuana in CA.
It doesn't make sense to attribute unique causal responsibility for psychotic breaks to anyone, except maybe to the person it's happening to. There are lots of people all of us were talking to in that time period who influenced us, and multiple people were advocating psychedelic use. Not all cases happened to people who were talking significantly with Michael around the time. As I mentioned in the OP, as I was becoming more psychotic, people tried things they thought might help, which generally didn't, and they could have done better things instead. Even causal responsibility doesn't imply blame, e.g. Eliezer had some causal responsibility due to writing things that attracted people to the Berkeley scene where there were higher-variance psychological outcomes. Michael was often talking with people who were already "not ok" in important ways, which probably affects the statistics.
Please see my comment on the grandparent.
I agree with Jessica's general characterization that this is better understood as multi-causal rather than the direct cause of actions by one person.
Relevant bit of social data: Olivia is the most irresponsible-with-drugs person I've ever met, by a sizeable margin; and I know of one specific instance (not a person named in your comment or any other comments on this post) where Olivia gave someone an ill-advised drug combination and they had a bad time (though not a psychotic break).
My memory of the RBC incident you're referring to was that it wasn't supplements that did it, it was a caffeine overdose from energy drinks leading into a panic attack. But there were certainly a lot of supplements around and they could've played a role I didn't know about.
When I say that I believe Olivia is irresponsible with drugs, I'm not excluding the unscheduled supplements, but the story I referred to involved the scheduled kind.
A question for the 'Vassarites', if they will: were you doing anything like the "unihemispheric sleep" exercise (self-inducing hallucinations/dissociative personalities by sleep deprivation) the Zizians are described as doing?
I banned him from SSC meetups for a combination of reasons including these
If you make bans like these, it would be worth communicating them to the people organizing SSC meetups. Especially when a ban is made for the safety of meetup participants, not communicating it seems very strange to me.
After leaving the Bay Area, Vassar lived in Berlin for a while, and for decisions about whether or not to make an effort to integrate someone like him (and invite him to LW and SSC meetups), that kind of information is valuable. Bay Area people not sharing it, while claiming to have done anything that would work in practice like a ban, feels misleading.
For reasons I don't fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything's kind of been frozen in place since then.
I think Vassar left the Bay Area more than a year before COVID happened. As far as I remember, his stated reasoning was something along the lines of everyone in the Bay Area getting mindkilled by leftish ideology.
It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn't publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation.
If there are bans that are supposed to be enforced, mentioning that in the mails that go out to organizers for an ACX Everywhere event would make sense. I'm not 100% sure that I got all the mails, because Ruben forwarded mails for me (I normally organize LW meetups in Berlin and support Ruben with the SSC/ACX meetups), but in those there was no mention of the word ban.
I don't think it needs to be public, but having such information in a mail like the one from Aug 23 would likely be necessary for a good portion of the meetup organizers to know that there's an expectation that certain people aren't welcome.
https://www.lesswrong.com/posts/iWWjq5BioRkjxxNKq/michael-vassar-at-the-slatestarcodex-online-meetup seems to have happened after that point in time. Vassar not only attended a Slate Star Codex meetup but was central to it, presenting his thoughts.
I organized that, so let me say that:
It seems to me that despite organizing multiple SSC events you had no knowledge that Vassar was banned from SSC events. Nor did anyone reading the event announcement know, to the extent that they would have told you Vassar was banned before the event happened.
To me that suggests there's a problem with not sharing information about who's banned with those organizing meetups in an effective way, so that a ban has the consequence one would expect it to have.
So, it's been a long time since I actually commented on Less Wrong, but since the conversation is here...
Hearing about this is weird for me, because I feel like, compared to the opinions I heard about him from other people in the community, I kind of... always had uncomfortable feelings about Mike Vassar? And I say this without having had direct personal contact with him except, IIRC, maybe one meetup I attended where he was there and we didn't talk directly, although we did occasionally participate in some of the same conversations online.
By all accounts, it sounds like he's always been quite charismatic in person, and this isn't the first time I've heard someone describe him as a "wizard." But empirically, there are some people who're very charismatic who propagate some really bad ideas and whose impacts on the lives of people around them, or on society at large, can be quite negative. As of last I was paying attention to him, I wouldn't have expected Mike Vassar to have that negative an effect on the lives of the people around him, but I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought ...
I met Vassar once. He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd. Something about his delivery was so captivating that it took me a while to "shake off the fairy dust" and realize just how silly some of his claims were, even when it should have been obvious from the start. Moreover, his worldview seemed heavily based on paranoid / conspiracy-theory type thinking. So, yes, I'm not too surprised by Scott's revelations about him.
He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd.
Yeah, it definitely didn't work on me. I believe I wrote this thread shortly after my one-and-only interaction with him, in which he said a lot of things that made me very skeptical but that I couldn't easily refute, or have much time to think about, before he would move on to some other topic. (Interestingly, he actually replied in that thread even though I didn't mention him by name.)
It saddens me to learn that his style of conversation/persuasion "works" on many people who otherwise seem very smart and capable (and even self-selected for caring about being rational). It seems like pretty bad news as far as what kind of epistemic situation humanity is in (e.g., how easily we will be manipulated by even slightly-smarter-than-human AIs / human-AI systems).
I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought to be taken.
Heh, the same feeling here. I didn't have much opportunity to interact with him in person. I remember repeatedly hearing praise about how incredibly smart he is (from people whom I admired), then trying to find something smart written by him, and feeling unimpressed and confused, like maybe I wasn't reading the right texts or I failed to discover the hidden meaning that people smarter than me have noticed.
Hypothesis 1: I am simply not smart enough to recognize his greatness. I can recognize people one level above me, and they can recognize people one level above them, but when I try to understand someone two levels above me, it's all gibberish to me.
Hypothesis 2: He is more persuasive in person than in writing. (But once he impressed you in person, you will now see greatness in his writing, too. Maybe because of the halo effect. Maybe because now you understand the hidden layers of what he actually meant by that.) Maybe he is more persuasive in person because he can make his message optimized for the receiver; which might be a good thing...
Not a direct response to you, but if anyone who hasn't talked to Vassar is wanting an example of Vassar-conversation that may be easier to understand or get some sense from than most examples would (though it'll have a fair bit in it that'll probably still seem false/confusing), you might try Spencer Greenberg's podcast with Vassar.
As a datapoint: I listened to that podcast 4 times, and took notes 3 of those 4 times, to try and clearly parse what he's saying. I certainly did not fully succeed.
It seems like he said some straightforwardly contradictory things? For instance, that strong conflict theorists trust their own senses and feelings more, but also trust them less?
I would really like to understand what he's getting at by the way, so if it is clearer for you than it is for me, I'd actively appreciate clarification.
Seeing as how you posted this 9 days ago, I hope you did not bite off more than you could chew, and I hope you do not want to scream anymore.
In Harry Potter the standard practice seems to be to "eat chocolate" and perhaps "play with puppies" after exposure to ideas that are both (1) possibly true, and (2) very saddening to think about.
Then there is Gendlin's Litany (and please note that I am linking to a critique, not to unadulterated "yay for the litany" ideas) which I believe is part of Lesswrong's canon somewhat on purpose. In the critique there are second and third thoughts along these lines, which I admire for their clarity, and also for their hopefulness.
...Ideally [a better version of the Litany] would communicate: “Lying to yourself will eventually screw you up worse than getting hurt by a truth,” instead of “learning new truths has no negative consequences.”
This distinction is particularly important when the truth at hand is “the world is a fundamentally unfair place that will kill you without a second thought if you mess up, and possibly even if you don’t.”
EDIT TO CLARIFY: The person who goes about their life ignoring the universe’s Absolute Neutrality is very fundamentally
There's also these 2 podcasts which cover quite a variety of topics, for anyone who's interested:
You've Got Mel - With Michael Vassar
Jim Rutt Show - Michael Vassar on Passive-Aggressive Revolution
Okay, meta: This post has over 500 comments now and it's really hard to keep a handle on all of the threads. So I spent the last 2 hours trying to outline the main topics that keep coming up. Most top-level comments are linked to but some didn't really fit into any category, so a couple are missing; also apologies that the structure is imperfect.
Topic headers are bolded and are organized very roughly in order of how important they seem (both to me personally and in terms of the amount of air time they've gotten).
I find something in me really revolts at this post, so epistemic status… not-fully-thought-through-emotions-are-in-charge?
Full disclosure: I am good friends with Zoe; I lived with her for the four months leading up to her post, and was present to witness a lot of her processing and pain. I’m also currently dating someone named in this post, but my reaction to this was formed before talking with him.
First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away. If the points in the post felt more compelling, then I’d probably be more down for an argument of “we should bin these together and look at this as a whole”, but as it stands the stuff listed in here feels like it’s describing something significantly less damaging, and of a different kind of damage. I’m also annoyed that this post relies so heavily on Zoe’s, and the comparison feels like it cheapens what Zoe went through. I keep having a recurring thought that the author must have utterly failed to understand the intensity of the very direct impact from Leverage’s operations on Zoe. Mo...
I want to note that this post (top-level) now has more than 3x the number of comments that Zoe's does (or nearly 50% more comments than the Zoe+BayAreaHuman posts combined, if you think that's a more fair comparison), and that no one has commented on Zoe's post in 24 hours. [ETA: This changed while I was writing this comment. The point about lowered activity still stands.]
This seems really bad to me — I think that there was a lot more that needed to be figured out wrt Leverage, and this post has successfully sucked all the attention away from a conversation that I perceive to be much more important.
I keep deleting sentences because I don't think it's productive to discuss how upset this makes me, but I am 100% with Aella here. I was wary of this post to begin with and I feel something akin to anger at what it did to the Leverage conversation.
I had some contact with Leverage 1.0 — had some friends there, interviewed for an ops job there, and was charted a few times by a few different people. I have also worked for both CFAR and MIRI, though never as a core staff member at either organization; and more importantly, I was close friends with maybe 50% of the people who worked at ...
It seems like it's relatively easy for people to share information in the CFAR+MIRI conversation. On the other hand, for those people who actually have the most central information to share in the Leverage conversation, it's not as easy to share it.
In many cases I would expect that private, in-person conversations are needed to progress the Leverage debate, and that just takes time. Those people at Leverage who want to write up their own experiences likely benefit from time to do that.
Practically, helping Anna get an overview of the timeline of members and funders, and getting people to share stories with Aella, seems to be the way forward, and that's largely not about leaving LW comments.
I agree with the intent of your comment mingyuan, but perhaps the asymmetry in activity on this post is simply due to the fact that there are an order of magnitude (or several orders of magnitude?) more people with some/any experience and interaction with CFAR/MIRI (especially CFAR) compared to Leverage?
I think some of it has got to be that it's somehow easier to talk about CFAR/MIRI, rather than a sheer number of people thing. I think Leverage is somehow unusually hard to talk about, such that maybe we should figure out how to be extraordinarily kind/compassionate/gentle to anyone attempting it, or something.
I agree that Leverage has been unusually hard to talk about bluntly or honestly, and I think this has been true for most of its existence.
I also think the people at the periphery of Leverage, are starting to absorb the fact that they systematically had things hidden from them. That may be giving them new pause, before engaging with Leverage as a topic.
(I think that seems potentially fair, and considerate. To me, it doesn't feel like the same concern applies in engaging about CFAR. I also agree that there were probably fewer total people exposed to Leverage, at all.)
...actually, let me give you a personal taste of what we're dealing with?
The last time I choose to talk straightforwardly and honestly about Leverage, with somebody outside of it? I had to hard-override an explicit but non-legal privacy agreement*, to get a sanity check. When I was honest about having done so shortly thereafter, I completely and permanently lost one of my friendships as a result.
Lost-friend says they were traumatized as a result of me doing this. That having "made the mistake of trusting me" hurt their relationships with other Leveragers. That at the time, they wished they'd lied to me, which stung.
I t...
I'm finally out about my story here! But I think I want to explain a bit of why I wasn't being very clear, for a while.
I've been "hinting darkly" in public rather than "telling my full story" due to a couple of concerns:
I don't want to "throw ex-friend under the bus," to use their own words! Even friend's Leverager partner (who they weren't allowed to visit, if they were "infected with objects") seemed more "swept-up in the stupidity" than "malicious." I don't know how to tell my truth, without them feeling drowned out. I do still care about that. Eurgh.
Via models that come out of my experience with Brent: I think this level of silence, makes the most sense if some ex-Leveragers did get a substantial amount of good out of the experience (sometimes with none of the bad, sometimes alongside it), and/or if there's a lot of regrettable actions taken by people who were swept up in this at the time, by people who would ordinarily be harmless under normal circumstances. I recognize that bodywork was very helpful to my friend, in working through some of their (unrelated) trauma. I am more than a little reluctant to put people through the sort of mob-driven invalidation I felt, in the
The fact that the people involved apparently find it uniquely difficult to talk about is a pretty good indication that Leverage != CFAR/MIRI in terms of cultishness/harms etc.
Yes; I want to acknowledge that there was a large cost here. (I wasn't sure, from just the comment threads; but I just talked to a couple people who said they'd been thinking of writing up some observations about Leverage but had been distracted by this.)
I am personally really grateful for a bunch of the stuff in this post and its comment thread. But I hope the Leverage discussion really does get returned to, and I'll try to lend some momentum that way. Hope some others do too, insofar as some can find ways to actually help people put things together or talk.
Seems to me that, given the current situation, it would probably be good to wait maybe two more days until this debate naturally reaches the end. And then restart the debate about Leverage.
Otherwise, we risk having two debates running in parallel, interfering with each other.
The comments section of this post is full of CFAR and MIRI employees attempting to do collaborative truth-seeking. The only comments made by Leverage employees in comparable threads were attempts at reputation management. That alone tells you a lot!
Then it is good that this debate happened. (Despite my shock when I saw it first.) It's just the timing with regards to the debate about Leverage that is unfortunate.
By way of narrowing down this sense, which I think I share, if it's the same sense: leaving out the information from Scott's comment about a MIRI-opposed person who is advocating psychedelic use and causing psychotic breaks in people, and particularly this person talks about MIRI's attempts to have any internal info compartments as a terrible dark symptom of greater social control that you need to jailbreak away from using psychedelics, and then those people have psychotic breaks - leaving out this info seems to be not something you'd do in a neutrally intended post written from a place of grave concern about community dynamics. It's taking the Leverage affair and trying to use it to make a point, and only including the info that would make that point, and leaving out info that would distract from that point. And I'm not going to posture like that's terribly bad inhuman behavior, but we can see it and it's okay to admit to ourselves that we see it.
And it's also okay for somebody to think that the original Leverage affair needed to be discussed on its own terms, and not be carefully reframed in exactly the right way to make a point about a higher-profile group the author...
not something you'd do in a neutrally intended post written from a place of grave concern about community dynamics
I'm not going to posture like that's terribly bad inhuman behavior, but we can see it and it's okay to admit to ourselves that we see it
These have the tone of allusions to some sort of accusation, but as far as I can tell you're not actually accusing Jessica of any transgression here, just saying that her post was not "neutrally intended," which - what would that mean? A post where Gricean implicature was not relevant?
Can you clarify whether you meant to suggest Jessica was doing some specific harmful thing here or whether this tone is unendorsed?
Okay, sure. If what Scott says is true, and it matches my recollections of things I heard earlier - though I can attest to very little of it of my direct observation - then it seems like this post was written with knowledge of things that would make the overall story arc it showed, look very different, and those things were deliberately omitted. This is more manipulation than I myself would personally consider okay to use in a situation like this one, though I am ever mindful of Automatic Norms and the privilege of being more verbally facile than others in which facts I can include but still make my own points.
First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away...
I want to second this reaction (basically your entire second paragraph). I have been feeling the same but hadn't worked up the courage to say it.
I am also mad at what I see to be piggybacking on Zoe's post, downplaying of the harms described in her post, and a subtle redirection of collective attention away from potentially new, timid accounts of things that happened to a specific group of people within Leverage who seem to have a lot of difficulty talking about it.
I hope that the sustained collective attention required to witness, make sense of and address the accounts of harm coming out of the psychology division of Leverage doesn’t get lost as a result of this post being published when it was.
For a moment I actually wondered whether this was a genius-level move by Leverage, but then I decided that I am just being paranoid. But it did derail the previous debate successfully.
On the positive side, I learned some new things. Never heard about Ziz before, for example.
EDIT:
Okay, this is probably silly, but... there is no connection between the Vassarites and Leverage, right? I just realized that my level of ignorance does not justify me dismissing a hypothesis so quickly. And of course, everyone knows everyone, but there are different levels of "knowing people", and... you know what I mean, hopefully. I will defer to judgment of people from Bay Area about this topic.
Outside of "these people probably talked to each other like once every few months" I think there is no major connection between Leverage and the Vassarites that I am aware of.
Thanks.
I mostly assumed this; I suppose in the opposite case someone probably would have already mentioned that. But I prefer to have it confirmed explicitly.
The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away.
I'm assuming that sensemaking is easier, rather than harder, with more relevant information and stories shared. I guess if it's pulling the spotlight away, it's partially because it's showing relevant facts about things other than Leverage, and partially because people will be more afraid of scapegoating Leverage if the similarities to MIRI/CFAR are obvious. I don't like scapegoating, so I don't really care if it's pulling the spotlight away for the second reason.
If the points in the post felt more compelling, then I’d probably be more down for an argument of “we should bin these together and look at this as a whole”, but as it stands the stuff listed in here feels like it’s describing something significantly less damaging, and of a different kind of damage.
I don't really understand what Zoe went through, just reading her post (although I have talked with other ex-Leverage people about the events). You don't understand what I went through, either. It was really, really psychologically disturbing. I sound paran...
I don't really understand what Zoe went through, just reading her post (although I have talked with other ex-Leverage people about the events). You don't understand what I went through, either. It was really, really psychologically disturbing. I sound paranoid writing what I wrote, but this paranoia affected so many people.
It would probably have been better if you had focused on your own experience and dropped all the talk about Zoe from this post. That would make it easier for the reader to just take the information value from your experience.
I think your post is still valuable information, but the added narrative layer makes it harder to interact with than it would have been if it had focused more on your experience.
One example for this is comparing Zoe’s mention of someone at Leverage having a psychotic break to the author having a psychotic break. But Zoe’s point was that Leverage treated the psychotic break as an achievement, not that the psychotic break happened.
From the quotes in Scott's comment, it seems to me also the case that Michael Vassar also treated Jessica's and Ziz's psychoses as an achievement.
it seems to me also the case that Michael Vassar also treated Jessica's [...] psycho[sis] as an achievement
Objection: hearsay. How would Scott know this? (I wrote a separate reply about the ways in which I think Scott's comment is being unfair.) As some closer-to-the-source counterevidence against the "treating as an achievement" charge, I quote a 9 October 2017 2:13 p.m. Signal message in which Michael wrote to me:
Up for coming by? I'd like to understand just how similar your situation was to Jessica's, including the details of her breakdown. We really don't want this happening so frequently.
(Also, just, whatever you think of Michael's many faults, very few people are cartoon villains that want their friends to have mental breakdowns.)
First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away.
If we're trying to solve problems rather than attack the bad people, then the boundaries of the discussion should be determined by the scope of the problem, not by which people we're saying are bad. If you're trying to attack the bad people without standards or a theory of what's going on, that's just mob violence.
I... think I am trying to attack the bad people? I'm definitely conflict-oriented around Leverage; I believe that on some important level treating that organization or certain people in it as good-intentioned-but-misguided is a mistake, and a dangerous one. I don't think this is true for MIRI/CFAR; as is summed up pretty well in the last section of Orthonormal's post here. I'm down for the boundaries of the discussion being determined by the scope of the problem, but I perceive the original post here to be outside the scope of the problem.
I'm also not sure how to engage with your last sentence. I do have theories for what is going on (but regardless, I'm not sure that giving a mob a theory makes it not a mob).
I perceive you as doing a conversational thing here that I don't like, where you like... imply things about my position without explicitly stating them? Or talk from a heavy frame that isn't explicit?
I don't really view you as engaging in good faith at this point, so I'm precommitting not to respond to you after this.
Flagging that... I somehow want to simultaneously upvote and downvote Benquo's comment here.
Upvote because I think he's standing for good things. (I'm pretty anti-scapegoating, especially of the 'quickly' kind that I think he's concerned about.)
Downvote because it seems weirdly in the wrong context, like he's trying to punch at some kind of invisible enemy. His response seems incongruous with Aella's actual deal.
I have some probability on miscommunication / misunderstanding.
But also ... why ? are you ? why are your statements so 'contracting' ? Like they seem 'narrowizing' of the discussion in a way that seems like it philosophically tenses with your stated desire for 'revealing problems'. And they also seem weirdly 'escalate-y' like somehow I'm more tense in my body as I read your comments, like there's about to be a fight? Not that I sense any anger in you, but I sense a 'standing your ground' move that seems like it could lead to someone trying to punch you because you aren't budging.
This is all metaphorical language for what I feel like your communication style is doing here.
Thanks for separating evaluation of content from evaluation of form. That makes it easy for me to respond to your criticism of my form without worrying so much that it's a move to suppress imperfectly expressed criticism.
The true causal answer is that when I perceive someone as appealing to a moralistic framework, I have a tendency to criticize their perspective from inside a moralistic frame, even though I don't independently endorse moralizing. While this probably isn't the best thing I could do if I were perfectly poised, I don't think this is totally pointless either. Attempts to scapegoat someone via moralizing rely on the impression that symmetric moral reasoning is being done, so they can be disrupted by insistent opposition from inside that frame.
You might think of it as standing in territory I think someone else has unjustly claimed, and drawing attention to that fact. One might get punched sometimes in such circumstances, but that's not so terrible; definitely not as bad as being controlled by fear, and it helps establish where recourse/justice is available and where it isn't, which is important information to have! Occasionally bright young people with a moral compass ge...
The true causal answer is that when I perceive someone as appealing to a moralistic framework, I have a tendency to criticize their perspective from inside a moralistic frame, even though I don't independently endorse moralizing.
o
hmmm, well i gotta chew on that more but
Aella seems like a counter-productive person to stand your ground against. I sense her as mainly being an 'advocate' for Zoe. She claims wanting to attack the bad people, but compared with other commenters, I sense less 'mob violence' energy from her and ... maybe more fear that an important issue will be dropped / ignored. (I am not particularly afraid of this; the evidence against Leverage is striking and damning enough that it doesn't seem like it will readily be dropped, even if the internet stops talking about it. In fact I hope to see the internet talking about it a bit less, as more real convos happen in private.)
I'm a bit worried about the way Scott's original take may have pulled us towards a shared map too quickly. There's also a general anti-jessicata vibe I'm getting from 'the room' but it's non-specific and has a lot to do with karma vote patterns. Naming these here for the sake of group awareness ...
I don't understand where guarantees came into this. I don't understand how I could answer a question of the form "why did you do X rather than Y" without making some kind of comparison of the likely outcomes of X and Y.
I do know that in many cases people falsely claim to be comparing costs and benefits honestly, or falsely claim that some resource is scarce, as part of a strategy of coercion. I have no reason to do this to myself but I see many people doing it and maybe that's part of what turned you off from the idea.
On the other hand, there's a common political strategy where a dominant coalition establishes a narrative that something should be provided universally without rationing, or that something should be absolutely prevented without acknowledging taboo tradeoffs. Since this policy can't be implemented as stated, it empowers people in the position to decide which exceptions to make, and benefits the kinds of people who can get exceptions made, at the expense of less centrally connected people.
It seems to me like thinking about tradeoffs is the low-conflict alternative to insisting on guaranteed outcomes.
Generalizing from your objection to thinking about things in terms of r...
Uhhh sorry, the thing about 'guarantees' was probably a mis-speak.
For reference, I used to be a competitive gamer. This meant I used to use resource management and cost-benefit analysis a lot in my thinking. I also ported those framings into broader life, including how to win social games. I am comfortable thinking in terms of resource constraints, and lived many years of my life in that mode. (I was very skilled at games like MTG, board games, and Werewolf/Mafia.)
I have since updated to realize how that way of thinking was flawed and dissociated from reality.
I don't understand how I could answer a question of the form "why did you do X rather than Y" without making some kind of comparison of the likely outcomes of X and Y.
I wrote a whole response to this part, but ... maybe I'm missing you.
Thinking strategically seems fine to the extent that one is aligned with love / ethics / integrity and not acting out of fear, hate, or selfishness. The way you put your predicament caused me to feel like you were endorsing a fear-aligned POV.
..."Since I expect my adversaries to make use of resources they seize to destroy more of what I care about," "But I'm in an adversaria
optimizing processes coordinating with copies of themselves, distributed over many people
Question about balance: how do you not end up reifying these in your mind, creating a paranoid sense of 'there be ghosts lurking in shadows' ?
This question seems central to me because the poison I detect in Vassar-esque-speak is
a) Memetically more contagious stories seem to include lurking ghosts / demons / shadows because adding a sense of danger or creating paranoia is sticky and salient. Vassar seems to like inserting a sense of 'hidden danger' or 'large demonic forces' into his theories and way of speaking about things. I'm worried this is done for memetic intrigue, viability, and stickiness, not necessarily because it's more true. It makes people want to listen to him for long periods of time, but I don't sense it being an openly curious kind of listening but a more addicted / hungry type of listening. (I can detect this in myself.)
I guess I'm claiming Vassar has an imbalance between the wisdom/truth of his words and the power/memetic viability of his words. With too much on the side of power.
b) Reifying these "optimizing processes coordinating" together, maybe "aga...
I empathise with the feeling of slipperiness in the OP; I feel comfortable attributing that to the subject matter rather than malice.
If I had an experience that matched Zoe's to the degree jessicata's did (superficially or otherwise), I'd feel compelled to post it. I found it helpful for the question of whether "insular rationalist group gets weird and experiences rash of psychotic breaks" is a community problem, or just a problem with one stray dude.
Scott's comment does seem to verify the "insular rationalist group gets weird and experiences rash of psychotic breaks" trend, but it seems to be a different group than the one named in the original post.
One of the things that can feel like gaslighting in a community that attracts highly scrupulous people is when posting about your interpretation of your experience is treated as a contractual obligation to defend the claims and discuss any possible misinterpretations or consequences of what is a challenging thing to write in the first place.
I feel like, here and in so many other comments in this discussion, there are important and subtle distinctions being missed. I don't have any intention to unconditionally accept and support all accusations made (I have seen false accusations cause incredible harm and suicidality in people close to me). I do expect people who make serious claims about organizations to be careful about how they do it. I think Zoe's Leverage post easily met my standard, but this post here triggered a lot of warning flags for me, and I find it important to pay attention to those.
Speaking of highly scrupulous...
I think that the phrases "treated as a contractual obligation" and "any possible misinterpretations or consequences" are both hyperbole, if they are (as they seem) intended as fair summaries or descriptions of what Aella wrote above.
I think there's a skipped step here, where you're trying to say that what Aella wrote above might imply those things, or might result in those things, or might be tantamount to those things, but I think it's quite important to not miss that step.
Before objecting to Aella's [A] by saying "[B] is bad!", I think one should justify or at least explicitly assert [A → B].
Yes, and to clarify, I am not attempting to imply that there is something wrong with Aella's comment. It's more that this is a pattern I have observed and talked about with others. I don't think people playing a part in a pattern that has some negative side effects should necessarily have a responsibility frame around that, especially given that one literally can't track all the various possible side effects of one's actions. I see epistemic statuses as partially attempting to give people more affordance for thinking about possible side effects of the multi-context nature of online communication, and that was used to good effect here; I likely would have had a more negative reaction to Aella's post if it hadn't included the epistemic status.
The community still seems in the middle of sensemaking around Leverage
Understanding how other parts of the community were similar/dissimilar to Leverage seems valuable from a sensemaking point of view.
Lots of parts of the post sort of implicitly present things as important, or ask you to draw conclusions without explicitly pointing out those conclusions.
I think you may be asking your reader to draw the conclusion that this is a dishonest way to write, without explicitly pointing out that conclusion :-) Personally, I see nothing wrong with presenting only observations.
Some context, please. Imagine the following scenario:
There is absolutely nothing wrong with this, whether it happens the same day, the next day, or week later. Maybe victim B was encouraged by (reactions to) victim A's message, maybe it was just a coincidence. Nothing wrong with that either.
Another scenario:
This is a good thing to happen; more evidence, encouragement for further victims to come out.
But this post is different in a few important ways. First, Jessicata piggybacks on Zoe's story a lot, insinuating analogies, but providing very little actual data. (If you rewrote the article to avoid referring to Zoe, it would be 10 times shorter.) Second, Jessicata repeatedly makes comparison between Zoe's experience at Leverage and her experience at MIRI/CFAR, and usually concludes that Leverage was less bad (for reasons that are weird to me, such as because their abuse was legible, or because they provided space for people to talk about demons and exorcise them). Here are some quotes:
...I want to disagree with a frame that says th
I don't think "don't police victims' timing" is an absolute rule; not policing the timing is a pretty good idea in most cases. I think this is an exception.
And if I wasn't clear, I'll explicitly state my position here: I think it's good to pay close attention to negative effects communities have on its members, and I am very pro people talking about this, and if people feel hurt by an organization it seems really good to have this publicly discussed.
But I believe the above post did not simply do that. It also did other things, which is to frame things in ways I perceive as misleading, leave out key information relevant to a discussion (as per Eliezer's comment here), and also rely very heavily and directly on Zoe's account of Leverage to bring validity to their own claims, when I perceive Leverage as having been both significantly worse and worse in a different category of way. If the above post hadn't done these things, I don't think I would have any issue with the timing.
Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness.
Just zooming in on this, which stood out to me personally as a particular thing I'm really tired of.
If you're not disagreeing with people about important things then you're not thinking. There are many options for how to negotiate a significant disagreement with a colleague, including spending lots of time arguing about it, finding a compromise action, or stopping collaborating with the person (if it's a severe disagreement, which often it can be). But telling someone that by disagreeing they're claiming to be 'better' than another person in some way always feels to me like an attempt to 'control' the speech and behavior of the person you're talking to, and I'm against it.
It happens a lot. I recently overheard someone (who I'd not met before) telling Eliezer Yudkowsky that he's not allowed to have extreme beliefs about AGI outcomes. I don't recall the specific claim, just that EY's probability mass for the claim was in the 95-99% range. The person argued that because EY disagrees w...
I affirm the correctness of Ben Pace's anecdote about what he recently heard someone tell me.
"How dare you think that you're better at meta-rationality than Eliezer Yudkowsky, do you think you're special" - is somebody trolling? Have they never read anything I've written in my entire life? Do they have no sense, even, of irony? Yeah, sure, it's harder to be better at some things than me, sure, somebody might be skeptical about that, but then you ask for evidence or say "Good luck proving that to us all eventually!" You don't be like, "Do you think you're special?" What kind of bystander-killing argumentative superweapon is that? What else would it prove?
I really don't know how I could make this any clearer. I wrote a small book whose second half was about not doing exactly this. I am left with a sense that I really went to some lengths to prevent this, I did what society demands of a person plus over 10,000% (most people never write any extended arguments against bad epistemology at all, and society doesn't hold that against them), I was not subtle. At some point I have to acknowledge that other human beings are their own people...
The irony was certainly not lost on me; I've edited the post to make this clearer to other readers.
I'm glad you agree that the behavior Jessica describes is explicitly opposed to the content of the Sequences, and that you clearly care a lot about this. I don't think anyone can reasonably claim you didn't try hard to get people to behave better, or could reasonably blame you for the fact that many people persistently try to do the opposite of what you say, in the name of Rationality.
I do think it would be a very good idea for you to investigate why & how the institutions you helped build and are still actively participating in are optimizing against your explicitly stated intentions. Anna's endorsement of this post seems like reasonably strong confirmation that organizations nominally committed to your agenda are actually opposed to it, unless you're actually checking. And MIRI/CFAR donors seem to for the most part think that you're aware of and endorse those orgs' activities.
When Jessica and another recent MIRI employee asked a few years ago for some of your time to explain why they'd left, your response was:
...My guess is that I could talk over Signal voice for 30 minutes or in person for 15 minutes on the 15th, with an upcoming other commitment providing a definite cutoff poi
Anna's endorsement of this post seems like reasonably strong confirmation that organizations nominally committed to your agenda are actually opposed to it,
Presumably Eliezer's agenda is much broader than "make sure nobody tries to socially enforce deferral to high-status figures in an ungrounded way" though I do think this is part of his goals.
The above seems to me like it tries to equivocate between "this is confirmation that at least some people don't act in full agreement with your agenda, despite being nominally committed to it" and "this is confirmation that people are actively working against your agenda". These two really don't strike me as the same, and I really don't like how this comment seems like it tries to equivocate between the two.
Of course, the claim that some chunk of the community/organizations Eliezer created are working actively against some agenda that Eliezer tried to set for them is plausible. But calling the above a "strong confirmation" of this fact strikes me as a very substantial stretch.
It's explicitly opposition to core Sequences content, which Eliezer felt was important enough to write a whole additional philosophical dialogue about after the main Sequences were done. Eliezer's response when informed about it was:
is somebody trolling? Have they never read anything I’ve written in my entire life? Do they have no sense, even, of irony?
That doesn't seem like Eliezer agrees with you that someone got this wrong by accident, that seems like Eliezer agrees with me that someone identifying as a Rationalist has to be trying to get core things wrong to end up saying something like that.
I don't think this follows. I do not see how degree of wrongness implies intent. Eliezer's comment rhetorically suggests intent ("trolling") as a way of highlighting how wrong the person is; he is free to correct me if I am wrong, but I am pretty sure that is not an actual suggestion of intent, only a rhetorical one.
I would say moreover, that this is the sort of mistake that occurs, over and over, by default, with no intent necessary. I might even say that it is avoiding, not committing, this sort of mistake, that requires intent. Because this sort of mistake is just sort of what people fall into by default, and avoiding it requires active effort.
Is it contrary to everything Eliezer's ever written? Sure! But reading the entirety of the Sequences, calling yourself a "rationalist", does not in any way obviate the need to do the actual work of better group epistemology, of noticing such mistakes (and the path to them) and correcting/avoiding them.
I think we can only infer intent like you're talking about if the person in question is, actually, y'know, thinking about what they're doing. But I think people are really, like, acting on autopilot a pretty big fraction of the time; n...
Behavior is better explained as strategy than as error, if the behaviors add up to push the world in some direction (along a dimension that's "distant" from the behavior, like how "make a box with food appear at my door" is "distant" from "wiggle my fingers on my keyboard"). If a pattern of correlated error is the sort of pattern that doesn't easily push the world in a direction, then that pattern might be evidence against intent. For example, the conjunction fallacy will produce a pattern of wrong probability estimates with a distinct character, but it seems unlikely to push the world in some specific direction (beyond whatever happens when you have incoherent probabilities). (Maybe this argument is fuzzy on the edges, like if someone keeps trying to show you information and you keep ignoring it, you're sort of "pushing the world in a direction" when compared to what's "supposed to happen", i.e. that you update; which suggests intent, although it's "reactive" rather than "proactive", whatever that means. I at least claim that your argument is too general, proves too much, and would be more clear if it were narrower.)
"How dare you think that you're better at meta-rationality than Eliezer Yudkowsky, do you think you're special" reads to me as something Eliezer Yudkowsky himself would never write.
I don't recall the specific claim, just that EY's probability mass for the claim was in the 95-99% range. The person argued that because EY disagrees with some other thoughtful people on that question, he shouldn't have such confidence.
I think people conflate the very reasonable "I am not going to adopt your 95-99% range because other thoughtful people disagree and I have no particular reason to trust you massively more than I trust other people" with the different claim "the fact that other thoughtful people disagree means there's no way you could arrive at 95-99% confidence", which is false. I think thoughtful people disagreeing with you is decent evidence that you are wrong, but it can still be outweighed.
I sought a lesson we could learn from this situation, and your comment captured such a lesson well.
This is reminiscent of the message of the Dune trilogy. Frank Herbert warns about society's tendencies to "give over every decision-making capacity" to a charismatic leader. Herbert said in 1979:
The bottom line of the Dune trilogy is: beware of heroes. Much better rely on your own judgment, and your own mistakes.
If you're not disagreeing with people about important things then you're not thinking.
This is a great sentence. I kind of want it on a t-shirt.
If you're not disagreeing with people about important things then you're not thinking.
Indeed. And if people object to someone disagreeing with them, that would imply they are 100% certain about being right.
I recently overheard someone (who I'd not met before) telling Eliezer Yudkowsky that he's not allowed to have extreme beliefs about AGI outcomes.
On one hand, this suggests that the pressure to groupthink is strong. On the other hand, this is evidence of Eliezer not being treated as an infallible leader... which I suppose is good news in this avalanche of bad news.
(There is a method to reduce group pressure, by making everyone write their opinion first, and only then tell each other the opinions. Problem is, this stops working if you estimate the same thing repeatedly, because people already know what the group opinion was in the past.)
Kate Donovan messaged me to say:
I think four people experiencing psychosis in a period of five years, in a community this large with high rates of autism and drug use, is shockingly low relative to base rates.
[...]
A fast pass suggests that my 1-3% for lifetime prevalence was right, but mostly appearing at 15-35.
And since we have conservatively 500 people in the cluster (a lot more people than that attended CFAR workshops or are in MIRI or CFAR's orbit), 4 is low. Given that I suspect the cluster is larger and I am pretty sure my numbers don't include drug induced psychosis, just primary psychosis.
The base rate seems important to take into account here, though per Jessica, "Obviously, for every case of poor mental health that 'blows up' and is noted, there are many cases that aren't." (But I'd guess that's true for the base-rate stats too?)
This is a good point regarding the broader community. I do think that, given that at least 2 cases were former MIRI employees, there might be a higher rate in that subgroup.
EDIT: It's also relevant that a lot of these cases happened in the same few years. 4 of the 5 cases of psychiatric hospitalization or jail time I know about happened in 2017, the other happened sometime 2017-2019. I think these people were in the 15-35 age range, which spans 20 years.
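As a rough back-of-the-envelope check on the base-rate reasoning above, here is a minimal sketch assuming the figures quoted (a cluster of roughly 500 people, 1-3% lifetime prevalence, onset mostly at ages 15-35) plus an illustrative assumption of my own that onset is spread uniformly over that 20-year window; the 2% midpoint is also just a convenient choice, not from the original comments:

```python
# Rough expected-case estimate under the figures quoted above.
# Assumption (illustrative, not from the comments): onset is spread
# uniformly over ages 15-35 and everyone in the cluster is in that window.

cluster_size = 500            # conservative cluster size from the quoted message
lifetime_prevalence = 0.02    # midpoint of the quoted 1-3% lifetime prevalence
onset_window_years = 20       # onset "mostly appearing at 15-35"
observation_years = 5         # the roughly five-year period discussed

expected_cases = (
    cluster_size * lifetime_prevalence / onset_window_years * observation_years
)
print(f"Expected first-onset cases over {observation_years} years: {expected_cases:.1f}")
# -> about 2.5, which is the sense in which 4 observed cases is not far above
#    base rate for the broad cluster; a tighter subgroup (e.g. former MIRI
#    employees) or a single year like 2017 would look more anomalous.
```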
I'm a complete outsider looking in here, so here's an outsider's perspective (from someone in CS academia, currently in my early 30s).
I've never heard or seen anyone, in real life, ever have psychosis. I know of 0 cases. Yeah, I know that people don't share such things, but I've heard of no rumors either.
By contrast, depression/anxiety seems common (especially among grad students) and I know of a couple of suicides. There was even a murder! But never psychosis; without the internet I wouldn't even know it's a real thing.
I don't know what the official base rate is, but saying "4 cases is low" while referring to the group of people I'm familiar with (smart STEM types) is, from my point of view, absurd.
The rate you quote is high. There may be good explanations for this: maybe rationalists are more open about their psychosis when they get it. Maybe they are more gossipy so each case of psychosis becomes widely known. Maybe the community is easier to enter for people with pre-existing psychotic tendencies. Maybe it's all the drugs some rationalists use.
But pretending the reported rate of psychosis is low seems counterproductive to me.
I lived in a student housing cooperative for 3 years during my undergrad experience. These were non-rationalists. I lived with 14 people, then 35, then 35 (somewhat overlapping) people.
In these 3 years I saw 3 people go through a period of psychosis.
Once it was because of whippets, basically, and updated me very very strongly away from nitrous oxide being safe (it potentiates response to itself, so there's a positive feedback loop, and positive feedback loops in biology are intrinsically scary). Another time it was because the young man was almost too autistic to function in social environments and then feared that he'd insulted a woman and would be cast out of polite society for "that action and also for overreacting to the repercussions of the action". The last person was a mixture of marijuana and having his Christianity fall apart after being away from the social environment of his upbringing.
A striking thing about psychosis is that up close it really seems more like a biological problem rather than a philosophic one, whereas I had always theorized naively that there would be something philosophically interesting about it, with opportunities to learn or teach in a way that conn...
I agree with other commenters that you are just less likely to see psychosis even if it's there, both because it's not ongoing in the way that depression and anxiety are, and because people are less likely to discuss it. I was only one step away from Jessica in the social graph in October of 2017 and never had any inkling that she'd had a psychotic episode until just now. I also wasn't aware that Zack Davis had ever had a psychotic episode, despite having met him several times and having read his blog a bit. I also lived with Olivia during the time that she was apparently inspiring psychosis in others.
In fact, the only psychotic episodes I've known about are ones that had news stories written about them, which suggests to me that you are probably underestimating the extent to which people keep quiet about the psychotic episodes of themselves and those close to them. It seems in quite poor taste to gossip about, akin to gossiping about friends' suicide attempts (which I also assume happen much more often than I hear about — I think one generally only hears about the ones that succeed or that are publicized to spread awareness).
Just for thoroughness, here are the psychotic epis...
I feel like people keep telling me that psychosis around me should be higher than what I hear about, which is irrelevant to my point: my point is that the frequency with which I hear about psychosis in the rationalist community is like an order of magnitude higher than the frequency with which I hear about it elsewhere.
It doesn't matter whether people hide psychosis among my social group; the observation to explain is why people don't hide psychosis in the rationalist community to the same extent.
For example, you mention 2 separate examples of Bay Area rationalists making the news for psychosis. I know of no people in my academic community who have made the news for psychosis. Assuming equal background rates, what is left to explain is why rationalists are more likely to make the news when they get psychosis.
Another example: there have now been 1-2 people who have admitted to psychosis in blog posts intended as public callouts. I know of no people in my academic community who have written public callout blog posts in which they say they've had psychosis. Is there an explanation for why rationalists who've had psychosis are more likely to write public callout blog posts?
Anyway, this discussion feels kind of moot now that I've read Scott Alexander's update to his comment. He says that several people (who knew each other) all had psychosis around the same time in 2017. No reasonable person can think this is merely baseline; some kind of social contagion is surely involved (probably just people sharing drugs or drug recommendations).
Sampling error. Psychosis is not an ongoing thing, yielding many fewer chances to observe it than months- or years-long depression or anxiety. Psychosis often manifests when people are already isolated due to worsening mental health, whereas depression and anxiety can be exacerbated by exactly the situations in which you would observe them, i.e. socializing. Nor would people volunteer their experience, given the much greater stigma.
A hypothesis is that rationalists are a larger gossip community, so that e.g. you might hear about psychosis from 4 years ago in people you're nth-degree socially connected with, where maybe most other communities aren't like that?
Certainly possible! I mentioned this hypothesis upthread.
I wonder if there are ways to test it. For instance, do non-Bay-Arean rationalists also have a high rate of reported psychosis? I think not (not sure though), though perhaps most of the gossip centers on the Bay Area.
Are Bay Area rationalists also high in reported levels of other gossip-mediated things? I'm trying to name some, but most sexual ones are bad examples because of the polyamory confounder. How about: are Bay rationalists high in reported rates of plastic surgery? How about abortion? These seem like somewhat embarrassing things that you'd normally not find out about, but that people like to gossip about.
Or maybe people don't care to gossip about these things on the internet, because they are less interesting than psychosis.
I’m someone with a family history of psychosis and I spend quite a lot of time researching it: treatments, crisis response, cultural responses to it. There are roughly the same number of incidences of psychosis in my immediate to extended family as are described in this post in the extended rationalist community. Major predictive factors include stress, family history and use of marijuana (and, to a lesser extent, other psychedelics). I don’t have studies to back this up but I have an instinct based on my own experience that openness-to-experience and risk-of-psychosis are correlated in family risk factors. So given the drugs, stress and genetic openness, I’d expect generic Bay Area smart people to already have a fairly high risk of psychosis compared to, say, people in more conservative areas.
PhoenixFriend alleges multiple cases you didn't know about, but so far no one else has affirmed that those cases existed or were closely connected with CFAR/MIRI.
I think it's entirely possible that those cases did exist and will be affirmed, but at the moment my state is "betting on skeptical."
First, thank you for writing this.
Second, I want to jot down a thought I've had for a while now, and which came to mind when I read both this and Zoe's Leverage post.
To me, it looks like there is a recurring phenomenon in the rationalist/EA world where people...
I worked for CFAR full-time from 2014 until mid to late 2016, and have worked for CFAR part-time or as a frequent contractor ever since. My sense is that dynamics like those you describe were mostly not present at CFAR, or insofar as they were present weren't really the main thing. I do think CFAR has not made as much research progress as I would like, but I think the reasoning for that is much more mundane and less esoteric than the pattern you describe here.
The fact of the matter is that for almost all the time I've been involved with CFAR, there just plain hasn't been a research team. Much of CFAR's focus has been on running workshops and other programs rather than on dedicated work towards extending the art; while there have occasionally been people allocated to research, in practice even these would often end up getting involved in workshop preparation and the like.
To put things another way, I would say it's much less "the full-time researchers are off unproductively experimenting on their own brains in secret" and more "there are no full-time researchers". To the best of my knowledge CFAR has not ever had what I would consider a systematic research and development program -- ...
Maybe offtopic, but the "trying too hard to try" part rings very true to me. Been on both sides of it.
The tricky thing about work, I'm realizing more and more, is that you should just work. That's the whole secret. If instead you start thinking how difficult the work is, or how important to the world, or how you need some self-improvement before you can do the work effectively, these thoughts will slow you down and surprisingly often they'll be also completely wrong. It always turns out later that your best work wasn't the one that took the most effort, or felt the most important at the time; you were just having a nose-down busy period, doing a bunch of things, and only the passage of time made clear which of them mattered.
Does anyone have thoughts about avoiding failure modes of this sort?
Especially in the "least convenient possible world" where some of the bullet points are actually true -- like, if we're disseminating principles for wannabe AI Manhattan Projects, and we're optimizing the principles for the possibility that one of the wannabe AI Manhattan Projects is the real deal, what principles should we disseminate?
Most of my ideas are around "staying grounded" -- spend significant time hanging out with "normies" who don't buy into your worldview, maintain your sense of humor, fully unplug from work at least one day per week, have hobbies outside of work (perhaps optimizing explicitly for escapism in the form of computer games, TV shows, etc.) Possibly live somewhere other than the Bay Area, someplace with fewer alternative lifestyles and a stronger sense of community. (I think Oxford has been compared favorably to Berkeley with regard to presence of homeless people, at least.)
But I'm just guessing, and I encourage others to share their thoughts. Especially people who've observed/experienced mental health crises firsthand -- how could they have been prevented?
EDIT: I'm also curious how to ...
IMO, a large number of mental health professionals simply aren't a good fit for high-intelligence people having philosophical crises. Knowing this, people intuitively avoid the large hassle and expense of sorting through a large number of bad matches. Finding solid people to refer to, who are not otherwise associated with the community in any way, would be helpful.
I know someone who may be able to help with finding good mental health professionals for those situations; anyone who's reading this is welcome to PM me for contact info.
There's an "EA Mental Health Navigator" now to help people connect to the right care.
https://eamentalhealth.wixsite.com/navigator
I don't know how good it is yet. I just emailed them last week, and we set up an appointment for this upcoming Wednesday. I might report back later, as things progress.
Does anyone have thoughts about avoiding failure modes of this sort?
Meredith from Status451 here. I've been through a few psychotic episodes of my own, often with paranoid features, for reasons wholly unrelated to anything being discussed at the object-level here; they're unpleasant enough, both while they're going on and while cleaning up the mess afterward, that I have strong incentives to figure out how to avoid these kinds of failure modes! The patterns I've noticed are, of course, only from my own experience, but maybe relating them will be helpful.
I do think that encouraging people to stay in contact with their family and to work on having good relationships with them is very useful. Family can provide a form of grounding that making small talk with normies while going dancing or pursuing other hobbies doesn't provide.
When deciding whether a personal development group is culty, I think a good test is to ask whether the group's work leads to the average member having better or worse relationships with their parents.
It would be both surprising news, and immensely bad news, to learn that only a tiny group of people could (or should) work on such a problem -- that would mean applying vastly less parallel "compute" to the problem, relative to what is theoretically available, and that when the problem is forbiddingly difficult to begin with.
I have substantial probability on an even worse state: there's *multiple* people or groups of people, *each* of which is *separately* necessary for AGI to go well. Like, metaphorically, your liver, heart, and brain would each be justified in having a "rarity narrative". In other words, yes, the parallel compute is necessary--there's lots of data and ideas and thinking that has to happen--but there's a continuum of how fungible the compute is relative to the problems that need to be solved, and there's plenty of stuff at the "not very fungible but very important" end. Blood is fungible (though you definitely need it), but you can't just lose a heart valve, or your hippocampus, and be fine.
I didn't mention it in the comment, but having a larger pool of researchers is not only useful for doing "ordinary" work in parallel -- it also increases the rate at which your research community discovers and accumulates outlier-level, irreplaceable genius figures of the Euler/Gauss kind.
If there are some such figures already in the community, great, but there are presumably others yet to be discovered. That their impact is currently potential, not actual, does not make its sacrifice any less damaging.
Most of these bullet points seem to apply to some degree to every new and risky endeavor ever started. How risky things are is often unclear at the start. Such groups are built from committed people. Small groups develop their own dynamics. Fast growth leads to social growing pains. Lack of success leads to a lot of additional difficulties. Also: evaporative cooling. And if (partial) success happens, further growth leads to the need for management layers, etc. And later: hindsight bias.
Thank you for writing this, Jessica. First, you've had some miserable experiences in the last several years, and regardless of everything else, those times sound terrifying and awful. You have my deep sympathy.
Regardless of my seeing a large distinction between the Leverage situation and MIRI/CFAR, I agree with Jessica that this is a good time to revisit the safety of various orgs in the rationality/EA space.
I almost perfectly overlapped with Jessica at MIRI from March 2015 to June 2017. (Yes, this uniquely identifies me. Don't use my actual name here anyway, please.) So I think I can speak to a great deal of this.
I'll run down a summary of the specifics first (or at least, the specifics I know enough about to speak meaningfully), and then at the end discuss what I see overall.
Claim: People in and adjacent to MIRI/CFAR manifest major mental health problems, significantly more often than the background rate.
I think this is true; I believe I know two of the first cases to which Jessica refers; and I'm probably not plugged-in enough socially to know the others. And then there's the Ziz catastrophe.
Claim: Eliezer and Nate updated sharply toward shorter timelines, other MIRI researchers...
People in and adjacent to MIRI/CFAR manifest major mental health problems, significantly more often than the background rate.
I think this is true
My main complaint about this and the Leverage post is the lack of base-rate data. How many people develop mental health problems in a) normal companies, b) startups, c) small non-profits, d) cults/sects? So far, all I have seen are two cases. And in the startups I have worked at, I would also have been able to find mental health cases that could be tied to the company narrative. Humans being humans, narratives get woven. And the internet being the internet, some will get blown out of proportion. That doesn't diminish the personal experience at all. I am updating only slightly on CFAR or MIRI. And basically not at all on "things look better from the outside than from the inside."
In particular, I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed depression or anxiety (link). Given the kind of undirected, often low-paid work that many have been doing for the last decade, I think that's the right reference class to draw from, and my current guess is we are roughly at that same level, or slightly below it (which is a crazy high number, and I think should give us a lot of pause).
I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed [emphasis mine] depression or anxiety (link)
I'm confused about how you got to this conclusion, and think it is most likely false. Neither your link, nor the study it links to, nor the meta-analysis linked from that study says this. Instead, the abstract of the linked^3 meta-analysis says:
Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D. students, the pooled estimate of the proportion of students with depression was 0.24 (95% confidence interval [CI], 0.18-0.31; I2 = 98.75%). In a meta-analysis of the nine studies reporting the prevalence of clinically significant symptoms of anxiety across 15,626 students, the estimated proportion of students with anxiety was 0.17 (95% CI, 0.12-0.23; I2 = 98.05%).
Further, the discussion section of the linked^3 study emphasizes:
...While validated screening instruments tend to over-identify cases of depression (relative to structured clinical interviews) by approximately a factor of two [67,68], our findings nonetheless point to a major public health problem among Ph
Sorry, am I misunderstanding something? I think taking "clinically significant symptoms", specific to the UC system, as a given is wrong because it did not directly address either of my two criticisms:
1. Clinically significant symptoms =/= clinically diagnosed, even in worlds where there is a 1:1 relationship between having clinically significant symptoms and "would have been clinically diagnosed", since many people never get diagnosed.
2. Clinically significant symptoms do not, in fact, have a 1:1 relationship with "would have been clinically diagnosed".
Sorry, maybe this is too nitpicky, but clinically significant symptoms =/= clinically diagnosed, even in worlds where the clinically significant symptoms are severe enough to be diagnosed as such.
If you instead said "in population studies, 30-40% of graduate students have anxiety or depression severe enough to be clinically diagnosed as such were they to seek diagnosis", then I think this would be a normal misreading from not jumping through enough links.
Put another way, if someone in mid-2020 told me that they had symptomatic covid and was formally diagnosed with covid, I would expect that they had worse symptoms than someone who said they had covid symptoms and later tested positive for covid antibodies. This is because jumping through the hoops to get a clinical diagnosis is nontrivial Bayesian evidence of severity, and not just of certainty, under most circumstances, especially when testing is limited and/or gatekept (which was true for many parts of the world for covid in 2020, and is usually true in the US for mental health).
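A minimal numerical sketch of that Bayesian point, using made-up illustrative probabilities (none of these numbers come from any study): if people with more severe symptoms are more likely to jump through the hoops of getting formally diagnosed, then "was clinically diagnosed" carries more evidence of severity than "had clinically significant symptoms" does.

```python
# Hypothetical numbers for illustration only; the point is the direction of
# the update, not the magnitudes.
p_severe_given_symptoms = 0.5   # assumed share of symptomatic people with severe cases
p_diagnosed_given_severe = 0.6  # assumed: severe cases more often seek/obtain a diagnosis
p_diagnosed_given_mild = 0.2    # assumed: milder symptomatic cases less often do

p_diagnosed = (p_severe_given_symptoms * p_diagnosed_given_severe
               + (1 - p_severe_given_symptoms) * p_diagnosed_given_mild)
p_severe_given_diagnosed = (p_severe_given_symptoms * p_diagnosed_given_severe) / p_diagnosed

print(f"{p_severe_given_diagnosed:.2f}")  # 0.75, up from the 0.50 baseline among the merely symptomatic
```

As long as the diagnosis process is at all selective on severity (p_diagnosed_given_severe > p_diagnosed_given_mild), the direction of the update is the same no matter what specific numbers you assume.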
Ah, sorry, yes. Me being unclear on that was also bad. The phrasing you give is the one I intended to convey, though I sure didn't do it.
I think CFAR would be better off if Anna delegated hiring to someone else.
I think Pete did (most of?) the hiring as soon as he became ED, so I think this has been the state of CFAR for a while (while I think Anna has also been able to hire people she wanted to hire).
It's always been a somewhat group-involved process, but yes, I was primarily responsible for hiring for roughly 2016 through the end of 2017, then it would have been Tim. But again, it's a small org and always involved some involvement of the whole group.
Without denying that it is a small org and staff usually have some input over hiring, that input is usually informal.
My understanding is that in the period when Anna was ED, there was an explicit all-staff discussion when they were considering a hire (after the person had done a trial?). In the Pete era, I'm sure Pete asked for staff members' opinions, and if (for instance) I sent him an email with my thoughts on a potential hire, he would take that info into account, but there was no institutional group meeting.
if one believed somebody else were just as capable of causing AI to be Friendly, clearly one should join their project instead of starting one's own.
Nitpicking: there are reasons to have multiple projects; for example, it's convenient to be in the same geographic location, but not everyone can relocate to any given place.
A secondary concern is that it's better to have one org that has some people in different locations, but everyone communicating heavily, than to have two separate organizations.
I think this is much more complex than you're assuming. As a sketch of why, costs of communication scale poorly, and the benefits of being small and coordinating centrally often beats the costs imposed by needing to run everything as one organization. (This is why people advise startups to outsource non-central work.)
[Edit: I want to note that this represents only a fraction of my overall feelings and views on this whole thing.]
I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases.
I feel some annoyance at this sentence. I appreciate the stated goal of just trying to understand what happened in the different situations, without blaming or trying to evaluate which is worse.
But then the post repeatedly (in every section!) makes reference to Zoe's post, comparing her experience at Leverage to your (and others') experience at MIRI/CFAR, taking specific elements from her account and drawing parallels to your own. This is the main structure of the post!
Some more or less randomly chosen examples (ctrl-f "Leverage" or "Zoe" for lots more):
...Zoe begins by listing a number of trauma symptoms she experienced. I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break.
...
Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past event
This feels especially salient because a number of the specific criticisms, in my opinion, don't hold up to scrutiny, but this is obscured by the comparison to Leverage.
Like for any cultural characteristic X, there will be healthy and unhealthy versions. For instance, there are clearly good healthy versions of "having a culture of self improvement and debugging", and also versions that are harmful.
For each point, Zoe contends that (at least some parts of) Leverage had a destructive version, and you point out that there was a similar thing at MIRI/CFAR. And for many (but not all) of those points, 1) I agree that there was a similar dynamic at MIRI/CFAR, and also 2) I think that the MIRI/CFAR version was much less harmful than what Zoe describes.
For instance,
Zoe is making the claim that (at least some parts of) Leverage had an unhealthy and destructive culture of debugging. You, Jessica, make the claim that CFAR had a similar culture of debugging, and that this is similarly bad. My current informed impression is that CFAR's self improvement culture both had some toxic elements and is/was also an order of magnitude better than what Zoe describes.
Assuming for a moment that my ...
Ok. After thinking further and talking about it with others, I've changed my mind about the opinion that I expressed in this comment, for two reasons.
1) I think there is some pressure to scapegoat Leverage, by which I mean specifically, "write off Leverage as reprehensible, treat it as 'an org that we all know is bad', and move on, while feeling good about ourselves for not being bad the way that they were".
Pointing out some ways that MIRI or CFAR are similar to Leverage disrupts that process. Anyone who both wants to scapegoat Leverage and also likes MIRI has to contend with some amount of cognitive dissonance. (A person might productively resolve this cognitive dissonance by recognizing what I contend are real disanalogies between the two cases, but they do at least have to engage with it.)
If you mostly want to scapegoat, this is annoying, but I think we should be making it harder, not easier, to scapegoat in this way.
2) My current personal opinion is that the worst things that happened at MIRI or CFAR are not in the same league as what was described as happening in (at least some parts of) Leverage in Zoe's post, both in terms of the deliberateness of the bad dynami...
I'm not sure what writing this comment felt like for you, but from my view it seems like you've noticed a lot of the dynamics about scapegoating and info-suppression fields that Ben and Jessica have blogged about in the past (and occasionally pointed out in the course of these comments, though less clearly). I'm going to highlight a few things.
I do think that Jessica writing this post will predictably have reputational externalities that I don't like and I think are unjustified.
Broadly, I think that onlookers not paying much attention would have concluded from Zoe's post that Leverage is a cult that should be excluded from polite society, and hearing of both Zoe's and Jessica's post, is likely to conclude that Leverage and MIRI are similarly bad cults.
I totally agree with this. I also think that to the degree to which an "onlooker not paying much attention" concludes this is the degree to which they are habituated to engaging with discussion of wrongdoing as scapegoating games. This seems to be very common (though incredibly damaging) behavior. Scapegoating works on the associative/impressionistic logic of "looks", and Jessica's post certainly makes CFAR/MIRI "look" bad. This...
I appreciate this comment, especially that you noticed the giant upfront paragraph that's relevant to the discussion :)
One note on reputational risk: I think I took reasonable efforts to reduce it, by emailing a draft to people including Anna Salamon beforehand. Anna Salamon added Matt Graves (Vaniver) to the thread, and they both said they'd be happy with me posting after editing (Matt Graves had a couple specific criticisms of the post). I only posted this on LW, not on my blog or Medium. I didn't promote it on Twitter except to retweet someone who was already tweeting about it. I don't think such reputation risk reduction on my part was morally obligatory (it would be really problematic to require people complaining about X organization to get approval from someone working at that organization), just possibly helpful anyway.
Spending more than this amount of effort managing reputation risks would seriously risk important information not getting published at all, and too little of that info being published would doom the overall ambitious world-saving project by denying it relevant knowledge about itself. I'm not saying I acted optimally, just, I don't see the people complaining about this making a better tradeoff in their own actions or advising specific policies that would improve the tradeoff.
Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about "HEY, DON'T USE THIS TO SCAPEGOAT"
I think that's literally true, but the way you wrote this sentence implies that that is unusual or uncommon.
I think that's backwards. If a person was intentionally and deliberately motivated to scapegoat some other person or group, it is an effective rhetorical move to say "I'm not trying to punish them, I just want to talk freely about some harms."
By pretending that you're not attacking the target, you protect yourself somewhat from counterattack. Now you can cause reputational damage, and if people try to punish you for doing that, you can retreat to the motte of "but I was just trying to talk about what's going on. I specifically said not to punish anyone!"
and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.
This also seems too strong to me. I expect that many movement EAs will read Zoe's post and think "well, that's enough information for me to never have anything to do with Geoff or Leverage." This isn't because they're not interested in justice; it's because they don't have the time or the interest to investigate every allegation, so they're using some rough heuristics and policies such as "if something looks sufficiently like a dangerous cult, don't even bother giving it the benefit of the doubt."
When I was drafting my comment, the original version of the text you first quoted was, "Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about 'HEY DON'T USE THIS TO SCAPEGOAT' (which people are totally capable of ignoring)"; I guess I should have left that in there. I don't think it's uncommon to ignore such disclaimers; I do think it actively opposes behaviors and discourse norms I wish to see in the world.
I agree that putting a "I'm not trying to blame anyone" disclaimer can be a pragmatic rhetorical move for someone attempting to scapegoat. There's an alternate timeline version of Jessica that wrote this post as a well crafted, well defended rhetorical attack, where the literal statements in the post all clearly say "don't fucking scapegoat anyone, you fools" but all the associative and impressionistic "dark implications" (Vaniver's language) say "scapegoat CFAR/MIRI!" I want to draw your attention to the fact that for a potential dark implication to do anything, you need people who can pick up that signal. For it to be an effective rhetorical move, you need a critical mass of people who are well practiced in ignoring literal speech, who understand...
This works as a general warning against awareness of hypotheses that are close to but distinct from the prevailing belief. The goal should be to make this feasible, not to become proficient in noticing the warning signs and keeping away from this.
I think the feeling that this kind of argument is fair is a kind of motivated cognition that's motivated by credence. That is, if a cognitive move (argument, narrative, hypothesis) puts forward something false, there is a temptation to decry it for reasons that would prove too much, that would apply to good cognitive moves just as well if considered in their context, which credence-motivated cognition won't be doing.
Full disclosure: I am a MIRI Research Associate. This means that I receive funding from MIRI, but I am not a MIRI employee and I am not privy to its internal operation or secrets.
First of all, I am really sorry you had these horrible experiences.
A few thoughts:
Thought 1: I am not convinced the analogy between Leverage and MIRI/CFAR holds up to scrutiny. I think that Geoff Anders is most likely a bad actor, whereas MIRI/CFAR leadership is probably acting in good faith. There seems to be significantly more evidence of bad faith in Zoe's account than in Jessica's account, and the conclusion is reinforced by adding evidence from other accounts. In addition, MIRI definitely produced some valuable public research whereas I doubt the same can be said of Leverage, although I haven't been following Leverage so I am not confident about the latter (ofc it's in principle possible for a deeply unhealthy organization to produce some good outputs, and good outputs certainly don't excuse abuse of personnel, but I do think good outputs provide some evidence against such abuse).
It is important not to commit the fallacy of gray: it would risk both judging MIRI/CFAR too harshly and judging Leverage in...
Plus a million points for "IMO it's a reason for less secrecy"!
If you put a lid on something, you might contain it in the short term, but only at the cost of increasing the pressure. And pressure wants out; the higher the pressure, the more explosive it will be when it inevitably does come out.
I have heard too many accounts like this, in person and anecdotally, on the web and off, to currently be interested in working with, or even getting too closely involved with, any of the organizations in question. The only way to change this for me is to believably cultivate a healthy, transparent and supportive environment.
This made me go back and read "Every Cause wants to be a Cult" (Eliezer, 2007), which includes quotes like this one:
"Here I just want to point out that the worthiness of the Cause does not mean you can spend any less effort in resisting the cult attractor. And that if you can point to current battle lines, it does not mean you confess your Noble Cause unworthy. You might think that if the question were, “Cultish, yes or no?” that you were obliged to answer, “No,” or else betray your beloved Cause."
Thought 2: From my experience, AI alignment is a domain of research that intrinsically comes with mental health hazards. First, the possibility of impending doom and the heavy sense of responsibility are sources of stress. Second, research inquiries often enough lead to "weird" metaphysical questions that risk overturning the (justified or unjustified) assumptions we implicitly hold to maintain a sense of safety in life. I think it might be the closest thing in real life to the Lovecraftian notion of "things that are best not to know because they will drive you mad". Third, the sort of people drawn to the area and/or having the necessary talents seem to often also come with mental health issues (I am including myself in this group).
That sounds like MIRI should have a counsellor on its staff.
That would make them more vulnerable to claims that they use organizational mind control on their employees, and at the same time make it more likely that they would actually use it.
You would likely hire someone who is traditionally trained, credentialed, and has work experience, instead of doing a bunch of your own psych experiments; likely someone from a tradition like Gestalt therapy that focuses on being nonmanipulative.
There's an easier solution that doesn't run the risk of being or appearing manipulative. You can contract external and independent counsellors and make them available to your staff anonymously. I don't know if there's anything comparable in the US, but in Australia they're referred to as Employee Assistance Programs (EAPs). Nothing you discuss with the counsellor can be disclosed to your workplace, although in rare circumstances there may be mandatory reporting to the police (e.g. if abuse of, or ongoing risk to, a minor is involved).
This also goes a long way toward creating a place where employees can talk about things they're worried will seem crazy in work contexts.
Solutions like that might work, but it's worth noting that just having an average therapist likely won't be enough.
If you actually care about a level of security that protects secrets against intelligence agencies, operational security of the office of the therapist is a concern.
Governments that have security clearances don't want their employees to talk about classified information with therapists who don't have the security clearances.
Talking nonjudgmentally with someone who has reasonable fears that humanity won't survive the next ten years because of fast AI timelines is not easy.
As far as I can tell, normal corporate management is much worse than Leverage
Your original post drew a comparison between MIRI and Leverage, the latter of which has just been singled out for intense criticism.
If I take the quoted sentence literally, you're saying that "MIRI was like Leverage" is a gentler critique than "MIRI is like your regular job"?
If the intended message was "my job was bad, although less bad than the jobs of many people reading this, and instead only about as bad as Leverage Research," why release this criticism on the heels of a post condemning Leverage as an abusive cult? If you believe the normally-employed among LessWrong readers are being abused by sub-Leverage hellcults, all the time, that seems like quite the buried lede!
Sorry for the intense tone, it's just ... this sentence, if taken seriously, reframes the entire post for me in a big, weird, bad way.
MIRI wouldn't make sense as a project if most regular jobs were fine; people who were really ok wouldn't have reason to build unfriendly AI.
I just want to note that this is a contentious claim.
There is a competing story, and one much more commonly held among people who work for or support MIRI, that the world is heading towards an unaligned intelligence explosion due to the combination of a coordination problem and very normal motivated reasoning about the danger posed by lucrative and prestigious projects.
One could make the claim "healthy" people (whatever that means) wouldn't exhibit those behaviors, ie that they would be able to coordinate and avoid rationalizing. But that's a non-standard view.
I would prefer that you specifically flag it as a non-standard view, and then either make the argument for that view over the more common one, or highlight that you're not going into detail on the argument and that you don't expect others to accept the claim.
As it is, it feels a little like this is being slipped in as if it is a commonly accepted premise.
Note that there's an important distinction between "corporate management" and "corporate employment"--the thing where you say "yeesh, I'm glad I'm not a manager at Google" is substantially different from the thing where you say "yeesh, I'm glad I'm not a programmer at Google", and the audience here has many more programmers than managers.
[And also Vanessa's experience matches my impressions, tho I've spent less time in industry.]
[EDIT: I also thought it was clear that you meant this more as a "this is what MIRI was like" than "MIRI was unusually bad", but I also think this means you're open to nostalgebraist's objection, that you're ordering things pretty differently from how people might naively order them.]
My experience was that if you were T-5 (Senior), you had some overlap with PM and management games, and at T-6 (Staff), you were often in them. I could not handle the politics to get to T-7. Programmers below T-5 are expected to earn promotions or to leave.
Google's a big company, so it might have been different elsewhere internally. My time at Google certainly traumatized me, but probably not to the point of anything in this or the Leverage thread.
Programmers below T-5 are expected to earn promotions or to leave.
This changed something like five years ago [edit: August 2017], to where people at level four (one level above new grad) no longer needed to get promoted to stay long term.
I think maybe a bit of the confusion here is nostalgebraist reading “corporate management” to mean something like “a regular job in industry”, whereas you’re pointing at “middle- or upper-management in sufficiently large or maze-like organizations”?
Yes, that seems likely. I did some internships at Google as a software engineer and they didn't seem better than working at MIRI on average, although they had less intense psychological effects, as things didn't break out in fractal betrayal during the time I was there.
Separately I’m confused about the claim that “people who were really ok wouldn’t have reason to build unfriendly AI”
People might think they "have to be productive" which points at increasing automation detached from human value, which points towards UFAI. Alternatively, they might think there isn't a need to maximize productivity, and they can do things that would benefit their own values, which wouldn't include UFAI. (I acknowledge there could be coordination problems where selfish behavior leads to cutting corners, but I don't think that's the main driver of existential risk failure modes)
I worked for 16 years in the industry, including management positions, including (briefly) having my own startup. I talked to many, many people who worked in many companies, including people who had their own startups and some with successful exits.
The industry is certainly not a rose garden. I encountered people who were selfish, unscrupulous, megalomaniac or just foolish. I've seen lies, manipulation, intrigue and plain incompetence. But, I also encountered people who were honest, idealistic, hardworking and talented. I've seen teams trying their best to build something actually useful for some corner of the world. And, it's pretty hard to avoid reality checks when you need to deliver a real product for real customers (although some companies do manage to just get more and more investments without delivering anything until the eventual crash).
I honestly think most of them are not nearly as bad as Leverage.
Trying to do a cooperative, substantive reply. Seems like openness and straightforwardness are the best way here.
I found the above to be a mix of surprising and believable. I was at CFAR full-time from Oct 2015 to Oct 2018, and in charge of the mainline workshops specifically for about the last two of those three years.
At least four people
This surprises me. I don't know what the bar for "worked in some capacity with the CFAR/MIRI team" is. For instance, while at CFAR, I had very little attention on the comings-and-goings at MIRI, a much larger organization, and also CFAR had a habit of using five or ten volunteers at a time for workshops, month in and month out. So this could be intended to convey something like "out of the 500 people closest to both orgs." If it's meant to imply "four people who would have worked for more than 20 hours directly with Duncan during his three years at CFAR," then I am completely at a loss; I can't think of any such person who I am aware had a psychotic break.
Psychedelic use was common among the leadership
This also surprises me. I do not recall ever either directly encountering or hearing open discussions of p...
Like, I want to agree wholeheartedly with the poster's distaste for the described situation, separate from my ability to evaluate whether it took place.
As a general dynamic, no idea if it was happening here but just to have as a hypothesis, sometimes people selectively follow rules of behavior around people that they expect will seriously disapprove of the behavior. This can be well-intentioned, e.g. simply coming from not wanting to harm people by doing things around them that they don't like, but could have the unfortunate effect of producing selected reporting: you don't complain about something if you're fine with it or if you don't see it, so the only reports we get are from people who changed their mind (or have some reason to complain about something they don't actually think is bad). (Also flagging that this is a sort of paranoid hypothesis; IDK how the world is on this dimension, but the Litany of Gendlin seems appropriate. Also it's by nature harder to test, and therefore prone to the problems that untestable hypotheses have.)
This literally happened with Brent; my current model is that I was (EDIT: quite possibly unconsciously/reflexively/non-deliberately) cultivated as a shield by Brent, in that he much-more-consistently-than-one-would-expect-by-random-chance happened to never grossly misbehave in my sight, and other people, assuming I knew lots of things I didn't, never just told me about gross misbehaviors that they had witnessed firsthand.
I worked for CFAR from 2016 to 2020, and am still somewhat involved.
This description does not reflect my personal experience at all.
And speaking from my view of the organization more generally (not just my direct personal experience): Several bullet points seem flatly false to me. Many of the bullet points have some grain of truth to them, in the sense that they refer to or touch on real things that happened at the org, but then depart wildly from my understanding of events, or (according to me) mischaracterize / distort things severely.
I could go through and respond in more detail, point by point, if that is really necessary, but I would prefer not to do that, since it seems like a lot of exhausting work.
As a sort of free sample / downpayment:
- At least four people who did not listen to Michael's pitch about societal corruption and worked in some capacity with the CFAR/MIRI team had psychotic episodes.
I don't know who this is referring to. To my knowledge 0 people who are or have been staff at CFAR had a psychotic episode either during or after working at CFAR.
...
- Psychedelic use was common among the leadership of CFAR and spread through imitation, if not actual institutiona
Thank you for adding your detailed take/observations.
My own take on some of the details of CFAR that’re discussed in your comment:
Debugging sessions with Anna and with other members of the leadership was nigh unavoidable and asymmetric, meaning that while the leadership could avoid getting debugged it was almost impossible to do so as a rank-and-file member. Sometimes Anna described her process as "implanting an engine of desperation" within the people she was debugging deeply. This obviously had lots of ill psychological effects on the people involved, but some of them did seem to find a deeper kind of motivation.
I think there were serious problems here, though our estimates of the frequencies might differ. To describe the overall situation in detail:
Related to my reply to PhoenixFriend (in the parent comment), but hopping meta from it:
I have a question for whoever out there thinks they know how the etiquette of this kind of conversation should go. I had a first draft of my reply to PhoenixFriend, where I … basically tried to err on the side of being welcoming, looking for and affirming the elements of truth I could hear in what PhoenixFriend had written, and sort of emphasizing those elements more than my also-real disagreements. I ran it by a CFAR colleague at my colleague’s request, who said something like “look, I think your reply is pretty misleading; you should be louder and clearer about the ways your best guess about what happened differed from what’s described in PhoenixFriend’s comment. Especially since I and others at CFAR have our names on the organization too, so if you phrase things in ways that’ll cause strangers who’re skim-reading to guess that things at CFAR were worse than they were, you’ll inaccurately and unjustly mess with other peoples’ reputations too.” (Paraphrased.)
So then I went back and made my comments more disagreeable and full of details about where my and PhoenixFriend’s models differ. (Thoug...
Okay, so, that old textbook does not look like a picture of goal-factoring, at least not on that page. But I typed "goal-factoring" into my google drive and got up these old notes that used the word while designing classes for the 2012 minicamps. A rabbithole, but one I enjoyed so maybe others will.
I worked for CFAR full-time from 2014 until mid-to-late 2016 and have continued working as a part-time employee or frequent contractor since. I'm sorry this was your experience. That said, it really does not mesh that much with what I've experienced and some of it is almost the opposite of the impressions that I got. Some brief examples:
I've worked at CFAR for most of the last 5 years, and this comment strikes me as so wildly incorrect and misleading that I have trouble believing it was in fact written by a current CFAR employee. Would you be willing to verify your identity with some mutually-trusted 3rd party, who can confirm your report here? Ben Pace has offered to do this for people in the past.
I don't know if you trust me, but I confirmed privately that this person is a past or present CFAR employee.
Sure, but they led with "I'm a CFAR employee," which suggests they are a CFAR employee. Is this true?
It sounds like they meant they used to work at CFAR, not that they currently do.
Also given the very small number of people who work at CFAR currently, it would be very hard for this person to retain anonymity with that qualifier so...
I think it's safe to assume they were a past employee... but they should probably update their comment to make that clearer because I was also perplexed by their specific phrasing.
It sounds like they meant they used to work at CFAR, not that they currently do.
The interpretation of "I'm a CFAR employee commenting anonymously to avoid retribution" as "I'm not a CFAR employee, but used to be one" seems to me to be sufficiently strained and non-obvious that we should infer from the commenter's choice not to use clearer language that they should be treated as having deliberately intended for readers to believe that they're a current CFAR employee.
I like the local discourse norm of erring on the side of assuming good faith, but like steven0461, in this case I have trouble believing this was misleading by accident. Given how obviously false, or at least seriously misleading, many of these claims are (as I think accurately described by Anna/Duncan/Eli), my lead hypothesis is that this post was written by a former staff member, who was posing as a current staff member to make the critique seem more damning/informed, who had some ax to grind and was willing to engage in deception to get it ground, or something like that...?
It seems misleading in a non-accidental way, but it seems fairly plausible that their main motive was to obscure their identity.
FYI I just interpreted it to mean "former staff member" automatically. (This is biased by my belief that CFAR has very few current staff members so of course it was highly unlikely to be one, but I don't think it was an unreasonably weird reading)
Relatedly, the organization uses a technique called goal factoring during debugging which was in large part inspired by Geoff Anders' Connection Theory and was actually taught by Geoff at CFAR workshops at some point. This means that CFAR debugging in many ways resembles Leverage's debugging and the similarity in naming isn't just a coincidence of terms.
While it's true that there's some structural similarity between Goal Factoring and Connection Theory, and Geoff did teach Goal Factoring at some workshops (including one I attended), these techniques are more different than they are similar. In particular, goal factoring is taught as a solo technique for introspecting on what you want in a specific area, while Connection Theory is a therapy-like technique in which a facilitator tries to comprehensively catalog someone's values across multiple sessions going 10+ hours.
Thanks for this reply, Jim; I winced a bit at my own "no resemblance whatsoever" and your comment is clearer and more accurate.
I don't have an object-level opinion formed on this yet, but want to +1 this as more of the kind of description I find interesting, and isn't subject to the same critiques I had with the original post.
Thanks for this.
I'm interested in figuring out more what's going on here - how do you feel about emailing me, hashing out the privacy issues, and, if we can get them hashed out, you telling me the four people you're thinking of who had psychotic episodes?
Update: I interviewed many of the people involved and feel like I understand the situation better.
My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal history, or took recreational drugs at doses that would explain their psychotic episodes.
Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeable at the moment they become psychotic. But aside from one case where he recommended someone take a drug that made a bad situation slightly worse, and the general Berkeley rationalist scene that he (and I and everyone else here) is a part of having lots of crazy ideas that are psychologically stressful, I no longer think he is a major cause.
While interviewing the people involved, I did get some additional reasons to worry that he uses cult-y high-pressure recruitment tactics on people he wants things from, in ways that make me continue to be nervous about the effect he *could* have on people. But the original claim I made that I k...
I want to summarize what's happened from the point of view of a long time MIRI donor and supporter:
My primary takeaway from the original post was that MIRI/CFAR had cultish social dynamics, that this led to the spread of short AI timelines in excess of the evidence, and that voices such as Vassar's were marginalized (because listening to other arguments would cause them to "downvote Eliezer in his head"). The actual important parts of this whole story are a) the rationalistic health of these organizations, and b) the (possibly improper) memetic spread of the short-timelines narrative.
It has been months since the OP, but my recollection is that Jessica posted this memoir and got a ton of upvotes; then you posted your comment claiming that being around Vassar induced psychosis, and the karma on Jessica's post dropped in half, while your comment that Vassar had magical psychosis-inducing powers is currently sitting at almost five and a half times the karma of the OP. At this point, things became mostly derailed into psychodrama about Vassar, drugs, whether transgender people have higher rates of psychosis, et cetera, instead of discussion about the health of these organizations and how short ...
Thanks so much for talking to the folks involved and writing this note on your conclusions, I really appreciate that someone did this (who I trust to actually try to find out what happened and report their conclusions accurately).
outsiders as "normies"
I've seen the term used a few times on LW. Despite the denotational usefulness, it's very hard to keep it from connotationally being a slur, not without something like there being an existing slur and the new term getting defined to be its denotational non-slur counterpart (how it actually sounds also doesn't help).
So it's a good principle to not give it power by using it (at least in public).
It's true some CFAR staff have used psychedelics, and I'm sure they've sometimes mentioned that in private conversation. But CFAR as an institution never advocated psychedelic use, and that wasn't just because it was illegal, it was because (and our mentorship and instructor trainings emphasize this) psychedelics often harm people.
What does "significant involvement" mean here? I worked for CFAR full-time during that period and to the best of my knowledge you did not work there -- I believe for some of that time you were dating someone who worked there, is that what you mean by significant involvement?
I remember being a "guest instructor" at one workshop, and talking about curriculum design with Anna and Kenzi. I was also at a lot of official and unofficial CFAR retreats/workshops/etc. I don't think I participated in much of the normal/official CFAR process, though I did attend the "train the trainers workshop", and in this range of contexts saw some of how decisions were made, how workshops were run, how people related to each other at parties.
As I recall it, what I observed first-hand and was told second-hand at the time confirms bullets 2, 4, and 6 of the top-level comment. Many of the others are about how people felt, and are consistent with what people I knew reported at the time. Nothing in the top-level comment seems dissonant with what I observed.
It seems like there was a lot of fragmentation (which is why we mostly didn't interact). I felt bad about exercising (a small amount of) unaccountable influence at the time through these mechanisms, but I was confused about so much relative to the rate at which I was willing to ask questions that I didn't end up asking about the info-siloing. In hindsight it seems intended to keep the true nature of governance obscure and theref...
As I recall it, what I observed first-hand and was told second-hand at the time confirms bullets 2, 4, and 6 of the top-level comment.
I would like a lot more elaboration about this, if you can give it.
Can you say more specifically what you observed?
Unfortunately I think the working relationship between Anna and Kenzi was exceptionally bad in some ways and I would definitely believe that someone who mostly observed that would assume the organization had some of these problems; however I think this was also a relatively unique situation within the organization.
(I suspect though am not certain that both Anna and Kenzi would affirm that indeed this was an especially bad dynamic.)
With respect to point 2, I do not believe there was major peer pressure at CFAR to use psychedelics, and I have never used psychedelics myself. It's possible that there was major peer pressure on other people, or that it applied to me but I was oblivious to it, or whatever, but I'd be surprised.
Psychedelic use was also one of a few things that were heavily discouraged (or maybe banned?) as conversation topics for staff at workshops -- like polyphasic sleep (another heavily discouraged topic), psychedelics were I believe viewed as potentially destabilizing and inappropriate to recommend to participants, plus there are legal issues involved. I personally consider recreational use of psychedelics to be immoral as well.
My comment initially said 2014-2016 but IIRC my involvement was much less after 2015 so I edited it.
Thanks for the clarification, I've edited mine too.
I think that CFAR, at least while I was there full-time from 2014 to sometime in 2016, was heavily focused on running workshops or other programs (like the alumni reunions or the MIRI Summer Fellows program). See for instance my comment here.
Most of what the organization was doing seemed to involve planning and executing workshops or other programs and teaching the existing curriculum. There were some developments and advancements to the curriculum, but they often came from the workshops or something around them (like followups) rather than a systematic development project. For example, Kenzi once took on the lion's share of workshop followups for a time, which led to her coming up with new curriculum based on her sense of what the followup participants were missing even after having attended the workshop.
(In the time before I joined there had been significantly more testing of curriculum etc. outside of workshops, but this seemed to have become less common by the time I was there.)
A lot of CFAR's internal focus was on improving operations capacity. There was at one time a narrative that the staff was currently unable to do some of the longer-term development because too much ti...
One takeaway I got from this when combined with some other stuff I've read:
Don't do psychedelics. Seriously, they can fuck up your head pretty bad and people who take them and organizations that encourage taking them often end up drifting further and further away from normality and reasonableness until they end up in Cloudcuckooland.
I'm about ready to propose a group norm against having any subgroups or leaders who tell other people they should take psychedelics. Maybe they have individually motivated uses - though I get the impression that this is, at best, a high-variance bet with significantly negative expectation. But the track record of "rationalist-adjacent" subgroups that push the practice internally and would-be leaders who suggest to other people that they do them seems just way too bad.
I'm also about ready to propose a similar no-such-group policy on 'woo', tarot-reading, supernaturalism only oh no it's not really supernaturalism I'm just doing tarot readings as a way to help myself think, etc. I still think it's not our community business to try to socially prohibit things like that on an individual level by exiling individuals like that from parties, I don't think we have or should have that kind of power over individual behaviors that neither pick pockets nor break legs. But I think that when there's anything like a subgroup or a leader with those properties we need to be ready to say, "Yeah, that's not a group in good standing with the rest of us, don't go there." Th...
Copying over a related Oct. 13-17 conversation from Facebook:
(context: someone posted a dating ad in a rationalist space where they said they like tarot etc., and rationalists objected)
_____________________________________________
Marie La: As a cultural side note, most of my woo knowledge (like how to read tarot) has come from the rationalist community, and I wouldn't have learned it otherwise
_____________________________________________
Eliezer Yudkowsky: @Marie La Any ideas how we can stop that?
(+1 from Rob B)
_____________________________________________
Marie La: Idk, it's an introspective technique that works for some people. Doesn't particularly work for me. Sounds like the concern is bad optics / PR rather than efficacy
(+1 from Rob B)
_____________________________________________
Shaked Koplewitz: @Marie La optics implies that the concern is with the impression it makes on outsiders, my concern here is the effect on insiders (arguably this is optics too, but a non-central example)
_____________________________________________
Rob Bensinger: If the concern is optics, either to insiders or outsiders, then it seems vastly weaker to me than i...
Jim Babcock's stance here is the most sensible one I've seen in this thread:
My own impression is that the effect of LSD is not primarily a regression to the mean thing, but rather, that it temporarily enables some self-modification capabilities, which can be powerfully positive but which require a high degree of sanity and care to operate safely.
...
Meanwhile nearly everyone has been exposed to extremely unsubtle and substantially false anti-drug propaganda, which fails to survive contact with reality. So it's unfortunate but also unsurprising that the how-much-caution pendulum in their heads winds up swinging too far to the other side. The ideal messaging imo would leave most people feeling like planning an acid trip is more work than they personally will get around to, plus mild disdain towards impulsive usage and corner-cutting.
Somehow this reminds me of the time I did a Tarot reading for someone, whose only previous experience had been Brent Dill doing a Tarot reading, and they were... sort of shocked at the difference. (I prefer three card layouts with a simple context where both people think carefully about what each of the cards could mean; I've never seen his, but the impression I got was way more showmanship.)
If it works as a device to facilitate sub-conscious associations, then maybe an alternative should be designed that sheds the mystical baggage and comes with clear explanations of why and how it works.
Thank you for saying this!
I wonder where the line will be drawn with regards to the { meditation, Buddhism, post-rationality, David Chapman, etc. } cluster. On one hand, meditation -- when done without all the baggage, hypothetically -- seems like a useful tool. On the other hand, it simply invites all that baggage, because that is in the books, in the practicing communities, etc. Also, Christianity is an outgroup, but Buddhism is a fargroup, so people seem less averse to religious connotations; in my opinion, it's just a different flavor of the same poison. Buddhism is sometimes advertised as a kind of evidence-based philosophy, but then you read the books and they discuss the supernatural and describe the miracles done by Buddha. Plus the insights into your previous lives, into the ultimate nature of reality (my 200 Hz brain sees the quantum physics, yeah), etc.
Also, somewhat ironically...
...Marcello and I developed a convention in our AI work: when we ran into something we didn’t understand, which was often, we would say “magic”—as in, “X magically does Y”—to remind ourselves that here was an unsolved problem, a gap in our understanding. It is far better to say “magic” than “compl
Western Buddhism tends to be more of a bag of wellness tricks than a religion, but it’s worth sharing that Buddhism proper is anti-life. It came out of a Hindu obsession with ending the cycle of reincarnation. Nirvana means “cessation.” The whole idea of meditation is to become tolerant of signals to action so you can let them pass without doing the things that replicate them or, ultimately, propagate any life-like process. Karma is described as a giant wheel that powers reincarnation and gains momentum whenever you act unconsciously. The goal is for the wheel to stop moving and the way is to unlearn your habit of kicking it. When the Buddha became enlightened under the Bodhi tree, it wasn’t actually complete enlightenment. He was “enlightened with residues”— he stopped making new karma but he was still burning off old karma. He achieved actual cessation when he died. To be straight up enlightened, you stop living. The whole project of enlightenment is to end life.
It’s a sinister and empty philosophy, IMO. A lot of the insights and tools are great but the thrust of (at least Theravada) Buddhism is my enemy.
I agree this is pretty sinister and empty. Traditional samsara includes some pretty danged nice places (the heavens), not just things that have Earth-like quantities or qualities of flourishing; so rejecting all of that sounds very anti-life.
Some complicating factors:
(Obviously, this becomes more anti-life when you get rid of supernaturalism -- then the only alternative to 'samsara' is oblivion. But the modern Buddhist can retreat to various mottes about what 'nirvana' is, such as embracing living nirvana (sopadhishesa-nirvana) while rejecting parinirvana.)
The latter view is still pretty anti-life, but notably, it's a psychological claim ('this is what it's really like to experience things'), not a normative claim that we should reject life a priori. If a Buddhist updates away from thinking everything is dukkha, they aren't necessarily required to reject life anymore -- the life-rejection was contingent on the psych theory.
There are also versions of the psychological theory in which dukkha is not associated with all motivation, just the craving-based system, which is in a sense "extra"; it's a layer on top of the primary motivation system, which would continue to operate even if all craving was eliminated. Under that model (which I think is the closest to being true), you could (in principle) just eliminate the unpleasant parts of human motivation, while keeping the ones that don't create suffering - and probably get humans who were far more alive as a result, since they would be far more willing to do even painful things if pain no longer caused them suffering.
Pain would still be a disincentive in the same way that a reinforcement learner would generally choose to take actions that brought about positive rather than negative reward, but it would make it easier for people to voluntarily choose to experience a certain amount of pain in exchange for better achieving their values afterwards, for instance.
I find arguments from etymology almost maximally unconvincing here, unless dukkha was a neologism? Like, those arguments make me update away from your conclusion, because they seem so not-of-the-correct-type. Normally, word etymologies are a very poor guide to meaning compared to looking at usage -- what do other sources actually mean when they say "dukkha" in totally ordinary contexts?
There's a massive tradition across many cultures of making sophistical arguments about words' 'true' or 'real' meaning based on (real or imagined) etymologies. This is even dicier when the etymology is as vague/uninformative as this one -- there are many different ways you can spin 'bad axle hole' to give exactly opposite glosses of dukkha.
I still don't find this 100% convincing/exacting, but the following account at least doesn't raise immediate alarm bells for me:
...According to Pali-English Dictionary, dukkha (Sk. duḥkha) means unpleasant, painful, causing misery.[4] [...]
The other meaning of the word dukkha, given in Venerable Nyanatiloka's Buddhist Dictionary, is “ill”. As the first of the Four Noble Truths and the second of the three characteristics of existence (tilakkhaṇa), the term
The set of metaphors that has come to the West is dominated by the early transmission of Buddhism, which occurred in the late 1800s and was carried out by Sanskrit scholars translating from Sanskrit sources. The Buddha specifically warned people against translating his teachings into Sanskrit, for pretty much the sorts of reasons being passed off as genuine Buddhism here.
I think if rationalists are interested in Buddhism as part of their quest to find truth, they should know that it has, at the very least, deathist origins.
I think this is not true in full generality -- I think meditation does give people insights that are hard to verbalize, and does make some common verbal distinctions feel less joint-carving, so it makes sense for a tradition of meditators to say a lot in favor of 'things that are hard to verbalize' and 'things that can't be neatly carved up in the normal intuitive ways'.
I do think that once you have those insights, there's a strong temptation to lapse into sophistry or doublethink to defend whatever silly thing you feel like defending that day -- if someone doubts your claim that the Buddha lives on like a god or ghost after death, you can say that the Buddha's existence-status after death transcends concepts and verbalization.
When in fact the honest thing to say if you believed in immaterial souls would be 'I don't know what happened to the Buddha when he died', and the honest thing to say if you're an educated modern person is 'the Buddha was totally annihilated when he died, the exact same as anyone else who dies.'
If the world was one where meditation only made people feel like they had insights that were hard to verbalize, then I probably wouldn't have figured out ways to verbalize some of them (mostly due to having knowledge of neuroscience etc. stuff that most historical Buddhists haven't had).
Are there canonical correct answers to koans?
Regarding meditation, Kevin Fischer reported a surprising-to-me anecdote on FB yesterday:
I had one conversation with Soryu [the head of Monastic Academy / MAPLE] at a small party once. I mentioned that my feeling about meditation is that it’s really good for everyone when done for 15 minutes a day, and when done for much more than that forever, it’s much more complicated and sometimes harmful.
He straightforwardly agreed, and said he provides the environment for long term dedication to meditation because there is a market demand for that product. 🤷
He straightforwardly agreed, and said he provides the environment for long term dedication to meditation because there is a market demand for that product. 🤷
FWIW as a resident of MAPLE, my sense is Soryu believes something like:
"Smaller periods of meditation will help you relax/focus and probably have only a very small risk of harm. Larger/longer periods of meditation come with deeper risks of harm, but are also probably necessary to achieve awakening, which is important for the good of the world."
But I am a newer resident and could easily be misunderstanding here.
On one hand, meditation -- when done without all the baggage, hypothetically -- seems like a useful tool. On the other hand, it simply invites all that baggage, because that is in the books, in the practicing communities, etc.
I think meditation should be treated similarly to psychedelics -- even for meditators who don't think of it in terms of anything supernatural, it can still have very large and unpredictable effects on the mind. The more extreme the style of meditation (e.g. silent retreats), the more likely this sort of thing is.
Any subgroups heavily using meditation seem likely to have the same problems as the ones Eliezer identified for psychedelics/woo/supernaturalism.
Even in the case of Sam Harris, who seems relatively normal, he lost a decade of his life pursuing “enlightenment” through meditation - also notable is that this was spurred on by psychedelic use. Though I am sure he would not agree with the frame that it was a waste, I read his *Waking Up* as a bit of a horror story. For someone without his high IQ and indulgent parents, you could imagine more horrible ends.
I know of at least one person who was bright, had wild ambitious ideas, and now spends his time isolated from his family inwardly pursuing “enlightenment.” And this through the standard meditation + psychedelics combination. I find it hard to read this as anything other than wire-heading, and I think a good social norm would be one where we consider such behavior as about as virtuous as obsessive masturbation.
In general, for any drug that produces euphoria, especially spiritual euphoria, the user develops an almost romantic relationship with the drug, as the feelings it inspires are as intense as familial love (and sometimes more so). One should at least be slightly suspicious of the benefits propounded by users, who in many cases literally worship their drugs of choice.
fwiw as a data point here, I spent some time inwardly pursuing "enlightenment" with heavy and frequent doses of psychedelics for a period of 10 months and consider this to be one of the best things I've ever done. I believe it raised my resting set point happiness, among other good things, and I am still deeply altered (7 years later).
I do not think this is a good idea for everyone and lots of people who try would end up worse off. But I strongly object to this being seen as virtuous as obsessive masturbation. Sure, it might not be your thing, but this frame seriously misses a huge amount of really important changes in my experience. And I get you might think I'm... brainwashed or something? by drugs? So I don't know what I could say that would convince you otherwise.
But I did have concrete things, like solving a pretty big section of childhood trauma (like; I had a burning feeling of rage in my chest before, and the burning feeling was gone afterwards), I had multiple other people comment on how different I was now (usually in regards to laughing easier and seeming more relaxed), I lost my anxiety around dying, my relationship to pain altered in such a way that I am significantly ...
In my culture, it's easy to look at "what happens at the ends of the bell curves" and "where's the middle of the bell curve" and "how tight vs. spread out is the bell curve (i.e. how different are the ends from the middle)" and "are there multiple peaks in the bell curves" and all of that, separately.
Like, +1 for the above, and I join the above in giving a reminder that rounding things off to "thing bad" or "thing good" is not just not required, it's actively unhelpful.
Policies often have to have a clear answer, such as the "blanket ban" policy that Eliezer is considering proposing. But the black-or-white threshold of a policy should not be confused with the complicated thing underneath being evaluated.
And I get you might think I'm... brainwashed or something? by drugs?
I'm not sure what you find implausible about that. Drugs do not literally propagandize the user, but many of them can hijack the reward system, and psychedelics in particular seem to alter beliefs in reliable ways. Psychedelics are also taken in a memetic context with many crystallized notions about what the psychedelic experience is, what enlightenment is, and that enlightenment itself is a mysterious but worthy pursuit.
The classic joke about psychedelics is they provide the feelings associated with profound insights without the actual profound insights. To the extent this is true, I feel this is pretty dangerous territory for a rationalist to tread.
In your own case, unless I am misremembering, I believe on your blog you discuss LSD permanently lowering your mathematical abilities and degrading your memory. This seems really, really bad to me…
Maybe this one is less concrete, but some part of me feels really deeply at peace, always, like it knows everything is going to be ok and I didn't have that before.
I’m glad your anxiety is gone, but I don't think everything is going to be alright by default. I would not like to modify myself to think that. It seems clearly untrue.
Perhaps the masturbation line was going too far. But the gloss of virtue that “seeking enlightenment” has strikes me as undeserved.
Also fwiw, I took psychedelics in a relatively memetic-free environment. I'd been homeschooled and not exposed to hippie/drug culture, and especially not significant discussion around enlightenment. I consider this to be one of the reasons my experience was so successful; I didn't have it in relationship to those memes, and did not view myself as pursuing enlightenment (I know I said I was inwardly pursuing enlightenment in my above comment, but I was mostly riffing off your phrasing; in some sense I think it was true but it wasn't a conscious thing.)
LSD did not permanently lower my mathematical abilities, and if I suggested that I probably misspoke? I suspect it damaged my memory, though; my memory is worse now than before I took LSD.
And sorry; by 'everything being ok' I didn't mean that I literally think situations will end up being the ones I want; I mean that I know I will be okay with whatever happens. Very related to my endurance of pain going up by quite a lot, and my anxiety about death disappearing.
Separately, I do think that a lot of the memes around psychedelics are... incomplete? It's hard to find a good word. Naive? Something around the difference between the aesthetic of a thing and the thing itself? And in that I might agree with you somewhere that "seeking enlightenment" isn't... virtuous or whatever.
Even in the case of Sam Harris, who seems relatively normal, he lost a decade of his life pursuing “enlightenment” through meditation
What kind of a cost-benefit analysis is this?
if you start from the assumption that something isn't useful, of course spending time on that thing is a waste. As far as I can see, this is the totality of your argument. You can do this for just about anyone, e.g.:
Even in the case of Scott Garrabrant who seems relatively normal, he lost a decade of his life pursuing "AI alignment" through the use of mathematics.
I happen to think that Scott did amazing work at MIRI, but objectively speaking, it is significantly harder to justify his time spent doing research at MIRI than that of Sam Harris pursuing enlightenment in India. Sam has released the Waking Up app, which is effectively a small company making a ton of money, donating 10% of its income to the most effective charities (arguably that alone is more than enough to pay for one decade of Sam's time), and has thousands of people reporting enormous psychological benefits. I'm one of them; in terms of productivity alone, I'd say my time working has increased by at least 20% and has gotten at least 10% ...
No greater sign that Eliezer isn't leading a cult than that my first reaction to this was "pfft, good luck", even when I misread it as "we should shame individuals for doing these things Elizabeth finds valuable" and not the more reasonable "leaders pushing this are suspect"
Big +1.
Really important to disambiguate the two:
"People shouldn't do psychedelics" is highly debatable and has to argue against a lot of research demonstrating their efficacy for improving mental wellness and treating psychiatric disorders.
"Leaders & subgroups shouldn't push psychedelics on their followers" seems straightforwardly correct.
I haven't taken any psychedelics myself. I have the impression that best practice with LSD is not to take it alone but to have someone skillful as a trip sitter. I imagine having a fellow rationalist as a trip sitter is much better than having some new-agey person with sketchy epistemics.
I'm also about ready to propose a similar no-such-group policy on 'woo', tarot-reading, supernaturalism only oh no it's not really supernaturalism I'm just doing tarot readings as a way to help myself think, etc.
Hmm. I can't tell if the second half is supposed to be pointing at my position on Tarot, or the thing that's pretending to be my position but is actually confused?
Like, I think the hitrate for 'woo' is pretty low, and so I spend less time dredging there than I do other places which are more promising, but also I am not ashamed of the things that I've noticed that do seem like hits there. Like, I haven't delivered on my IOU to explain 'authenticity' yet, but I think Circling is actually a step above practices that look superficially similar in a way we could understand rigorously, even if Circling is in a reference class that is quite high in woo, and many Circlers like the flavor of woo.
That said, I could also see an argument that's like "look, we really have to implement rules like this at a very simple level or they will get bent to hell, and it's higher EV to not let in woo."
Would it be acceptable to regard practices like self-reflective tarot and circling and other woo-adjacent stuff as art rather than an attempt at rationality? I think it is a danger sign when people are claiming those highly introspective and personal activities as part of their aspiring to rationality. Can we just do art and personal emotional and creative discovery and not claim that it’s directly part of the rationalist project?
I mean, I also do things that I would consider 'art' that I think are distinct from rationality. But, like, just like I wouldn't really consider 'meditation' an art project instead of 'inner work' or 'learning how to think' or w/e, I wouldn't really consider Circling an art project instead of those things.
I would consider meditation and circling to have the same relationship to “discovering the truth” as art. The insights can be real and profound but are less rigorous and much more personal.
Instead of declaring group norms, I think it would be worth it to have posts that actually lay out the case in a convincing manner. In general there are plenty of contrarian rationalists for whom "it's a group norm" is not enough reason to not do something. Declaring a norm against drugs might just get them to be more secretive about it, which is bad.
Trying to solve issues about people doing the wrong things with group norms instead of with deep arguments doesn't seem to be the rationalist way.
Have the important conversations about why you shouldn't take drugs / engage in woo openly on LessWrong, instead of having them only privately where they don't reach many people. Then confront people who suggest something in that direction with those posts.
There are some potential details that might swing one way or the other (Vaniver's comment points at some), but as-written above, and to the best of my ability to predict what such a proposal would actually look like once Eliezer had put effort into it:
I expect I would wholeheartedly and publicly endorse it, and be a signatory/adopter.
I feel tempted to mostly agree with Eliezer here...
Umm. To relay a trad Buddhist perspective: you're not (traditionally) supposed to make a full-blown attempt at 'enlightenment' or 'insight' until you've spent a fairly extensive time working on personal ethics & discipline. I think an unnamed additional step is to establish your basic needs, like good community, health, food, shelter, etc. It's also recommended that you avoid drugs, alcohol, and even sex.
There's also an important sense I get from trad Buddhism, which is: If you hold a nihilistic view, things will go sideways. A subtle example of nihilism is the sense that "It doesn't matter what I do or think because it's relatively inconsequential in the scheme of things, so whatever." or a deeper hidden sense of "It doesn't really matter if everyone dies." or "I feel it might be better if I just stopped existing?" or "I can think whatever I want inside my own head, including extensive montages of murder and rape, because it doesn't really affect anything."
These views seem not uncommon among modern people, and subtler forms seem very common. Afaict from reading biographies, modern people have more trouble wit...
OTOH a significant amount of (seemingly sane) people credit psychedelics for important personal insights and mental health/trauma healing. Psychedelics seem to be showing enough promise for that for the psychiatric establishment to be getting interested in them again [1, 2] despite them having been stigmatized for decades, and AFAIK the existing medical literature generally finds them to be low-risk [3, 4].
It's interesting that a lot of the discussion about psychedelics here is arguing from intuitions and personal experience, rather than from the trial results that have been coming out.
I do think that psychedelic experiences vary a lot from person-to-person and trip-to-trip, and that psychedelics aren't for everyone. (This variability probably isn't fully captured by the trial results because study participants are carefully screened for lots of factors that may be contraindicated.)
Psilocybin-based psychedelics are indeed considered low-risk both in terms of addiction and overdose. This chart sums things up nicely, and is a good thing to 'pin on your mental fridge':
You want to stay as close as possible to the bottom left corner of that graph!
This graph shows death and addiction potential but it doesn't say anything about sanity
I want to second this. I worked for an organization where one of the key support people took psychedelics and just... broke from reality. This was both a personal crisis for him and an organizational crisis for the company, which had to deal with the sudden departure of a bus-factor-1 employee.
I suspect that psychedelic damage happens more often than we think because there's a whole lobby which buys the expand-your-mind narrative.
I don't regret having used psychedelics, though I understand why people might take what I've written as a reason not to try psychedelics.
The most horrific case I know of LSD being involved in a group's downward spiral from weird and kinda messed up to completely disconnected from reality and really fucking scary is the Manson family, but that's far from a typical example. But if you do want to be a cult leader, LSD does seem to do something that makes the job a lot easier.
(Note: I feel nervous posting this under my own name, in part because my Dad is considering transitioning at the moment and I worry he'd read it as implying some hurtful thing I don't mean, but I do want to declare the conflict of interest that I work at CFAR or MIRI).
The large majority of folks described in the OP as experiencing psychosis are transgender. Given the extremely high base rate of mental illness in this demographic, my guess is this is more explanatorily relevant than the fact that they interacted with rationalist institutions or memes.
I do think the memes around here can be unusually destabilizing. I have personally experienced significant psychological distress thinking about s-risk scenarios, for example, and it feels easy to imagine how this distress could have morphed into something serious if I'd started with worse mental health.
But if we're exploring analogies between what happened at Leverage and these rationalist social circles, it strikes me as relevant to ask why each of these folks were experiencing poor mental health. My impression from reading Zoe's writeup is that she thinks her poor mental health resulted from memes/policies/conversations t...
As I understand it you're saying:
At Leverage people were mainly harmed by people threatening them, whether intentionally or not. By contrast, in the MIRICFAR social cluster, people were mainly harmed by plausible upsetting ideas. (Implausible ideas that aren't also threats couldn't harm someone because there's no perceived incentive to believe them.)
An example of a threat is Roko's Basilisk. An example of an upsetting plausible idea was the idea in early 2020 that there was going to be a huge pandemic soon. Serious attempts were made to suppress the former meme and promote the latter.
If someone threatens me I am likely to become upset. If someone informs me about something bad, I am also likely to become upset. Psychotic breaks are often a way of getting upset about one's prior situation. People who transition genders are also usually responding to something in their prior situation that they were upset about.
Sometimes people get upset in productive ways. When Justin Shovelain called me to tell me that there was going to be a giant pandemic, I called up some friends and talked through self-quarantine thresholds, resulting in this blog post. Later, some friends and I did some other...
The large majority of folks described in the OP as experiencing psychosis are transgender.
That would be, arguably, 3 of the 4 cases of psychosis I knew about (if Zack Davis is included as transgender) and not the case of jail time I knew about. So 60% total. [EDIT: See PhoenixFriend's comment, there were 4 cases who weren't talking with Michael and who probably also weren't trans (although that's unknown); obviously my own knowledge is limited to my own social circle and people including me weren't accounting for this in statistical inference]
My impression from reading Zoe’s writeup is that she thinks her poor mental health resulted from memes/policies/conversations that were at best accidentally mindfucky, and often intentionally abusive and manipulative.
In contrast, my impression of what happened in these rationalist social circles is more like “friends or colleagues earnestly introduced people (who happened to be drawn from a population with unusually high rates of mental illness) to upsetting plausible ideas.”
These don't seem like mutually exclusive categories? Like, "upsetting plausible ideas" would be "memes" and "conversations" that could include things like AI p...
exiting would increase social isolation (increasing social dependence on a small number of people), which is a known risk factor
If exiting makes you socially isolated, it means that (before exiting) all/most of your contacts were within the group.
That suggests that the safest way to exit is to gradually start meeting new people outside the group, start spending more time with them and less time with other group member, until the majority of your social life happens outside the group, which is when you should quit.
Cults typically try to prevent you from doing this, to keep the exit costly and dangerous. One method is to monitor you and your communications all the time. (For example, Jehovah's Witnesses are always out there in pairs, because they have a sacred duty to snitch on each other.) Another way is to keep you at the group compound where you simply can't meet non-members. Yet another way is to establish a duty to regularly confess what you did and who you talked to, and to chastise you for spending time with unbelievers. Another method is simply to keep you so busy all day long that you have no time left to interact with strangers.
To revert this -- a healthy group will provide y...
Were you criticized for socializing with people outside MIRI/CFAR, especially with "rival groups"?
As a datapoint, while working at MIRI I started dating someone working at OpenAI, and never felt any pressure from MIRI people to drop the relationship (and he was welcomed at the MIRI events that we did, and so on), despite Eliezer's tweets discussed here representing a pretty widespread belief at MIRI. (He wasn't one of the founders, and I think people at MIRI saw a clear difference between "founding OpenAI" and "working at OpenAI given that it was founded", so idk if they would agree with the frame that OpenAI was a 'rival group'.)
This does not seem like the obvious reading of the thread to me.
Obviously, Eliezer is saying that there is a plausible but extremely upsetting idea that could be learned by studying neural networks sufficiently competently.
I think Eliezer is saying that if you understood on a gut level how messy deep networks are, you'd realize how doomed prosaic alignment is. And that would be horrible news. And that might make you scream, although perhaps not constantly.
After all, Eliezer is known to use... dashes... of colorful imagery. Do you really think he is literally constantly screaming silently to himself? No? Then he was probably also being hyperbolic about how he truly thinks a person would respond to understanding a deep network in great detail.
That's why I feel that your interpretation is grasping really hard at straws. This is a standard "we're doomed by inadequate AI alignment" thread from Eliezer.
Even though it's an exaggeration, Eliezer is, with this exaggeration, trying to indicate an extremely high level of fear, off the charts compared with what people are normally used to, as a result of really taking in the information. Such a level of fear is not clearly lower than the level of fear experienced by the psychotic people in question, who experienced e.g. serious sleep loss due to fear.
I strong-upvoted both of Jessica's comments in this thread despite disagreeing with her interpretation in the strongest possible terms. I did so because I think it is important to note that, for every "common-sense" interpretation of a community leader's words, there will be some small minority who interpret it in some other (possibly more damaging) way. While I think (importantly) this does not make it the community leader's responsibility to manage their words in such a way that no misinterpretation is possible (which I think is simply unfeasible), I am nonetheless in favor of people sharing their (non-standard) interpretations, given the variation in potential responses.
As Eliezer once said (I'm paraphrasing from memory here, so the following may not be word-for-word accurate, but I am >95% confident I'm not misremembering the thrust of what he said), "The question I have to ask myself is, will this drive more than 5% of my readers insane?"
EDIT: I have located the text of the original comment. I note (with some vindication) that once again, it seems that Eliezer was sensitive to this concern way ahead of when it actually became a thing.
He specified "mission-critical". An AI's ability to take over other machines in the network, take over the internet, manufacture grey goo, etc. (choose your favorite doomsday scenario), is not really related to how mission-critical its original task was. (In fact, someone's AI to choose the best photo filters to match the current mood on Instagram to maximize "likes" seems both more likely to have arbitrary network access and less likely to have careful oversight than a self-driving car AI.) Therefore I do think his comment was about the likelihood of failure in the critical task, and not about alignment.
I think he meant something like this: The neural net, used e.g. to recognize cars on the road, makes most of its deductions based on accidental correlations and shortcuts in the training data—things like "it was sunny in all the pictures of trucks", or "if it recognizes the exact shape and orientation of the car's mirror, then it knows which model of car it is, and deduces the rest of the car's shape and position from that, rather than by observing the rest of the car". (Actually they'd be lower-level and less human-legible than this. It's like s...
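To make the "accidental correlations" point above concrete, here is a minimal toy sketch (my own illustration, not anything from the thread; the feature names and numbers are invented) of a classifier latching onto a spurious "brightness" shortcut that tracks the "truck" label perfectly in training and then breaks once that correlation no longer holds at test time:

```python
# A toy illustration of shortcut learning ("it was sunny in all the pictures
# of trucks"). The "brightness" feature is spuriously correlated with the
# "truck" label during training, so the model leans on it instead of the
# genuinely (but weakly) predictive "shape" feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

def make_data(spurious_corr):
    """Labels depend weakly on 'shape'; 'brightness' matches the label with
    probability `spurious_corr` (1.0 = perfect shortcut, 0.5 = no shortcut)."""
    y = rng.integers(0, 2, n)                        # 1 = truck, 0 = car
    shape = y + rng.normal(0, 1.5, n)                # noisy but real signal
    match = rng.random(n) < spurious_corr
    brightness = np.where(match, y, 1 - y) + rng.normal(0, 0.1, n)
    return np.column_stack([shape, brightness]), y

X_train, y_train = make_data(spurious_corr=1.0)   # sunny trucks only
X_test, y_test = make_data(spurious_corr=0.5)     # correlation broken at test time

clf = LogisticRegression().fit(X_train, y_train)
print("train acc:", clf.score(X_train, y_train))  # near-perfect, via the shortcut
print("test acc:", clf.score(X_test, y_test))     # degrades sharply
print("weights [shape, brightness]:", clf.coef_)  # brightness weight dominates
```

The sketch only shows the general phenomenon the commenter describes; real failures of this kind involve much lower-level, less human-legible features than "brightness".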
One thing that has been bothering me a lot is that it seems like it's really likely that people don't realize just how distinct CFAR and MIRI are.
I've worked at each org for about three years total.
Some things which make it reasonable to lump them together and use the label "CFAR/MIRI":
I agree with all of the above. And yet a third thing, which Jessica also discusses in the OP, is the community near MIRI and/or CFAR, whose ideology has been somewhat shaped by the two organizations.
There are some good things to be gained from lumping things together (larger datasets on which to attempt inference) and some things that are confusing.
I know you're busy with all this and other things, but how is this statement
One thing that has been bothering me a lot is that it seems like it’s really likely that people don’t realize just how distinct CFAR and MIRI are.
[...]
I agree with all of the above
compatible with this statement?
As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong.
Actually, that was true for the last few years (with an ambiguous in-between time during covid), but it is not true now.
This thread is agreeing the orgs are completely different, but elsewhere you agreed that CFAR functions as a funnel into MIRI. I ask this out of personal interest in CFAR and MIRI going forwards and because I'm currently much more confused about how the two work than I was a week ago.
In the 2015-2018 era, CFAR mostly did not serve as a funnel into MIRI, in terms of total effort, programs, the curriculum of those programs, etc., but also:
Toward the 2018 - 2020 era, some CFAR staff incubated the AIRCS program, which was a lot like CFAR workshops except geared toward bridging between the AI risk community and various computer scientist bubbles, with a strong eye toward finding people who might work on MIRI projects. AIRCS started as a more-or-less independent project that occasionally borrowed CFAR logistical support, but over time CFAR decided to contribute more explicit effort to it, until it eventually became (afaik) straightforwardly one of the two or three most important "things going on at CFAR," according to CFAR.
Staff who were there at the time (this was as I was phasing out) might correct this summary, but I believe it's right in its essentials.
In the last two years, CFAR hasn't done much outward-facing work at all, due to COVID, and so has neither been a MIRI funnel nor definitively not a MIRI funnel.
In the last two years, CFAR hasn't done much outward-facing work at all, due to COVID, and so has neither been a MIRI funnel nor definitively not a MIRI funnel.
Yes, but I would predict that we won't be the same sort of MIRI funnel going forward. This is because MIRI used to have specific research programs that it needed to hire for, and it was sponsoring AIRCS (covering direct expenses plus loaning us some researchers to help run the thing) in order to recruit for those; those research programs have been discontinued, and so AIRCS won't be so much of a thing anymore.
This, not just COVID, has been the main reason there has been no AIRCS since vaccines became available.
I, and I would guess some others at CFAR, am interested in running AIRCS-like programs going forward, especially if there are groups that want to help us pay the direct expenses for those programs and/or researchers that want to collaborate with us on such programs. (Message me if you're reading this and in one of those categories.) But it'll be less MIRI-specific this time, since there isn't that recruiting angle.
Also, more broadly, CFAR has adopted different structures for organizing ourselves internally, and we are bigger now into "i...
Mostly agree. I especially agree about the organizational structure being very different.
I would not have said "'The median CFAR employee and the median MIRI employee interact frequently' is not even close to true", but it depends on the operationalization of "frequently". But according to my operationalization, the lunch table alone makes it close to true.
I would also not have said "I think that a CFAR staff retreat is extremely unlike a MIRI research retreat." (e.g. we have attempted to Circle at a research retreat more than once.) (I haven't actually been to a CFAR staff retreat, but I have been to some things that I imagine are somewhat close, like workshops where a majority of attendees are CFAR staff).
If you pick a randomly selected academic or hobby conference, I will be much more surprised that they had circling than if they had food.
FWIW, the above matches my own experiences/observations/hearsay at and near MIRI and CFAR, and seems to me personally like a sensible and correct way to put it together into a parsable narrative. The OP speaks for me. (Clarifying at a CFAR colleague's request that here and elsewhere, I'm speaking just for myself and not for CFAR or anyone else.)
(I of course still want other conflicting details and narratives that folks may have; my personal 'oh wow this puts a lot of pieces together in a parsable form that yields basically correct predictions' level is high here, but insofar as I'm encouraging anything because I'm in a position where my words are loud invitations, I want to encourage folks to share all the details/stories/reactions pointing in all the directions.) I also have a few factual nitpicks that I may get around to commenting on, but they don't subtract from my overall agreement.
I appreciate the extent to which you (Jessicata) manage to make the whole thing parsable and sensible to me and some of my imagined readers. I tried a couple times to write up some bits of experience/thoughts, but had trouble managing to say many different things A without seeming to also negate other true things A’, A’’, etc., maybe partly because I’m triggered about a lot of this / haven’t figured out how to mesh different parts of what I’m seeing with some overall common sense, and also because I kept anticipating the same in many readers.
The OP speaks for me.
Anna, I feel frustrated that you wrote this. Unless I have severely misunderstood you, this seems extremely misleading.
For context, before this post was published Anna and I discussed the comparison between MIRI/CFAR and Leverage.
At that time, you, Anna, posited a high level dynamic involving "narrative pyramid schemes" accelerating, and then going bankrupt, at about the same time. I agreed that this seemed like it might have something to it, but emphasized that, despite some high level similarities, what happened at MIRI/CFAR was meaningfully different from, and much much less harmful than, what Zoe described in her post.
We then went through a specific operationalization of one of the claimed parallels (specifically, the frequency and oppressiveness of superior-to-subordinate debugging), and you agreed that the CFAR case was, quantitatively, an order of magnitude better than what Zoe describes. We talked more generally about some of the other parallels, and you generally agreed that the specific harms were much greater in the Leverage case.
(And just now, I talked with another CFAR staff member who reported that the two of you went poi...
I think that you believe, as I do, that there were some high-level structural similarities between the dynamics at MIRI/CFAR and at Leverage, and also what happened at Leverage was an order of magnitude worse than what happened at MIRI/CFAR.
Leverage_2018-2019 sounds considerably worse than Leverage 2013-2016.
My current guess is that if you took a random secular American to be your judge, or a random LWer, and you let them watch the life of a randomly chosen member of the Leverage psychology team from 2018-2019 (which I’m told is the worst part) and also of a randomly chosen staff member at either MIRI or CFAR, they would be at least 10x more horrified by the experience of the one in the Leverage psychology team.
I somehow don’t know how to say in my own person “was an order of magnitude worse”, but I can say the above. The reason I don’t know how to say “was an order of magnitude worse” is because it honestly looks to me (as to Jessica in the OP) like many places are pretty bad for many people, in the sense of degrading their souls via deceptions, manipulations, and other ethical violations. I’m not sure if this view of mine will sound over-the-top/dismissable or we-all-already...
These claims seem rather extreme and unsupported to me:
"Lots of upper middle class adults hardly know how to have conversations..."
"the average workplace [is] more than 1/10th as damaging to most employees’ basic human capacities, compared to Leverage_2018-2019."
I suggest if you write a toplevel post, you search for evidence for/against them.
Elaborating a bit on my reasons for skepticism:
It seems like for the past 10+ years, you've been mostly interacting with people in CFAR-adjacent contexts. I'm not sure what your source of knowledge is on "average" upper middle class adults/workplaces. My personal experience is normal people are comfortable having non-superficial conversations if you convince them you aren't weird first, and normal workplaces are pretty much fine. (I might be overselecting on smaller companies where people have a sense of humor.)
The two observations seem a bit inconsistent, if you'll
RE: "Lots of upper middle class adults hardly know how to have conversations..."
I will let Anna speak for herself, but I have evidence of my own to bring... maybe not directly about the thing she's saying but nearby things.
Oh yeah they also spent a lot of time trying to have the right or correct opinions. So they would certainly talk about 'the world' but mostly for the sake of having "right opinions" about it. Not so that they could necessarily, like, have insights into it or feel connected to what was happening. It was a game with not very high or real stakes for them. They tended to rehash the SAME arguments over and over with each other.
This all sounds super fascinating to me, but perhaps a new post would be better for this.
My current best guess is that some people are "intrinsically" interested in the world, and for others the interest is only "instrumental". The intrinsically interested are learning things about the real world because it is fascinating and because it is real. The instrumentally interested are only learning about things they assume might be necessary for satisfying their material needs. Throwing lots of money at them will remove chains from the former, but will turn off the engine for the latter.
For me another shocking thing about people in tech is how few of them are actually interested in the tech. Again, this seems to be the intrinsic/instrumental distinction. The former group studies Haskell or design patterns or whatever. The latter group is only interested in things that can currently increase their salary, and even there they are mostly looking for shortcuts. Twenty years ago, programmers were considered nerdy. These days, programmers who care about e.g. clean code are considered too nerdy by most programmers.
...I also don't like the way it insulates people from noticing how much death, sufferi
I used to think the ability to have deep conversations is an indicator of how "alive" a person is, but now I think that view is wrong. It's better to look at what the person has done and is doing. Surprisingly there's little correlation: I often come across people who are very measured in conversation, but turn out to have amazing skills and do amazing things.
I also feel really frustrated that you wrote this, Anna. I think there are a number of obvious and significant disanalogies between the situations at Leverage versus MIRI/CFAR. There's a lot to say here, but a few examples which seem especially salient:
Yeah, sorry. I agree that my comment “the OP speaks for me” is leading a lot of people to false views that I should correct. It’s somehow tricky because there’s a different thing I worry will be obscured by my doing this, but I’ll do it anyhow as is correct and try to come back for that different thing later.
To the best of my knowledge, the leadership of neither MIRI nor CFAR has ever slept with a subordinate, much less many of them.
Agreed.
...While I think staff at CFAR and MIRI probably engaged in motivated reasoning sometimes wrt PR, neither org engaged in anything close to the level of obsessive, anti-epistemic reputational control alleged in Zoe's post. CFAR and MIRI staff were certainly not required to sign NDAs agreeing they wouldn't talk badly about the org—in fact, in my experience CFAR staff much more commonly share criticism of the org than praise. CFAR staff were regularly encouraged to share their ideas at workshops and on LessWrong, to get public feedback. And when we did mess up, we tried extremely hard to publicly and accurately describe our wrongdoing—e.g., Anna and I personally spent hundreds of hours investigating/thinking about the Brent affair, and tr
I endorse Adam's commentary, though I did not feel the frustration Eli and Adam report, possibly because I know Anna well enough that I reflexively did the caveating in my own brain rather than modeling the audience.
Anna and I personally spent hundreds of hours investigating/thinking about the Brent affair... our writeup about it...
Am I missing something here? The communication I read from CFAR seemed like it was trying to reveal as little as it could get away with...
FWIW, I think you and Adam are talking about two different pieces of communication. I think you are thinking of the communication leading up to the big community-wide discussion that happened in Sept 2018, while Adam is thinking specifically of CFAR's follow-up communication months after that — in particular this post. (It would have been in between those two times when Adam and Anna did all that thinking that he was talking about.)
I agree manager/staff relations have often been less clear at CFAR than is typical. But I'm skeptical that's relevant here, since as far as I know there aren't really even borderline examples of this happening. The closest example to something like this I can think of is that staff occasionally invite their partners to attend or volunteer at workshops, which I think does pose some risk of fucky power dynamics, albeit dramatically less risk imo than would be posed by "the clear leader of an organization, who's revered by staff as a world-historically important philosopher upon whose actions the fate of the world rests, and who has unilateral power to fire any of them, sleeps with many employees."
Am I missing something here? The communication I read from CFAR seemed like it was trying to reveal as little as it could get away with, gradually saying more (and taking a harsher stance towards Brent) in response to public pressure, not like it was trying to help me, a reader, understand what had happened.
As lead author on the Brent post, I felt bummed reading this. I tried really hard to avoid letting my care for/interest in CFAR affect my descriptions of what happened, or my choices abou...
It would help if they actually listed and gave examples of exactly what kind of mental manipulation they were doing to people, other than telling them to take drugs. These comments seem to dance around the exact details of what happened and only talk about the group dynamics between people as a result of these mysterious actions/events.
To be clear, a lot of what I find so relaxing about Jessica’s post is that my experience reading it is of seeing someone who is successfully noticing a bunch of details in a way that, relative to what I’m trying to track, leaves room for lots of different things to get sorted out separately.
I just got an email that led me to sort of triggeredly worry that folks will take my publicly agreeing with the OP to mean that I e.g. think MIRI is bad in general. I don’t think that; I really like MIRI and have huge respect and appreciation for a lot of the people there; I also like many things about the CFAR experiment and love basically all of the people who worked there; I think there’s a lot to value across this whole space.
I like the detailed specific points that are made in the OP (with some specific disagreements; though also with corroborating detail I can add in various places); I think this whole “how do we make sense of what happens when people get together into groups? and what happened exactly in the different groups?” question is an unusually good time to lean on detail-tracking and reading comprehension.
To my understanding, since the time when the events described in the OP took place, MIRI and CFAR have been very close and getting closer and closer. As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong. Since you're one of the leaders of CFAR, that makes you one of the leading people behind all those things the OP is critical of.
The OP even writes that she thought and thinks CFAR was corrupt in 2017:
Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz. (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; ...)
Here she mentions Ziz also thinking that CFAR was corrupt, and I remember that in her blog, Ziz thought you being in the center of said corruption.
So, how all is this compatible with you agreeing with the OP?
Since you're one of the leaders of CFAR, that makes you one of the leading people behind all those things the OP is critical of.
Yes.
So, how all is this compatible with you agreeing with the OP?
Basically because I came to see I’d been doing it wrong.
Happy to try to navigate follow-up questions if anyone has any.
Happy to try to navigate follow-up questions if anyone has any.
PhoenixFriend wrote:
Psychedelic use was common among the leadership of CFAR and spread through imitation, if not actual institutional encouragement, to the rank-and-file.
Is this true?
Basically no. Can't say a plain "no," but can say "basically no." I'm not willing to give details on this one. I'm somehow fretting on this one, asking if "basically no" is true from all vantage points (it isn't, but it's true from most), looking for a phrase similar to that but slightly weaker, considering e.g. "mostly no", but something stronger is true. I think this'll be the last thing I say in this thread about this topic.
A CFAR board member asked me to clarify what I meant about “corrupt”, also, in addition to this question.
So, um. Some legitimately true facts the board member asked me to share, to reduce confusion on these points:
This board member pointed out that if I call somebody “tall” people might legitimately think I mean they are taller than most people, and if I agree with an OP that says CFAR was “corrupt” they might think I’m agreeing that CFAR was “more corrupt” than most similarly sized and durationed non-profits, or something.
The thing I actually think here is not that. It’s more that I think CFAR’s actions were far from the kind of straight-forward, sincere attempt to increase rationali...
I have strong-upvoted this comment, which is not a sentence I think people usually ought to leave as its own reply, but which seems relevant given my relationship to Anna and CFAR and so forth.
As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong.
Actually, that was true for the last few years (with an ambiguous in-between time during covid), but it is not true now. Partly because MIRI abandoned the research direction we’d most been trying to help them recruit for. CFAR will be choosing its own paths going forward more.
I see that many people are commenting how it's crazy to try to keep things secret between coworkers, or to not allow people to even mention certain projects, or that this kind of secrecy is psychologically damaging, or the like.
Now, I imagine this is heavily dependent on exactly how it's implemented, and I have no idea how it's implemented at MIRI. But just as a relevant data point - this kind of secrecy is totally par for the course for anybody who works for certain government and especially military-related organizations or contractors. You need extensive background checks to get a security clearance, and even then you can't mention anything classified to someone else unless they have a valid need to know, you're in a secure classified area that meets a lot of very detailed guidelines, etc. Even within small groups, there are certain projects that you simply are not allowed to discuss with other group members, since they do not necessarily have a valid need to know. If you're not sure whether something is classified, you should be talking to someone higher up who does know. There are projects that you cannot even admit that they exist, and there are even words that you cannot men...
Some secrecy between coworkers could be reasonable. Including secrecy about what secret projects exist (e.g. "we're combining AI techniques X and Y and applying them to application Z first as a test").
What seemed off is that the only information concealed by the policy in question (that researchers shouldn't ask each other what they're working on) is who is and isn't recently working on a secret project. That isn't remotely enough information to derive AI insights to any significant degree. Doing detective work on "who started saying they had secrets at the same time" to derive AI insights is a worse use of time than just reading more AI papers.
The policy in question is strictly dominated by an alternative policy, of revealing that you are working on a secret project but not which one. When I see a policy that is this clearly suboptimal for the stated goal, I have to infer alternative motives, such as maintaining domination of people by isolating them from each other. (Such a motive could be memetic/collective, partially constituted by people copying each other, rather than serving anyone's individual interest, although personal motives are relevant too)
Mainstream organizations...
There are a few parts in here that seem fishy enough to me that I want to flag them.
Mainstream organizations being secretive at the level MIRI was isn’t a particularly strong argument. As we learned with COVID, many mainstream organizations are opposing their stated mission.
This is fair as a rebuttal to the sort-of appeal to authority it replies to, but it is also not a very good argument that secrecy is a bad idea. To boil it down: the argument went "Secrecy works well for many existing organizations" and you replied "Many existing organizations did a bad job during Covid". Strictly speaking, doing a bad job during Covid does mean that not everything is going well, but this is still a pretty weird and weak argument.
This whole paragraph:
...Zack Davis points out that controlling people into acting against their interests is a common function of mainstream policies (this is especially obvious in the military). Such control is especially counterproductive for FAI research, where a large part of the problem is to make AI act on human values rather than false approximations of them. Revealing actual human value requires freedom to act according to revealed preferences, not just pre-
You have by far more information than me about what it's like on the ground as a MIRI researcher.
But one thing missing so far is that my sense was that a lot of researchers preferred the described level of secretiveness as a simplifying move?
e.g. "It seems like I could say more without violating any norms, but I have a hard time tracking where the norms are and it's easier for me to just be quiet as a general principle. I'm going to just be quiet as a general principle rather than being the-maximum-cooperative-amount-of-open, which would be a burden on me to track with the level of conscientiousness I would want to apply."
The policy described was mandated; it wasn't just voluntary. Anyway, I don't really trust something optimizing this badly to have a non-negligible shot at FAI, so the point is kind of moot.
First and foremost: Jessica, I'm sad you had a bad late/post-MIRI experience. I found your contributions to MIRI valuable (Quantilizers and Reflective Solomonoff Induction spring to mind as some cool stuff), and I personally wish you well.
A bit of meta before I say anything else: I'm leery of busting in here with critical commentary, and thereby causing people to think they can't air dirty laundry without their former employer busting in with critical commentary. I'm going to say a thing or two anyway, in the name of honest communication. I'm open to suggestions for alternative ways to handle this tradeoff.
Now, some quick notes: I think Jessica is truthfully reporting her experiences as she recalls them. I endorse orthonormal's comment as more-or-less matching my own recollections. That said, in a few of Jessica's specific claims, I believe I recognize the conversations she's referring to, and I feel misunderstood and/or misconstrued. I don't want to go through old conversations blow-by-blow, but for a sense of the flavor, I note that in this comment Jessica seems to me to misconstrue some of Eliezer's tweets in a way that feels similar to me. Also, as one example from the text, lo...
Thanks, I appreciate you saying that you're sorry my experience was bad towards the end (I notice it actually makes me feel better about the situation), that you're aware of how criticizing people the wrong way can discourage speech and are correcting for that, and that you're still concerned enough about misconstruals to correct them where you see fit. I've edited the relevant section of the OP to link to this comment. I'm glad I had a chance to work with you even if things got really confusing towards the end.
Regarding Eliezer's tweets, I think the issue is that he is joking about the "never stop screaming". He is using humor to point at a true fact, that it's really unfortunate how unreliable neural nets are, but he's not actually saying that if you study neural nets until you understand them then you will have a psychotic break and never stop screaming.
MIRI can't seem to decide if it's an advocacy org or a research org.
MIRI is a research org. It is not an advocacy org. It is not even close. You can tell by the fact that it basically hasn't said anything for the last 4 years. Eliezer's personal twitter account does not make MIRI an advocacy org.
(I recognize this isn't addressing your actual point. I just found the frame frustrating.)
As a tiny, mostly-uninformed data point, I read "if you realized how bad taxation is for the economy, you'd never stop screaming" as having a very different vibe from Eliezer's tweet, because he didn't use the word "bad". I know it's a small difference, but it lands differently. Something in his tweet was amusing because it felt like it was pointing to a presumably neutral thing and making it scary, whereas saying the same thing about a clearly moralistic point seems like it's doing a different thing.
Again - a very minor point here, just wanted to throw it in.
There's this general problem of Rationalists splitting into factions and subcults with minor doctrinal differences, each composed of relatively elite members of The Community, each with a narrative of how they're the real rationalists and the rest are just posers and/or parasites. And they're kinda right: many of the rest are posers; we have a mop problem.
There’s just one problem. All of these groups are wrong. They are in fact only slightly more special than their rival groups think they are. In fact, the criticisms each group makes of the epistemics and practices of other groups are mostly on-point.
Once people have formed a political splinter group, almost anything they write will start to contain a subtle attempt to slip in the doctrine they're trying to push. With sufficient skill, you can make it hard to pin down where the frame is getting shoved in.
I have at one point or another been personally involved with a quite large fraction of the rationalist subcults. This has made the thread hard to read - I keep feeling a tug of motivation to jump into the fray, to take a position in the jostling for credibility or whatever it is being fought over here, which is then marred by the ...
If Ben says: "I desire X, and I could get that by doing less faction stuff", that implies that he is doing faction stuff. But you're taking it as implying that he isn't.
The only way I could understand your criticism is as making a revealed-preference critique, where Ben is expressing a preference for doing non-faction stuff but is still doing faction stuff. That doesn't seem like a strong critique, though, since doing less faction stuff is somewhat difficult, and noticing the problem is the first step to fixing it.
I want to provide an outside view that people might find helpful. This is based on my experience as a high school teacher (6 months total experience), a professor at an R1 university (eight years total experience), and someone who has mentored extraordinarily bright early-career scientists (15 years experience).
It’s very clear to me that the rationalist community is acting as a de facto school and system of interconnected mentorship opportunities. In some cases (CFAR, e.g.) this is explicit.
Academia also does this. It has ~1000 years of experience, dating from the founding of the University of Cambridge, and has learned a few things in that time.
An important discovery is that there are serious responsibilities that come with attending to "young" minds (young in quotes; generically the first quarter of life, which depending on the era means anywhere from <15 up to, today, around <30). These minds are considered inherently vulnerable and in need of protection from manipulation, boundary violations, etc. It's been discovered that making this a blanket, non-negotiable rule has significant positive epistemic and moral effects that haven't been replicated with alternatives.
Even before academic institu...
Upvoted for thoughtful dissent and outside perspective.
I ... have some complicated mixed feelings here. LW has a very substantial contingent of "gifted kids", who spent a decent chunk of their (...I suppose I should say "our") lives being frustrated that the world would not take them seriously due to age. Groups like that are never going to tolerate norms saying that young age is a reason to talk down to someone. And guidelines for protecting younger people from older people, to the extent that they involve disapproval or prevention of apparently-consensual choices by younger people, are going to be tricky that way. Any concern that "young minds are not allowed to waive" will be (rightly) seen as condescending, especially if you extend "young" to age 30. This does not really become less true if the concern is accurate.
This is extra-true here, because the "rationalist community" is not a single organization with a hierarchy, or indeed (I claim) even really a single community. So you can't make enforceable global rules of conduct, and it's very hard to kick someone out entirely (although I would say it's effectively been done a couple of times.)
You might be relieved to learn that, at...
I find this position rather disturbing, especially coming from someone working at a university. I have spent the last sixish years working mostly with high school students, occasionally with university students, as a tutor and classroom teacher. I can think of many high school students who are more ready to make adult decisions than many adults I know, whose vulnerability comes primarily from the inferior status our society assigns them, rather than any inherent characteristic of youth.
As a legal matter (and I believe the law is correct here), your implication that someone acts in loco parentis with respect to college students is simply not correct (with the possible exception of the rare genius kid who attends college at an unusually young age). College students are full adults, both legally and morally, and should be treated as such. College graduates even more so. You have no right to impose a special concern on adults just because they are 18-30.
I think one of the particular strengths of the rationalist/EA community is that we are generally pretty good at treating young adults as full adults, and taking them and their ideas seriously.
I want to more or less second what River said. Mostly I wouldn't have bothered replying to this... but your line of "today around <30" struck me as particularly wrong.
So, first of all, as River already noted, your claim about "in loco parentis" isn't accurate. People 18 or over are legally adults; yes, there used to be a notion of "in loco parentis" applied to college students, but that hasn't been current law since about the 60s.
But also, under 30? Like, you're talking about grad students? That is not my experience at all. Undergrads are still treated as kids to a substantial extent, yes, even if they're legally adults and there's no longer any such thing as "in loco parentis". But in my experience grad students are, absolutely, treated as adults, nor have I heard of things being otherwise. Perhaps this varies by field (I'm in math) or location or something, I don't know, but I at least have never heard of that before.
Thanks for the outside perspective. If you're willing to go into more detail, I'm interested in a more detailed account from you on both what academia's safeguards are and (per gwillen's comment) where you think those safeguards fall short and how that can be fixed.
This is decision-relevant to me as I work in a research organization outside of academia (though not working on AI risk specifically), and I would like us to both be more productive than typical in academia and have better safeguards against abuse.
If it helps, we have about 15 researchers now, we're entirely remote, and we hire typically from people who just finished their PhDs or have roughly equivalent research experience, although research interns/fellows are noticeably younger (maybe right after undergrad is the median).
Sure. I'm really glad to hear it. This is not my community, but you did explicitly ask.
This is just off the top of my head, and I don't mean it to be a final complete and correct list. It's just to give you a sense of some things I've encountered, and to help you and your org think about how to empower people and help them flourish. Academia uses a lot of these to avoid the geek-MOP-sociopath cycle.
I'm assuming your institution wants to follow an academic model, including teaching, mentorship, hierarchical student-teacher relationships, etc.
An open question is when you have a duty of care. My rule of thumb is (1) when you or the org is explicitly saying "I'm your teacher", "I'm your mentor"; (2) when you feel a power imbalance with someone because this relationship has arisen implicitly; (3) when someone is soliciting this role from you, whether you want it or not.
If you're a business making money, that's quite different, just say "we're going to use your body and mind to make money" and you've probably gotten your informed consent. :)
* Detection
1. Abuse is non-Gaussian. A small number of people may experience a great deal, while the majority see nothing wrong. That means that occasion...
Somebody in the comments said that many of the people reporting abuse are trans, and “trans people suffer from mental illness more”, so maybe they’re just crazy and everything was actually pretty OK.
Hopefully this reasoning looks as crazy to you as it does to me; in the 1970s people would have said the same about gay people, but now we realize that a lot of that was due to homophobia (etc), and a lot of it was due to the fact that gay people, being marginalized, made soft targets for manipulation, blackmail, etc.
So, I think this is not a fair reading of the comment in question. Not a million miles away from, but far enough that I wanted to point it out.
But also, you seem to be saying something like: "consider that maybe trans people's rates of mental illness are downstream of them being trans and society being transphobic, not that their transness is downstream of mental illness".
And, okay, but...
Consider a hypothetical trans support forum. If rationalistthrowaway is right, you'd expect the members of that forum to have higher than average rates of mental illness, possibly leading to high profile events like psychotic breaks and suicides. And it sounds like you don't disagree wi...
Attempt to get shared models on "Variations in Responses":
Quote from another comment by Mr. Davis Kingsley:
My sense is that dynamics like those you describe were mostly not present at CFAR, or insofar as they were present weren't really the main thing.
I bid:
This counts as counter-evidence, but it's unfortunately not very strong counter-evidence. Or at least it's weaker than one might naively believe.
Why?
It is true of many groups that even when most of a group's activities, or even its main point, are wholesome, above board, above water, beneficial, etc., it is possible that the group is still secretly enabling the abuse of a silent or hidden minority: the minority that, in the end, is going to be easiest to dismiss, ridicule, or downplay.
It might even be only ONE person who takes all the abuse.
I think this dynamic is so fucked that most people don't want to admit that it's a real thing. How can a community or group that is mostly wholesome and good and happy be hiding atrocious skeletons in their closet? (Not that this is true of CFAR or MIRI, I'm not making that claim. I do get a 'vibe' from Zoe's post that it's what Leverage 1.0 migh...
Please allow me to point out one difference between the Rationalist community and Leverage that is so obvious and huge that many people possibly have missed it.
The Rationalist community has a website called LessWrong, where people critical of the community can publicly voice their complaints and discuss them. For example, you can write an article accusing their key organizations of being abusive, and it will get upvoted and displayed on the front page, so that everyone can add their part of the story. The worst thing the high-status members of the community will do to you is publicly post their disagreement in a comment. In turn, you can disagree with them; and you will probably get upvoted, too.
Leverage Research makes you sign an NDA, preventing you from talking about your experience there. Most Leverage ex-members are in fact afraid to discuss their experience. Leverage even tries (unsuccessfully) to suppress the discussion of Leverage on LessWrong.
Considering this, do you find it credible that the dynamics of both groups are actually very similar? Because that seems to be the narrative of the post we are discussing here -- the very post that got upvoted and is displayed publicly ...
Considering this, do you find it credible that the dynamics of both groups are actually very similar?
I'm a little unsure where this is coming from. I never made explicitly this comparison.
That said, I was at a CFAR staff reunion recently where one of the talks was on 'narrative control' and we were certainly interested in the question about institutions and how they seem to employ mechanisms for (subtly or not) keeping people from looking at certain things or promoting particular thoughts or ideas. (I am not the biggest fan of the framing, because it feels like it has the 'poison'—a thing I've described in other comments.)
I'd like to be able to learn about these and other such mechanisms, and this is an inquiry I'm personally interested in.
I do strongly object to making this kind of false equivalence.
I mostly trust that you, myself, and most readers can discern the differences that you're worried about conflating. But if you genuinely believe that a false equivalence might rise to prominence in our collective sense-making, I'm open to the possibility. If you check your expectations, do you expect that people will get confused about the gap between the Leverage situa...
The conflation between Leverage and CFAR is made by the article. Most explicitly here...
Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019).
...and generally, the article goes like "Zoe said that X happens in Leverage. A kinda similar thing happens in MIRI/CFAR, too." The entire article (except for the intro) is structured as a point-by-point comparison with Zoe's article.
Most commenters don't buy it. But I imagine (perhaps incorrectly) that if a person unfamiliar with MIRI/CFAR and the rationalist community in general read the article, their impression would be that the two are pretty similar. This is why I consider it quite important to explain, very clearly, that they are not. This debate is public... and I expect it to be quote-mined (by RationalWiki and consequently Wikipedia).
I hope it is fine for me to try to investigate the nature of these group dynamics.
Sure, go ahead!
I will put forth that a silent minority has existed at CFAR, in the past, and that their experience was difficult and pretty traumatic for them. And I have strong reasons to believe they're still 'not over it'.
I w...
This comment mostly makes good points in their own right, but I feel it's highly misleading to imply that those points are at all relevant to what Unreal's comment discussed. A policy doesn't need to be crucial to be good. Something that works doesn't need to be worse than terrible for its remaining flaws to deserve attention. Inaccuracy in a bug report should provoke a search for its better form, not nullify its salience.
On the other side of it, why do people seem TOO DETERMINED to turn him into a scapegoat? Most of you don't sound like you really know him at all.
A blogger I read sometimes talks about his experience with lung cancer (decades ago), where people would ask his wife "so, he smoked, right?" and his wife would say "nope" and then they would look unsettled. He attributed it to something like "people want to feel like all health issues are deserved, and so their being good / in control will protect them." A world where people sometimes get lung cancer without having pressed the "give me lung cancer" button is scarier than the world where the only way to get it is by pressing the button.
I think there's something here where people are projecting all of the potential harm onto Michael, in a way that's sort of fair from a 'driving their actions' perspective (if they're worried about the effects of talking to him, maybe they shouldn't talk to him), but which really isn't owning the degree to which the effects they're worried about are caused by their instability or the them-Michael dynamic.
[A thing Anna and I discussed recently is, roughly, the tension between "telling the truth" and "not destabilizing the current regime"; I think it's easy to see there as being a core disagreement about whether or not it's better to see the way in which the organizations surrounding you are ___, and Michael is being thought of as some sort of pole for the "tell the truth, even if everything falls apart" principle.]
+1 to your example and esp "isn't owning the degree to which the effects they're worried about are caused by their instability or the them-Michael dynamic."
I also want to leave open the hypothesis that this thing isn't a one-sided dynamic, and Michael and/or his group is unintentionally contributing to it. Whereas the lung cancer example seems almost entirely one-sided.
I don't live in the Bay anymore and haven't been on LessWrong for a while, but was informed of this thread by a friend.
I have only one thing to say, and will not be commenting any further due to an NDA.
Stay away from Geoff Anders and whatever nth iteration of "Leverage" he's on now.
You might not be able to say this, but I’m wondering whether it’s one of the NDAs Zoe references Geoff pressuring people to sign at the end of Leverage 1.0 in 2019,
(This is not a direct response to PhoenixFriend's comment but I am inspired because of that comment, and I recommend reading theirs first.)
Note: CFAR recently had a staff reunion that I was present for. I made updates, including going from "Anna is avoidant, afraid, and tries to control more than she ought" to "Anna is in the process of updating, seeking feedback, and has reaffirmed honesty as a guiding principle." Given this, I feel personally relaxed about CFAR being in good hands for now; otherwise, maybe I'd be more agitated about CFAR.
I'm not interested in questions of CFAR's virtue or lack thereof or fighting over its reputation. So I'm just gonna talk about general group dynamics with CFAR as an example, and people can join on this segment of the convo if they want.
I don't think CFAR is a cult, and things did not seem comparably bad to Leverage. This is almost a meaningless sentence? But let's get it out of the way?
RE: Class distinctions within CFAR
So... my sense of the CFAR culture, even though it was indeed a small group of 12-ish people, was that there was a social hierarchy. Because as monkeys, of course, we would fall into such a pattern.
I ...
I endorse Unreal's commentary.
I more and more feel like it was a mistake to turn down my invitation to the recent staff reunion/speaking-for-the-dead, but I continue to feel like I could not, at the time, have convinced myself, by telling myself only true things, that it was safe for me to be there or that I was in fact welcome.
I re-mention this here because it accords with and marginally confirms:
going from "Anna is avoidant, afraid, and tries to control more than she ought" to "Anna is in the process of updating, seeking feedback, and has reaffirmed honesty as a guiding principle."
Like, "Duncan felt unsafe because of the former, and is now regretting his non-attendance because of signals and bits of information which are evidence of the latter."
Here is a thread for detail disagreements, including nitpicks and including larger things, that aren’t necessarily meant to connect up with any particular claim about what overall narratives are accurate. (Or maybe the whole comment section is that, because this is LessWrong? Not sure.)
I’m starting this because local validity semantics are important, and because it’s easier to get details right if I (and probably others) can consider those details without having to pre-compute whether those details will support correct or incorrect larger claims.
For me personally, part of the issue is that though I disagree with a couple of the OPs details, I also have some other details that support the larger narrative which are not included in the OP, probably because I have many experiences in the MIRI/CFAR/adjacent communities space that Jessicata doesn’t know and couldn’t include. And I keep expecting that if I post details without these kinds of conceptualizing statements, people will use this to make false inferences about my guesses about higher-order-bits of what happened.
The post explicitly calls for thinking about how this situation is similar to what is happening/happened at Leverage, and I think that's a good thing to do. I do think that I do have specific evidence that makes me think that what happened at Leverage seemed pretty different from my experiences with CFAR/MIRI.
Like, I've talked to a lot of people about stuff that happened at Leverage in the last few days, and I do think that overall, the level of secrecy and paranoia about information leaks at Leverage seemed drastically higher than anywhere else in the community that I've seen, and I feel like the post is trying to draw some parallel here that fails to land for me (though it's also plausible it is pointing out a higher level of information control than I thought was present at MIRI/CFAR).
I have also had my disagreements with MIRI being more secretive, and think it comes with a high cost that I think has been underestimated by at least some of the leadership, but I haven't heard of people being "quarantined from their friends" because they attracted some "set of demons/bad objects that might infect others when they come into contact with them", which feels to me like a different lev...
When it comes to agreements preventing disclosure of information, often there's no agreement to keep the existence of the agreement itself secret. If you don't think you can ethically (and given other risks) share the content that's protected by certain agreements, it would be worthwhile to share more about the agreements and with whom you have them. This might also be accompanied by a request to those parties to agree to lift the agreement. It's worthwhile to know who thinks they need to be protected by secrecy agreements.
It has taken me about three days to mentally update more fully on this point. It seems worth highlighting now, using quotes from Oli's post:
- I've talked to a lot of people about stuff that happened at Leverage in the last few days, and I do think that overall, the level of secrecy and paranoia about information leaks at Leverage seemed drastically higher than anywhere else in the community that I've seen
- I think the number of people who have been hurt by various things Leverage has done is really vastly larger than the number of people who have spoken out so far, in a ratio that I think is very different from what I believe is true about the rest of the community.
I am beginning to suspect that, even in the total privacy of their own minds, there are people who went through something at Leverage who can't have certain thoughts, out of fear.
I believe it is not my place (or anyone's?) to force open a locked door, especially locked mental doors.
Zoe's post may have initially given me the wrong impression—that other ex-Leverage people would also be able to articulate their experiences clearly and express their fears in a reasonable and open way. I guess I'm updating away ...
I really don't know about the experience of a lot of the other ex-Leveragers, but the time it took her to post it, the number and kind of allies she felt she needed before posting, the hedging qualifications within the post itself detailing her fears of retribution, plus just how many people's initial responses to the post were to applaud her courage, might give you a sense that Zoe's post was unusually, extremely difficult to make public, and that others might not have that same willingness yet (she even mentions it at the bottom, and presumably she knows more about how other ex-Leveragers feel than we do).
I, um, don't have anything coherent to say yet. Just a heads up. I also don't really know where this comment should go.
But also I don't really expect to end up with anything coherent to say, and it is quite often the case that when I have something to say, people find it worthwhile to hear my incoherence anyway, because it contains things that underlay their own confused thoughts, and after hearing it they are able to un-confuse some of those thoughts and start making sense themselves. Or something. And I do have something incoherent to say. So here we go.
I think there's something wrong with the OP. I don't know what it is, yet. I'm hoping someone else might be able to work it out, or to see whatever it is that's causing me to say "something wrong" and then correctly identify it as whatever it actually is (possibly not "wrong" at all).
On the one hand, I feel familiarity in parts of your comment, Anna, about "matches my own experiences/observations/hearsay at and near MIRI and CFAR". Yet when you say "sensible", I feel, "no, the opposite of that".
Even though I can pick out several specific places where Jessicata talked about concrete events (e.g. "I believed that I was intrinsically...
This matches my impression in a certain sense. Specifically, the density of gears in the post (elements that would reliably hold arguments together, confer local validity, or pin them to reality) is low. It's a work of philosophy, not investigative journalism. So there is a lot of slack in shifting the narrative in any direction, which is dangerous for forming beliefs (as opposed to setting up new hypotheses), especially if done in a voice that is not your own. The narrative of the post is coherent and compelling, it's a good jumping-off point for developing it into beliefs and contingency plans, but the post itself can't be directly coerced into those things, and this epistemic status is not clearly associated with it.
This reads like you feel compelled to avoid parsing the content of the OP, and instead intend to treat the criticisms it makes as a Lovecraftian horror the mind mustn't engage with. Attempts to interpret this sort of illegible intent-to-reject as though it were well-intentioned criticism end up looking like:
I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. I was catatonic for multiple days, afraid that by moving I would cause harm to those around me.
Very helpful to have a crisp example of this in text.
ETA: I blanked out the first few times I read Jessica's post on anti-normativity, but interpreted that accurately as my own intent to reject the information rather than projecting my rejection onto the post itself, treated that as a serious problem I wanted to address, and was able to parse it after several more attempts.
I understood the first sentence of your comment to be something like "one of my hypotheses about Logan's reaction is that Logan has some internal mental pressure to not-parse or not-understand the content of what Jessica is trying to convey."
That makes sense to me as a hypothesis, if I've understood you, though I'd be curious for some guesses as to why someone might have such an internal mental pressure, and what it would be trying to accomplish or protect.
I didn't follow the rest of the comment, mostly due to various words like "this" and "it" having ambiguous referents. Would you be willing to try everything after "attempts" again, using 3x as many words?
Summary:
Logan reports a refusal to parse the content of the OP. Logan locates a problem nonspecifically in the OP, not in Logan's specific reaction to it. This implies a belief that it would be bad to receive information from Jessica.
Logan reports a refusal to parse the content of the OP
But then, "the people most mentally concerned" happens, and I'm like, Which people were most mentally concerned? What does it mean to be mentally concerned? How could the author tell that those people were mentally concerned? Then we have "with strange social metaphysics", and I want to know "what is social metaphysics?", "what is it for social metaphysics to be strange or not strange?" and "what is it to be mentally concerned with strange social metaphysics"? Next is "were marginalized". How were they marginalized? What caused the author to believe that they were marginalized? What is it for someone to be marginalized?
Most of this isn't even slightly ambiguous, and Jessica explains most of the things being asked about, with examples, in the body of the post.
Logan locates a nonspecific problem in the OP, not in Logan's response to it.
...I just, also have this feeling like something... isn't just wrong h
I also don't know what "social metaphysics" means.
I get the mood of the story. If you look at specific accusations, here is what I found; maybe I overlooked something:
...there were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis. There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events including a relatively exclusive AI-focused one.
There are even cases of suicide in the Berkeley rationality community [...] associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption
a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years.
MIRI became very secretive about research. Many researchers were working on secret projects, and I learned almost nothing about these. I and other researchers were told not
I feel like one really major component that is missing from the story above, in particular a number of the psychotic breaks, is to mention Michael Vassar and a bunch of the people he tends to hang out with. I don't have a ton of detail on exactly what happened in each of the cases where someone seemed to have a really bad time, but having looked into it for a few hours in each case, I think all three of them were in pretty close proximity to having spent a bunch of time (and in some of the cases after taking psychedelic drugs) with Michael.
I think this is important because Michael has I think a very large psychological effect on people, and also has some bad tendencies to severely outgroup people who are not part of his very local social group, and also some history of attacking outsiders who behave in ways he doesn't like very viciously, including making quite a lot of very concrete threats (things like "I hope you will be guillotined, and the social justice community will find you and track you down and destroy your life, after I do everything I can to send them onto you"). I personally have found those threats to very drastically increase the stress I experience from inter...
I don’t have a ton of detail on exactly what happened in each of the cases where someone seemed to have a really bad time, but having looked into it for a few hours in each case, I think all three of them were in pretty close proximity to having spent a bunch of time (and in some of the cases after taking psychedelic drugs) with Michael.
Of the 4 hospitalizations and 1 case of jail time I know about, 3 of those hospitalized (including me) were talking significantly with Michael, and the others weren't afaik (and neither were the 2 suicidal people), though obviously I couldn't know about all conversations that were happening. Michael wasn't talking much with Leverage people at the time.
I hadn't heard of the statement about guillotines, that seems pretty intense.
I talked with someone recently who hadn't been in the Berkeley scene specifically but who had heard that Michael was "mind-controlling" people into joining a cult, and decided to meet him in person, at which point he concluded that Michael was actually doing some of the unique interventions that could bring people out of cults, which often involves causing them to notice things they're looking away from. It's common for t...
IIRC the one case of jail time also had a substantial interaction with Michael relatively shortly before the psychotic break occurred. Though someone else might have better info here and should correct me if I am wrong. I don't know of any 4th case, so I believe you that they didn't have much to do with Michael. This makes the current record 4/5 to me, which sure seems pretty high.
Michael wasn't talking much with Leverage people at the time.
I did not intend to indicate Michael had any effect on Leverage people, or to say that all or even a majority of the difficult psychological problems that people had in the community are downstream of Michael. I do think he had a large effect on some of the dynamics you are talking about in the OP, and I think any picture of what happened/is happening seems very incomplete without him and the associated social cluster.
I think the part about Michael helping people notice that they are in some kind of bad environment seems plausible to me, though doesn't have most of my probability mass (~15%), and most of my probability mass (~60%) is indeed that Michael mostly just leverages the same mechanisms for building a pretty abusive and cult-like ingroup...
IIRC the one case of jail time also had a substantial interaction with Michael relatively shortly before the psychotic break occurred
I was pretty involved in that case after the arrest and for several months after and spoke to MV about it, and AFAICT that person and Michael Vassar only met maybe once casually. I think he did spend a lot of time with others in MV's clique though.
I think one of the ways of disambiguating here is to talk to people outside your social bubble, e.g. people who live in different places, people with different politics, people in different subcultures or on different websites (e.g. Twitter or Reddit), people you run into in different contexts, people who have had experience in different mainstream institutions (e.g. different academic departments, startups, mainstream corporations). Presumably, the more of a culty bubble you're in, the more prediction error this will generate, and the harder it will be to establish communication protocols across the gap. This establishes a point of comparison between people in bubble A vs B.
I spent a long part of the 2020 quarantine period with Michael and some friends of his (and friends of theirs) who were previously in a non-bay-area cult, which exposed me to a lot of new perspectives I didn't know about (not just theirs, but also those of some prison reform advocates and religious people), and made Michael seem less extremal or insular in comparison, since I wasn't just comparing him to the bubble of people who I already knew about.
If you think the anecdote I shared is evidence, it seems like you agree with my theory to some extent? Or maybe you have a different theory for how it's relevant?
E.g. say you're an econ student, and there's this one person in the econ department who seems to have all these weird opinions about social behavior and think body language is unusually important. Then you go talk to some drama students and find that they have opinions that are even more extreme in the same direction. It seems like the update you should make is that you're in a more insular social context than the person with opinions on social behavior, who originally seemed to you to be in a small bubble that wasn't taking in a lot of relevant information.
(basically, a lot of what I'm asserting constitutes "being in a cult" is living in a simulation of an artificially small, closed world)
I am not fully sure. I have heard him say very similar things to the above directed at Holden (and have heard reports of the things I put in quotes above). I think in that case the thing he is attacking him for is leveraging people's desire to be morally good, in a way they don't endorse (and that plays into various guilt narratives), to get them to give him money and to dedicate their lives to Effective Altruism, and via that technique preventing a substantial fraction of the world's top talent from dedicating themselves to actually important problems, while also causing them various forms of psychological harm.
I don't think the context in which I heard about this communication was very private. There was a period when Michael seemed to try to get people to attack GiveWell and Holden quite loudly, and the above was among the things I heard from that time. It did not strike me as a statement intended to be very private, and my model of Michael is that he has norms that encourage sharing this kind of thing, even when it happens in private communication.
I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said "yes". This part of the plan was the same.
Re: “this part of the plan was the same”: IMO, some at CFAR were interested in helping some subset of people become Elon Musk, but this is different from the idea that everyone is supposed to become Musk and that that is the plan. IME there was usually mostly (though not invariably, which I expect led to problems; and for all I know “usually” may also have been the case in various parts and years of Leverage) acceptance for folks who did not wish to try to change themselves much.
Yeah, I very strongly don't endorse this as a description of CFAR's activities or of CFAR's goals, and I'm pretty surprised to hear that someone at CFAR said something like this (unless it was Val, in which case I'm less surprised).
Most of my probability mass is on the CFAR instructor was taking "become Elon Musk" to be a sort of generic, hyperbolic term for "become very capable."
The person I asked was Duncan. I suggested the "Elon Musk" framing in the question. I didn't mean it literally, I meant him as an archetypal example of an extremely capable person. That's probably what was meant at Leverage too.
I do not doubt Jessica's report here whatsoever.
I also have zero memory of this, and it is not the sort of sentiment I recall holding in any enduring fashion, or putting forth elsewhere.
I suspect I intended my reply pretty casually/metaphorically, and would have similarly answered "yes" if someone had asked me if we were trying to improve ourselves to become any number of shorthand examples of "happy, effective, capable, and sane."
2016 Duncan apparently thought more of Elon Musk than 2021 Duncan does.
Here are some examples of long-time LW posters who think Kegan stages are important:
Though I can't find an example of him posting on LessWrong, Ethan Dickinson is in the Berkeley rationality community and is mentioned here as introducing people to Kegan stages. There are multiple others, these are just the people who it was easy to find Internet evidence about.
There's a lot of overlap in people posting about "rationalism" and "postrationalism", it's often a matter of self-identification rather than actual use of different methods to think, e.g. lots of "rationalists" are into meditation, lots of "postrationalists" use approximately Bayesian analysis when thinking about e.g. COVID. I have noticed that "rationalists" tend to think the "rationalist/postrationalist" distinction is more important than the "postrationalists" do; "postrationalists" are now on Twitter using vaguer terms like "ingroup" or "TCOT" (this corner of Twitter) for themselves.
I also mentioned a high amount of interaction between CFAR and the Monastic Academy in the post.
To speak a little bit on the interaction between CFAR and MAPLE:
My understanding is that none of Anna, Val, Pete, Tim, Elizabeth, Jack, etc. (current or historic higher-ups at CFAR) had any substantial engagement with MAPLE. My sense is that Anna has spoken with MAPLE people a good bit in terms of total hours, but not at all a lot when compared with how many hours Anna spends speaking to all sorts of people all the time—much much less, for instance, than Anna has spoken to Leverage folks or CEA folks or LW folks.
I believe that Renshin Lee (née Lauren) began substantially engaging with MAPLE only after leaving their employment at CFAR, and drew no particular link between the two (i.e. was not saying "MAPLE is the obvious next step after CFAR" or anything like that, but rather was doing what was personally good for them).
I think mmmmaybe a couple other CFAR alumni or people-near-CFAR went to MAPLE for a meditation retreat or two? And wrote favorably about that, from the perspective of individuals? These (I think but do not know for sure) include people like Abram Demski and Qiaochu Yuan, and a small number of people from CFAR's hundreds of workshop alumni, some of w...
This is Ren, and I was like "?!?" by the sentence in the post: "There is a significant degree of overlap between people who worked with or at CFAR and people at the Monastic Academy."
I am having trouble engaging with LW comments in general so thankfully Duncan is here with #somefacts. I pretty much agree with his list of informative facts.
More facts:
The inferential gap between the MAPLE and rationalist worldview is pretty large. There's definitely an interesting "thing"...
As someone who was more involved with CFAR than Duncan was from 2019 on, I can say all this sounds correct to me.
I was very peripheral to the Bay Area rationality scene at that time, and I heard about Kegan levels often enough for it to rub me the wrong way. It seemed bizarre to me that one man's idiosyncratic theory of development would be taken so seriously by a community I generally thought was more discerning. That's why I remember so clearly that it came up many times.
Huh, some chance I am just wrong here, but to me it didn't feel like Kegan levels had more prominence or expectation of being understood than e.g. land value taxes, which is also a topic some people are really into, but doesn't feel to me like it's very core to the community.
I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact. Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.
Trying to maintain secrecy within the organization like this (as contrasted to secrecy from the public) seems nuts to me. Certainly, if you have any clever ideas about how to build an AGI, you wouldn't want to put them on the public internet, where they might inspire someone who doesn't appreciate the difficulty of the alignment problem to do something dangerous.
But one would hope that the people working at MIRI do appreciate the difficulty of the alignment problem (as a real thing about the world, and not just something to temporarily believe because your current employer says so). If you want the alignment-savvy people to have an edge over the rest of the world (!), you should want them to be maximally intellectually productive, which naturally requires the ability to talk to each other without the overhead of seeking permission from a designated authority figure. (Where the standard practice of bottlenecking information and decisionmaking on a designated authority figure makes sense if you're a government or a corporation trying to wrangle people into serving the needs of the organization against their own interests, but I didn't think "we" were operating on that model.)
Secrecy is not about good trustworthy people who get to have all the secrets versus bad untrustworthy people who don't get any. This frame may itself be part of the problem; a frame like that makes it incredibly socially difficult to implement standard practices.
To attempt to make this point more legible:
Standard best practice in places like the military and intelligence organizations, where lives depend on secrecy being kept from outsiders - but not insiders - is to compartmentalize and maintain "need to know." Similarly, in information security, the best practice is to give people access only to what they need, to granularize access to different services and data, and to differentiate read / write / delete access. Even in regular organizations, lots of information is need-to-know - HR complaints, future budgets, estimates of a publicly traded company's profitability before quarterly reports, and so on. This is normal, and even though it's costly, those costs are needed.
This type of granular control isn't intended to stop internal productivity; it is there to limit the extent of failures in secrecy, and of attempts to exploit the system by leveraging non-public information, both of which are inevitable, since the cost of preventing failures grows very quickly as the risk of failure approaches zero. For all of these reasons, the ideal is to have trustworthy people who have low but non-zero probabilities of screwing up on secrecy. Then you ask them not to share things that are not necessary for others' work, and you allow only limited exceptions and discretion where it is useful. The alternative, of "good trustworthy people [] get to have all the secrets versus bad untrustworthy people who don't get any," simply doesn't work in practice.
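As a throwaway illustration of the need-to-know idea (not anything MIRI or any other organization mentioned here actually uses; the class names, project names, and permission levels are hypothetical), here is a minimal sketch in Python of granting access per project and per read/write/delete level, rather than sorting people into globally trusted and untrusted:

```python
from dataclasses import dataclass, field
from enum import Flag, auto


class Access(Flag):
    """Granular permissions: differentiated rather than all-or-nothing."""
    READ = auto()
    WRITE = auto()
    DELETE = auto()


@dataclass
class Clearance:
    """What one person may do, per project (need-to-know)."""
    grants: dict = field(default_factory=dict)

    def allow(self, project: str, access: Access) -> None:
        # Grants are additive and scoped to a single project.
        self.grants[project] = self.grants.get(project, Access(0)) | access

    def can(self, project: str, access: Access) -> bool:
        # No global "trusted" flag unlocks every secret at once;
        # access requires an explicit grant for this particular project.
        return access in self.grants.get(project, Access(0))


# Hypothetical example: a researcher can read and write on one project
# and has no visibility into another project at all.
alice = Clearance()
alice.allow("project-a", Access.READ | Access.WRITE)

assert alice.can("project-a", Access.READ)
assert not alice.can("project-a", Access.DELETE)
assert not alice.can("project-b", Access.READ)
```

The point of the sketch is only that "who can see what" is a per-project, per-permission question, which is what limits the blast radius when any one person slips up.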
Thanks for the explanation. (My comment was written from my idiosyncratic perspective of having been frequently intellectually stymied by speech restrictions, and not having given much careful thought to organizational design.)
I agree that there is a real issue here that needs to be addressed, and I wasn't claiming that there is no reason to have support - just that there is a reason to compartmentalize.
And yes, US military use of mental health resources is off the charts. But in the intelligence community there are some really screwed-up incentives: having a mental health issue can get your clearance revoked. You won't necessarily lose your job, but the impact on a person's career is a strong reason to avoid mental health care, and my (second-hand, not reliable) understanding is that this is a real problem.
Seconding this: When I did classified work at a USA company, I got the strong impression that (1) If I have any financial problems or mental health problems, I need to tell the security office immediately; (2) If I do so, the security office would immediately tell the military, and then the military would potentially revoke my security clearance. Note that some people get immediately fired if they lose their clearance. That wasn't true for me—but losing my clearance would have certainly hurt my future job prospects.
My strong impression was that neither the security office nor anyone else had any intention to help us employees with our financial or mental health problems. Nope, their only role was to exacerbate personal problems, not solve them. There's an obvious incentive problem here; why would anyone disclose their incipient financial or mental health problems to the company, before they blow up? But I think from the company's perspective, that's a feature not a bug. :-P
(As it happens, neither myself nor any of my close colleagues had financial or mental health problems while I was working there. So it's possible that my impressions are wrong.)
I don't specifically know about mental health, but I do know specific stories about financial problems being treated as security concerns - and I don't think I need to explain how incredibly horrific it is to have an employee say to their employer that they are in financial trouble, and be told that they lost their job and income because of it.
I didn't think "we" were operating on that model.
I think it's actually quite hard to have everyone in an organization trust everyone else in the organization, or to only hire people who would be trusted by everyone in the organization. So you might want to have some sort of tiered system, where (perhaps) the researchers all trust each other, but only trust the engineers they work with, and don't trust any of the ops staff; this means that hiring an engineer only requires one researcher to trust them.
[On net I think the balance is probably still in favor of "internal transparency, gated primarily by time and interests instead of security clearance", but it's less obvious than it originally seems.]
The steelman that comes to mind is that by the time you actually know that you have a dangerous secret, it's either too late or risky to set up a secrecy policy. So it's useful to install secrecy policies in advance. The downsides that might be currently apparent are bugs that you still have the slack to resolve.
It depends. For example, if you have an intern program, then the interns probably aren't especially trusted, since those decisions generally don't receive the same degree of scrutiny as employment decisions.
And ops people probably don't need to know the details of the technical research.
If it becomes known to one of a few powerful intelligence agencies that MIRI is working on an internal project they believe is likely to create an AGI within one or two years, that agency will hack or surveil MIRI to get all the secrets.
To the extent that MIRI's theory of change is to build an AGI on their own, independent of any outside organization, a high degree of secrecy is likely necessary for that plan to work.
I think it's highly questionable that MIRI will be able to develop AGI faster than organizations like DeepMind (especially when researchers don't talk to each other), so it's unclear to me whether the plan makes sense, but it seems hard to imagine that plan without secrecy.
I’d like to offer some data points without much justification, I hope it might spur some thought/discussion without needing to be taken on faith:
Thank you. I disagree with "... relishes 'breaking' others", and probably some other points, but a bunch of this seems really right and like content I haven't seen written up elsewhere. Do share more if you have it. I'm also curious where you got this stuff from.
One thing I'd like to say at this point is that I think you (jessicata) have shown very high levels of integrity in responding to comments. There's been some harsh criticism of your post, and regardless of how justified it is, it takes character not to get defensive, especially given the subject matter. To me, this is also a factor in how I think about the post itself.
I want to bring up a concept I found very useful for thinking about how to become less susceptible to these sorts of things.
(NB that while I don't agree with much of the criticism here, I do think "the community" does modestly increase psychosis risk, and the Ziz and Vassar bubbles do so to extraordinary degrees. I also think there's a bunch of low-hanging fruit here, so I'd like us to take this seriously and get psychosis risk lower than baseline.)
(ETA because people bring this up in the comments: law of equal and opposite advice applies. Many people seem to not have the problems that I've seen many other people really struggle with. That's fine. Also I state these strongly—if you took all this advice strongly, you would swing way too far in the opposite direction. I do not anticipate anyone will do that but other people seem to be concerned about it so I will note that here. Please adjust the tone and strength-of-claim until it feels right to you, unless you are young and new to the "community" and then take it more strongly than feels right to you.)
Anyways, the concept: I heard the word “totalizing” on Twitter at some point (h/t to somebody). It now seems fundamental to my under...
Rationality ought to be totalizing. https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory
Yeah, I think this points at a thing that bothers me about Connor's list, even though it seems clear to me that Connor's advice should be "in the mix".
Some imperfect ways of trying to point at the thing:
1. 'Playing video games all the time even though this doesn't feel deeply fulfilling or productive' is bad. 'Forcing yourself to never have fun and thereby burning out' is also bad. Outside of the most extreme examples, it can be hard to figure out exactly where to draw the line and what's healthy, what conduces to flourishing, etc. But just tracking these as two important failure modes, without assuming one of these error categories is universally better than the other, can help.
(I feel like "flourishing" is a better word than "healthy" here, because it's more... I want to say, "transhumanist"? Acknowledges that life is about achieving good things, not just cautiously avoiding bad things?)
2. I feel like a lot of Connor's phrasings, taken fully seriously, almost risk... totalizing in the opposite direction? Insofar as that's a thing. And totalizing toward complacency, mainstream-conformity, and non-ambition leads to sad, soft, quiet failure modes, the absence...
I think this is also a case of 'reverse all advice you hear'. No one is at the optimum on most dimensions, so a lot of people will benefit from the advice 'be more X' and a lot of people will benefit from the advice 'be less X'. I'm guessing your (Connor's) advice applies perfectly to lots of people, but for me...
Yeah, I disagree with that view.
To keep track of the discussion so far, it seems like there are at least three dimensions of disagreement:
1. Mainstream vs. Rationalists Cage Match
1A. Overall, the rationality community is way better than mainstream society.
1B. The rationality community is about as good as mainstream society.
1C. The rationality community is way worse than mainstream society.
My model is that I, Connor, Anna, and Vassar agree with 1A, and hypothetical-Said-commenter agrees with 1C. (The rationalists are pretty weird, so it makes sense that 1B would be a less common view.)
2. Psychoticism vs. Anti-Psychoticism
2A. The rationality community has a big, highly tractable problem: it's way too high on 'broadly psychoticism-adjacent characteristics'.
2B. The rationality community has a big, highly tractable problem: it's way too low on those characteristics.
2C. The rationality community is basically fine on this metric. Like, we should be more cautious around drugs, but aside from drug use there isn't a big clear thing it makes sense for most community members to change here.
My model is that Connor, Anna, and hypothetical-Said-commenter endorse 2A, ...
A lot of the comments in response to Connor's point are turning this into a 2D axis with 'mainstream norms' on one side and 'weird/DIY norms' on the other and trying to play tug-of-war, but I actually think the thing is way more nuanced than this suggests.
Proposal:
I am willing to bet there is a 'good' kind of totalizing and a 'bad' kind. And I think my comment about elitism was one of the bad kinds. And I think it's not that hard to tell which is which? I think it's hard to tell 'from the inside' but I... think I could tell from the outside with enough observation and asking them questions?
A very basic hypothesis is: To the extent that a totalizing impulse is coming from addiction (underspecified term here, I don't want to unpack rn), it is not healthy. To the extent that a totalizing impulse is coming from an open-hearted, non-clingy, soulful conviction, it is healthy.
I would test that hypothesis, if it were my project. Others may have different hypotheses.
(I would not take this modus tollens, I don't think the "community" is even close to fundamentally bad, I just think some serious reforms are in order for some of the culture that we let younger people build here.)
But the "community" should not be totalizing.
(Also, I think rationality should still be less totalizing than many people take it to be, because a lot of people replace common sense with rationality. Instead, one should totalize oneself very slowly, over years, watching for all sorts of missteps and mistakes, and merge one's past life with one's new life. Sure, rationality will eventually pervade your thinking, but that doesn't mean that at age 22 you throw out all of society's wisdom and roll your own.)
I note that the things which you're resonating with, which Connor proposes and which you expect would have helped you, or helped protect you...
...protect you from things which were not problems for me.
Which is not to say that those things are bad. Like, saving people from problems they have (that I do not have) sounds good to me.
But it does mean that there is [a good thing] for at least [some people] already, and while it may be right to trade off against that, I would want us to be eyes-open that it might be a tradeoff, rather than assuming that sliding in the Connor-Unreal direction is strictly and costlessly good.
Hmm, I want to point out I did not say anything about what I expected would have helped me or helped 'protect' me. I don't see anything on that in my comment...
I also don't think it'd be good for me to be saved from my problems...? but maybe I'm misunderstanding what you meant.
I definitely like Connor's post. My "hear hear" was a kind of friendly encouragement for him speaking to something that felt real. I like the totalization concept. Was a good comment imo.
I do not particularly endorse his proposal... It seems like a non-starter. A better proposal might be to run some workshops or something that try to investigate this 'totalization' phenomenon in the community and what's going on with it. That sounds fun! I'd totally be into doing this. Prob can't though.
The psychotic break you describe sounds very scary and unpleasant, and I'm sorry you experienced that.
I am impressed and appreciative towards Logan for trying to say things on this post despite not being very coherent. I am appreciative and have admiration towards Anna for making sincere attempts to communicate out of a principled stance in favor of information sharing. I am surprised and impressed by Zoe's coherence on a pretty triggering and nuanced subject. I enjoy hearing from jessicata, and I appreciate the way her mind works; I liked this post, and I found it kind of relieving.
I am a bit crestfallen at my own lack of mental skillfulness in response to reading posts like this one.
While this feels like a not-very-LW-y way to go about things, I will just try to make a list... of .... things ...
I am very sorry to hear about your experiences. I hope you've found peace and that the organizations can take your experiences on board.
On one hand you seem to want there to be more open discussion around mental health, whilst on the other you are criticising MIRI and CFAR for having people with mental health issues in their orbit. These seem somewhat in tension with each other.
I think one of the factors is that the mission itself is stressful. For example, air traffic control and the police are high stress careers, yet we need both.
Another issue is that rationality is in some ways more welcoming of (at least some subset of) people whom society would see as weird, especially since certain conditions can be paired with great insight or drive. It seems like the less a community appreciates the silver lining of mental health issues, the better they'd score according to your metric.
Regarding secrecy, I'd prefer for AI groups to lean too much on the side of maintaining precautions about info-hazards than too little. (I'm only referring to technical research not misbehaviour). I think it's perfectly valid for donors to decide that they aren't going to give money without transparency, but ther...
I am very sorry to hear about your experiences. I hope you’ve found peace and that the organizations can take your experiences on board.
Thanks, I appreciate the thought.
On one hand you seem to want there to be more open discussion around mental health, whilst on the other you are criticising MIRI and CFAR for having people with mental health issues in their orbit. These seem somewhat in tension with each other.
I don't see why these would be in tension. If there is more and better discussion then that reduces the chance of bad outcomes. (Partially, I brought up the mental health issues because it seemed like people were criticizing Leverage for having people with mental health issues in their orbit, but it seems like Leverage handled the issue relatively well all things considered.)
I think one of the factors is that the mission itself is stressful. For example, air traffic control and the police are high stress careers, yet we need both.
I basically agree.
It seems like the less a community appreciates the silver lining of mental health issues, the better they'd score according to your metric.
I don't think so. I'm explicitly saying that talking about weird perception...
One point I strongly agree with you on is that rationalists should pay more attention to philosophy.
Yes, I’ve definitely noticed a trend where rationalists are mostly continuing from Hume and Turing, neglecting e.g. Kant as a response to Hume.
I’ve yet to see a readable explanation of what Kant had to say (in response to Hume or otherwise) that’s particularly worth paying attention to (despite my philosophy classes in college having covered Kant, and making some attempts later to read him). If you (or someone else) were to write an LW post about this, I think this might be of great benefit to everyone here.
I don't know what Kant-insights Jessica thinks LW is neglecting, but I endorse Allen Wood's introduction to Kant as a general resource.
(Partly because Wood is a Kant scholar who loves Kant but talks a bunch about how Kant was just being sloppy / inconsistent in lots of his core discussions of noumena, rather than assuming that everything Kant says reflects some deep insight. This makes me less worried about IMO one of the big failure modes of philosopher-historians, which is that they get too creative with their novel interpretations + treat their favorite historical philosophers like truth oracles.)
BTW, when it comes to transcendental idealism, I mostly think of Arthur Schopenhauer as 'Kant, but with less muddled thinking and not-absolutely-horrible writing style'. So I'd usually rather go ask what Schopenhauer thought of a thing, rather than what Kant thought. (But I mostly disagree with Kant and Schopenhauer, so I may be the wrong person to ask about how to properly steel-man Kant.)
I've been working on a write-up on and off for months, which I might or might not ever get around to finishing.
The basic gist is that, while Hume assumes you have sense-data and are learning structures like causation from this sense-data, Kant is saying you need concepts of causation to have sense-data at all.
The Transcendental Aesthetic is a pretty simple argument if applied to Solomonoff induction. Suppose you tried to write an AI to learn about time, one that didn't already have a built-in notion of time. How would it structure its observations so that it could learn about time from those different observations? That seems pretty hard, perhaps not really possible, since "learning" implies past observations affecting how future observations are interpreted.
In Solomonoff induction there is a time-structure built in, which structures observations. That is, the inductor assumes a priori that its observations are structured in a sequence.
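For concreteness, here is the standard textbook formulation (nothing specific to this thread): the Solomonoff prior is defined over ordered observation sequences, so the time-indexing is assumed rather than learned.

```latex
% Solomonoff prior over observation *sequences*: the ordering x_1, x_2, ...
% (i.e. time structure) is built into the definition itself.
M(x_{1:t}) = \sum_{p \;:\; U(p)\ \text{outputs a string beginning with}\ x_{1:t}} 2^{-\ell(p)},
\qquad
M(x_{t+1} \mid x_{1:t}) = \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}
```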
Kant argues that space is also a priori this way. This is a somewhat suspicious argument given that vanilla Solomonoff induction doesn't need a priori space to structure its observations. But maybe it's true in the case of humans, since our visual cortexes have a notion o...
I've yet to see a readable explanation of what Kant had to say (in response to Hume or otherwise) that's particularly worth paying attention to
As an undergrad, instead of following the actual instructions and writing a proper paper on Kant, I thought it would be more interesting and valuable to simply attempt to paraphrase what he actually said, paragraph by paragraph. It's the work of a young person with little experience in either philosophy or writing, but it certainly seems to have had a pretty big influence on my thinking over the past ten years, and I got an A. So, mostly for your entertainment, I present to you "Kant in [really not nearly as plain as I thought at the time] English". (It's just the bit on apperception.)
I think this is either basic psychology or wrong.¹
For one, Kant seems to be conflating the operation of a concept with its perception:
Since the concept of “unity” must exist for there to be combination (or “conjunction”) in the first place, unity can’t come from combination itself. The whole-ness of unified things must be a product of something beyond combination.
This seems to say that the brain cannot unify things unless it has a concept of combination. However, just as an example, reinforcement learning in AI shows this to be false: unification can happen as a mechanistic consequence of the medium in which experiences are embedded, and an understanding of unification - a perception as a concept - is wholly unnecessary.
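As a toy illustration of that claim (a sketch, not a serious model of either Kant or the brain): a tabular value estimator "combines" scattered experiences into per-state summaries purely by arithmetic update, with no representation of a concept of unity or combination anywhere in the system.

```python
# Toy illustration: experiences are "unified" into a single value per state
# as a mechanistic consequence of the update rule, with no explicit concept
# of combination represented anywhere.
from collections import defaultdict

values = defaultdict(float)   # state -> running value estimate
alpha = 0.1                   # learning rate

def update(state, reward):
    # Each experience nudges the estimate; the "combining" is just this line.
    values[state] += alpha * (reward - values[state])

# Feed in scattered experiences; per-state estimates emerge mechanically.
for state, reward in [("A", 1.0), ("A", 0.0), ("B", 2.0), ("A", 1.0)]:
    update(state, reward)

print(dict(values))
```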
Then okay, concepts are generalizations (compressions?) of sense data, and there's an implied world of which we become cognizant by assuming that the inner structure matches the outer structure. So far, so Simple Idea Of Truth. But then he does the same thing again with "unity", where he assumes that persistent identity-perception is necessary for judgment. Which I think any consideration of a nematode would disprove: judgment can also happen mechanistically.
I ...
I got through that page and… no, I really can’t summarize it. I don’t really have any idea what Kant is supposed to have been saying, or why he said any of those things, or the significance of any of it…
I’m afraid I remain as perplexed as ever.
there's a big difference between saying it saves a few years vs. causes us to have a chance at all when we otherwise wouldn't. [...] it seems like most of the relevant ideas were already in the memespace
I was struck by the 4th edition of AI: A Modern Approach quoting Norbert Wiener writing in 1960 (!), "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively ... we had better be quite sure that the purpose put into the machine is the purpose which we really desire."
It must not have seemed like a pressing issue in 1960, but Wiener noticed the problem! (And Yudkowsky didn't notice, at first.) How much better off are our analogues in the worlds where someone like Wiener (or, more ambitiously, Charles Babbage) did treat it as a pressing issue? How much measure do they have?
I just want to send you some sympathy this way. Everything you’ve gone through and all the self-doubt and everything else that I can’t put a name to must be very stressful an exhausting. Reading and responding to hundreds of comments, often very critical ones, is very exhausting too. And who knows what else is going on in your life at the same time. Yet your comments show none of the exhaustion that I would’ve felt in your situation.
I’d also like to second what Rafael already said!
It seems it’s been a few weeks since most of these discussions happened, so I hope you’ve had a chance to relax and recover in the meantime, or I hope that you’ll have some very soon!
I appreciate that you're thinking about my well-being! While I found it stressful to post this and then read and respond to so many comments, I didn't have much else going on at the time so I did manage to rest a lot. I definitely feel better after having gotten this off my chest.
As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR
[...]
As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.
That sounds to me like you are saying that people who were talking about demons got marginalized. To me that's not a sign of MIRI/CFAR being culty, but what most people would expect from a group of rationalists. It might have been a wrong decision not to take people who talk about demons more seriously to address their issues, but it doesn't match the error type of what's culty.
If I'm misunderstanding what you are saying, can you clarify?
There's an important problem here which Jessica described in some detail in a more grounded way than the "demons" frame:
As a brief model of something similar to this (not necessarily the same model as the Leverage people were using): people often pick up behaviors ("know-how") and mental models from other people, through acculturation and imitation. Some of this influence could be (a) largely unconscious on the part of the receiver, (b) partially intentional on the part of the person having mental effects on others (where these intentions may include behaviorist conditioning, similar to hypnosis, causing behaviors to be triggered under certain circumstances), and (c) overall harmful to the receiver's conscious goals. According to IFS-like psychological models, it's common for a single brain to contain multiple sub-processes with different intentions. While the mental subprocess implantation hypothesis is somewhat strange, it's hard to rule out based on physics or psychology.
If we're confused about a problem like Friendly AI, it's preparadigmatic & therefore most people trying to talk about it are using words wrong. Jessica is reporting a perverse optimization where people are penalized more for talking confusedly about important problems they're confused about, than for simply ignoring the problems.
It seems like one of the problems with ‘the Leverage situation’ is that collectively, we don’t know how bad it was for people involved. There are many key Leverage figures who don’t seem to have gotten involved in these conversations (anonymously or not) or ever spoken publicly or in groups connected to this community about their experience. And, we have evidence that some of them have been hiding their post-Leverage experiences from each other.
So I think making the claim that the MIRI/CFAR related experiences were ‘worse’ because there exists evidence of psychiatric hospitalisation etc is wrong and premature.
And also? I’m sort of frustrated that you’re repeatedly saying that -right now-, when people are trying to encourage stories from a group of people who we might expect to have felt insecure, paranoid, and gaslit about whether anything bad ‘actually happened’ to them.
I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner, which would be more dangerous
Does anyone actually believe and/or want to defend this? I have a strong intuition that public-facing discussion of AI timelines within the rationalist and AI alignment communities is highly unlikely to have a non-negligible effect on AI timelines, especially in comparison to the potential benefit it could have for the AI alignment community being better able to reason about something very relevant to the problem they are trying to solve. (Ditto for probably most but not all topics regarding AGI that people interested in AI alignment may be tempted to discuss publicly.)
I kind of believe this, but it's not a huge effect. I do think that the discussion around short timelines had some effect on the scaling laws research, which I think had some effect on OpenAI going pretty hard on aggressively scaling models, which accelerated progress by a decent amount.
My guess is the benefits of public discussion are still worth more, but given our very close proximity to some of the world's best AI labs, I do think the basic mechanism of action here is pretty plausible.
Your comment makes sense to me as a consideration for someone writing on LW in 2017. It doesn't really make sense to me as a consideration for someone writing on LW in 2021. (The horse has left the barn.) Do you agree?
No, I think the same mechanism of action is still pretty plausible, even in 2021 (attracting more researchers and encouraging more effort to go into blindly-scaling-type research), so I think additional research here could have similar effects. As Gwern has written about extensively, for some reason the vast majority of AI companies are still not taking the scaling hypothesis seriously, so there is lots of room for more AI companies going in on it.
I also think there is a broader reference class of "having important ideas about how to build AGI" (of which the scaling hypothesis is one), that due to our proximity to top AI labs, does seem like it could have a decently sized effect.
As in my comment, I think saying "Timelines are short because the path to AGI is (blah blah)" is potentially problematic in a way that saying "Timelines are short" is not problematic. In particular, it's especially problematic (1) if "(blah blah)" is an obscure line of research, or (2) if "(blah blah)" is a well-known but not widely-accepted line of research (e.g. the scaling hypothesis) AND the post includes new concrete evidence or new good arguments in favor of it.
If neither of those is applicable, then I want to say there's really no problem. Like, if some AI Company Leader is not betting on the scaling hypothesis, not after GPT-2, not after GPT-3, not after everything that Gwern and OpenAI etc. have said about the topic … well, I have a hard time imagining that yet another LW post endorsing the scaling hypothesis would be what tips the balance for them.
I have updated over the years on how many important people in AI read and follow LessWrong and the associated meme-space. I agree marginal discussion does not make a big difference. I also think overall all discussion still probably didn't make enough of a difference to make it net-negative, but it was substantial enough to cause me to think for quite a while on whether it was worth it overall.
I agree with you that the future costs seem marginally lower, but not low enough to make me not think hard and want to encourage others to think hard about the tradeoff. My estimate of the tradeoff came out on the net-positive side, but I wouldn't think it would be crazy for someone's tradeoff to come out on the net-negative side.
Does anyone actually believe and/or want to defend this?
I believe this. For example, one of my benign beliefs in ~2014 was "songs in frequency space are basically just images; you can probably do interesting things in the music space by just taking off-the-shelf image stuff (like style transfer) and doing it on songs."
The first paper doing something similar that I know of came out in 2018. If I had posted about it in 2014, would it have happened sooner? Maybe--I think there's a sort of weird thing going on in the music space where all the people with giant libraries of music want to maintain their relationships with the producers of music, and so there's not much value for them in doing research like this, and so there might be unusually little searching for fruit in that corner of the orchard. But also maybe my idea was bad, or wouldn't really help all of that much, or no one would have done it just because they read it. (I don't think that paper worked in wavelet space, but didn't look too closely.)
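A rough sketch of the frequency-space idea (the file path is a placeholder, and this uses a plain STFT rather than the wavelet space mentioned above):

```python
# Sketch of the "songs in frequency space are basically images" idea:
# turn audio into a spectrogram array, which off-the-shelf image methods
# (style transfer, etc.) could then operate on. "song.wav" is a placeholder.
import numpy as np
import librosa

y, sr = librosa.load("song.wav", sr=22050)                       # waveform
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))          # magnitude spectrogram
S_db = librosa.amplitude_to_db(S, ref=np.max)                    # log-scaled, image-like 2D array

print(S_db.shape)  # (freq_bins, time_frames) -- treat like a grayscale image
```

Going from a modified spectrogram back to audio also needs a phase-reconstruction step (e.g. Griffin-Lim), which is part of why naive attempts in this space tend to sound artifact-heavy.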
I'm much less certain that the net effect is "you shouldn't talk about such things." The more important the consequences of sharing a belief seem to you ("oh, if you just put together X and Y you can build unsafe AGI"), the more important for your models that you're right ("oh, if that doesn't work I think we have five more years").
Some of the mental health issues seem like they might be due to individual people not acting as appropriately as they should, but a lot of it seems to me to be due to the inherent stresses of trying to save the world. And if this is indeed the case, then we should probably have some sort of system in place, or training, to prepare people for these psychological stresses before they dive in.
I started musing on this idea earlier in Preparing For Ambition. In that post I focused on my anxiety as a startup founder, but I think it applies to various fields. For example, recently I came across the following excerpt from My Emotions as CEO:
I felt lonely every day – maybe not constantly, but definitely every day for 9+ years. I haven’t talked to a CEO who didn’t feel extreme loneliness. For the first time in my life I didn’t feel like I could be friends, even work friends, with anyone else on the team. That might have been my own baggage or a consequence of struggling to bring my whole self to work. The loneliness driver I’ve heard of most from other CEOs is the inability to talk with people about the emotional rollercoaster that’s inherent to the role.
It seems like in being a CEO, the...
Maybe at Google or some other corporation you'd have a more pleasant time, because many employees view it as "just putting food on the table", which stabilizes things. It has some bureaucratic and Machiavellian stuff for sure, but to me it feels less psychologically pressuring than having everything be about the mission all the time.
Just for disclosure, I was a MIRI research associate for a short time, long ago, remotely, and the experience mostly just passed me by. I only remember lots of email threads about AI strategy, nothing about psychology. There was some talk about having secret research, but when joining I said that I wouldn't work on anything secret, so all my math / decision theory stuff is public on LW.
FYI - Geoff will be talking about the history of Leverage and related topics on Twitch tomorrow (Saturday, October 23rd 2021) starting at 10am PT (USA West Coast Time). Apparently Anna Salamon will be joining the discussion as well.
Geoff's Tweet
Text from the Tweet (for those who don't use Twitter):
"Hey folks — I'm going live on Twitch, starting this Saturday. Join me, 10am-1pm PT:
twitch.tv/geoffanders
This first stream will be on the topic of the history of my research institute, Leverage Research, and the Rationality community, with @AnnaWSalamon as a guest."
None of the arguments in this post seem as if they actually indict anything about MIRI or CFAR. The first claim, that CFAR/MIRI somehow motivated 4 suicides, provides no evidence that CFAR is unique in this regard or conducive to this kind of outcome, and it seems like a bizarre framing of events: stories about someone committing suicide out of suspicion over the post office's nefarious agenda generally aren't seen as an issue on the part of the postal service.
Additionally, the focus on Roko's-Basilisk-esque "info hazards" as a part of MIRI/CFAR reduces the credibility of this point, seeing as the original basilisk thought experiment was invented as a criticism of SIAI, and according to every LDT the basilisk has no incentive to actually carry out any threats. The second part is even weaker: it essentially posits, without argument, that the formation of a conspiracy mindset is a foreseeable hazard of one's coworkers disagreeing with them on something important for possibly malevolent reasons and of there being secrecy in a workplace. The point about how someone other than CFAR calling the police on CFAR-opposed people who were doing something illegal t...
Agree or disagree: "There may be a pattern wherein rationalist types form an insular group to create and apply novel theories of cognition to themselves, and it gets really weird and intense leading to a rash of psychological breaks."
As someone who is pretty much an outsider to this community, I think it is interesting that a major drive for many people in this community seems to be tackling the most important problems in the world. I am not saying it is a bad thing, I am just surprised. In my case, I work in academia not so much because of the impact I can have working here, but mainly because it allows me to have a more balanced life with a flexible time schedule.
Thanks. This puts the social dynamics at play in a different light for me - or rather it takes things I had heard about but not understood and puts them in any kind of light at all.
I am liking the AI Insights writeup so far.
I feel a strong sympathy for people who think they are better philosophers than Kant.
everything I knew about how to be hired would point towards having little mental resistance to organizational narratives
Can you elaborate a little on this?
At university, for example, you'll generally get a better grade if you let the narrative you're being told be the basic structure of your thinking, even if you have specific disagreements in places where you have specific evidence. In Rao's terminology, people who are Clueless are hired for, in an important sense, actually believing the organizational narrative at some level (even if there is some amount of double-think), and for being manipulable by others around them who are maintaining the simulation.
If I showed too much disagreement with the narrative without high ability to explain myself in terms of the existing narrative, it would probably have seemed less desirable to hire me.
It seems to me like MIRI hiring, especially of researchers in 2015-2017, but also in general, reliably produced hires with a certain philosophical stance (i.e. people who like UDASSA, TDT, etc.) and people with a certain kind of mathematical taste (i.e. people who like reflective oracles, Löb's theorem, Haskell, etc.).
I think that it selects pretty strongly for the above properties, and doesn't have much room for "little mental resistance to organizational narratives" (beyond any natural correlations).
I think there is also some selection on trustworthiness (e.g. following through with commitments) that is not as strong as the above selection, and that trustworthiness is correlated with altruism (and the above philosophical stance).
I think that altruism, ambition, timelines, agreement about the strategic landscape, agreement about probability of doom, little mental resistance to organizational narratives, etc. are/were basically rounding errors compared to selection on philosophical competence, and thus, by proxy, philosophical agreement (specifically a kind of philosophical agreement that things like agreement about timelines is not a good proxy for).
(Later on, there was probably mo...
I will not be offended by a comment predicting that I believe this largely because of “little mental resistance to organizational narratives”, even if the comment has no further justification.
This isn't a full answer, but I suspect you believe this largely because you don't know what someone as smart as you who doesn't have "little mental resistance to organizational narratives" looks like, because mostly you haven't met them. They kind of look like very smart crazy people.
Hmm, so this seems plausible, but in that case, it seems like the base rate for "little mental resistance to organizational narratives" is very low, and the story should not be "Hired people probably have little mental resistance because they were hired" but should instead be "Hired people probably have little mental resistance because basically everyone has little mental resistance." (These are explanatory uses of "because", not causal uses.)
This second story seems like it could be either very true or very false, for different values of "little", so it doesn't seem like it has a truth value until we operationalize "little."
Even beyond the base rates, it seems likely that a potential hire could be dismissed because they seem crazy, including at MIRI, but I would predict that MIRI is pretty far on the "willing to hire very smart crazy people" end of the spectrum.
It seems quite possible to me that the philosophical stance + mathematical taste you're describing aren't "natural kinds" (e.g. the topics you listed don't actually have a ton in common, besides being popular MIRI-sphere topics).
So, I believe that the philosophical stance is a natural kind. I can try to describe it better, but note that I won't be able to point at it perfectly:
I would describe it as "taking seriously the idea that you are a computation[Edit: an algorithm]." (As opposed to a collection of atoms, or a location in spacetime, or a Christian soul, or any number of other things you could identify with.)
I think that most of the selection for this philosophical stance happens not in MIRI hiring, but instead in being in the LW community. I think that the sequences are actually mostly about the consequences of this philosophical stance, and that the sequences pipeline is largely creating a selection for this philosophical stance.
One can have this philosophical stance without a bunch of math ability, (many LessWrongers do) but when the philosophical stance is combined with math ability, it leads to a lot of agreement in taste in math-philosophy models, which is wh...
I agree that the phrase "taking seriously the idea that you are a computation" does not directly point at the cluster, but I still think it is a natural cluster. I think that computational neuroscience is in fact high up on the list of things I expect less wrongers to be interested in. To the extent that they are not as interested in it as other things, I think it is because it is too hard to actually get much that feels like algorithmic structure from neuroscience.
I think that the interest in anthropics is related to the fact that computations are the kind of thing that can be multiply instantiated. I think logic is a computation-like model of epistemics. I think that Haskell is not really that much about this philosophy, and is more about mathematical elegance. (I think that liking elegance/simplicity is mostly different from the "I am a computation" philosophy, and is also selected for at MIRI.)
I think that a lot of the sequences (including the first and third and fourth posts in your list) are about thinking about the computation that you are running in contrast and relation to an ideal (AIXI-like) computation.
I think that That alien message is directly about getting the read...
I notice I like "you are an algorithm" better than "you are a computation", since "computation" feels like it could point to a specific instantiation of an algorithm, and I think that algorithm as opposed to instantiation of an algorithm is an important part of it.
It sounds like you're saying that at MIRI, you approximate a potential hire's philosophical competence by checking to see how much they agree with you on philosophy. That doesn't seem great for group epistemics?
I did not mean to imply that MIRI does this any more than e.g. philosophy academia.
When you don't have sufficient objective things to use to judge competence, you end up having to use agreement as a proxy for competence. This is because when you understand a mistake, you can filter for people who do not make that mistake, but when you do not understand a mistake you are making, it is hard to filter for people that do not make that mistake.
Sometimes, you interact with someone who disagrees with you, and you talk to them, and you learn that you were making a mistake that they did not make, and this is a very good sign for competence, but you can only really get this positive signal about as often as you change your mind, which isn't often.
Sometimes, you can also disagree with someone, and see that their position is internally consistent, which is another way you can observe some competence without agreement.
I think that personally, I use a proxy that is somet...
If that's the case, selecting for people with the described philosophical stance + mathematical taste could basically be selecting for "people with little resistance to MIRI's organizational narrative"
So, I do think that MIRI hiring does select for people with "little resistance to MIRI's organizational narrative," through the channel of "You have less mental resistance to narratives you agree with" and "You are more likely to work for an organization when you agree with their narrative."
I think that additionally people have a score on "mental resistance to organizational narratives" in general, and was arguing that MIRI does not select against this property (very strongly). (Indeed, I think they select for it, but not as strongly as they select for philosophy). I think that when the OP was thinking about how much to trust her own judgement, this is the more relevant variable, and the variable they were referring to.
I don't want to speak for/about MIRI here, but I think that I personally do the "patting each other on the back for how right we all are" more than I endorse doing it. I think the "we" is less likely to be MIRI, and more likely to be a larger group that includes people like Paul.
I agree that it would be really really great if MIRI can interact with and learn from different views. I think mostly everyone agrees with this, and has tried, and in practice, we keep hitting "inferential distance" shaped walls, and become discouraged, and (partially) give up. To be clear, there are a lot of people/ideas where I interact with them and conclude "There probably isn't much for me to learn here," but there are also a lot of people/ideas where I interact with them and become sad because I think there is something for me to learn there, and communicating across different ontologies is very hard.
I agree with your bullet points descriptively, but they are not exhaustive.
I agree that MIRI has a strong (statistical) bias towards things that were invented internally. It is currently not clear to me how much of this statistical bias is also a mistake vs. the correct reaction to how much internally invented things seem to fit our needs, and how hard it is to find the good stuff that exists externally when it exists. (I think there are a lot of great ideas out there that I really wish I had, but I don't have a great method for filtering for them in the sea of irrelevant stuff.)
I agree that MIRI has a strong (statistical) bias towards things that were invented internally. It is currently not clear to me how much of this statistical bias is also a mistake vs. the correct reaction to how much internally invented things seem to fit our needs, and how hard it is to find the good stuff that exists externally when it exists. (I think there are a lot of great ideas out there that I really wish I had, but I don't have a great method for filtering for them in the sea of irrelevant stuff.)
Strong-upvoted for this paragraph in particular, for pointing out that the strategy of "seeking out disagreement in order to learn" (which obviously isn't how hg00 actually worded it, but seems to me descriptive of their general suggested attitude/approach) has real costs, which can sometimes be prohibitively high.
I often see this strategy contrasted with a group's default behavior, and when this happens it is often presented as [something like] a Pareto improvement over said default behavior, with little treatment (or even acknowledgement) given to the tradeoffs involved. I think this occurs because the strategy in question is viewed as inherently virtuous (which in turn I fundamentally see as a consequence of epistemic learned helplessness run rampant, leaking past the limits of any particular domain and seeping into a general attitude towards anything considered sufficiently "hard" [read: controversial]), and attributing "virtuousness" to something often has the effect of obscuring the real costs and benefits thereof.
So I think my orientation on seeking out disagreement is roughly as follows. (This is going to be a rant I write in the middle of the night, so might be a little incoherent.)
There are two distinct tasks: 1) generating new useful hypotheses/tools, and 2) selecting between existing hypotheses / filtering out bad hypotheses.
There are a bunch of things that make people good at both these tasks simultaneously. Further, each of these tasks is partially helpful for doing the other. However, I still think of them as mostly distinct tasks.
I think skill at these tasks is correlated in general, but possibly anti-correlated after you filter on enough g correlates, in spite of the fact that they are each common subtasks of the other.
I don't think this (anti-correlated given g) very confidently, but I do think it is good to track your own and others' skill in the two tasks separately, because it is possible to have very different scores (and because judging generators on reliability might, as a side effect, make them less generative out of fear of being wrong, and similarly vice versa).
I think that seeking out disagreement is especially useful for the selection task, and l...
So, my model is that "epistemic learned helplessness" essentially stems from an inability to achieve high confidence in one's own (gears-level) models. Specifically, by "high confidence" here I mean a level of confidence substantially higher than one would attribute to an ambient hypothesis in a particular space--if you're not strongly confident that your model [in some domain] is better than the average competing model [in that domain], then obviously you'd prefer to adopt an exploration-based strategy (that is to say: one in which you seek out disagreeing hypotheses in order to increase the variance of your information intake) with respect to that domain.
I think this is correct, so far as it goes, as long as we are in fact restricting our focus to some domain or set of domains. That is to say: as humans, naturally it's impossible to explore every domain in sufficient depth that we can form and hold high confidence in gears-level model for said domain, which in turn means there will obviously be some domains in which "epistemic learned helplessness" is simply the correct attitude to take. (And indeed, the original blog post in which Scott introduced the concept of "epistemic learn...
Thanks for the reply.
But it seems like maybe you're proposing that people self-deceive in order to get themselves confident enough to explore the ramifications of a particular hypothesis. I think we should be a bit skeptical of intentional self-deception.
I want to clarify that this is not my proposal, and to the extent that it had been someone's proposal, I would be approximately as wary about it as you are. I think self-deception is quite bad on average, and even on occasions when it's good, that fact isn't predictable in advance, making choosing to self-deceive pretty much always a negative expected-value action.
The reason I suspect you interpreted this as my proposal is that you're speaking from a frame where "confidence in one's model" basically doesn't happen by default, so to get there people need to self-deceive, i.e. there's no way for someone [in a sufficiently "hard" domain] to have a model and be confident in that model without doing [something like] artificially inflating their confidence higher than it actually is.
I think this is basically false. I claim that having (real, not artificial) confidence in a given model (even of something "hard") is entirely possible, ...
As a college professor who has followed from physically afar the rationality community from the beginning, here are my suggestions:
Illegal drugs are, on average, very bad. How about a policy that if you use illegal drugs you are presumptively considered not yet good enough to be in the community?
The risk profile of a drug isn't correlated with its legal status, largely because our current drug laws were created for political purposes in the 1970s. A quote from Nixon advisor John Ehrlichman:
“The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people. You understand what I’m saying? We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.”
A 2010 analysis concluded that psychedelics are causing far less harm than legal drugs like alcohol and tobacco. (Psychedelics still carry substantial risks, aren't for everybody, and should always be handled with care.)
A 2010 analysis concluded that psychedelics are causing far less harm than legal drugs like alcohol and tobacco. (Psychedelics still carry substantial risks, aren't for everybody, and should always be handled with care.)
? This is total harm, not per use. More people die of car crashes than from rabid wolves, but I still find myself more inclined to ride cars than ride rabid wolves as a form of transportation.
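To spell out the base-rate point (generic symbols, not figures from the cited analysis):

```latex
% Total harm conflates risk with popularity; the comparison people usually
% care about normalizes by exposure.
\text{risk per user} = \frac{\text{total harm attributed to the substance}}{\text{number of users}},
\qquad
\text{risk per use} = \frac{\text{total harm}}{\text{total occasions of use}}
```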
I'm confused why there were ~40 comments in this subthread without anybody else pointing out this pretty glaring error of logical inference (unless I'm misunderstanding something)
The World Health Organization has estimated that in 2016, one in twenty deaths worldwide was caused by alcohol. Smoking has been estimated to take ten years off your life. Consequently, psychedelics can be horrible and still not as bad as alcohol and tobacco.
Consequently, psychedelics can be horrible and still not as bad as alcohol and tobacco.
They could be, but current evidence shows that psychedelic-assisted therapy is efficacious for PTSD, depression, end-of-life anxiety, smoking cessation, and probably alcoholism.
Psychedelic experiences have been rated as extremely meaningful by healthy volunteers [1, 2], and psychedelic use is associated with decreased psychological distress and suicidality in population surveys.
Look, all experiences take place in the mind, in a very real way that's not just a clever conversational trick.
So whatever your most meaningful and spiritually significant moment, it's going to be "in your head."
But on a set of very reasonable priors, we would expect your most meaningful and spiritually significant head-moment to be correlated with and causally linked to some kind of unusual thing happening outside your head. An activity, an interaction with other people, a novel observation.
Sometimes, a therapist says a few words, and a person has an internal cascade of thoughts and emotions and everything changes, and we wouldn't blink too hard at the person saying that moment was their most meaningful and spiritually significant.
It's not that the category of "just sitting there quietly thinking thoughts" is suspect.
And indeed, with the shakeup stimulus of a psychedelic, it's reasonable to imagine that people would successfully produce just such a cascade, some of the time.
But like ...
"Come on"?
The preconditions for the just-sitting-there-with-the-therapist moment to be so impactful are pretty substantial. Someone has to have been all twisted up inside, and confused, ...
But on a set of very reasonable priors, we would expect your most meaningful and spiritually significant head-moment to be correlated with and causally linked to some kind of unusual thing happening outside your head. An activity, an interaction with other people, a novel observation.
This doesn't feel plausible at all to me. (This is one of two key places where I disagree with your framing)
Like, this is a huge category: "experiences that don't involve anything unusual happening around you." It includes virtually all of the thinking we do -- especially the kind of thinking that demands concentration. For most (all?) of us, it includes moments of immense terror and immense joy. Fiction writers commonly spend many hours in this state, "just sitting there" and having ideas and figuring out how they fit together, before they ever commit a single word of those ideas to (digital) paper. The same goes for artists of many other kinds. This is where theorems are proven, where we confront our hidden shames and overcome them, (often) where we first realize that we love someone, or that we don't love someone, or . . .
The other place where I disagree wit...
a) a supermajority of people have the precursors for the just-sitting-there-with-the-therapist moment, or something substantively similar, such that taking the drug allows them to reshuffle all the pieces and make an actual breakthrough
I think that there are structures in the human mind that tend to generate various massive blind spots by default (some of them varying between people, some of them as close to universal as anything in human minds ever is), so I would consider the "a supermajority of people have the precursors for the just-sitting-there-with-the-therapist moment, or something substantively similar" hypothesis completely plausible even if nobody had ever done any drugs and we didn't have any evidence suggesting that drugs might trigger any particular insights.
A weak datapoint would be that out of the ~twelve people I've facilitated something-like-IFS for, at least five have reported it being a significantly meaningful experience based on just a few sessions (in some cases just one), even if not the most meaningful in their life. And I'm not even among the most experienced or trained IFS facilitators in the world.
Also some of people's trip reports do sound like the kind of thing that you might get from deep enough experiential therapy (IFS and the like; thinking of personal psychological insights more than the 'contact with God' stuff).
Upvoted, but I would posit that there's an enormous filter in place before Kaj encounters these twelve people and they ask him to facilitate them in something-like-IFS.
I find the supermajority hypothesis weakly plausible. I don't think it's true, but would not be really surprised to find out that it is.
(a) seems implied by Thoreau's opinion, which a lot of people reported finding plausible well before psychedelics, so it's not an ad hoc hypothesis:
The mass of men lead lives of quiet desperation. What is called resignation is confirmed desperation. From the desperate city you go into the desperate country, and have to console yourself with the bravery of minks and muskrats. A stereotyped but unconscious despair is concealed even under what are called the games and amusements of mankind.
A lot of recent philosophers report that people are basically miserable, and psychiatry reports that a lot of people have diagnosable anxiety or depression disorders. This seems consistent with (a).
This is also consistent with my impression, and with the long run improvements in depression - it seems like for a lot of people psychedelics allow them to become conscious of ways they were hurting themselves and living in fear / conflict.
In my personal and anecdotal experience, for the people who have a positive experience with psychedelics it really is more your 'a' option.
Psychedelics are less about 'thinking random thoughts that seem meaningful' and more about what you describe there - reflecting on their actual life and perspectives with a fresh/clear/different perspective.
I suppose one hypothesis here is that having a kid is dangerously mind warping on the same level as psychedelics.
I guess it depends whether you care about evolution's goals or your own. If the way that evolution did it was to massively change what you care about/what's meaningful after you have children, then it seems it did it in a way that's mind warping.
If people who use psychedelics should be considered not yet good enough for the community, and alcohol and tobacco are worse than psychedelics, does that mean people who use alcohol or tobacco should also be considered not yet good enough for the community?
Most of the wildly successful people that exist in the western world today display current, or displayed prior, 'willingness to violate drug laws'.
This would seem to be a good argument for not paying taxes or helping the US government, or in particular an argument for excluding employees of the FBI, CIA, and DEA, since they are the institutions that have engaged in active violence to cause and perpetuate this situation. It doesn't seem like a plausible argument that it's wrong to take illegal drugs, except in the "there is no ethical consumption under capitalism" sense.
I have to say, your extreme/rigid opposition to any form of whatever you're currently defining as 'illegal drugs' reminds me of religious people who have similarly rigid and uncompromising views on things.
Ironically, this also seems to me to be antithetical to rationality...
Dusting off this old account of mine just to say I told you so.
Now, some snark:
"Leverage is a cult!"
"No, MIRI/CFAR is a cult!"
"No, the Vassarites are a cult!"
"No, the Zizians are a cult!"
Scott: if you believe that people have auras that can implant demons into your mind then you're clearly insane and you should seek medical help.
Also Scott: beware this charismatic Vassar guy, he can give you psychosis!
Scott 2015: Universal love, said the cactus person
Scott 2016: uncritically signal-boosts Aella talking about her inordinate drug use.
Scott 2018: promote...
Scott: if you believe that people have auras that can implant demons into your mind then you're clearly insane and you should seek medical help.
Also Scott: beware this charismatic Vassar guy, he can give you psychosis!
These so obviously aren't the same thing- what's your point here? If just general nonsense snark, I would be more inclined to appreciate it if it weren't masquerading as an actual argument.
People do not have auras that implant demons into your mind, and alleging so is... I wish I could be more measured somehow. But it's insane and you should probably seek medical help. On the other hand, people who are really charismatic can in fact manipulate others in really damaging ways, especially when combined with drugs etc. These are both simultaneously true, and their relationship is superficial.
Scott 2015: Universal love, said the cactus person
Scott 2016: uncritically signal-boosts Aella talking about her inordinate drug use.
Scott 2018: promotes a scamcoin by Aella and Vinay Gupta, a differently sane tech entrepreneur-cum-spiritual guru, who apparently burned his brain during a “collaborative celebration” session.
Personally, when I read the cactus person thing I thought it wa...
I enjoyed it (and upvoted) for humor plus IMO having a point. Humor is great after a thread this long.
I appreciate the pointing out of apparent inconsistency but feel the humor is kind of mean-spirited/attacky, which maybe we should have some amount of. I wouldn't want to see comments trending in this direction of snark too much.
I didn't vote either way.
I was gonna weak-upvote because I enjoyed the sass, but then the number of false/misleading claims got too high for me and I downvoted. Scott practically has a sequence about why he's wary of psychedelics (and 'Universal Love' is sort of part of that sequence, riffing on the question of secret unverifiable revelations), and vV_Vv could have mentioned that!
Being able to laugh at yourself, and at criticism of yourself, is a mark of mental health. I am happy this community still has it, especially in the context of discussing cultishness, suppression of criticism, mental health, etc.
Yeah, but I don't see how you get from there to "therefore, we should invite/promote/incentivize unfair criticism". And we definitely don't do this in general, so there has to be something special about vV_Vv's comment. I guess it's probably the humor that I'm honestly not seeing in this case. The comment just seems straightforwardly spiteful to me.
My interpretation of the Cactus Person post is that it was a fictionalized account of personal experiences and an expression of frustration about not being able to gather any real knowledge out of them, which is therefore entertained as a reasonable hypothesis to have in the first place. If I'm mistaken then I apologize to Scott; however, the post is ambiguous enough that I'm likely not the only person to have interpreted it this way.
He also wrote one post about the early psychedelicists that ends with "There seems to me at least a moderate chance that [psychedelics] will make you more interesting without your consent – whether that is a good or a bad thing depends on exactly how interesting you want to be.", and he linked to Aella describing her massive LSD use, which he captioned as "what happens when you take LSD once a week for a year?" (it should have been "what happens when this person takes LSD once a week for a year; don't try this at home, or you might end up in a padded cell or a coffin").
I've never interacted with the rationalist community IRL, and in fact for the last 5 or so years my exposure to them was mostly through SSC/ACX + the occasional tweet from rat-adjacent...
No offense, but the article you linked is quite terrible because it compares total deaths while completely disregarding the base rates of use. By the same logic, cycling is more dangerous than base jumping.
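To make the base-rate point concrete, here is a toy calculation with entirely made-up numbers (not real statistics for any drug or activity), just to show why raw death counts without usage rates mislead:

```python
# Hypothetical figures, chosen only to illustrate the base-rate point.
deaths = {"widely_used_activity": 1000, "rarely_used_activity": 50}
users = {"widely_used_activity": 10_000_000, "rarely_used_activity": 5_000}

for name in deaths:
    per_user_risk = deaths[name] / users[name]
    print(f"{name}: {deaths[name]} deaths, {users[name]:,} users, "
          f"risk per user = {per_user_risk:.4f}")

# widely_used_activity: more total deaths, but risk per user = 0.0001
# rarely_used_activity: far fewer deaths, but risk per user = 0.01 (100x higher)
```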
This said, yes, some drugs are more dangerous than others, but good policies need to be simple, unambiguous and easy to enforce. A policy of "no illegal drugs" satisfies these criteria, while a policy of "do your own research and use your own judgment" in practice means "junkies welcome".
Technically, yes.
On the meta level, this "hey, not all drugs are bad, I can find some research online, and decide which ones are safe" way of thinking seems like what gave us the problem.
I know there are serious problems at other EA organizations, which produce largely fake research (and probably took in people who wanted to do real research, who became convinced by their experience to do fake research instead), although I don't know the specifics as well. EAs generally think that the vast majority of charities are doing low-value and/or fake work.
Do I understand correctly that here by "fake" you mean low-value or only pretending to be aimed at solving the most important problems of the humanity, rather than actual falsifications going on, publishing false data, that kind of thing?
As an example of the difficulties in illusions of transparency, when I first read the post, my first interpretation of "largely fake research" was neither what you said nor what jessicata clarified below; I simply assumed that "fake research" => "untrue," in the sense that people who updated on >50% of the research from those orgs will on average have a worse Brier score on related topics. This didn't seem unlikely to me on the face of it, since random error, motivated reasoning, and other systemic biases can all contribute to having bad models of the world.
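For readers who want the Brier-score reading made explicit: it is just mean squared error on probabilistic forecasts, so "worse Brier score" means "less accurate probabilistic beliefs". A minimal sketch with made-up forecasts and outcomes:

```python
# Illustrative only; the forecasts and outcomes below are invented.
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1]
print(brier_score([0.7, 0.3, 0.8, 0.6], outcomes))  # 0.095  (more accurate forecaster)
print(brier_score([0.9, 0.6, 0.4, 0.3], outcomes))  # 0.305  (worse, i.e. less accurate)
```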
Since 3 people can have 4 different interpretations of the same phrase, this makes me worried that there are many other semantic confusions I didn't spot.
While other metrics might show a change, if collected carefully, I think all we know at this point is that no one has done that research? Which is very different from saying that we do know that there is no effect on health?
Neither Jessica nor I said there was no effect on health. It seems like maybe we agree that there was no clearly significant, actually measured effect on long-run health. And GiveWell's marketing presents its recommendations as reflecting a justified high level of epistemic confidence in the benefit claims of its top charities.
We know that people have looked for long-run effects on health and failed to find anything more significant than the levels that routinely fail replication. With an income effect that huge attributable to health, I'd expect a huge, p<.001 improvement in some metric like reaction times or fertility, or a reduction in the incidence of some well-defined, easy-to-measure malnutrition-related disease.
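As a rough illustration of that expectation (with invented effect sizes and sample sizes, not the actual deworming data), a quick simulation shows that a genuinely large health effect should clear p < .001 almost every time in a decently sized study, while a tiny one almost never would:

```python
# Toy power simulation; effect sizes and sample sizes are assumptions, not estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power_at_p001(effect_size_sd, n_per_arm, trials=1000):
    """Fraction of simulated two-arm studies where a true standardized effect
    of `effect_size_sd` is detected at p < .001."""
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(effect_size_sd, 1.0, n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        hits += p < 0.001
    return hits / trials

print(power_at_p001(0.3, n_per_arm=2000))   # ~1.0: a large effect is essentially always detected
print(power_at_p001(0.02, n_per_arm=2000))  # near zero: a tiny effect almost never clears p < .001
```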
Worth noting that antibiotics (in a similar epistemic reference class to dewormers for reasons I mentioned above) are used to fatten livestock, so we should end up with some combination of:
Neither Jessica nor I said there was no effect on health
I had read "GiveWell's analysis found that, while there was a measurable positive effect on income, there wasn't one on health metrics" as "there was an effect on income that was measurable and positive, but there wasn't an effect on health metrics". Rereading, I think that's probably not what Jessica meant, though? Sorry!
It feels kind of weird that this post only has 50 upvotes and is hidden in the layers of LessWrong like some skeleton in the closet waiting to strike at an opportune time. A lot of big names commented on this post, and even though it's not entirely true and misrepresents what happened to an extent, it would make sense to promote this type of post anyway. It sets a bad example if we don't promote it, since we then show that we don't encourage criticism, which seems very anti-rational. Maybe a summary article of this incident could be written and put on the main website? It doesn't make sense to me that a post with a whopping 900 comments should be this hidden, and it sure doesn't look good from an outside perspective.
Note that the post had over 100 karma and then lost over half of it, probably because substantial criticism emerged in the comments. I've never seen that kind of a shift happen before, but it seems to show that people are thoughtful with their upvotes.
50 upvotes is more than the average post on LessWrong gets. If someone wants to write a summary of Jessica's post and the 900 comments, I think there's a good chance that it will be well received.
Part of why this post doesn't get more comments is that it's not just criticism but was perceived as trying to interfere with the Leverage debate. If the references to Zoe and Leverage weren't in this post, it would likely be better received.
If someone has something new to say in a top-level post, they can say it; I would guess someone will make such a post in the next month or two. I don't think any top-down action is necessary, beyond people's natural interest in discussion.
Also—I would hardly call a post "hidden" if it has accrued 900 comments. It's been in "recently commented" almost the entire time since its posting, it was on the front page for several days, before naturally falling off due to lower karma + passage of time.
Personally, I think it's good that people are starting to talk about other things. I don't find this interesting enough to occupy weeks of community attention.
This is very insightful and matches my personal experience and the experiences of some friends:
Sociology is less inherently subjective and meta than psychology, having intersubjectively measurable properties such as events in human lifetimes and social network graph structures.
I have not done too much meditation myself, but some friends who've gone very deep into that rabbit hole reported that too much meta-cognition made them hyperaware to an unhealthy extent.
I have noticed myself oscillating between learning how to make my cognition more effective (intro...
Thank you SO MUCH for writing this.
The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with. Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.
I think this is so well put and important.
I think that your fear of extreme rebuke from publishing this stuff is obviously reasonable when dealing with a group that believes itse...
I think most of LW believes we should not risk ostracizing a group (with respect to the rest of the world) that might save the world, by publicizing a few broken eggs. If that's the case, much discussion is completely moot. I personally kinda think that the world's best shot is the one where MIRI/CFAR type orgs don't break so many eggs. And I think transparency is the only realistic mechanism for course correction.
FWIW, I (former MIRI employee and current LW admin) saw a draft of this post before it was published, and told jessicata that I thought she should publish it, roughly because of that belief in transparency / ethical treatment of people.
Though I don't remember people saying explicitly that Eliezer Yudkowsky was a better philosopher than Kant, I would guess many would have said so.
Reminds me of a Yudkowsky quote:
Science isn't fair. That's sorta the point. An aspiring rationalist in 2007 starts with a huge advantage over an aspiring rationalist in 1957. It's how we know that progress has occurred.
To me the thought of voluntarily embracing a system explicitly tied to the beliefs of one human being, who's dead, falls somewhere between the silly and the suicidal.
So it's...
Scott Aaronson, for example, blogs about "blank faced" non-self-explaining authoritarian bureaucrats being a constant problem in academia. Venkatesh Rao writes about the corporate world, and the picture presented is one of a simulation constantly maintained through improv.
Well, I once met a person in academia who was convinced she'd be utterly bored anywhere outside academia.
If you want an unbiased perspective on what life is like outside the rationality community, you should talk to people not associated with the rationality community. (Yes, ...
I wouldn't recommend psychedelics to anyone. If I had the choice to have never taken them, I would've chosen that over where I am today. You learn quite a bit about reality and your own life, but at the end of the day, it's not really going to help you find meaning in life that would ultimately be healthy for your own good as a mortal being. For me, things just seem quite meaningless. Before, life was still enjoyable in its own way. They threaten me with more good times, yet I can't see how my life would be any different after that point. Oh ma...
I don't think psychedelics really do much for most people. I think those who say they have been fundamentally altered by them most likely had a preconceived notion/prior before getting into the whole spiel. It's just a means to an end for them. Believing that psychedelics would change you fundamentally made it easier for them to give in to the notion that they've fundamentally changed as a result of taking psychedelics, rather than the psychedelics being part of the entire psychological journey they are going through, regardless of whether psychedelics were in...
You seem to be claiming that without somebody giving you suggestions, people would not think of psychedelic trips as something special.
Well, as the discoverer of the substance, Hofmann surely did not have any preconceptions: the first time he was exposed to LSD it was an accident, and he had no idea of its psychedelic properties.
His account is freely available online here: https://www.hallucinogens.org/hofmann/child1.htm
A quote where he describes the second exposure, which was an intentional experiment: "This self-experiment showed that LSD-25 behaved as a psychoactive substance with extraordinary properties and potency. There was to my knowledge no other known substance that evoked such profound psychic effects in such extremely low doses, that caused such dramatic changes in human consciousness and our experience of the inner and outer world."
I appreciate Zoe Curzi's revelations of her experience with Leverage. I know how hard it is to speak up when no or few others do, and when people are trying to keep things under wraps.
I haven't posted much publicly about my experiences working as a researcher at MIRI (2015-2017) or around CFAR events, to a large degree because I've been afraid. Now that Zoe has posted about her experience, I find it easier to do so, especially after the post was generally well-received by LessWrong.
I felt moved to write this, not just because of Zoe's post, but also because of Aella's commentary:
This seemed to me to be definitely false, upon reading it. Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019).
I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases. I also caution against blame in general, in situations like these, where many people (including me!) contributed to the problem, and have kept quiet for various reasons. With good reason, it is standard for truth and reconciliation events to focus on restorative rather than retributive justice, and include the possibility of forgiveness for past crimes.
As a roadmap for the rest of the post, I'll start by describing some background, describe some trauma symptoms and mental health issues I and others have experienced, and describe the actual situations that these mental events were influenced by and "about" to a significant extent.
Background: choosing a career
After I finished my CS/AI Master's degree at Stanford, I faced a choice of what to do next. I had a job offer at Google for machine learning research and a job offer at MIRI for AI alignment research. I had also previously considered pursuing a PhD at Stanford or Berkeley; I'd already done undergrad research at CoCoLab, so this could have easily been a natural transition.
I'd decided against a PhD on the basis that research in industry was a better opportunity to work on important problems that impact the world; since then I've gotten more information from insiders that academia is a "trash fire" (not my quote!), so I don't regret this decision.
I was faced with a decision between Google and MIRI. I knew that at MIRI I'd be taking a pay cut. On the other hand, I'd be working on AI alignment, an important problem for the future of the world, probably significantly more important than whatever I'd be working on at Google. And I'd get an opportunity to work with smart, ambitious people, who were structuring their communication protocols and life decisions around the content of the LessWrong Sequences.
These Sequences contained many ideas that I had developed or discovered independently, such as functionalist theory of mind, the idea that Solomonoff Induction was a formalization of inductive epistemology, and the idea that one-boxing in Newcomb's problem is more rational than two-boxing. The scene attracted thoughtful people who cared about getting the right answer on abstract problems like this, making for very interesting conversations.
Research at MIRI was an extension of such interesting conversations to rigorous mathematical formalism, making it very fun (at least for a time). Some of the best research I've done was at MIRI (reflective oracles, logical induction, others). I met many of my current friends through LessWrong, MIRI, and the broader LessWrong Berkeley community.
When I began at MIRI (in 2015), there were ambient concerns that it was a "cult"; this was a set of people with a non-mainstream ideology that claimed that the future of the world depended on a small set of people that included many of them. These concerns didn't seem especially important to me at the time. So what if the ideology is non-mainstream as long as it's reasonable? And if the most reasonable set of ideas implies high impact from a rare form of research, so be it; that's been the case at times in history.
(Most of the rest of this post will be negative-valenced, like Zoe's post; I wanted to put some things I liked about MIRI and the Berkeley community up-front. I will be noting parts of Zoe's post and comparing them to my own experience, which I hope helps to illuminate common patterns; it really helps to have an existing different account to prompt my memory of what happened.)
Trauma symptoms and other mental health problems
Back to Zoe's post. I want to disagree with a frame that says that the main thing that's bad was that Leverage (or MIRI/CFAR) was a "cult". This makes it seem like what happened at Leverage is much worse than what could happen at a normal company. But, having read Moral Mazes and talked to people with normal corporate experience (especially in management), I find that "normal" corporations are often quite harmful to the psychological health of their employees, e.g. causing them to have complex PTSD symptoms, to see the world in zero-sum terms more often, and to have more preferences for things to be incoherent. Normal startups are commonly called "cults", with good reason. Overall, there are both benefits and harms of high-demand ideological communities ("cults") compared to more normal occupations and social groups, and the specifics matter more than the general class of something being "normal" or a "cult", although the general class affects the structure of the specifics.
Zoe begins by listing a number of trauma symptoms she experienced. I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break.
The psychotic break was in October 2017, and involved psychedelic use (as part of trying to "fix" multiple deep mental problems at once, which was, empirically, overly ambitious); although people around me to some degree tried to help me, this "treatment" mostly made the problem worse, so I was placed in 1-2 weeks of intensive psychiatric hospitalization, followed by 2 weeks in a halfway house. This was followed by severe depression lasting months, and less severe depression from then on, which I still haven't fully recovered from. I had PTSD symptoms after the event and am still recovering.
During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. I was catatonic for multiple days, afraid that by moving I would cause harm to those around me. This is in line with scrupulosity-related post-cult symptoms.
Talking about this is to some degree difficult because it's normal to think of this as "really bad". Although it was exceptionally emotionally painful and confusing, the experience taught me a lot, very rapidly; I gained and partially stabilized a new perspective on society and my relation to it, and to my own mind. I have much more ability to relate to normal people now, who are, for the most part, also traumatized.
(Yes, I realize how strange it is that I was more able to relate to normal people by occupying an extremely weird mental state where I thought I was destroying the world and was ashamed and suicidal regarding this; such is the state of normal Americans, apparently, in a time when suicidal music is extremely popular among youth.)
Like Zoe, I have experienced enormous post-traumatic growth. To quote a song, "I am Woman": "Yes, I'm wise, but it's wisdom born of pain. I guess I've paid the price, but look how much I've gained."
While most people around MIRI and CFAR didn't have psychotic breaks, there were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis. There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events including a relatively exclusive AI-focused one.
I heard that the paranoid person in question was concerned about a demon inside him, implanted by another person, trying to escape. (I knew the other person in question, and their own account was consistent with attempting to implant mental subprocesses in others, although I don't believe they intended anything like this particular effect). My own actions while psychotic later that year were, though physically nonviolent, highly morally confused; I felt that I was acting very badly and "steering in the wrong direction", e.g. in controlling the minds of people around me or subtly threatening them, and was seeing signs that I was harming people around me, although none of this was legible enough to seem objectively likely after the fact. I was also extremely paranoid about the social environment, being unable to sleep normally due to fear.
There are even cases of suicide in the Berkeley rationality community associated with scrupulosity and mental self-improvement (specifically, Maia Pasek/SquirrelInHell, and Jay Winterford/Fluttershy, both of whom were long-time LessWrong posters; Jay wrote an essay about suicidality, evil, domination, and Roko's basilisk months before the suicide itself). Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz. (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; the splinter group seems to have selected for high scrupulosity and not attenuated its mental impact.)
The cases discussed are not always of MIRI/CFAR employees, so they're hard to attribute to the organizations themselves, even if they were clearly in the same or a nearby social circle. Leverage was an especially legible organization, with a relatively clear interior/exterior distinction, while CFAR was less legible, having a set of events that different people were invited to, and many conversations including people not part of the organization. Hence, it is easier to attribute organizational responsibility at Leverage than around MIRI/CFAR. (This diffusion of responsibility, of course, doesn't help when there are actual crises, mental health or otherwise.)
Obviously, for every case of poor mental health that "blows up" and is noted, there are many cases that aren't. Many people around MIRI/CFAR and Leverage, like Zoe, have trauma symptoms (including "cult after-effect symptoms") that aren't known about publicly until the person speaks up.
Why do so few speak publicly, and after so long?
Zoe discusses why she hadn't gone public until now. She first cites fear of response:
Clearly, not all cases of people trying to convince each other that they're wrong are abusive; there's an extra dimension of institutional gaslighting, people telling you something you have no reason to expect they actually believe, people being defensive and blocking information, giving implausible counter-arguments, trying to make you doubt your account and agree with their bottom line.
Jennifer Freyd writes about "betrayal blindness", a common problem where people hide from themselves evidence that their institutions have betrayed them. I experienced this around MIRI/CFAR.
Some background on AI timelines: At the Asilomar Beneficial AI conference, in early 2017 (after AlphaGo was demonstrated in late 2016), I remember another attendee commenting on a "short timelines bug" going around. Apparently a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years.
This trend in belief included MIRI/CFAR leadership; one person commented that he noticed his timelines trending only towards getting shorter, and decided to update all at once. I've written about AI timelines in relation to political motivations before (long after I actually left MIRI).
Perhaps more important to my subsequent decisions, the AI timelines shortening triggered an acceleration of social dynamics. MIRI became very secretive about research. Many researchers were working on secret projects, and I learned almost nothing about these. I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact. Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.
I had disagreements with the party line, such as on when human-level AGI was likely to be developed and about security policies around AI, and there was quite a lot of effort to convince me of their position, that AGI was likely coming soon and that I was endangering the world by talking openly about AI in the abstract (not even about specific new AI algorithms). Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness [EDIT: Eliezer himself and Sequences-type thinking, of course, would aggressively disagree with the epistemic methodology advocated by this person]. I experienced a high degree of scrupulosity about writing anything even somewhat critical of the community and institutions (e.g. this post). I saw evidence of bad faith around me, but it was hard to reject the frame for many months; I continued to worry about whether I was destroying everything by going down certain mental paths and not giving the party line the benefit of the doubt, despite its increasing absurdity.
Like Zoe, I was definitely worried about fear of response. I had paranoid fantasies about a MIRI executive assassinating me. The decision theory research I had done came to life, as I thought about the game theory of submitting to a threat of a gun, in relation to how different decision theories respond to extortion.
This imagination, though extreme (and definitely reflective of a cognitive error), was to some degree reinforced by the social environment. I mentioned the possibility of whistle-blowing on MIRI to someone I knew, who responded that I should consider talking with Chelsea Manning, a whistleblower who is under high threat. There was quite a lot of paranoia at the time, both among the "establishment" (who feared being excluded or blamed) and the "dissidents" (who feared retaliation by institutional actors). (I would, if asked to take bets, have bet strongly against actual assassination, but I did fear other responses.)
More recently (in 2019), there were multiple masked protesters at a CFAR event (handing out pamphlets critical of MIRI and CFAR) who had a SWAT team called on them (by camp administrators, not CFAR people, although a CFAR executive had called the police previously about this group), who were arrested, and are now facing the possibility of long jail time. While this group of people (Ziz and some friends/associates) chose an unnecessarily risky way to protest, hearing about this made me worry about violently authoritarian responses to whistleblowing, especially when I was under the impression that it was a CFAR-adjacent person who had called the cops to say the protesters had a gun (which they didn't have), which is the way I heard the story the first time.
Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past events secretively. This matches my experience.
Like Zoe, I care about the people I interacted with during the time of the events (who are, for the most part, colleagues who I learned from), and I don't intend to cause harm to them through writing about these events.
Zoe discusses an unofficial NDA people signed as they left, agreeing not to talk badly of the organization. While I wasn't pressured to sign an NDA, there were significant security policies discussed at the time (including the one about researchers not asking each other about research). I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner, which would be more dangerous (the blog post in question would have been similar to the AI forecasting work I did later, here and here; judge for yourself how dangerous this is). This made it hard to talk about the silencing dynamic; if you don't have the freedom to speak about the institution and limits of freedom of speech, then you don't have freedom of speech.
(Is it a surprise that, after over a year in an environment where I was encouraged to think seriously about the possibility that simple actions such as writing blog posts about AI forecasting could destroy the world, I would develop the belief that I could destroy everything through subtle mental movements that manipulate people?)
Years before, MIRI had a non-disclosure agreement that members were pressured to sign, as part of a legal dispute with Louie Helm.
I was certainly socially discouraged from revealing things that would harm the "brand" of MIRI and CFAR, by executive people. There was some discussion at the time of the possibility of corruption in EA/rationality institutions (e.g. Ben Hoffman's posts criticizing effective altruism, GiveWell, and the Open Philanthropy Project); a lot of this didn't end up on the Internet due to PR concerns.
Someone who I was collaborating with at the time (Michael Vassar) was commenting on social epistemology and the strengths and weaknesses of various people's epistemology and strategy, including people who were leaders at MIRI/CFAR. Subsequently, Anna Salamon said that Michael was causing someone else at MIRI to "downvote Eliezer in his head" and that this was bad because it meant that the "community" would not agree about who the leaders were, and would therefore have akrasia issues due to the lack of agreement on a single leader in their head telling them what to do. (Anna says, years later, that she was concerned about bias in selectively causing downvotes rather than upvotes; however, at the time, based on what was said, I had the impression that the primary concern was about coordination around common leadership rather than bias specifically.)
This seemed culty to me and some friends; it's especially evocative in relation to Julian Jaynes' writing about Bronze Age cults, which details a psychological model in which idols/gods give people voices in their head telling them what to do.
(As I describe these events in retrospect they seem rather ridiculous, but at the time I was seriously confused about whether I was especially crazy or in-the-wrong, and the leadership was behaving sensibly. If I were the type of person to trust my own judgment in the face of organizational mind control, I probably wouldn't have been hired in the first place; everything I knew about how to be hired would point towards having little mental resistance to organizational narratives.)
Strange psycho-social-metaphysical hypotheses in a group setting
Zoe gives a list of points showing how "out of control" the situation at Leverage got. This is consistent with what I've heard from other ex-Leverage people.
The weirdest part of the events recounted is the concern about possibly-demonic mental subprocesses being implanted by other people. As a brief model of something similar to this (not necessarily the same model as the Leverage people were using): people often pick up behaviors ("know-how") and mental models from other people, through acculturation and imitation. Some of this influence could be (a) largely unconscious on the part of the receiver, (b) partially intentional on the part of the person having mental effects on others (where these intentions may include behaviorist conditioning, similar to hypnosis, causing behaviors to be triggered under certain circumstances), and (c) overall harmful to the receiver's conscious goals. According to IFS-like psychological models, it's common for a single brain to contain multiple sub-processes with different intentions. While the mental subprocess implantation hypothesis is somewhat strange, it's hard to rule out based on physics or psychology.
As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR. These strange experiences are, as far as I can tell, part of a more general social phenomenon around that time period; I recall a tweet commenting that the election of Donald Trump convinced everyone that magic was real.
Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR adjacent community. While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible. (I noted at the time that there might be a sense in which different people have "auras" in a way that is not less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.)
As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.
The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with. Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.
Alternatively, like me, they can explore these metaphysics while:
Being able to discuss somewhat wacky experiential hypotheses, like the possibility of people spreading mental subprocesses to each other, in a group setting, and have the concern actually taken seriously as something that could seem true from some perspective (and which is hard to definitively rule out), seems much more conducive to people's mental well-being than refusing to have that discussion, which leaves people to struggle with (what they think is) mental subprocess implantation on their own. Leverage definitely had large problems with these discussions, and perhaps tried to reach more intersubjective agreement about them than was plausible (leading to over-reification, as Zoe points out), but these problems seem less severe than the ones resulting from refusing to have the discussions at all, such as psychiatric hospitalization and jail time.
"Psychosis" doesn't have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject to navigate reality in a new way; some of R.D. Liang's work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time. Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.
World-saving plans and rarity narratives
Zoe cites the fact that Leverage has a "world-saving plan" (which included taking over the world) and considered Geoff Anders and Leverage to be extremely special, e.g. Geoff being possibly the best philosopher ever:
Like Leverage, MIRI had a "world-saving plan". This is no secret; it's discussed in an Arbital article written by Eliezer Yudkowsky. Nate Soares frequently talked about how it was necessary to have a "plan" to make the entire future ok, to avert AI risk; this plan would need to "backchain" from a state of no AI risk and may, for example, say that we must create a human emulation using nanotechnology that is designed by a "genie" AI, which does a narrow task rather than taking responsibility for the entire future; this would allow the entire world to be taken over by a small group including the emulated human. [EDIT: See Nate's clarification, the small group doesn't have to be MIRI specifically, and the upload plan is an example of a plan rather than a fixed super-plan.]
I remember taking on more and more mental "responsibility" over time, noting the ways in which people other than me weren't sufficient to solve the AI alignment problem, and I had special skills, so it was uniquely my job to solve the problem. This ultimately broke down, and I found Ben Hoffman's post on responsibility to resonate (which discusses the issue of control-seeking).
The decision theory of backchaining and taking over the world is somewhat beyond the scope of this post. There are circumstances where back-chaining is appropriate, and "taking over the world" might be necessary, e.g. if there are existing actors already trying to take over the world and none of them would implement a satisfactory regime. However, there are obvious problems with multiple actors each attempting to control everything, which are discussed in Ben Hoffman's post.
This connects with what Zoe calls "rarity narratives". There were definitely rarity narratives around MIRI/CFAR. Our task was to create an integrated, formal theory of values, decisions, epistemology, self-improvement, etc ("Friendliness theory"), which would help us develop Friendly AI faster than the rest of the world combined was developing AGI (which was, according to leaders, probably in less than 20 years). It was said that a large part of our advantage in doing this research so fast was that we were "actually trying" and others weren't. It was stated by multiple people that we wouldn't really have had a chance to save the world without Eliezer Yudkowsky (obviously implying that Eliezer was an extremely historically significant philosopher).
Though I don't remember people saying explicitly that Eliezer Yudkowsky was a better philosopher than Kant, I would guess many would have said so. No one there, as far as I know, considered Kant worth learning from enough to actually read the Critique of Pure Reason in the course of their research; I only did so years later, and I'm relatively philosophically inclined. I would guess that MIRI people would consider a different set of philosophers relevant, e.g. would include Turing and Einstein as relevant "philosophers", and I don't have reason to believe they would consider Eliezer more relevant than these, though I'm not certain either way. (I think Eliezer is a world-historically-significant philosopher, though not as significant as Kant or Turing or Einstein.)
I don't think it's helpful to oppose "rarity narratives" in general. People need to try to do hard things sometimes, and actually accomplishing those things would make the people in question special, and that isn't a good argument against trying the thing at all. Intellectual groups with high information integrity, e.g. the early quantum mechanics people, can have a large effect on history. I currently think the intellectual work I do is pretty rare and important, so I have a "rarity narrative" about myself, even though I don't usually promote it. Of course, a project claiming specialness while displaying low information integrity is, effectively, asking for more control and resources than it can beneficially use.
Rarity narratives can have the effects of making a group of people more insular, concentrating relevance around the group so that it doesn't learn from other sources (in the past or the present), centering local social dynamics on a small number of special people, and increasing pressure on people to try to do (or pretend to try to do) things beyond their actual abilities; Zoe and I both experienced these effects.
(As a hint to evaluating rarity narratives yourself: compare Great Thinker's public output to what you've learned from other public sources; follow citations and see where Great Thinker might be getting their ideas from; read canonical great philosophy and literature; get a quantitative sense of how much insight is coming from which places throughout spacetime.)
The object-level specifics of each case of world-saving plan matter, of course; I think most readers of this post will be more familiar with MIRI's world-saving plan, especially since Zoe's post provides few object-level details about the content of Leverage's plan.
Debugging
Rarity ties into debugging; if what makes us different is that we're Actually Trying and the other AI research organizations aren't, then we're making a special psychological claim about ourselves, that we can detect the difference between actually and not-actually trying, and cause our minds to actually try more of the time.
Zoe asks whether debugging was "required"; she notes:
I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said "yes". This part of the plan was the same [EDIT: Anna clarifies that, while some people becoming like Elon Musk was some people's plan, there was usually acceptance of people not changing themselves; this might to some degree apply to Leverage as well].
Self-improvement was a major focus around MIRI and CFAR, and at other EA orgs. It often used standard CFAR techniques, which were taught at workshops. It was considered important to psychologically self-improve to the point of being able to solve extremely hard, future-lightcone-determining problems.
I don't think these are bad techniques, for the most part. I think I learned a lot by observing and experimenting on my own mental processes. (Zoe isn't saying Leverage's techniques are bad either, just that you could get most of them from elsewhere.)
Zoe notes a hierarchical structure where people debugged people they had power over:
This was also the case around MIRI and CFAR. A lot of debugging was done by Anna Salamon, head of CFAR at the time; Ben Hoffman noted that "every conversation with Anna turns into an Anna-debugging-you conversation", which resonated with me and others.
There was certainly a power dynamic of "who can debug who"; to be a more advanced psychologist is to be offering therapy to others, being able to point out when they're being "defensive", when one wouldn't accept the same from them. This power dynamic is also present in normal therapy, although the profession has norms such as only getting therapy from strangers, which change the situation.
How beneficial or harmful this was depends on the details. I heard that "political" discussions at CFAR (e.g. determining how to resolve conflicts between people at the organization, which could result in people leaving the organization) were mixed with "debugging" conversations, in a way that would make it hard for people to focus primarily on the debugged person's mental progress without imposing pre-determined conclusions. Unfortunately, when there are few people with high psychological aptitude around, it's hard to avoid "debugging" conversations having political power dynamics, although it's likely that the problem could have been mitigated.
[EDIT: See PhoenixFriend's pseudonymous comment, and replies to it, for more on power dynamics including debugging-related ones at CFAR specifically.]
It was really common for people in the social space, including me, to have a theory about how other people are broken, and how to fix them, by getting them to understand a deep principle you do and they don't. I still think most people are broken and don't understand deep principles that I or some others do, so I don't think this was wrong, although I would now approach these conversations differently.
A lot of the language from Zoe's post, e.g. "help them become a master", resonates. There was an atmosphere of psycho-spiritual development, often involving Kegan stages. There is a significant degree of overlap between people who worked with or at CFAR and people at the Monastic Academy [EDIT: see Duncan's comment estimating that the actual amount of interaction between CFAR and MAPLE was pretty low even though there was some overlap in people].
Although I wasn't directly financially encouraged to debug people, I infer that CFAR employees were, since instructing people was part of their job description.
Other issues
MIRI did have less time pressure imposed by the organization itself than Leverage did, despite the deadline implied by the AGI timeline; I had no issues with absurdly over-booked calendars. I vaguely recall that CFAR employees were overworked especially around workshop times, though I'm pretty uncertain of the details.
Many people's social lives, including mine, were spent mostly "in the community"; much of this time was spent on "debugging" and other psychological work. Some of my most important friendships at the time, including one with a housemate, were formed largely around a shared interest in psychological self-improvement. There was, therefore, relatively little work-life separation (which has upsides as well as downsides).
Zoe recounts an experience with having unclear, shifting standards applied, with the fear of ostracism. Though the details of my experience are quite different, I was definitely afraid of being considered "crazy" and marginalized for having philosophy ideas that were too weird, even though weird philosophy would be necessary to solve the AI alignment problem. I noticed more people saying I and others were crazy as we were exploring sociological hypotheses that implied large problems with the social landscape we were in (e.g. people thought Ben Hoffman was crazy because of his criticisms of effective altruism). I recall talking to a former CFAR employee who was scapegoated and ousted after failing to appeal to the winning internal coalition; he was obviously quite paranoid and distrustful, and another friend and I agreed that he showed PTSD symptoms [EDIT: I infer scapegoating based on the public reason given being suspicious/insufficient; someone at CFAR points out that this person was paranoid and distrustful while first working at CFAR as well].
Like Zoe, I experienced myself and others being distanced from old family and friends, who didn't understand how high-impact the work we were doing was. Since leaving the scene, I am more able to talk with normal people (including random strangers), although it's still hard to talk about why I expect the work I do to be high-impact.
An ex-Leverage person I know comments that "one of the things I give Geoff the most credit for is actually ending the group when he realized he had gotten in over his head. That still left people hurt and shocked, but did actually stop a lot of the compounding harm." (While Geoff is still working on a project called "Leverage", the initial "Leverage 1.0" ended with most of the people leaving.) This is to some degree happening with MIRI and CFAR, with a change in the narrative about the organizations and their plans, although the details are currently less legible than with Leverage.
Conclusion
Perhaps one lesson to take from Zoe's account of Leverage is that spending relatively more time discussing sociology (including anthropology and history), and less time discussing psychology, is more likely to realize benefits while avoiding problems. Sociology is less inherently subjective and meta than psychology, having intersubjectively measurable properties such as events in human lifetimes and social network graph structures. My own thinking has certainly gone in this direction since my time at MIRI, to great benefit. I hope this account I have written helps others to understand the sociology of the rationality community around 2017, and that this understanding helps people to understand other parts of the society they live in.
There are, obviously from what I have written, many correspondences, showing a common pattern for high-ambition ideological groups in the San Francisco Bay Area. I know there are serious problems at other EA organizations, which produce largely fake research (and probably took in people who wanted to do real research, who became convinced by their experience to do fake research instead), although I don't know the specifics as well. EAs generally think that the vast majority of charities are doing low-value and/or fake work. I also know that San Francisco startup culture produces cult-like structures (and associated mental health symptoms) with regularity. It seems more productive to, rather than singling out specific parties, think about the social and ecological forces that create and select for the social structures we actually see, which include relatively more and less cult-like structures. (Of course, to the extent that harm is ongoing due to actions taken by people and organizations, it's important to be able to talk about that.)
It's possible that after reading this, you think this wasn't that bad. Though I can only speak for myself here, I'm not sad that I went to work at MIRI instead of Google or academia after college. I don't have reason to believe that either of these environments would have been better for my overall intellectual well-being or my career, despite the mental and social problems that resulted from the path I chose. Scott Aaronson, for example, blogs about "blank faced" non-self-explaining authoritarian bureaucrats being a constant problem in academia. Venkatesh Rao writes about the corporate world, and the picture presented is one of a simulation constantly maintained through improv.
I did grow from the experience in the end. But I did so in large part by being very painfully aware of the ways in which it was bad.
I hope that those that think this is "not that bad" (perhaps due to knowing object-level specifics around MIRI/CFAR justifying these decisions) consider how they would find out whether the situation with Leverage was "not that bad", in comparison, given the similarity of the phenomena observed in both cases; such an investigation may involve learning object-level specifics about what happened at Leverage. I hope that people don't scapegoat; in an environment where certain actions are knowingly being taken by multiple parties, singling out certain parties has negative effects on people's willingness to speak without actually producing any justice.
Aside from whether things were "bad" or "not that bad" overall, understanding the specifics of what happened, including harms to specific people, is important for actually accomplishing the ambitious goals these projects are aiming at; there is no reason to expect extreme accomplishments to result without very high levels of epistemic honesty.