Thanks for your courage, Zoe!
Personally, I've tried to maintain anonymity in online discussion of this topic for years. I dipped my toe into openly commenting last week, and immediately received an email that made it more difficult to maintain anonymity - I was told "Geoff has previously speculated to me that you are 'throwaway', the author of the 2018 basic facts post". Firstly, I very much don't appreciate my ability to maintain anonymity being narrowed like this. Rather, anonymity is a helpful defense in any sensitive online discussion, not least this one. But yes, throwaway/anonymoose is me - I posted anonymously so as to avoid adverse consequences from friends who got more involved than me. But I'm not throwaway2, anonymous, or BayAreaHuman - those three are bringing evidence that is independent from me at least.
I only visited Leverage for a couple months, back in 2014. One thing that resonated strongly with me about your post is that the discussion is badly confused by lack of public knowledge and strong narratives, about whether people are too harsh on Leverage, what biases one might have, and so on. This is why I think we often retreat to just stating "basic" or "comm...
What's frustrating about still hearing noisy debate on this topic, so many years later, is that Leverage being a really bad org seems overdetermined at this point. On the one hand, if I ranked MIRI, CFAR, CEA, FHI, and several startups I've visited, in terms of how reality-distorting they can be, Leverage would score ~9, while no other would surpass ~7. (It manages to be nontransparent and cultlike in other ways too!) On the other hand, their productive output was... also like a 2/10? It's indefensible. But still only a fraction of the relevant information is in the open.
One thing to note is that if you "read the room" instead of only looking at the explicit arguments, it's noticeable that a lot of people left Leverage and the new org ("Leverage 2.0") completely switched research directions, which to me seems like tacit acknowledgement that their old methods etc aren't as good.
As far as people leaving organizations goes, I'd love to have good data for MIRI, CFAR, CEA, and FHI.
I think I could write down a full history of employment for all of these orgs (except maybe FHI, which I've kept fewer tabs on), in an hour or two of effort. It's somewhat costly for me (in terms of time), but if lots of people are interested, I would be happy to do it.
I'm personally interested, and also I think having information like this collected in one place makes it much easier for everyone to understand the history and shape of the movement. IMO an employment history of those orgs would make for a very valuable top-level post.
Full-time at CFAR in Oct 2015 when Pete Michaud and I arrived:
Anna Salamon, Val Smith, Kenzi Amodei, Julia Galef, Dan Keys, Davis Kingsley
Full-time at one point or another during my tenure:
Morgan Davis, Renshin Lee, Harmanas Chopra, Adom Hartell, Lyra Sancetta
(Kenzi, Julia, Davis, and Val all left while I was there, in that order.)
Notable part-timers (e.g. welcome at CFAR's weekly colloquium):
Steph Zolayvar, Qiaochu Yuan, Gail Hernandez
At CFAR in Oct 2018 when I left:
Anna Salamon (part time), Tim Telleen-Lawton, Dan Keys, Jack Carroll, Elizabeth Garrett, Adam Scholl, Luke Raskopf, Eli Tyre (part time), Logan Strohl (part time)
... I may have missed an Important Person or two but that's a decent initial sketch of those three years.
As someone who's been close to these orgs: some had a few related issues, but Leverage seemed much more extreme along many of these dimensions to me.
However, now there are like 50 small EA/rationalist groups out there, and I am legitimately worried about quality control.
I generally worry about all kinds of potential bad actors associating themselves with EA/rationalists.
There seems to be a general pattern where new people come to an EA/LW/ACX/whatever meetup or seminar, trusting the community, and there they meet someone who abuses this trust and tries to extract free work / recruit them for their org / abuse them sexually, and the new person trusts them as representatives of the EA/rationalist community (they can easily pretend to be), while the actual representatives of EA/rationalist community probably don't even notice that this happens, or maybe feel like it's not their job to go reminding everyone "hey, don't blindly trust everyone you meet here".
I assume the illusion of transparency plays a big role here, where the existing members generally know who is important and who is a nobody, who plays a role in the movement and who is just hanging out there, what kind of behavior is approved and what kind is not... but the new member has no idea about anything, and may assume that if someone acts high-status then the person actually is high-status in the movement, and that whatever such person does has an approval of the community.
To put it bluntly...
I very much agree about the worry. My original comment was to make the easiest case quickly, but I think more extensive cases apply too. For example, I’m sure there have been substantial problems even in the other notable orgs, and in expectation we should expect there to continue to be so. (I’m not saying this based on particular evidence about these orgs, more that the base rate for similar projects seems bad, and these orgs don’t strike me as absolutely above these issues.)
One solution (of a few) that I’m in favor of is to just have more public knowledge about the capabilities and problems of orgs.
I think it’s pretty easy for orgs of about any quality level to seem exciting to new people and recruit them or take advantage of them. Right now, some orgs have poor reputations among those “in the know” (generally for producing poor quality output), but this isn’t made apparent publicly.[1] One solution is to have specialized systems that actually present negative information publicly; this could be public rating or evaluation systems.
This post by Nuno was partially meant as a test for this:
https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist...
[1] I don’t particularly blame them; consider the alternative.
I think the alternative is actually much better than silence!
For example I think the EA Hotel is great and that many "in the know" think it is not so great. I think that the little those in the know have surfaced about their beliefs has been very valuable information to the EA Hotel and to the community. I wish that more would be surfaced.
Simply put, if you are actually trying to make a good org, being silently blackballed by those "in the know" is actually not so fun. Of course there are other considerations, such as backlash, but IDK, I think transparency is good from all sorts of angles. The opinions of those "in the know" matter; they lead, and I think it's better for everyone if that leadership happens in the light.
Another thing to do, of course, would be to just do some amounts of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have.
I think this is more than warranted at this point, yeah. I wonder who might be trusted enough to lead something like that.
I agree that it would have been really nice for grantmakers to communicate with the EA Hotel more, and other orgs more, about their issues. This is often a really challenging conversation to have ("we think your org isn't that great, for these reasons"), and we currently have very few grantmaker hours for the scope of the work, so I think grantmakers don't have much time now to spend on this. However, there does seem to be a real gap here to me. I represent a small org and have been around other small orgs, and the lack of communication between small orgs and grantmakers is a big issue. (And I probably have it much easier than most groups, knowing many of the individuals responsible.)
I think the fact that we have so few grantmakers right now is a big bottleneck that I'm sure basically everyone would love to see improved. (The situation isn't great for current grantmakers, who often have to work long hours). But "figuring out how to scale grantmaking" is a bit of a separate discussion.
Around making the information public specifically, that's a whole different matter. Imagine the value proposition, "If you apply to this grant, and get turned down, we'll write about why we don't like it publicly for everyone to see." Fewer people would apply and many would complain a whole lot when it happens. The LTFF already gets flack for writing somewhat-candid information on the groups they do fund.
(Note: I was a guest manager on the LTFF for a few months, earlier this year)
Fewer people would apply and many would complain a whole lot when it happens. The LTFF already gets flack for writing somewhat-candid information on the groups they do fund.
I think that it would be very interesting to have a fund that has that policy. Yes, that might result in fewer people applying, but people applying anyway might itself be a signal that their project is worth funding.
"If you apply to this grant, and get turned down, we'll write about why we don't like it publically for everyone to see."
I feel confident that Greg of EA Hotel would very much prefer this in the case of EA Hotel. It can be optional, maybe.
To put it bluntly, the EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously -- from the perspective of a potential abuser, this is ripe fruit ready to be taken; it is even obvious what sales pitch you should use on them.
For what it’s worth, I think this is true for basically all intense and moral communities out there. The EA/rationalist groups generally seem better than many religious and intense political groups in these areas, to me. However, even “better” is probably not at all good enough.
Which thing are you claiming here? I am a bit confused by the double negative (you're saying there's "widely known evidence that it isn't true that representatives don't even notice when abuse happens", I think; might you rephrase?).
I've made stupid and harmful errors at various times, and e.g. should've been much quicker on the uptake about Brent, and asked more questions when Robert brought me info about his having been "bad at consent" as he put it. I don't wish to be and don't think I should be one of the main people trying to safeguard victims' rights; I don't think I have the needed eyes/skill for it. (Separately, I am not putting in the time and effort required to safeguard a community of many hundreds, nor is anyone that I know of, nor do I know if we know how or if there's much agreement on what kinds of 'safeguarding' are even good ideas, so there are whole piles of technical debt and gaps in common knowledge and so on here.)
Nonetheless, I don't and didn't view abuse as acceptable, nor did I intend to tolerate serious harms. Parts of Jay's account of the meeting with me are inaccurate (differ from what I'm really pretty sure I remember, and also from what Robert and his hu...
Based on how you wrote your comment, it seems that the email you received may have come across as intimidating.
I think the important information here is how did Geoff / Leverage Research handle similar criticism in the past. (I have no idea. I assume both you and Ryan probably know more about this.) As they say, past behavior is the best predictor of future behavior. The wording of the e-mail is not so important.
I rather liked the idea of making a timeline!
Geoff currently has a short doc on the timing of changes in org structure, but it doesn't include much else.
Depending on how discussion here goes, I might transfer/transform this into its own post in the future. Will link them, if so.
Nobody has talked much in public about the most dysfunctional things, yet? I am going to switch strategies out of dark-hinting and anonymity at this point, and put my cards down on the table.
This will be a sketch of the parts of this story that I know about. I do not have exact dates, and these are just broad-strokes of some of the key incidents here.
And not all of these are my story to tell? So sometimes, I will really only feel comfortable providing the broad-strokes.
(If someone has a better 2-3 sentence summary, or the full story, for some of these? Do chime in.)
These are each things I feel pretty solid about believing in. I think these incidents belong somewhere on any good consensus-timeline, but are not the full set of relevant events.
(I only have about 3-6 relevant contacts at the moment, but I've gotten at least 2 points of confirmation on each of these. It was not...
The observation might be correct but I don't love the tone. It has some feeling of "haha, got you!" that doesn't feel appropriate to these discussions.
After discussing the matter with some other (non-Leverage) EAs, we've decided to wire $15,000 to Zoe Curzi (within 35 days).
A number of ex-Leveragers seem to be worried about suffering (financial, reputational, etc.) harm if they come forward with information that makes Leverage look bad (and some also seem worried about suffering harm if they come forward with information that makes Leverage look good). This gift to Zoe is an attempt to signal support for people who come forward with accounts like hers, so that people in Zoe's reference class are more inclined to come forward.
We've temporarily set aside $85,000 in case others write up similar accounts -- in particular, accounts where it would be similarly useful to offset the incentives against speaking up. We plan to use our judgment to assess reports on a case-by-case basis, rather than having an official set of criteria. (It's hard to design formal criteria that aren't gameable, and we were a bit wary of potentially setting up an incentive for people to try to make up false bad narratives about organizations, etc.)
Note that my goal isn't to evaluate harms caused by Leverage and try to offset such harms. Instead, it's trying to ...
Note that my goal isn't to evaluate harms caused by Leverage and try to offset such harms. Instead, it's trying to offset any incentives against sharing risky honest accounts like Zoe's.
I like the careful disambiguation here.
FWIW, I independently proposed something similar to a friend in the Lightcone office last week, with an intention that was related to offsetting harm. My reasoning:
There's often a problem in difficult "justice" situations, where people have only a single bucket for "make the sufferer feel better" and "address the wrong that was done."
This is quite bad—it often causes people to either do too little for victims, or too much to offenders, because they're trying to achieve two goals at once and one goal dominates the calculation. Not helping someone materially because the harm proved unintentional, or punishing the active party way in excess of what they "deserve" because that's what it takes to make the injured party feel better, that sort of thing.
Separating it out into "we're still figuring out the Leverage situation but in the meantime, let's try to make this person's life a little better" is excellent.
Reiterating that I understand that's not what you are doing, here. But I think that would separately have also been a good thing.
A few quick thoughts:
1) This seems great, and I'm impressed by the agency and speed.
2) From reading the comments, it seems like several people were actively afraid of how Leverage could retaliate. I imagine similar for accusations/whistleblowing for other organizations. I think this is both very, very bad, and unnecessary; as a whole, the community is much more powerful than individual groups, so it seems poorly managed when the community is scared of a specific group. Resources should be spent to cancel this out.
In light of this, if more money were available, it seems easy to justify a fair bit more. Or even better could be something like, "We'll help fund lawyers in case you're attacked legally, or anti-harassment teams if you're harassed or trolled". This is similar to how the EFF helps in cases where small people/groups are attacked by big companies.
I don't mean to complain; I think any steps here, especially so quickly, are fantastic.
3) I'm afraid this will get lost in this comment section. I'd be excited about a list of "things to keep in mind" like this to be repeatedly made prominent somehow. For example, I could imagine that at community events or similar, there could be ...
Many of these things seem broadly congruent with my experiences at Pareto, although significantly more extreme. Especially: ideas about psychology being arbitrarily changeable, Leverage having the most powerful psychology/self-improvement tools, Leverage being approximately the only place you could make real progress, extreme focus on introspection and other techniques to 'resolve issues in your psyche', (one participant's 'research project' involved introspecting about how they changed their mind for 2 months) and general weird dynamics (e.g. instructors sleeping with fellows; Geoff doing lectures or meeting individually with participants in a way that felt very loaded with attempts to persuade and rhetorical tricks), and paranoia (for example: participants being concerned that the things they said during charting/debugging would be used to blackmail or manipulate them; or suspecting that the private slack channels for each participant involved discussion of how useful the participants were in various ways and how to 'make use of them' in future). On the other hand, I didn't see any of the demons/objects/occult stuff, although I think people were excited about 'energy healers'/'body work', not actually believing that there was any 'energy' going on, but thinking that something interesting in the realm of psychology/sociology was going on there. Also, I benefitted from the program in many ways, many of the techniques/attitudes were very useful, and the instructors generally seemed genuinely altruistic and interested in helping fellows learn.
Edit: (One person reading this reports below that this made them more reluctant to come forward with their story, and so that seems bad to me. I have mentally updated as a result. More relevant discussion below.)
I notice that there's not that much information public about what Geoff actually Did and Did Not Do. Or what he instigated and what he did not. Or what he intended or what he did not intend.
Um, I would like more direct evidence of what he actually did and did not do. This is cruxy for me in terms of what should happen next.
Right now, based just on the Medium post, one plausible take is that the people in Geoff's immediate circle may have been taking advantage of their relative power in the hierarchy to abuse the people under them.
See this example from Zoe:
A few weeks after this big success, this person told me my funding was in question — they had done all they could do to train me and thought I might be too blocked to sufficiently progress into a Master on the project. They and Geoff were questioning my commitment to and understanding of the project, and they had concerns about my debugging trajectory.
"They and Geoff" makes it sound like Zoe's super...
The most directly 'damning' thing, as far as I can tell, is Geoff pressuring people to sign NDAs.
I received an email from a Paradigm board member on behalf of Paradigm and Leverage that aims to provide some additional clarity on the information-sharing situation here. Since the email specifies that it can be shared, I've uploaded it to my Google Drive (with some names and email addresses redacted). You can view it here.
The email also links to the text of the information-sharing agreement in question with some additional annotations.
[Disclosure: I work at Leverage, but did not work at Leverage during Leverage 1.0. I'm sharing this email in a personal rather than a professional capacity.]
Thanks for sharing this!
I believe this is public information if I look for your 990s, but could you or someone list the Board members of Leverage / Paradigm, including changes over time?
I don't know how realistic this worry is, but I'm a bit worried about scenarios like:
[...] The most important thing we want to clarify is that as far as we are concerned, at least, individuals should feel free to share their experiences or criticise Geoff or the organisations.
[... T]his document was never legally binding, was only signed by just over half of you, and almost none of you are current employees, so you are under no obligation to follow this document or the clarified interpretation here. [...]
I'm really happy to see this! Though I was momentarily confused by the "so" here -- why would there be less moral obligation to uphold an agreement, just because the agreement isn't legally binding, some other people involved didn't sign it, and the signatory has switched jobs? Were those stipulated as things that would void the agreement?
My current interpretation is that Matt's trying to say something more like 'We never took this agreement super seriously and didn't expect you to take it super seriously either, given the wording; we just wanted it as a temporary band-aid in the immediate aftermath of Leverage 1.0 dissolving, to avoid anyone taking hasty action while tensions were still high. Here's a bunch of indirect signs that the agreement is no big deal and doesn't have moral force years later in a very different context: (blah).' It's Bayesian evidence that the agreement is no big deal, not a deductive proof that the agreement is ~void. Is that right?
Another thing I want to mentally watch out for:
It might be tempting for some ex-Leverage people to use Geoff as the primary scapegoat rather than implicating themselves fully. So as more stories come out, I plan to be somewhat delicate with the evidence. The temptation to scapegoat a leader is pretty high and may even seem justifiable in an "ends justify the means" kind of thinking.
I don't seem to personally be OK with using misleading information or lies to bolster a case against a person, even if this ends up "saving" a lot of people. (I don't think it actually saves them... people should come to grips with their own errors, not hide behind a fallback person.)
So... Leverage, I'm looking at you as a whole community! You're not helpless peons of Geoff Anders.
When spiritual gurus go out of control, it's not a one-man operation; there are collaborators, enablers, people who hid information, yes-men and sycophants, those too afraid to do the right thing or speak out against wrongdoing, those too protective of personal benefits they may be receiving (status, friends, food, housing), etc.
There's stages of 'coming to terms' with something difficult. And a v...
I basically agree with this.
But also, I think pretty close to ZERO people who were deeply affected (aside from Zoe, who hasn't engaged beyond the post) have come forward in this thread. And I... guess we should talk about that.
I know firsthand that there were some pretty bad experiences in the incident that tore Leverage 1.0 apart, which nobody appears to feel able to talk about.
I am currently not at all optimistic that we're managing to balance this correctly? I also want this to go right. I'm not quite sure how to do it.
That's pretty fair. I am open to taking down this comment, or other comments I've made. (Not deleting them forever, I'll save them offline or something.) Your feedback is helpful here and revealing to me, and I feel myself updating because of it.
I have commented somewhere else that I do not like LessWrong for this discussion... because a) it seems bad as a venue for justice to be served, b) it removes a bunch of context data that I personally think is super relevant (including emotional, physical layers), and c) LW is absolutely not a place designed for healing or reconciliation... and it also seems only 'okay' for sense-making as a community. It is maybe better for sense-making at the individual intellectual level. So... I guess LW isn't my favorite place for this discussion to be happening... I wonder what you think.
(Separately) I care about folks from Leverage. I am very fond of the ones I've met. Zoe charted me once, and I feel fondly about that. I've been charted a number of times at Leverage, and it was good, and I personally love CT charting / Belief Reporting and use, reference, and teach it to others to this day. Although it's my own version now. I went to a Paradigm workshop once, as well as several parties or gatherings.
My felt sense of my time at the workshop (especially during more casual hang-out-y parts of it) is like a sense of sad distance... like, oh I would like to be friends with these people... but mentally / emotionally they seem "hard to access."
I'm feeling compassion towards the ones who have suffered and are suffering. I don't need to be personal friends with anyone, but ... if there's a way I can be of service, I am interested.
Open and free invitation: If anyone involved in the Leverage stuff in some way wants someone to hold space for you as you process things, I am open to offer that, over Zoom, in a confidential manner. (I am not very involved in the community normally, as I am committed to being at the Monastic Academy in Vermont for a long while, ...
Since it's mostly just pointers to stuff I've already said/implied... I'll throw out a quick comment.
I would like it if somebody started something like a carefully-moderated private Facebook group, mostly of core people who were there, to come to grips with their experiences? I think this could be good.
I am slightly concerned that people who are still in the grips of "Leverage PR campaigning" tendencies will start trying to take it over or otherwise poison the well? (Edit: Or conversely, that people who still feel really hurt or confused about it might lash out more than I'd wish. I, personally, am more worried about the former.) I still think it might be good, overall.
Be sure to be clear EARLY about who you are inviting, and who you are excluding! It changes what people are willing to talk about.
...I am not personally the right person to do this, though.
(It is too easy to "other" me, if that makes sense.)
I feel like one of the only things the public LW thread could do here?
Is ensuring public awareness of some of the unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms, and showing a public ramp-down of opportunities to do so in the future.
Along with doing what we can, to signal that we generally stand against people over-simplistically demonizing the people and organizations involved in this.
... unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms ...
Hmm. This seems worth highlighting.
The NDAs (plus pressure to sign) point to this.
...
...
About the rest of the truth-suppressive measures I can only speculate. Here's a list of speculative mechanisms that come to mind, some of which were corroborated by Zoe's report but not all:
Do you have a suggestion for another forum that you think would be better?
In particular, do you have pointers to online forums that do incorporate the emotional and physical layers ("in a non-toxic way", he adds, thinking of twitter). Or do you think that the best way to do this is just not online at all?
I see what you're doing? And I really appreciate that you are doing it.
...but simultaneously? You are definitely making me feel less safe to talk about my personal shit.
(My position on this is, and has always been: "I got a scar from Leverage 1.0. I am at least somewhat triggered; on both that level, and by echoes from a past experience. I am scared that me talking about my stuff, rather than doing my best to make and hold space, will scare more centrally-affected people off. And I know that some of those people, had an even WORSE experience than I did. In what was, frankly, a surreal and really awful experience for me.")
Multiple times on this thread I've seen you make the point about figuring out what responsibility should fall on Geoff, and what should be attributed to his underlings.
I just want to point out that it is a pattern for powerful bad actors to be VERY GOOD at never explicitly giving a command for a bad thing to happen, while still managing to get all their followers on board and doing the bad thing that they only hinted at/ set up incentive structures for, etc.
I wanted to immediately agree. Now I'm pausing...
It seems good to try to distinguish between:
My current sense? Is that both Unreal and I are basically doing a mix of "take an advocate role" and "using this as an opportunity to get some of what the community got wrong last time -with our own trauma- right." But for different roles, and for different traumas.
It seemed worth being explicit and calling this out. (I don't necessarily think this is bad? I also think both of us seem to have done a LOT of "processing our own shit' already, which helps.)
But doing this is... exhausting for me, all the same. I also, personally, feel like I've taken up too much space for a bit. It's starting to wear on me in ways I don't endorse.
I'm going to take a step back from this for a week, and get myself to focus on living the rest of my life. After a week, I will circle back. In fact, I COMMIT to circling back.
And honestly? I have told several people about the exact nature of my Leverage trauma. I will tell at least several more people about it, before all of this is over.
It's not going to vanish. I've already ensured that it can't. I can't quite commit to "going full public," because that might be the wrong move? But I will not rest on this until I have done something broadly equivalent.
I a...
Epistemic status: I have not been involved with Leverage Research in any way, and have no knowledge of what actually happened beyond what's been discussed on LessWrong. This comment is an observation I have after reading the post.
I had just finished reading Pete Walker's Complex PTSD before coming across this post. In the book, the author describes a list of calm, grounded thoughts to respond to inner critic attacks. A large part of healing is for the survivor to internalize these thoughts so they can psychologically defend themselves.
I see a stark contrast between what the CPTSD book tries to instill and the ideas Leverage Research tried to instill, per Zoe's account. It's as if some of the programs at Leverage Research were trying to unravel almost all of one's sense of self.
A few examples:
Perfectionism
From the CPTSD book:
I do not have to be perfect to be safe or loved in the present. I am letting go of relationships that require perfection. I have a right to make mistakes. Mistakes do not make me a mistake.
From the post:
...We might attain his level of self-efficacy, theoretical & logical precision, and strategic skill only once we were sufficiently transformed via the use
More thoughts:
I really care about the conversation that’s likely to ensue here, like probably a lot of people do.
I want to speak a bit to what I hope happens, and to what I hope doesn’t happen, in that conversation. Because I think it’s gonna be a tricky one.
What I hope happens:
What I hope doesn’t happen:
This is LessWrong; let’s show the world how curiosity/compassion/inquiry is done!
Thanks, Anna!
As a LessWrong mod, I've been sitting and thinking about how to make the conversation go well for days now and have been stuck on what exactly to say. This intention setting is a good start.
I think to your list I would add judging each argument and piece of data on its merits, i.e., updating on evidence even if it pushes against the position we currently hold.
Phrased alternatively, I'm hoping we don't treat arguments as soldiers: accepting bad arguments because they favor our preferred conclusion, rejecting good arguments because they don't support our preferred conclusion. I think there's a risk in cases like this of knowing which side you're on and then accepting and rejecting all evidence accordingly.
Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.
Are you somehow guaranteeing or confidently predicting that others will not take them in a politicized way, use them as an excuse for false judgments, or otherwise cause harm to those sharing the true relevant facts? If not, why are you asking people not to refrain from sharing such facts?
(My impression is that it is sheer optimism, bordering on wishful thinking, to expect such a thing, that those who have such a fear are correct to have such a fear, and so I am confused that you are requesting it anyway.)
Thanks for the clarifying question, and the push-back. To elaborate my own take: I (like you) predict that some (maybe many) will take shared facts in a politicized way, will use them as an excuse for false or uncareful judgments, etc. I am not guaranteeing, nor predicting, that this won’t occur.
I am intending to myself do inference and conversation in a way that tries to avoid these “politicized speech” patterns, even if it turns out politically costly or socially awkward for me to do so. I am intending to make some attempt (not an infinite amount of effort, but some real effort, at some real cost if needed) to try to make it easier for others to do this too, and/or to ask it of others who I think may be amenable to being asked this, and/or to help coalesce a conversation in what I take to be a better pattern if I can figure out how to do so. I also predict, independently of my own efforts, that a nontrivial number of others will be trying this.
If “reputation management” is a person’s main goal, then the small- to medium-sized efforts I can hope to contribute toward a better conversation, plus the efforts I’m confident in predicting independently of mine, would be insufficien...
I'm also not a fan of requests that presume that the listener ...
From my POV, requests, and statements of what I hope for, aren't advice. I think they don't presume that the listener will want to do them or will be advantaged by them, or anything much else about the listener except that it's okay to communicate my request/hope to them. My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people, and I guess I'm also assuming that my words won't be taken as a trustworthy voice of authority that knows where the person's own interests lie, or something. That I can be just some person, hoping and talking and expecting to be evaluated by equals.
Is it that you think these assumptions of mine are importantly false, such that I should try to follow some other communication norm, where I more try to only advocate for things that will turn out to be in the other party's interests, or to carefully disclaim if I'm not sure what'll be in their interests? That sounds tricky; I'm not peoples' parents and they shouldn't trust...
I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people,
I feel like this assumption seems false. I do predict that (at least in the world where we didn't have this discussion) your statement would create a social expectation for the people to report true, relevant facts, and that this social expectation would in fact move people in the direction of reporting true, relevant facts.
I immediately made the inference myself on reading your comment. There was no choice in the matter, no execution of a deliberate strategy on my part, just an inference that Anna wants people to give the facts, and doesn't think that fear of reprisal is particularly important to care about. Well, probably, it's hard to remember exactly what I thought, but I think it was something like this. I then thought about why this might be, and how I might have misunderstood. In hindsight, the explanation you gave above should have occurred to me, that is the sort of thing that people who speak literally would do, but it did not.
I think there are lots of LWers who, like me, make these sorts of inferences automatically. (And I note that these kinds of inferences a...
Those making requests for others to come forward with facts in the interest of a long(er)-term common good could find norms that serve as assurance or insurance that someone will be protected against potential retaliation against their own reputation. I can't claim to know much about setting up effective norms for defending whistleblowers though.
I read Anna's request as an attempt to create a self-fulfilling prophecy. It's much easier to bully a few individuals than a large crowd.
Yeah, I also read Anna as trying to create/strengthen local norms to the effect of 'whistleblowers, truth-tellers, and people-saying-the-emperor-has-no-clothes are good community members and to-be-rewarded/protected'. That doesn't make reprisals impossible, but I appreciated the push (as I interpreted it).
I also interpreted Anna as leading by example to some degree -- a lot of orgs wouldn't have their president join a public conversation like this, given the reputational risks. If I felt like Anna was taking on zero risk but was asking others to take on lots of risk, I may have felt differently.
Saying this publicly also (in my mind) creates some accountability for Anna to follow through. Community leaders who advocate value X and then go back on their word are in much more hot water than ones who quietly watch bad things happen.
E.g., suppose this were happening on the EA Forum. People might assume by default that CEA or whoever is opposed to candor about this topic, because they're worried hashing things out in public could damage the EA-brand (or whatever). This creates a default pressure against open and honest truth-seeking. Jumping in to say 'no, actually, having this conversat...
I hesitated a bit before saying this? I thought it might add a little bit of clarity, so I figured I'd bring it up.
(Sorry it got long; I'm still not sure what to cut.)
There are definitely some needs-conflicts. Between (often distant) people who, in the face of this, feel the need to cling to the strong reassurance that "this could not possibly happen to them"/"they will definitely be protected from this," and would feel reassured at seeing Strong Condemning Action as soon as possible...
...and "the people who had this happen." Who might be best-served, if they absorbed that there is always some risk of this sort of shit happening to people. For them, it would probably be best if they felt their truth was genuinely heard, and took away some actionable lessons about what to avoid, without updating their personal identity to "victim" TOO much. And in the future, embraced connections that made them more robust against attaching to this sort of thing in the future.
("Victim" is just not a healthy personal identity in the long-term, for most people.)
Sometimes, these needs are so different, that it warrants having different forums of discussion. But there is some overlap in these needs (w...
There's also the need to learn from what happened, so that when designing organizations in the future the same mistakes aren't repeated.
I would like it if we showed the world how accountability is done, and given your position, I find it disturbing that you have omitted this objective. That is, if I wanted to deflect the conversation away from accountability, I think I would write a post similar to yours.
I would like it if we showed the world how accountability is done
So would I. But to do accountability (as distinguished from scapegoating, less-epistemic blame), we need to know what happened, and we need to accurately trust each other (or at least most of each other) to be able to figure out what happened, and to care what actually happened.
The “figure out what happened” and “get in a position where we can have a non-fucked conversation” steps come first, IMO.
I also sort of don’t expect that much goal divergence on the accountability steps that very-optimistically come after those steps, either, basically because integrity and visible trustworthiness serve most good goals in the long run, and vengeance or temporarily-overextended-trust serves little.
Though, accountability is admittedly a weak point of mine, so I might be missing/omitting something. Maybe spell it out if so?
Some thoughts related to this topic:
*
For someone familiar with Scientology, the similarities are quite funny. There is a unique genius who develops a new theory of human mind called [Dianetics | Connection Theory]. For people familiar with psychology, it's mostly a restatement of some dubious existing theories, with huge simplifications and little evidence. But many people have their minds blown.
The genius starts a group with the goal of providing almost-superpowers such as [perfect memory | becoming another Elon Musk] to his followers, with the ultimate goal of saving the planet. The followers believe this is the only organization capable of achieving such a goal. They must regularly submit to having their thoughts checked at [auditing | debugging], where their sincerity is verified using [e-meter | Belief Reporting]. When the leader runs out of useful or semi-useful ideas to teach, there is always the unending task of exorcising the [body thetans | demons].
The former members are afraid of consequences if they speak about their experience in the organization.
*
Some people expressed epistemic frustration about situation that seems important to understand correctly, but information is...
I wish there were more facts about Leverage out in actual common knowledge.
One thing I’d find really helpful, and that I suspect might be helpful broadly for untangling what happened and making parts of it obvious / common knowledge, is if I/someone/a group could assemble a Leverage timeline that included:
If anyone wants to give me any of this info, either anonymously or with your name attached, I’d be very glad to help assemble this into a timeline. I’m also at least as enthusiastic about anyone else doing this, and would be glad to pay a small amount for someone’s time if that would help. Maybe it could also be cobbled together in common here, if anyone is willing to contribute some of these basic facts.
Is anyone up for collaborating toward this in some form? I’m hoping it might be easier than some kinds of sorting-through, and like it might make some of the harder stuff easier once done.
Zoe - I don’t know if you want me to respond to you or not, and there’s a lot I need to consider, but I do want to say that I’m so, so sorry. However this turns out for me or Leverage, I think it was good that you wrote this essay and spoke out about your experience.
It’s going to take me a while to figure out everything that went wrong, and what I did wrong, because clearly something really bad happened to you, and it is in some way my fault. In terms of what went wrong on the project, one throughline I can see was arrogance, especially my arrogance, which affected so many of the choices we made. We dismissed a lot of the actually useful advice and tools and methods from more typical sources, and it seems that blocking out society made room for extreme and harmful narratives that should have been tempered by a lot more reality. It’s terrible that you felt like your funding, or ability to rest, or take time off, or choose how to interact with your own mind were compromised by Leverage’s narratives, including my own. I totally did not expect this, or the negative effects you experienced after leaving, though maybe I would have, had I not narrowed my attention and basically gotten way too stuck in theoryland.
I agree with you that we shouldn’t skip steps. I’ve updated accordingly. Again I’m truly sorry. I really wanted your experience on the project to be good.
Edit: I got a request to cut the chaff and boil this down to discrete actionables. Let me do that.
Will you release everyone from any NDAs
Will you step down from any management roles (e.g. Leverage and Paradigm)
Will you state for the record, that you commit to not threaten* anyone who comes forward with reports that you do not like, in the course of this process
I get the sense that you have made people afraid to stand against you, historically. Engaging in any further threats, seems likely to impede all of our ability to make sense of, and come to terms with, whatever happened. It could also be quite incriminating on its own.
* For full points, commit to also not make any strong stealthy attempts to socially discredit people.
There's good ways to do this kind of thing and bad ways. I feel that this is a bad way? Unless I'm missing a lot of context about what's happening here.
Other ways to go about this:
I want to suggest that Geoff doesn't need to respond to Spiracular's requests because they contain a lot of assumptions, in the same way the question "Where were you on the night of the murder" contains a lot of assumptions. And this is a bad way to go about justice. Unless, again, I'm missing a bunch of context.
For whatever it's worth, I think "No" is a pretty acceptable answer to some of these.
"No, for reasons X, Y, Z" is a pretty ordinary answer to the NDA concern. I'd still like to see that response.
"Leverage 2.0 was deliberately structured to avoid a lot of the drawbacks of Leverage 1.0" is something I actually think is TRUE. The fact that Leverage 1.0 was sun-setted deliberately, is something that I thought actually reflected well on both Geoff and the people there.
I think from that, an argument could be made that stepping down is not necessary. I can't say I would necessarily agree with it, but I think the argument could be made.
Most of my stance, is that currently most people are too SCARED to talk. And this is actually really worrying to me.
I don't think "introducing a mediator," who would be spending about half of their time with Geoff --the epicenter of a lot of that fear-- would actually completely solve all of that problem. It would surprise me a lot if it worked here.
My #1 most desired commitment, right now? Is actually #3, and I maybe should have put it first.
A commitment to, in the future, not go after people and especially not to threaten them, for talking about their experiences.
That by itself, would be quite meaningful to me.
Well, I am at least gonna name a fraction of the assumptions that are implied by this set of requests. I am not asking you to do anything about this, but I am going to name them out loud, in the hopes that people come away more conscious of what other assumptions might be present.
(In the Duncan-culture version of LW, comments like the above are both commonplace and highly appreciated. I mention this because Unreal has mentioned having a tough time with LW, and imo the above comment demonstrates solidly central LW virtue.)
I appreciate this too. I think this form of push-back, is a potentially highly-productive one.
I may need to think for a bit about how to respond? But it seemed worth expressing my appreciation for it, first.
Meta-note: I tried the longer-form gentler one? But somebody ELSE complained about that structure.
(A piece of me recognizes that I can't make everybody happy here, but it's a little annoying.)
I want to clarify that using the word "threat" in my case would cause one to overestimate the severity of the pressure I experienced by 5-20x or something (more so than "strong pressure"). Not that the word is strictly wrong, but the connotations of it are too strong. I might end up listing the actual behaviors in a bit, maybe after more dialog with the person in question.
I had thought about saying this earlier, for fairness/completeness, but didn't get around to it. I've heard of some people feeling wary of speaking positively of Leverage out of a vague worry of reprisal.
So... I do want to note
a) I got a lot of personal value from interacting with Geoff personally. In some sense I'm an agent who tries to do ambitious things because of him. He looked at my early projects (Solstice in particular), he understood them, and told me he thought they were valuable. This was an experience that would later feed into my thoughts in this post.
b) I also have gotten some good techniques from the Leverage ecosystem. I'm not 100% sure which ideas came from where, but Belief Reporting in particular has been a valuable tool in my toolkit.
(none of this is meant to be evidence about a bunch of other claims in this thread. Just wanted to somewhat offset the arguments-are-soldiers default)
Piggybacking with additional accurate (albeit somewhat-tangential) positive statements, with a hope of making it seem more possible to say true positive and negative things about Leverage (since I've written mostly negative things, and am writing another negative thing as we speak):
The 2014 EA Retreat, run by Leverage, is still by far the best multi-org EA or rationalist event I've ever been to, and I think it had lots of important positive effects on EA.
I imagine a lot of people want to say a lot of things about Leverage and the dynamics around it, except it’s difficult or costly/risky or hard-to-imagine-being-heard-about or similar.
If anyone is up for saying a bit about how that is for you personally (about what has you reluctant to try to share stuff to do with Leverage, or with EA/Leverage dynamics or whatever, that in some other sense you wish you could share — whether you had much contact with Leverage or not), I think that would be great and would help open up space.
I’d say err on the side of including the obvious.
I interacted with Leverage some over the years. I felt like they had useful theory and techniques, and was disappointed that it was difficult to get access to their knowledge. I enjoyed their parties. I did a Paradigm workshop. I knew people in Leverage to a casual degree.
What's live for me now is that when the other recent post about Leverage was published, I was subjected to strong, repeated pressure by someone close to Geoff to have the post marked as flawed, and asked to lean on BayAreaHuman to approximately retract the post or acknowledge its flaws. (This request was made of me in my new capacity as head of LessWrong.) "I will make a fuss" is what I was told. I agreed that the post has flaws (I commented to that effect in the thread) and this made me feel the pressure wasn't illegitimate despite being unpleasant. Now it seems to be part of a larger concerning pattern.
Further details seem pertinent, but I find myself reluctant to share them (and already apprehensive that this more muted description will have the feared effect) because I just don't want to damage the relationship I have with the person who was pressuring me. I'm unhappy about it, but I still value that relations...
I'm unhappy about it, but I still value that relationship
Positive reinforcement for finding something you could say that (1) protects this sort of value at least somewhat and (2) opens the way for aggregation of the metadata, so to speak; like without your comment, and other hypothetical comments that haven't happened yet for similar reasons, the pattern could go unnoticed.
I wonder if there's an extractable social norm / conceptual structure here. Something like separating [the pattern which your friend was participating in] from [your friend as a whole, the person you have a relationship]. Those things aren't separate exactly, but it feels like it should make sense to think of them separately, e.g. to want to be adversarial towards one but not the other. Like, if there's a pattern of subtly suppressing certain information or thoughts, that's adversarial, and we can be agnostic about the structure/location of the agency behind that pattern while still wanting to respond appropriately in the adversarial frame.
My contact with Leverage over the years was fairly insignificant, which is part of why I don’t feel like it’s right for me to participate in this discussion. But there are some things that have come to mind, and since Anna’s made space for that, I’ll note them now. I still think it’s not really my place to say anything, but here’s my piece anyway. I’m speaking only for myself and my own experience.
I interviewed for an ops position at Leverage/Paradigm in early 2017, when I was still in college. The process took maybe a couple months, and the in-person interview happened the same week as my CFAR workshop; together these were my first contact with the Bay community. Some of the other rationalists I met that week warned me against Leverage in vague terms; I discussed their allegations with the ops team at my interview and came away feeling satisfied that both sides had a point.
I had a positive experience at the interview and with the ops team and their hiring process in general. The ops lead seemed to really believe in me and recommended me to other EA orgs after I didn’t get hired at Paradigm, and that was great. My (short-term) college boyfriend had a good relationship with Leverage...
The obsession with reputation control is super concerning to me, and I wonder how this connects up with Leverage's poor reputation over the years.
Like, I could imagine two simplified stories...
Story 1:
Based on broad-strokes summaries said to me by ex-Leveragers (though admittedly not first-hand experience), I would say that the statement "Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers" rings true to what I have heard.
Some things mentioned to me by Leverage people as typical/archetypal of Geoff's attitude include being willing to lie to people outside Leverage, feeling attacked or at risk of being attacked, and viewing adjacent non-Leverage groups within the broader EA sphere as enemies.
Thanks! To check: did one or more of the ex-Leveragers say Geoff said he was willing to lie? Do you have any detail you can add there? The lying one surprises me more than the others, and is something I'd want to know.
Here is an example:
Zoe's report says of the information-sharing agreement "I am the only person from Leverage who did not sign this, according to Geoff who asked me at least three times to do so, mentioning each time that everyone else had (which read to me like an attempt to pressure me into signing)."
I have spoken to another Leverage member who was asked to sign, and did not.
The email from Matt Fallshaw says the document "was only signed by just over half of you". Note the recipients list includes people (such as Kerry Vaughan) who were probably never asked to sign because they were not present, but I would believe that such people are in the minority; so this isn't strict confirmation, but just increased likelihood, that Geoff was lying to Zoe.
This is lying to someone within the project. I would subjectively anticipate higher willingness to lie to people outside the project, but I don't have anything tangible I can point to about that.
I am more confident that what I heard was "Geoff exhibits willingness to lie". I also wouldn't be surprised if what I heard was "Geoff reports being willing to lie". I didn't tag the information very carefully.
My current feelings are a mixture of the following:
Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand
I assume there isn't a public record of this anywhere? Could I hear more details about what was said? This sounds atrocious to me.
I similarly feel that I can't trust the exculpatory or positive evidence about Leverage much so long as I know there's pressure to withhold negative information. (Including informal NDAs and such.)
On the other side, there have been a lot of really vicious and aggressive attacks to anyone saying anything pro-leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It's also been more of a crowd-driven phenomenon, which makes it less predictable and more scary.
I agree with this too, and think it's similarly terrible, but harder to blame any individual for (and harder to fix).
I assume it's to a large extent an extreme example of the 'large inferential gaps + true beliefs that sound weird' afflicting a lot of EA orgs, including MIRI. Though if Leverage has been screwed up for a long time, some of that public reaction may also have been watered over the years by true rumors spreading about the org.
I have no private information to share. I think there is an obvious difference between asking powerful people in the community to stand up for the truth, and asking some rando commentator to de-anonymize.
I think your comments in this thread have been brusque/pushy in a way that's hurting the conversation (others feel free to chime in if that seems wrong to them).
I mentioned in a different comment that I've appreciated some of farp's comments here for pushing back against what I see as a missing mood in this conversation (acknowledgment that the events described in Zoe's account are horrifying, as well as reassurance that people in leadership positions are taking the allegations seriously and might take some actions in response). I also appreciate Ruby's statement that we shouldn't pressure or judge people who might have something relevant to say.
The unitofcaring post on mediators and advocates seems relevant here. I interpret farp (edit: not necessarily in the parent comment, but in various other comments in this thread) as saying that they'd like to see more advocacy in this thread instead of just mediation. I am not someone who has any personal experiences to share about Leverage, but if I imagine how I'd personally feel if I did, I think I agree.
On mediators and advocates: I think order-of-operations MATTERS.
You can start seeking truth, and pivot to advocate, as UOC says.
What people often can't do easily is start with advocate, and pivot to truth.
And with something like this? What you advocated early can do a lot to color both what and who you listen to, and who you hear from.
Thanks. Your comments and mayleaf's do mean a lot to me. Also, I was surprised by negative reaction to that comment and didn't really expect it to come off as admonishment or pressure. Love 2 cheerlead \o/
I will be happy to contribute financially to Zoe's legal defense, if Geoff decides to take revenge.
In the meanwhile, I am curious about what actually happened. The more people talk, the better.
I appreciate this invitation. I'll re-link to some things I already said on my own stance: https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QKKnepsMoZmmhGSe
Beyond what I laid out there:
It was challenging being aware of multiple stories of harm, and feeling compelled to warn people interacting with Geoff, but not wanting to go public with surprising new claims of harm. (I did mention awareness of severe harm very understatedly in the post. I chose instead to focus on "already known" properties that I feel substantially raise the prior on the actually-observed type of harm, and to disclose in the post that my motivation in cherry-picking those statements was to support pattern-matching to a specific template of harm).
After posting, it was emotionally a bit of a drag to receive comments that complained that the information-sharing attempt was not done well enough, and comparatively few comments grateful for attempting to share what I could, as best I could, to the best of my ability at the time, although the upvote patterns felt encouraging. I was pretty much aware that that was what was going to happen. In general, "flinc
Since it sounds like just-upvotes might not be as strong a signal of endorsement as positive engagement...
I want to say that I really appreciate and respect that you were willing to come forward, with facts that were broadly-known in your social graph, but had been systematically excluded from most people's models.
And you were willing to do this in a pretty adversarial environment! You had to deal with a small invisible intellectual cold war that ensued, almost alone, without backing down. This counts for even more.
I do have a little bit of sensitive insider information, and on the basis of that: Both your posts and Zoe's have looked very good-faith to me.
In a lot of places, they accord with or expand on what I know. There are a few parts I was not close enough to confirm, but they have broadly looked right to me.
I also have a deep appreciation, for Zoe calling out that different corners of Leverage had very different experiences with it. Because they did! Not all time-slices or sub-groups within it experienced the same problems.
This is probably part of why it was so easy to systematically play people's personal experiences against each other: since he knew the context through which each person experienced Leverage, Geoff (or others) could bias whose reports were heard.
(Although I think it will be harder in the future to engage in this kind of bullshit, now that a lot of people are aware of the pattern.)
To those who had one of the better firsthand experiences of Leverage:
I am still interested in hearing your bit! But if you are only engaging with this due to an inducement that probably includes a sampling-bias, I appreciate you including that detail.
(And I am glad to see people in this broader thread, being generally open about that detail.)
I will talk about my own bit with Leverage later, but I don't feel like it's the right time to share it yet.
(But fwiw: I do have some scars, here. I have a little bit of skin in this one. But most of what I'm going to talk about, comes from analogizing this with a different incident.)
A lot of the position I naturally slide into around this, which I have... kind of just embraced, is of trying to relate hard to the people who:
I was once in a similar position, due to my proximity to a past (different) thing. I kinda ended up excruciatingly sensitive, to how some things might read or feel to someone who was close, got a lot of good out of it (with or without the bad), and mostly felt like there was no way their account wouldn't be twisted into something unrecognizable. And who may be struggling, with processing an abrupt shift in their own personal narrative --- although I sincerely hope the 2 years of processing helped to make this less of a thing? But if you are going through it anyway, I am sorry.
And... I want this to go right. It didn't go right then; not entirely. I think I got yelled at by someone I respect, the first time I opened up about it. I'm not quite sure how to make this less scary for them? But I want it to be.
The people I know who got swept up in this includes some exceptionally nice people. There is at least one of them, who I would ordinarily call exceptionally sane. Please don't feel like you're obligated to identify as a bad person, or as a victim, because you were swept up in this. Just because some people might say it about you, doesn't make it who you are.
While I realize I've kinda de-facto "taken a side" by this point (and probably limited who will talk to me as a result)? I was mispronouncing Geoff's name, before this hit; this is pretty indicative of how little I knew him personally. I started out mostly caring about having the consequences-for-him be reached based off of some kind of reasonable assessment, and not caring too much about having it turn out one way or another. I still feel more invested in there being a good process, and in what will generate the best outcomes for the people who worked under him (or will ever work under him), than anything else.
Compared to Brent's end-result of "homeless with health-problems in Hawaii"? The things I've asked for have felt mild. But I also knew that if I wasn't handling mentioning them, somebody else probably would. In my eyes, we probably needed someone outside of the Leverage ecosystem who knew a lot of the story (despite the substantial information-hiding efforts) to be handling this part of the response.
Pushing for people to publish the information-hiding agreement, and proposing that Geoff maybe shouldn't have a position with a substantial amount of power over others (at lea...
One tool here is for a non-anonymous person to vouch for the anonymous person (because they know the person, and/or can independently verify the account).
In the past, I've found it difficult and costly to talk about Leverage and the dynamics around it, or about other organizations that are or have been affiliated with effective altruism, though the times I've spoken up I've done more than most. I would have done it more, but the costs were that some of my friends in effective altruism interacted with me less, seemed to take me less seriously in general, and discouraged me from speaking up again, sometimes with nothing more than peer pressure.
That was a few years ago. For lots of reasons, it's now easier, less costly, and less risky for me to speak, and I feel less fear. I don't know yet what I'll say regarding any or all of this related to Leverage, because I don't have any sense of how I might be prompted or provoked to respond. I expect I'll have more to say, though I don't yet have particular feelings about what I might share as relevant. I'm sensitive to how my statements might impact others, but for myself personally I feel almost indifferent.
My general feeling about this is that the information I know is either well-known or otherwise "not my story to tell."
I've had very few direct interactions with Leverage except applying to Pareto, a party or two, and some interactions with Leverage employees (not Geoff) and volunteers. As is common with human interactions, I appreciated many but not all of my interactions.
Like many people in the extended community, I've been exposed to a non-overlapping subset of accounts/secondhand rumors of varying degrees of veracity. For some things it's been long enough that I can't track the degree of confidences I'm supposed to keep, and under which conditions, so it seems better to err on the side of silence.
At any rate, it's ultimately not my story/tragedy. My own interactions with Leverage have not been noticeably harmful or beneficial to me personally.
FYI - Geoff will be talking about the history of Leverage and related topics on Twitch tomorrow (Saturday, October 23rd 2021) starting at 10am PT (USA West Coast Time). Apparently Anna Salamon will be joining the discussion as well.
Geoff's Tweet
Text from the Tweet (for those who don't use Twitter):
"Hey folks — I'm going live on Twitch, starting this Saturday. Join me, 10am-1pm PT:
twitch.tv/geoffanders
This first stream will be on the topic of the history of my research institute, Leverage Research, and the Rationality community, with @AnnaWSalamon as a guest."
Unfortunately for me, there is apparently no video recording available on Twitch for this stream? (There are two short clips, but not the full broadcast.)
If anyone has a link to it, could you include it here? That'd be great!
Alas, no. I'm pretty bummed about it, because I thought the conversation was rather good, but Geoff pushed the "save recording" button only after the stream had already started, and that didn't work.
Based on the fact that Twitch is counter-intuitive about recording (it's caught me out before too) and the technical issues at the start, I made a backup recording just in case. It's audio only, but I hope it helps:
https://drive.google.com/file/d/1Af1dl-v7Q7uJhdX8Al9FsrJDBc4BqM_f/view?usp=sharing
Audio linked on Geoff's website: https://www.dropbox.com/s/p0nah8ulohypexe/Geoff%20Anders%20-%20Twitch%20-%20Leverage%20History%201%20with%20Anna%20Salamon.m4a?dl=0
Video link on Geoff's website, corresponding to ten seconds of dead air plus the first 20:35 of the audio: https://www.dropbox.com/s/pt3q5xejglsgrcr/1st%20Twitch%20Stream%20-%20Leverage%20History%20-%20Beginning.mp4?dl=0#
I re-listened to Anna and Geoff's conversation, which is the main part of the audio that I found interesting. Timestamps for that conversation:
1:57:57 - Early EA history, the EA Summit, and early EA/Leverage interactions
2:13:34 - Narrative addiction and leaders being unable to talk to each other
2:17:20 - Early Geoff cooperativeness
2:19:58 - Possible causes for EA becoming more narrative-addicted
2:22:35 - Conflict causing group insularity
2:24:50 - Anna on narrative businesses, narrative pyramid schemes, and disagreements
2:28:28 - Geoff on narratives, morale and the epistemic sweet spot
2:30:08 - Anna on trying to block out things that would weaken the narrative, and external criticism of Leverage
2:36:30 - More on early Geoff cooperativeness
2:41:44 - 'Stealing donors', Leverage's weird vibe (non-materialism?), Anna/Geoff's early interactions, 'writing off' on philosophical grounds, and keeping weird things at arm's length
2:50:00 - The value of looking at historical details, and narrative addiction collapse
2:52:30 - Geoff wants out of the rationality community; PR and associations; and disruptive narratives
Hi everyone. I wanted to post a note to say first, I find it distressing and am deeply sorry that anyone had such bad experiences. I did not want or intend this at all.
I know these events can be confusing or very taxing for all the people involved, and that includes me. They draw a lot of attention, both from those with deep interest in the matter, where there may be very high stakes, and from onlookers with lower context or less interest in the situation. To hopefully reduce some of the uncertainty and stress, I wanted to share how I will respond.
My current plan (after writing this note) is to post a comment about the above-linked post. I have to think about what to write, but I can say now that it will be brief and positive. I’m not planning to push back or defend. I think the post is basically honest and took incredible courage to write. It deserves to be read.
Separately, I’m going to write a letter in my role as Executive Director of Leverage Research on the topic of harms from our previous psychology research and the structure of the organization.
It may be useful to address the Leverage/Rationality relation or the Leverage/EA relation as well, but discussion of that might distract us from what is most important right now.
Given what the post said about the NDA that people signed when leaving, it seems to me that explicitly releasing people from that NDA (maybe with a provision to anonymize the names of other people) would be very helpful for having a productive discussion that can integrate the experiences of many people into public knowledge and create a shared understanding of what happened.
Separately, I’m going to write a letter in my role as Executive Director of Leverage Research on the topic of harms from our previous psychology research and the structure of the organization.
Geoff, has this letter been published yet? And if not, when will it be published?
I wanted to note that I think this comment both a) raises a good point (should Leverage pay restitution to people that were hurt by it? Why and how much?) and b) does so in a way that I think is hostile and assumes way more buy-in than it has (or would need to get support for its proposal).
First, I think most observers are still in "figuring out what's happened" mode. Was what happened with Zoe unusually bad or typical, predictable or a surprise? I think it makes sense to hear more stories before jumping to judgment, because the underlying issue isn't that urgent and the more context, the wiser a decision we can make.
Second, I think a series of leading questions asked to specific people in public looks more like norm enforcement than it does like curious information-gathering, and I think the natural response is suspicion and defensiveness. [I think we should go past the defensiveness and steelman.]
Third, I do think that it makes sense for people to make things right with money when possible; I think that this should be proportional to damages done and expectations of care, rather than just 'who has the money.' Suppose, pulling these numbers out of a hat, the total damage done to L...
In retrospect, I apologize for the strident tone and questions in my original comment. I am personally worried about further harm, in uses of money or power by Anders, and from Zoe's post it seems like anywhere from a handful to many more people were hurt. If money or tokens are possibly causally downstream of harm, restitution might reduce further harm and address harm that's already taken place. The community is doing ongoing information gathering, though, and my personal rush to judgement isn't keeping pace with that. I'll leave my above comment as is, since it's already received a constructive reply.
Here's anonymous submission of Leverage's Basic Information Acknowledgement Checklist document. The submitter said "The text of this document has been copied word for word from the original, except with names redacted."
https://we.tl/t-KaDXP3vrW3
I can confirm that this document is legitimate as I've seen a more recent version of the same checklist.
Leverage Research is planning to review and revise its information management policy, as soon as we have time.
Relatedly, a LessWrong user recently reached out to us directly for information about our information management policies and agreements. During the conversation, it became clear that it was difficult for them, as someone seeking information, to formulate which questions to ask and difficult for us as an organization to determine what answers they might find useful, given the differences in information and context. As a result of this conversation, we concluded it might be useful to figure out how to help people request the information that they are looking for, while at the same time protecting the institute’s time, ownership of research, and ability to carry out its mission.
As part of this, we have now set up a request form on our website where it is possible to make information requests of the organization. We expect to respond to genuine inquiries with answers, updates to our FAQ (forthcoming), the release of documents, and more, as our other responsibilities permit.
EDIT: This comment described a bunch of emails between me and Leverage that I think would be relevant here, but I misremembered something about the thread (it was from 2017) and I'm not sure if I should post the full text so people can get the most accurate info (see below discussion), so I've deleted it for now. My apologies for the confusion
I generally feel reasonably comfortable sharing unsolicited emails, unless the email makes some kind of implicit request not to be published that I judge at least vaguely valid. In general I am against "default confidentiality" norms, especially for requests or exchanges that might be somewhat adversarial; I feel like I've seen those kinds of norms weaponized in the past in ways that seem pretty bad. While there is a generally broad default expectation that unsolicited private communication will be kept confidential, it's not a particularly sacred protection in my mind (unless explicitly or implicitly requested, in which case I would talk to the person first to get a fuller understanding of why they requested confidentiality, and would generally err on the side of not publishing, though I would feel comfortable overcoming that barrier given sufficiently adversarial action).
unless the email makes some kind of implicit request to not be published
What does "implicit request" mean here? There are a lot of email conversations where no one writes a single word that's alluding to 'don't share this', but where it's clearly discussing very sensitive stuff and (for that reason) no one expects it to be posted to Hacker News or whatever later.
Without having seen the emails, I'm guessing Leverage would have viewed their conversation with Alyssa as 'obviously a thing we don't want shared and don't expect you to share', and I'm guessing they'd confirm that now if asked?
I do think that our community is often too cautious about sharing stuff. But I'm a bit worried about the specific case of 'normalizing big infodumps of private emails where no one technically said they didn't want the emails shared'.
(Maybe if you said more about why it's important in this specific case? The way you phrased it sort of made it sound like you think this should be the norm even for sensitive conversations where no one did anything terrible, but I assume that's not your view.)
I would just ask the other party whether they are OK to share rather than speculating about what the implicit expectation is.
Off the cuff thoughts from me listening to the Twitch conversation between Anna and Geoff:
Thanks! I would love follow-up on LW to the Twitch stream, if anyone wants to. There were a lot of really interesting things said in the text chat that we didn't manage to engage with, for example. Unfortunately the recording was lost, which is a shame because IMO it was a great conversation.
TekhneMakre writes:
This suggests, to me, a (totally conjectural!) story where [Geoff] got into an escalating narrative cold war with the rationality community: first he perceives (possibly correctly) that the community rejects him…
This seems right to me
Anna says there were in the early 2010s rumors that Leverage was trying to fundraise from "other people's donors". And that Leverage/Geoff was trying to recruit, whether ideologically or employfully, employees of other EA/rationality orgs.
Yes. My present view is that Geoff’s reaching out to donors here was legit, and my and others’ complaints were not; donors should be able to hear all the pitches, and it’s messed up to think of “person reached out to donor X to describe a thingy X might want to donate to” as a territorial infringement.
This seems to me like an example of me and others escalating the “narrative cold ...
I have video of the first 22 minutes, but at the end I switched into my password manager (not showing passwords on screen, but a series of sites where I'm registered), so I don't want to publicly post the video. I'm open to sharing it with individual people if someone wants to write something referencing it.
I wish I had been clearer about how to do screen recording in a way that only captures one browser window...
Noting that it has been 9 days and Geoff has not yet followed through on publishing the 22-minute video. Thankfully, however, a complete audio recording has been made available by another user.
On https://www.geoffanders.com/ there's the link to https://www.dropbox.com/s/pt3q5xejglsgrcr/1st%20Twitch%20Stream%20-%20Leverage%20History%20-%20Beginning.mp4?dl=0# so he did follow through.
I notice that my comment score above is now zero. I would like others to know that I visited Geoff's website prior to posting my comment to ensure my comment was accurate, and that these links appeared after my above comment.
Geoff was interested in publishing a transcript and a video, so I think Geoff would be happy with you publishing the audio from the recording you have.
I have a recording of 22 minutes. The last minute includes me switching into my password manager and thus I cut it off from the video that I passed on.
Geoff describes being harmed by some sort of initial rejection by the rationality/EA community (around 2011? 2010?).
One of the interesting things about that timeframe is that a lot of the stuff is online; here's the 2012 discussion (Jan 9th, Jan 10th, Sep 19th), for example. (I tried to find his earliest comment that I remembered, but I don't think it was with the Geoff_Anders account or it wasn't on LessWrong; I think it was before Leverage got started, and people responded pretty skeptically then also?)
I'm sort of surprised that you'd interpret that as a mistake. It seems to me like Eliezer is running a probabilistic strategy, which has both type I and type II errors, and so a 'mistake' is something like "setting the level wrong to get a bad balance of errors" instead of "the strategy encountered an error in this instance." But also I don't have the sense that Eliezer was making an error.
Good stuff. Very similar to DeMille's interview about Hubbard. As an aside, I love how the post rejects the usual positive language about "openness to experience" and calls the trait what it is: openness to influence.
While I'm not hugely involved, I've been reading OB/LW since the very beginning. I've likely read 75% of everything that's ever been posted here.
So, I'm way more clued-in to this and related communities than your average human being and...I don't recall having heard of Leverage until a couple of weeks ago.
I'm not exactly sure what that means with regard to PR-esque type considerations.
However. Fair or not, I find that, having read the recent stuff, I've got an ugh field that extends to slightly include LW. (I'm not sure what it means to "include LW"...it's just a website. My first stab at an explanation is it's more like "people engaged in community type stuff who know IRL lots of other people who communicate on LW", but that's not exactly right either.)
I think it'd be good to have some context on why any of this is relevant to LessWrong. The whole thing is generating a ton of activity and it feels like it just came out of nowhere.
Personally I think this story is an important warning about how people with a LW-adjacent mindset can death spiral off the deep end. This is something that happened around this community multiple times, not just in Leverage (I know of at least one other prominent example and suspect there are more), so we should definitely watch out for this and/or think how to prevent this kind of thing.
Also, for the extended Leverage diaspora and people who are somehow connected, LessWrong is probably the most obvious place to have this discussion, even if people familiar with Leverage make up only a small proportion of people who normally contribute here.
There are other conversations happening on Facebook and Twitter but they are all way more fragmented than the ones here.
I originally chose LessWrong, instead of some other venue, to host the Common Knowledge post primarily because (1) I wanted to create a publicly-linkable document pseudonymously, and (2) I expected high-quality continuation of information-sharing and collaborative sense-making in the comments.
As someone part of the social communities, I can confirm that Leverage was definitely a topic of discussion for a long time around Rationalists and Effective Altruists. That said, often the discussion went something like, "What's up with Leverage? They seem so confident, and take in a bunch of employees, but we have very little visibility." I think I experienced basically the same exact conversation about them around 10 times, along these lines.
As people from Leverage have said, several Rationalists/EAs were very hostile around the topic of Leverage, particularly in the last ~4 years or so. (I've heard stories of people getting shouted at just for saying they worked at Leverage at a conference). On the other hand, they definitely had support by a few rationalists/EA orgs and several higher-ups of different kinds.
They've always been secretive, and some of the few public threads didn't go well for them, so it's not too surprising to me that they've had a small LessWrong/EA Forum presence.
I've personally very much enjoyed mostly staying away from the controversy, though very arguably I made a mistake there.
(I should also note that I had friends who worked at or close to Leverage, I attended like 2 events there early on, and I applied to work there around 6 years ago.)
Sorry, edited. I meant that it was a mistake for me to keep away before, not now.
(That said, this post is still quite safe. It's not like I have scandalous information, more that, technically I (or others) could do more investigation to figure out things better.)
My own experience is somewhat like Linch's here, where mostly I'm vaguely aware of some things that aren't my story to tell.
For most of the past 9ish years I'd found Leverage "weird/sometimes-offputting, but not obviously moreso than other rationality orgs." I have gotten personal value out of the Leverage suite of memes and techniques (Belief Reporting was a particularly valuable thing to have in my toolkit).
I've received one bit of secondhand info about "An ex-leverage employee (not Zoe) had an experience that seemed reasonable to describe as 'the bad kind of cult that was actually harmful'." I was told this as part of a decisionmaking process where it seemed relevant, and asked not to share it further in the past couple years. I think it makes sense to share this much meta-data in this context.
Re: @Ruby on my brusqueness
LW/EA has more "world saving" orgs than just Leverage. Implicit to "world saving" orgs, IMO, is that we should tolerate some impropriety for the greater good. Or that we should handle things quietly in order to not damage the greater mission.
I think that our "world saving" orgs ask a lot of trust from the broader community -- MIRI is a very clear example. I'm not really trying to condemn secrecy I am just pointing out that trust is asked of us.
I recognize that this is inflammatory but I don't see a reason to beat around the bush:
Leverage really seems like a cult. It seems like an unsafe institution doing harmful things. I am not sure how much this stuff about Leverage is really news to people involved in our other "world saving" orgs. I think probably not much. I don't want "world saving" orgs to have solidarity. If you want my trust you have to sell out the cult leaders, the rapists, etcetera, regardless of whether it might damage your "world saving" mission. I'm not confident that that's occurring.
IMO, is that we should tolerate some impropriety for the greater good.
I agree!
I am just pointing out that trust is asked of us.
I agree!
Leverage really seems like a cult. It seems like an unsafe institution doing harmful things.
Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0 (remote team, focus on science history rather than psychology, 4 people).
I am not sure how much this stuff about Leverage is really news to people involved in our other "world saving" orgs.
The information in Zoe's Medium post was significant news to me and others I've spoken to.
(saying the below for general clarity, not just in response to you)
I think everyone (?) in this thread is deeply concerned, but we're hoping to figure out what exactly happened, what went wrong and why (and what maybe to do about it). To do that investigation and postmortem, we can't skip to sentencing (forgive me if that's not your intention, but it reads a bit to me that that's what you want to be happening), nor would it be epistemically virtuous or just to do so.
Some major new information came to light, people need time to process it, surface other releva...
To do that investigation and postmortem, we can't skip to sentencing (forgive me if that's not your intention, but it reads a bit to me that that's what you want to be happening), nor would it be epistemically virtuous or just to do so.
I super agree with this, but also want to note that I feel appreciation for farp's comments here. The conversation on this page feels to me like it has a missing mood: I found myself looking for comments that said something like "wow, this account is really horrifying and tragic; we're taking these claims really seriously, and are investigating what actions we should take in response". Maybe everyone thinks that that's obvious, and so instead is emphasizing the part where we're committed to due process and careful thinking and avoiding mob dynamics. But I think it's still worth stating explicitly, especially from those in leadership positions in the community. I found myself relieved just reading Ruby's response here that "everyone in this thread is deeply concerned".
I super agree with this, but also want to note that I feel appreciation for farp's comments here.
Fair!
I found myself looking for comments that said something like "wow, this account is really horrifying and tragic; we're taking these claims really seriously, and are investigating what actions we should take in response"
My models of most of the people I know in this thread feel that way. I can say on my own behalf that I found Zoe's account shocking. I found it disturbing to think that was going on with people I knew and interacted with. I find it disturbing that, if this really is true, it did not surface until now (or was ignored until now). I'm disturbed that Leverage's weirdness (and usually I'm quite okay with weirdness) turned out to enable and hide terrible things, at least for one person and likely more. I'm saddened that it happened, because based on the account, it seems like Leverage was trying to accomplish some ambitious, good things, and I wish we lived in a world where the "red flags" (group-living, mental experimentation, etc.) could have been safely ignored in the service of great things.
Suddenly I am in a world more awful than the one I thought I was in, and I'm trying to reorient. Something went wrong and something different needs to happen now. Though I'm confident it will, it's just a matter of ensuring we pick the right different thing.
Thank you, I really appreciate this response. I did guess that this was probably how you and others (like Anna, whose comments have been very measured) felt, but it is really reassuring to have it explicitly verbally confirmed, and not just have to trust that it's probably true.
Sorry, only just now saw that I was mentioned by name here. I agree that Zoe's experiences were horrifying and sad, and that it's worth quite a bit to try to spare others that kind of thing. Not mangling peoples' souls matters, rather a lot, both intrinsically (because people matter) and instrumentally (because we need integrity if we want to do anything real and sustained).
The information in Zoe's Medium post was significant news to me and others I've spoken to.
That's a good thing to assert.
It seems preeeetty likely that some leaders in the community knew more or less what was up. I want people to care about whether that is true or not.
To do that investigation and postmortem, we can't skip to sentencing
I get this sentiment, but at the same time I think it's good to be clear about what is at stake. It's easy for me to interpret comments like "Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0" as essentially claiming that, while post-mortems are useful, the situation is behind us.
Simply put, if I were a victim, I would want to speak up for the sake of accountability, not shared examination and learning. If I spoke up and found that everyone agreed the behavior was bad, but we all learned from it and are ready to move on, I would be pretty upset by that. And my understanding is that this is how the community's leaders have handled other episodes of abuse (based on zero private information, only public / second-hand information).
But I am coming into this with a lot of assumptions as an outsider. If these assumptions don't resonate with any people who are closer to the situation, then I apologize. Regardless, sorry for stirring shit up without much concrete to say.
It's easy for me to interpret comments like "Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0" as essentially claiming that, while post-mortems are useful, the situation is behind us.
Given my high priors on "past behavior is the best predictor of future behavior", I would assume that the greatest difference will be better OPSEC and PR. Also, more resources to silence critics.