Here's a long, detailed account of a Leverage experience which, to me, reads as significantly more damning than the above post: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b
Miscellaneous first-pass thoughts:
Geoff had everyone sign an unofficial NDA upon leaving agreeing not to talk badly about Leverage
I really don't like this. Could I see the NDA somehow? If the wording equally forbids sharing good and bad stuff about Leverage, then I'm much less bothered by this. Likewise if the wording forbids going into certain details, but lets former staff criticize Leverage at a sufficient level of abstraction.
Otherwise, this seems very epistemically distorting to me, and in a direction that things already tend to be distorted (there's pressure against people saying bad stuff about their former employer). How am I supposed to form accurate models of Leverage if former employees can't even publicly say 'yeah, I didn't like working at Leverage'??
One of my supervisors would regularly talk about this as a daunting but inevitable strategic reality (“obviously we’ll do it, and succeed, but seems hard”).
"It" here refers to 'taking over the US government', which I assume means something like 'have lots of smart aligned EAs with very Leverage-y strategic outlooks rise to the top decision-making ranks of the USG'. If I condition on 'Leverage staff have a high probability ...
???? I'm so confused about what happened here. The aliens part (as stated) isn't a red flag for me, but the Kant thing seems transparently crazy to me. I have to imagine there's something being lost in translation here, and missing context for why people didn't immediately see that this person was having a mental breakdown?
FWIW, my own experience is that people often miss fairly blatant psychotic episodes; so I'm not sure how Leverage-specific the explanation needs to be for this one. For example, once I came to believe that an acquaintance was having a psychotic episode and suggested he see a psychiatrist; the psychiatrist agreed. A friend who'd observed most of the same data I had asked me how I'd known. I said it was several things, but that the bit where our acquaintance said God was talking to him through his cereal box was one of the tip-offs from my POV. My friend's response was "oh, I thought that was a metaphor." I know several different stories like this one, including a later instance where I was among those who missed what in hindsight was fairly blatant evidence that someone was psychotic, none of which involved weird group-level beliefs or practices.
I'd guess that the people in question had a mostly normal air to them during the episode, just starting to say weird things?
Most people's conception of a psychotic episode probably involves a sense of the person acting like a stereotypical obviously crazy person on the street. Whereas if it's someone they already know and trust, just acting slightly more eccentric than normal, people seem likely to filter everything the person says through a lens of "my friend's not crazy so if they do sound crazy, it's probably a metaphor or else I'm misunderstanding what they're trying to say".
???? I'm so confused about what happened here. The aliens part (as stated) isn't a red flag for me, but the Kant thing seems transparently crazy to me. I have to imagine there's something being lost in translation here, and missing context for why people didn't immediately see that this person was having a mental breakdown?
I would imagine that other people saw his relationship to Kant as something like Kant being a Shoulder Advisor, maybe with additional steps to make it feel more real.
In an environment where some people do seances and use crystals to clean negative energy, they might have thought that believing in the realness of rituals makes them more effective. So someone who manages to reach the point of literally believing they are talking to Kant, rather than just to some abstraction of Kant in their own mind, would be seen as more powerful.
I do think they messed up here by not understanding why truth is valuable, but I can see how things played out that way.
If I condition on 'Leverage staff have a high probability of succeeding here', then I could imagine that a lot of the factors justifying confidence are things that I don't know about (e.g., lots of people already in high-ranking positions who are quietly very Leverage-aligned). But absent a lot of hidden factors like that, this seems very overconfident to me, and I'm surprised if this really was a widespread Leverage view.
They seem to have believed that they could give people Musk-level competence. A hundred people with Musk-level competence might execute a plan like the one Cummings proposed to successfully take over the US government.
If they really could transform people in that way, the confidence might be reasonable. Stories like Zoe's, however, suggest that they didn't really have the ability to do that, and that instead their experiments dissolved into strange infighting and losing touch with reality.
Isn't the thing Rob is calling crazy that someone "believed he was learning from Kant himself live across time", rather than believing that e.g. Geoff Anders is a better philosopher than Kant?
Yeah, I wasn't talking about the 'better than Kant' thing.
Regarding the 'better than Kant' thing: I'm not particularly in awe of Kant, so I'm not shocked by the claim that lots of random people have better core philosophical reasoning skills than Kant (even before we factor in the last 240 years of philosophy, psychology, etc. progress, which gives us a big unfair advantage vs. Kant).
The part I'm (really quite) skeptical of is "Geoff is the best philosopher who’s ever lived". What are the major novel breakthroughs being gestured at here?
CFAR recently hosted a “Speaking for the Dead” event, where a bunch of current and former staff got together to try to name as much as we could of what had happened at CFAR, especially anything that there seemed to have been (conscious or unconscious) optimization to keep invisible.
CFAR is not dead, but we took the name anyhow from Orson Scott Card’s novel by the same name, which has quotes like:
“...and when their loved ones died, a believer would arise beside the grave to be the Speaker for the Dead, and say what the dead one would have said, but with full candor, hiding no faults and pretending no virtues.”
...“A strange thing happened then. The Speaker agreed with her that she had made a mistake that night, and she knew when he said the words that it was true, that his judgment was correct. And yet she felt strangely healed, as if simply saying her mistake were enough to purge some of the pain of it. For the first time, then, she caught a glimpse of what the power of speaking might be. It wasn’t a matter of confession, penance, and absolution, like the priests offered. It was something else entirely. Telling the story of who she was, and then realizing that she was no longer th...
I felt strong negative emotions reading the above comment.
I think that the description of CFAR’s recent speaking-for-the-dead leaves readers feeling positive and optimistic and warm-fuzzy about the event, and about its striving for something like whole truth.
I do believe Anna's report that it was healing and spacious for those who were there, and I share Anna's hope that something similarly good can happen re: a Leverage conversation.
But I think I see the description of the event as trying to say something like “here’s an example of the sort of good thing that is possible.”
And I wanted anyone updating on that particular example to know that I was invited to the event, and declined the invitation, explaining that I genuinely could not cause myself to believe that I was actually welcome, or that it would be safe for me to be there.
This is a fact about me, not about the event. But it seems relevant, and I believe it changes the impression left by the above comment to be more accurate in a way that feels important.
(I was not the only staff alumnus absent, to be clear.)
I ordinarily would not have left this comment at all, because it feels dangerously ... out of control, or somethi...
The former curriculum director and head-of-workshops for the Center For Applied Rationality would not be welcome or safe at a CFAR event?
What the **** is going on?
It sounds to me like mission failure, but I suppose it could also just be eccentric people not knowing how to get along (which isn't so much different?) 😕
That's right; I am daydreaming of something very difficult being brought together somehow, in person or in writing (probably slightly less easily-visible-across-the-whole-internet writing, if in writing). I’d be interested in helping but don’t have the know-how on my own to pull it off. I agree with you there’re lots of ways to try this and make things worse; I expect it's key to have very limited ambitions and to be clear about how very much one is not attempting/promising.
I vouch that this person is both a LW user who has written IMO some good posts and a member of in-person rationalist/longtermist/EA communities who is in good standing.
Edit: This comment is not meant as an endorsement (nor is this a disendorsement) of the content of the post. I generally support LWers and rationalists being able to post pseudonymously and have their identity as longstanding members of the various communities verified.
Using psychological techniques to experiment on one another, and on the "sociology" of the group itself, was a main purpose of the group. It was understood among members that they were signing up to be guinea pigs for experiments in introspection, altering one's belief structure, and experimental group dynamics.
The Pareto program felt like it had substantial components of this type of social/psychological experimentation, but participants were not aware of this in advance and did not give informed consent. Some (maybe most?) Pareto fellows, including me, were not even aware that Leverage was involved in any way in running the program until they arrived, and found out they were going to be staying in the Leverage house.
Why doesn't the mistake page say anything about Leverage being involved with the Pareto Fellowship? Is that a statement that this part wasn't seen as a mistake?
The basic outline is:
There were ~20 Fellows, mostly undergrad-aged with one younger and a few older.
Stayed in the Leverage house for ~3 months in summer 2016, doing various trainings followed by a mentored project applying what was learnt from the trainings.
Training was mostly based on Leverage ideas but also included fast-forward versions of the CFAR workshop and the 80k workshop. Some of the content was taught by Leverage staff and some by CEA staff who were very 'in Leverage's orbit'.
I think most fellows felt that it was really useful in various ways but also weird and sketchy and maybe harmful in various other ways.
Several fellows ended up working for Leverage afterwards; the whole thing felt like a bit of a recruiting drive.
https://web.archive.org/web/20161213021354/http://www.paretofellowship.org/ is the program's self-description.
Hi all, former Leverage 1.0 employee here.
The original post and some of the comments seem epistemically low quality to me compared to the typical LessWrong standard. In particular, on top of a lot of insinuations, there are some outright falsehoods. This seems especially problematic given that the post is billed as common knowledge.
There’s a lot of dispute and hate directed towards Leverage, which, frankly, has made me hesitant to defend it online. However, a friend of mine in the community recently said something to the effect of, “Well, no former Leverage employee has ever defended it on the attack posts, which I take as an indication of silent agreement.”
That rattled me and so I’ve decided to weigh in. I typically stay quiet about Leverage online because I don’t know how to say nuanced or positive things without fear of that blowing back on me personally. For now, I’d ask to remain anonymous, but if it ever seems like people are willing to approach the Leverage topic differently, I intend to put my name on this post. I don’t expect my opinion alone (especially anonymously) to substantially change anything, but I hope it will be considered and incorporated into a coheren...
Thank you for this.
In retrospect, I could've done more in my post to emphasize:
Different members report very different experiences of Leverage.
Just because these bullets enumerate what is "known" (and "we all know that we all know") among "people who were socially adjacent to Leverage when I was around", does not mean it is 100% accurate or complete. People can "all collectively know" something that ends up being incomplete, misleading, or even basically false.
I think my experience really mismatched the picture of Leverage described by OP.
I fully believe this.
It's also true that I had at least 3 former members, plus a large handful of socially-adjacent people, look over the post, and they all affirmed that what I had written was true to their experience, fairly obvious or uncontroversial, and something they expected would be held to be true by dozens of people. Comments on this post attest to this, as well.
I don't advocate for an epistemic standard in which a single person, doing anything less than a singlehanded investigative journalistic dive, is expected to do more than that, epistemic-verification-wise, before sharing their current understanding publicly and soliciting more i...
I don't advocate for an epistemic standard in which a single person, doing anything less than a singlehanded investigative journalistic dive, is expected to do more than that, epistemic-verification-wise, before sharing their current understanding publicly and soliciting more information in the comments.
Sure, but you called the post “Common Knowledge Facts”. If you’d called the post “Me and my friends’ beliefs about Leverage 1.0” or “Basic claims I believe about Leverage 1.0” then that would IMO be a better match for the content and less of a claim to universality (that everyone should assume the content of the post as consensus and only question it if strong counter-evidence comes in).
Right now, for someone to disagree with the post, they’re in a position where they’re challenging the “facts” of the situation that “everyone knows”. In contrast I think the reality is that if people bring forward their personal impressions as different to the OP, this should in large part be treated as more data, and not a challenge.
Completely fair. I've removed "facts" from the title, and changed the sub-heading "Facts I'd like to be common knowledge" (which in retrospect is too pushy a framing) to "Facts that are common knowledge among people I know"
I totally and completely endorse and co-sign "if people bring forward their personal impressions as different to the OP, this should in large part be treated as more data, and not a challenge."
I appreciate hearing clearly what you'd prefer to engage with.
I also feel that this response doesn't adequately acknowledge how tactically adversarial this context is, and how hard it is to navigate people's desire for privacy.
( ... which makes me feel sad, discouraged, and frustrated. It comes across as "why didn't you just say X", when there are in fact strong reasons why I couldn't "just" say X.)
By "tactically adversarial", I mean that Geoff has an incredibly strong incentive to suppress clarity, and make life harder for people contributing to clarity. Zoe's post goes into more detail about specific fears.
By "desire for privacy", I mean I can't publicly lay out a legible map of where I got information from, or even make claims that are specific enough that they could've only come from one person, because the first-hand sources do not want to be identifiable.
Unlike former members, Pareto fellows, workshop attendees, and other similar commenters here, I did not personally experience anything first-hand that is "truly mine to share".
It was very difficult for me to create a document that I felt comfortable making public, without feeling I was compromising the identity of any primary...
I'm very sorry. Despite trying to closely follow this thread, I missed your reply until now.
I also feel that this response doesn't adequately acknowledge how tactically adversarial this context is, and how hard it is to navigate people's desire for privacy.
You're right, it doesn't. I wasn't that aware or thinking about those elements as much as I could have been. Sorry for that.
It was very difficult for me to create a document that I felt comfortable making public...
It makes sense now that this is the document you ended up writing. I do appreciate you went to the effort to write up a critical document to bring important concerns. It is valuable and important that people do so.
My hope is that inch by inch, step by step, more and more truth and clarity can come out, as more and more people become comfortable sharing their personal experience.
Hear, hear.
--
If you'll forgive me suggesting again what you should have written, I'm thinking the adversarial context might have been it. If I had read that you were aware of a number of severe harms that weren't publicly known, but that you couldn't say anything more specific because of fears of retribution and the need to protect privacy–that would have been a large and important update to me regarding Leverage. And it might have got a conversation going about the situation, to figure out whether and what information was being suppressed.
But it's easier to say that in hindsight.
Thanks, this all helps. At the time, I felt that writing this with the meta-disclosures you're describing would've been a tactical error. But I'll think on this more; I appreciate the input, it lands better this time.
I did write both "I know former members who feel severely harmed" and "I don't want to become known as someone saying things this organization might find unflattering". But those are both very, very understated, and purposefully de-emphasized.
Another former Leverage employee here. I agree with the bullet points in Prevlev's post. And my experience of Leverage broadly matches theirs.
It would be useful to have a clarification of these points, to know how different of an org you actually encountered, compared to the one I did when I (briefly) visited in 2014.
It is not true that people were expected to undergo training by their manager.
OK, but did you have any assurance that the information from charting was kept confidential from other Leveragers? I got the impression Geoff charted people who he raised money from, for example, so it at least raises the question whether information gleaned from debugging might be discussed with that person's manager.
“being experimented on” was not my primary purpose in joining nor would I now describe it as a main focus of my time at Leverage.
OK, but would you agree that a primary activity of Leverage was to do psych/sociology research, and a major (>=50%) methodology for that was self-experimentation?
I did not find the group to be overly focused on “its own sociology.”
OK, but would you agree that at least ~half of the group spent at least ~half of their time studying psychology and/or sociology, using the group as subjects?
...The stated purpose of Leverage 1.0 was not to literally take over the US and/or global governance
+1 for the detail. Right now there's very little like this explained publicly (or accessible in other ways to people like myself). I found this really helpful.
I agree that the public discussion on the topic has been quite poor.
This is subjective and all, but I met Geoff Anders at our 2012 CFAR workshop, and I absolutely had the "this person wants to be a cult leader" vibe from him then, and I've been telling people as much for the entire time since. (To the extent of hurting my previously good friendships with two increasingly-Leverage-enmeshed people, in the mid-2010s.)
I don't know why other people's cult-leader-wannabe-detectors are set so differently from mine, but it's a similar (though less deadly) version of how I quickly felt about a certain person [don't name him, don't summon him] who's been booted from the Berkeley community for good reason.
He's also told me, deadpan, that he would like to be starting a cult if he wasn't running Leverage.
As in, 5+ years ago, around when I'd first visited the Bay, I remember meeting up 1:1 with Geoff in a cafe. One of the things I asked, in order to understand how he thought about EA strategy, was what he would do if he wasn't busy starting Leverage. He said he'd probably start a cult, and I don't remember any indication that he was joking whatsoever. I'd initially drafted my comment as "he told me, unjokingly", except that it's a long time ago, so I don't want to give the impression that I'm quite that certain.
accumulated 30 points of karma from what seems to me to be… unimpressive as presented?
I upvoted on the value of the comment as additional source data (IIRC when the comment had much lower karma). This value shouldn't be diminished by questionable interpretation/attitude bundled with it, since the interpretation can be discarded, but the data can't be magicked up.
This is a general consideration that applies to communications that provoke a much stronger urge to mute them, for example those that defend detestable positions. If such communications bring you new relevant data, even data that doesn't significantly change your understanding of the situation, they are still precious, the effects of processing them and not ignoring them sum up over all such instances. (I think the comment to this post most rich in relevant data is prevlev-anon's, which I strong-upvoted.)
There is an important class of claims detailed enough to either be largely accurate or intentional lies, their distortion can't be achieved with mere lack of understanding or motivated cognition. These can be found even in very strange places, and still be informative when taken out of context.
The claim I see here is that orthonormal used a test for dicey character with reasonable precision. The described collateral damage of just one positive reading signals that it doesn't trigger all the time, and there was at least one solid true positive. The wording also vaguely suggests that there aren't too many other positive readings, in which case the precision is even higher than the collateral damage signals.
Since base rate is lower than the implied precision, a positive reading works as evidence. For the opposite claim, that someone has an OK character, evidence of this form can't have similar strength, since the base rate is already high and there is no room for precision to get significantly higher.
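To make the asymmetry concrete with purely made-up numbers (nothing below comes from orthonormal's actual track record; the rates are assumptions for illustration), here is a minimal sketch of the update in Python:

```python
# Hypothetical figures, for illustration only.
base_rate = 0.02           # prior P(dicey character) in the relevant population
p_fire_given_dicey = 0.30  # assumed chance the detector fires on a dicey person
p_fire_given_ok = 0.05     # assumed chance it fires on an OK person

# Positive reading: the 2% prior moves up roughly fivefold.
p_fire = p_fire_given_dicey * base_rate + p_fire_given_ok * (1 - base_rate)
posterior_dicey = p_fire_given_dicey * base_rate / p_fire
print(round(posterior_dicey, 2))  # ~0.11

# Negative reading: the already-high prior that someone is OK barely moves.
p_quiet = 1 - p_fire
posterior_ok = (1 - p_fire_given_ok) * (1 - base_rate) / p_quiet
print(round(posterior_ok, 3))  # ~0.985, vs. a 0.98 prior
```

With numbers in this range, a positive reading is real evidence (the posterior is several times the base rate), while the absence of a reading can only nudge the already-high prior that someone's character is fine.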
It's still not strong evidence, and directly it's only about character in the sense of low-level intuitive and emotional inclinations. This is in turn only weak evidence of actual behavio...
The culture of Homo Sabiens often clashes pretty hard with the culture of LessWrong, so I can't speak to how this will shake out overall.
But in the culture of Homo Sabiens, and in the-version-of-LessWrong-built-and-populated-by-Duncans, this is an outstanding comment, exhibiting several virtues, and also explicitly prosocial in its treatment of orthonormal and RyanCarey in the process of disagreement (being careful and explicit, providing handholds, preregistering places where you might be wrong, distinguishing between claims about the comments and about the overall people, being honest about hypotheses and willing to accept social disapproval for them, etc.)
I have strong-upvoted and hope further interaction with RyanCarey and orthonormal and other commenters both a) happens, and b) goes well for all involved. I would try to engage more substantively, but I'm currently trying to kill a motte-and-bailey elsewhere.
> some of the people who don’t like us
https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-facts-about-leverage-research-1-0-1?commentId=jSCFY2ypMpvAZr8sy
> However, I would also like to note that Leverage 1.0 has historically been on the receiving end of substantial levels of bullying, harassment, needless cruelty, public ridicule, and more by people who were not engaged in any legitimate epistemic activity. I do not think this is OK. I intend to call out this behavior directly when I see it. I would ask that others do so as well.
https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-facts-about-leverage-research-1-0-1?commentId=hqDXAtk6cnqDStkGC
It would be sad if people came away with the idea that the OP was motivated by hate, jealousy, or tribalism. I think the OP is motivated out of deep compassion for the wider community.
Leverage keeps coming up because Geoff Anders (and associates) emit something epistemically and morally corrosive and are gaslighting the commons about it. And Geoff keeps trying to disingenuously hit the reset button and hide it, to exploit new groups of people. That’s what people are responding to and trying to counterac...
Leverage keeps coming up because Geoff Anders (and associates) emit something epistemically and morally corrosive and are gaslighting the commons about it. And Geoff keeps trying to disingenuously hit the reset button and hide it, to exploit new groups of people. That’s what people are responding to and trying to counteract in posts like the OP.
This seems pretty unfair to me and I believe we’re trying quite hard to not hide the legacy of Leverage 1.0. For example, we (1) specifically chose to keep the Leverage name; (2) are transparent about our intention to stand up for Leverage 1.0; and (3) make Geoff’s association with Leverage 1.0 quite clear on his personal website. Additionally, given the state of Leverage’s PR after Leverage 1.0 ended, the decision to keep the name was quite costly and stemmed from a desire to preserve the legacy of Leverage 1.0.
You know, I'm not necessarily a great backer of Leverage Research, especially some of its past projects, but I feel the level of criticism that it has faced relative to other organizations in the space is a bit bizarre. Many of the things that Leverage is criticized for (such as being secretive, seeing themselves at least in part as saving the world, investing in projects that look crazy to intelligent outsiders, etc.) in my view apply to many rationalist/EA organizations. This is not to say that those other organizations are wrong to do these things necessarily, just that it's weird to me that people go after Leverage-in-particular for reasons that often don't seem to be consistently applied to other projects in the space.
(I have never been an employee of Leverage Research, though at one point they were potentially interested in recruiting me and I was not interested; at another point I checked in re: potentially working there but didn't like the sound of the projects they seemed to be recruiting for at the time.)
EDIT 10/13: My original comment was written before the Medium post from Zoe Curzi. The contents of that Medium post are very concerning to me and seem very unlike what I've encountered in other rationalist or EA organizations.
The new Medium post does imply that Leverage cannot be simply lumped with other EA/Rationalist orgs (I too haven't heard anything that concerning reported of any other org), but I don't think that invalidates your original point that the criticisms in this post, as written, could be levelled at many orgs. (I actually wrote such a damning-sounding list for LessWrong/Lightcone).
I agree, but I wanted to be clear that my original comment was largely in reply to the original post and in my view does not much apply to the Medium post, which I consider much more specific and concerning criticism.
My own strong agreement with the content makes it hard to debias my approval here, but I want to generally massively praise edits that explicitly cross out the existing comment, and state that they've changed their minds, and why they've done so.
(There are totally good reasons to retract without comment, of course, and I'm glad that LW now offers this option. I'm just giving Davis credit for putting his update out there like this.)
Wanna +1 that all these things are points I've heard from people who were at Leverage, also. I also have a more negative opinion of Leverage than might be implied by the points alone, for the record.
Speaking personally, based on various friendships with people within Leverage, attending a Leverage-hosted neuroscience reading group for a few months, and having attended a Paradigm Academy weekend workshop.
I think Leverage 1.0 was a genuine good-faith attempt at solving various difficult coordination problems. I can’t say they succeeded or failed; Leverage didn’t obviously hit it out of the park, but I feel they were at least wrong in interesting, generative ways that were uncorrelated with the standard and more ‘boring’ ways most institutions are wrong. Lots of stories I heard sounded weird to me, but most interesting organizations are weird and have fairly strict IP protocols so I mostly withhold judgment.
The stories my friends shared did show a large focus on methodological experimentation, which has benefits and drawbacks. Echoing some of the points, I do think when experiments are done on people, and they fail, there can be a real human cost. I suspect some people did have substantially negative experiences from this. There’s probably also a very large set of experiments where the result was something like, “I don’t know if it was good, or if it was bad, but something feels dif...
inventing and spreading various rationality techniques
Besides belief reporting, which rationality techniques did they invent and spread into the community for which they should get credit?
Goal factoring is another that comes to mind, but people who worked at CFAR or Leverage would know the ins and outs of the list better than I.
My understanding is that Geoff Anders and Andrew Critch each independently invented goal factoring, and had even been using the same diagramming software to do it! (I'm not sure which one of them first brought it to CFAR.)
Geoff Anders was the first one to teach it at CFAR workshops, I think in 2013. This is the first time I've heard claims of independent invention, at the time all the CFAR people who mentioned it were synced on the story that Anders was a guest instructor teaching a technique that Leverage had developed. (Andrew Critch worked at CFAR at the time. I don't specifically remember whether or not I heard anything about goal factoring from him.)
Anna & Val taught goal factoring at the first CFAR workshop (May 2012). I'm not sure if they used the term "goal factoring" at the workshop (the title on the schedule was "Microeconomics 1: How to have goals"), but that's what they were calling it before the workshop including in passing on LW. Geoff attended the third CFAR workshop as a participant and first taught goal factoring at the fourth workshop (November 2012), which was also the first time the class was called "Goal Factoring". Geoff was working on similar stuff before 2012, but I don't know enough of the pre-2012 history to know if there was earlier cross-pollination between Geoff & CFAR folks.
Critch developed aversion factoring.
In this video from March 2014 https://www.youtube.com/watch?v=k255UjGEO_c Andrew Critch says he developed "Aversion factoring".
When I learned it from Geoff in 2011, they were recommending yEd Graph Editor. The process is to generally write things you do or want to do as nodes, and then connect them to each other using "achieves or helps to achieve" edges (e.g., if you go to work, that achieves making money, which achieves other things you want).
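For anyone who hasn't used yEd or seen such a diagram, here is a minimal sketch of the same structure in code; the node names are invented examples, not anything from an actual Leverage or CFAR curriculum:

```python
# Goal-factoring diagram as a directed graph: an edge from X to Y means
# "doing X achieves, or helps to achieve, Y".
achieves = {
    "go to work": ["make money", "see coworkers"],
    "make money": ["pay rent", "save for travel"],
    "see coworkers": ["feel socially connected"],
}

def downstream_goals(action, graph):
    """Return everything an action ultimately helps to achieve."""
    seen = []
    frontier = list(graph.get(action, []))
    while frontier:
        goal = frontier.pop()
        if goal not in seen:
            seen.append(goal)
            frontier.extend(graph.get(goal, []))
    return seen

print(downstream_goals("go to work", achieves))
# everything "go to work" ultimately helps to achieve, e.g. "pay rent"
```

As I understand it, the diagramming-tool version is the same thing drawn visually, which makes it easier to notice activities that serve no goal you care about, or goals that could be served by some cheaper activity.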
facts that are more-or-less "common knowledge" among people who spent time socially adjacent to Leverage
Yup, sounds right. As someone who visited the rationality community in the bay a bunch in 2013-2018, almost nothing listed in the bullet points was a surprise to me, and off-hand I can think of dozens of other people who I would assume also know almost everything written above. (I'm sure there are more such people, that I haven't met or wouldn't remember.)
I don't have anything in particular to say about the implications of these facts, just seemed worth mentioning this thing re common knowledge.
(The main thing I hadn't heard about was the sexual relationships bullet point.)
man, i'm kinda mad about something going on with this "knowledge" word. i'd really like to insert some space in here between "lots of people believe a thing" and "lots of people know a thing".
i believed most of the bullet points in a low-confidence, easy-to-change-my-mind kind of way. the real thing is that all the bullet points have been widely rumored. it's not the case that all those rumoring people had justified true belief that everyone else had justified true belief about the bullet points, or whatever. if you announce a bunch of rumors with the word "knowledge" attached, it increases people's confidence and a bunch of switches in their mind flip from "here's a hypothesis i'm holding lightly because it came from the rumor mill" over to "yeah i wasn't surprised to hear those things, yet now i'm even more sure of them".
and like, i do recognize that in the vernacular, "common knowledge" (everyone knows everyone knows) isn't really distinguished from a weaker thing that might be called "common belief" (everyone at-least-somewhat-believes everyone at-least-somewhat-believes). but that doesn't mean we should go around conflating such things all to hell like normal people do.
ugh blerg grump. i am kind of exasperated. i guess i really want the top level post to own a bunch more of its shit, epistemically.
and i didn't really mean to direct all of that right at you, Malcolm, your comment just helped the blergness snap into place in my head enough that i ended up typing things.
Thanks for this. I think these distinctions are important.
Let me clarify: In this post when I say "Common knowledge among people who spent time socially adjacent to Leverage", what I mean is:
I believe there are several dozen people in the set of people this is true of.
So I did mean "People in my circles all know that we all know these things", and by "know" I meant "believe, with sourcing to multiple independent first-hand witnesses".
I do not count you as being in the "common knowledge" set, as your self-report is that you lightly believed these based on third-hand information that was "widely rumored". Rather than having been directly told it by a member; witnessing others being directly told it by members; and having people tell you they were directly tol...
Hi, I'm Olli Payne. I first encountered Leverage in person during the summer of 2018 and worked at Paradigm from August 2019 through April 2020.
I moved to the Bay from NYC in April 2018, after hearing about communities there (EA, Rationality, Leverage, Futurism, etc) that are focused on thinking long-term and having a large positive impact, something that resonated with me and my goals. After attending several EA meetups, I went to a few EA Global afterparties, including one at Leverage's Lake Merritt apartment.
I'd already started to hang out with Leverage employees who I'd met at the afterparty when I requested to be invited to a Paradigm workshop. I attended the workshop in June of 2018 and after finding the tools incredibly useful, I began to pursue a job at Leverage.
During the year before I was hired at Paradigm, I became friends with many employees of both Paradigm and Leverage. We went bouldering, saw movies, played video games, tried to perfect the baking of pies... I'm very happy to say that I'm still close with many of these friends.
This was my take-away from being around Leverage 1.0:
The organization and its members did have the stated goal of "world-saving," but that phras...
Participation in the project involved secrecy / privacy / information-management agreements.
How strong were those agreements? How much were the participants allowed to share privately with friends, family or outside therapists?
Yup. I have known all of these things since 2018-2019, and know or know of maybe a few dozen people who also know these things. I’m glad this bare minimum is being discussed openly, publicly.
Secondhand, I have a very negative view of at least some parts of what happened in Leverage 1.0. My best guess is that the relationships and events that some people have (mostly privately) described as controlling or abusive were not evenly distributed across the whole organisation. So it would have been straightforward for someone to be working at Leverage and never see or get deeply involved with situations that a handful of people have, in private or in semi-public conversations, described clearly as cultic abuse. It seems like there are on the order of dozens of people who probably had a roughly fine time being involved in Leverage for many years, and at least a handful of people who report much more negative experiences.
(I’m @utotranslucence on Twitter; never officially had a LessWrong account before but been around the Bay Area community since 2017. I attended one Paradigm training weekend in early 2018 and some parties at the Lake Merritt building but most of my knowledge comes from conversations with friends who did work there, and there are plenty of things I still don’t know with great clarity.)
It seems plausible that in the future, if there aren't already, there will be many groups that use the language and terminology of rationality to serve more self-interested and orthogonal objectives.
I do worry about "ends justify the means" reasoning when evaluating whether a person or project was or wasn't "good for the world" or "worth supporting". This seems especially likely when using an effective-altruism-flavored lens on which only a few people/organizations/interventions will matter orders of magnitude more than others. If one believes that a project is one of very few projects that could possibly matter, and the future of humanity is at stake - and also believes the project is doing something new/experimental that current civilization is inadequate for - there is a risk of using that belief to extend unwarranted tolerance of structurally-unsound organizational decisions, including those typical of "high-demand groups" (such as use of psychological techniques to increase member loyalty, living and working in the same place, non-platonic relationships with subordinates, secrecy, and so on), without proportionate concern for the risks of structuring an organization in that way.
There is (roughly) a sequences post for that. :P
My name is Joe Corabi. I am a philosophy professor at Saint Joseph’s University in Philadelphia and a longtime friend of Geoff Anders. I have known Geoff since we were grad students together at Rutgers. We have also collaborated over the years on a number of philosophical projects, both related to and separate from Leverage Research.
I have been a volunteer off and on at Leverage since its founding and I wanted to share my experience of Leverage in the hopes that it provides some unique evidence about the organization and its history. I was troubled by the recent Less Wrong post and I spoke to Geoff about the possibility of writing something that can hopefully provide some additional context for those looking to evaluate the situation.
I was initially drawn to Leverage by the enthusiasm of its members and Geoff’s vision for the organization. In my view, which is from someone who has spent over 20 years studying philosophy in an intensive way, Geoff is a highly skilled philosopher who has both an expert knowledge of the field and a sensitivity to methodological concerns, the combination of which is quite rare. In my view, professional ana...
For those who think the above description reads like that of a typical cult, it's worth reading how a description of an actual cult reads.
There's currently a cult trying to take over a place that hosts personal development seminars (and I know people personally who went there for seminars unrelated to the cult).
https://metamoderna.org/how-a-psychedelic-sex-cult-infiltrated-a-german-ecovillage/
Am I crazy or was something really similar to this, with the same thing of asking for a LW moderator to vouch, posted like a year ago? I didn't immediately find it by searching.
https://forum.effectivealtruism.org/posts/qYbqX3jX4JnTtHA5f/leverage-research-reviewing-the-basic-facts
This is also a useful resource, and the pingbacks link to other resources.
I want to gesture at "The Plan", linked from Gregory Lewis's comment (https://forum.effectivealtruism.org/posts/qYbqX3jX4JnTtHA5f/leverage-research-reviewing-the-basic-facts?commentId=8goitqWAZfEmEDrBT), as supporting evidence for the explicit "take over the world" vibe, in terms of how exactly beneficial outcomes for humanity were meant to result from the project. It's best viewable as a PDF.
This reminds me of the focusing/circling/NVC discussions, one group (to which I belonged) was like "this is obviously culty mindfuckery, can't you see" and the other group couldn't see, and arguments couldn't bridge that gap. It's like how some people can recognize bullying and others will say "boys will be boys", while looking at the exact same situation.
I can verify that I saw some version of their document The Plan[1] (linked in the EA Forum post below) in either 2018 or 2019 while discussing Leverage IRL with someone rationalist-adjacent whom I don't want to doxx. While I don't have first-hand knowledge (so you might want to treat this as hearsay), my interlocutor did, and told me that people at Leverage believed they were the only ones with a workable plan; my interlocutor also described the veneration of Geoff.
[1]: I don't remember all of the exact details, but I do remember the shape of the flowchart and that looks like it. It's possible that my interlocutor also got it from Gregory Lewis, but I don't think so.
Figured I'd chime in too—I'm Jordan Alexander and I was one of the Pareto Fellows back in 2016. I've been involved with the EA and rationality community to various degrees since then (Stanford EA, internship at CHAI, active GWWC pledge) so I thought I'd give my account of the program. I recognize that other people may have had different experiences during the program and that there may have been issues that I was not personally aware of as a participant in the program.
As for my relationship with Leverage: I have a few friends at Leverage, though we're not in close contact. I participated in Paradigm Coaching (essentially a combination of personal and professional one-on-one coaching) for a few months at the end of 2019 and found it incredibly helpful while working on the mundane problem known as "job-hunting". Finally, one of my friends at Leverage reached out and asked me if I was interested in sharing my experience at the Pareto Fellowship after this post popped up. Frankly I'm annoyed that I have to do this but it seems unfair that these sorts of posts reappear every year. I work as a software engineer and have no professional or financial ties to Leverage.
Here's an...
Given the comments that have surfaced, it sounds like my annoyance at these posts was unjustified and that 1) I underestimated how long it takes for structural weaknesses to surface and have effects that are clearly visible to outsiders, and 2) underrated how valuable it was to open a space for people to share their experiences with Leverage. Glad that the original post was able to do this in a way that preserved anonymity for people who understandably needed it.
I also want to highlight that while I still stand by my personally positive experience at the Pareto Fellowship in 2016, this is not meant to be a universal account of events [and certainly not of Leverage Research], and a proper judgement of the program itself would involve polling a representative sample of former Pareto Fellows.
Finally, I recognize that it's especially difficult to recount experiences when someone has experienced deep trauma so thanks to Zoe Curzi for the courage involved in telling her story and to anyone else sharing their experiences, anonymously or otherwise.
Thanks for taking the time to recount your experiences there.
I do want to register that I expect the experience afforded to fellows as part of a few-month program to be different, and milder, than what long-term employees would experience.
I'm not sure what the meaning, if any, of the following fact is, but: I notice that I would feel very positively about Leverage as it's portrayed here if there weren't relationships with multiple younger subordinates (e.g. if the leader had been monogamously married), and as it is I feel mildly negative about it on net.
That wasn't necessary evidence for me. The secrecy + "charting/debugging" + "the only organization with a plan that could possibly work, and the only real shot at saving the world" is (if true) adequate to label the organization a cult (in the colloquial sense). These are all ideas/systems/technologies that are consistently and systematically used by manipulative organizations to break a person's ability to think straight. Any two of these might be okay if used extremely carefully (psychiatry uses secrecy + debugging), but having all three brings it solidly into cult territory. Also, psychiatry has lots of rules to prevent abuse, including public, well-established ethical standards.
Are Leverage's standard operating procedures auditable knowledge to outsiders? If not, this is the mother of all red flags and we should default to "cult".
Edit: LarissaRowe didn't reply to this comment because Leverage doesn't have a leg to stand on.
Edit ×2: Shaming someone into a response violates the norms of Less Wrong. The first edit was a mistake. I apologize.
psychiatry uses secrecy
In psychiatry there's no secrecy for treatment protocols and there are no secrecy rules for patients that prevent them from sharing about their experience.
That's a good point. The psychiatrist (who has power) is sworn to secrecy but the patient (who is vulnerable) isn't.
The real problem is having the belief that you are the only organization with a plan that might work, while at the same time requiring secrecy that prevents participants from getting feedback from the outside world that might make them doubt that this is the case. If you then add strong self-modification techniques that also strengthen the belief, that's no good environment.
>"the
>Are
Formatting note — if you put a space between the '>' and the next character, it'll format correctly as a proper block quote.
It provides an alternative version of the motivation of the entire project. More disturbingly, the alternative seems to explain some facts better, such as why after all that work and money spent, after all the grandiose secret plans, there is still no tangible output.
EDIT: The part "no tangible output" was not fair, I apologize for that. I am not updating the comment, because it would feel like moving the goalpost.
I appreciate the edit, Viliam.
I know that it was a meme about Leverage 1.0 that it was impossible to understand, but I think that is pretty unfair today. If anyone is curious here are some relevant links:
We're no longer engaged with the Rationality community so this information might not have become common knowledge. Hopefully, this helps.
I think Bismarck Analysis (a consulting company), Paradigm Academy (training), and Reserve (a cryptocurrency) all came out of Leverage.
I should clarify upfront that I am not a rationalist, and am not a fan of LessWrong.
That said, I have some experience when it comes to... this sort of thing.
So when I was a little younger, I was the figurehead and leader of a sex cult. (Oddly enough, I did this without ever really understanding that it was, in fact, a sex cult. One of my best friends described this as a "Jerry Smith plot", which I found hilarious.) This cult was, in practice, a discord server focused around my erotic hypnosis work. I copied the model from another server that was definitely a sex cult, and tried to strip out all of the culty elements and just leave the aesthetics (because a lot of us liked the aesthetics). But you really can't reconstitute the structure of a high-control group without, in various ways, reconstituting the behavior of a high-control group. It doesn't work - the culty shit works its way back in if you're not extremely careful. And I was not careful, for reasons that may be obvious if you think about the perks one gets as the figurehead of a sex cult. A lot of people got hurt.
Why bring that up? Because hoo boy does this tick a lot of similar boxes.
A lot of things scream "hig...
There's a lot going on in this comment, but I note with interest that this is the first time I've seen someone weigh in on questions of cultish behavior from the perspective of a former cult leader.
I'm fascinated with the claim that if you take on the outer facade of a cult, you now have a strong incentive gradient to turn up the cultishness (maybe because you're now drawing in people who are looking for more of that, and driving away anyone who's put off by it). Obviously the claim needs more than one person's testimony, but it makes sense.
I wonder if some early red flags with Leverage (living together with your superiors who also did belief reporting sessions with you, believing Geoff's theories were the word of god, etc) were explicitly laughed off as "oh, haha, we know we're not a cult, so we can chuckle about our resemblances to cults".
I think from a world and historical perspective, dating subordinates is a very common thing. The American cult bundle of traits is much more specific and rare. For me, the first red flag is shared housing for followers of the idea. Any movement that does it is already kind of weird to me (including the rationalist movement). If there's also some kind of group psychological exercise, that takes it all the way to "nope" (again, including some parts of the rationalist movement).
Hi BayAreaHuman,
I just posted an update on behalf of Leverage Research to LessWrong along with an invite to an AMA with Leverage Research next weekend, as it seems from the comments that there isn’t a lot of common knowledge about our current work or other aspects of our history. I encourage people to read this for additional context, and I hope the OP will be able to update this post to incorporate some of that.
I also want to briefly address some of the items raised here.
Information management policies
Leverage Research has for a long time been concerned about the potential negative consequences of the misuse of knowledge garnered from research. These concerns are widely shared for research in the hard sciences (e.g., nuclear physics), but are valid as well for the social sciences.
Starting in 2012, Leverage Research had an information management policy designed to prevent negative consequences from the premature dissemination of information. Our information policy from 2012-2016 required permission for the release of longform information on the internet. We had an information approval team, with most information release requests being approved. ...
If anyone is aware of harms or abuses that have taken place involving staff at Leverage Research, please email me, in confidence, at larissa@leverageresearch.org or larissa.e.rowe@gmail.com.
I would suggest that anything in this vein should be reported to Julia Wise, as I believe she is a designated person for reporting concerns about community health, harmful behaviours, abuse, etc. She is unaffiliated with Leverage, and is a trained social worker.
To add detail about my mistake:
When you asked if you could confidentially send me a draft of your post about Will's book to check, I said yes.
The next week you sent me a couple more emails with different versions of the draft. When I saw that the draft was 18 pages of technical material, I realized I wasn't going to be a good person to review it. That's when I forwarded to someone on Will's team asking if they could look at it instead of me.
I should never have done that, because your original email asked me not to share it with anyone. For what it’s worth, the way that this happened is that when I was deciding what to do with the last email in the chain, I didn't remember and didn't check that the first email in the chain requested confidentiality. This was careless of me, and I’m very sorry about it.
I think the underlying mistake I made was not having this kind of situation flagged as sensitive in my mind, which contributed to my forgetting the original confidentiality request. If the initial email had been about some more personal situation, I am much more sure it would have been flagged in my mind as confidential. But because this was a critique of a book, I had it flagged as something like “document review” in my mind. This doesn’t excuse my mistake - and any breach of trust is a serious problem given my role - but I hope it helps show that it wasn’t intentional.
I now try to be much more careful about situations where I might make a similar mistake.
I've now added info on this to the post about being a contact person and to CEA's mistakes page.
Personally, I don't really blame you or think less of you for this screwup. I never got the impression that you are the sort of person who should be sent confidential book review drafts. Maybe you'd disagree, but that seems like a misunderstanding of your role to me.
It seemed clear to me that you made yourself available to confidential reports regarding conflict, abuse, and community health. Not disagreements with a published book. It makes sense that you didn't have a habit of mentally flagging those emails as confidential.
Regardless, I trust that you've been more careful since then, and I appreciate how clearly you own up to this mistake.
I want to offer my +1 that I strongly believe Julia's trustworthy for reports regarding Leverage.
Saying "I'm sorry I broke your trust" without engaging in any consequences for it feels cheap. To me such a mistake feels like you owe something to guzey.
One thing you could have done if you actually cared would have been to advocate for guzey in this exchange even if that goes against your personal positions.
Only admitting the mistake at comments and not in a more visible manner also doesn't feel like you treat it seriously enough. It likely deserves the same treatment as the mistakes on https://www.centreforeffectivealtruism.org/our-mistakes
Only admitting the mistake in the comments and not in a more visible manner also doesn't feel like you treat it seriously enough. It likely deserves the same treatment as the mistakes on https://www.centreforeffectivealtruism.org/our-mistakes
For what it's worth, I do think this is probably a serious enough mistake to go on this page.
Wow, that is very bad. Personally I'd still trust Julia as someone to report harms from Leverage to, mostly from generally knowing her and knowing her relationship to Leverage, but I can see why you wouldn't.
One of the negative consequences of our information policy, as we have learned, is the way it made some regular interactions with people outside of the relevant information circles more difficult than intended.
Is Leverage willing to grant a blanket exemption from the NDAs which people evidently signed, to rectify the potential ongoing harms of not having information available? If not, can you share the text of the NDAs?
Hi Larissa -
Dangers and harms from psychological practices
Please consider that the people who most experienced harms from psychological practices at Leverage may not feel comfortable emailing that information to you. Given what they experienced, they might reasonably expect the organization to use any provided information primarily for its own reputational defense, and to discredit the harmed parties.
Dating policies
Thank you for the clarity here.
Charting/debugging was always optional
This is not my understanding. My impression is that a strong expectation was established by individual trainers with their trainees. And that charting was generally done during the hiring process. Even if the stated policy was that it was not required/mandatory.
It seems that Leverage is currently planning to publish a bunch of their techniques, and from Leverage's point of view, there are considerations that releasing the techniques could be dangerous for people using them. To me that does suggest a sincere desire to use provided information in a useful way.
If you are interested in being involved in the beta testing of the starter pack, or if you have experienced negative effects from psychological experimentation, including with rationality training, meditation, circling, Focusing, IFS, Leverage’s charting or belief reporting tools (or word-of-mouth copies of these tools), or similar techniques please do reach out to us at contact@leverageresearch.org. We are keen to gain as much information as possible on the harms and dangers as we prepare to release our psychology research.
If there are particular people who feel that they have been damaged, it would be great to still have a way for that information to reach Leverage. Maybe a third party could be found to mediate the conversation?
Is there anything else you could think of that would be a credible signal that Leverage is sincere about seeking the information about harms?
I didn't downvote ChristianKl's comment, but I feel like it's potentially a bit naive.
>Is there anything else you could think of that would be a credible signal that Leverage is sincere about seeking the information about harms?
In my view, the question isn't so much about whether they genuinely don't want harms to happen (esp. because harming people psychologically often isn't even good for growing the organization, not to mention the reputational risks). I feel like the sort of thing ChristianKl pointed out is just a smart PR move given what people already think about Leverage, and, conditional on the assumption that Leverage is concerned about their reputation in EA, it says nothing about genuine intentions.
Instead, what I'd be curious to know is whether they have the integrity to be proactively transparent about past mistakes, to radically change course when it comes to potentially harmful practices, and to refrain from using any potentially harmful practices in cases where doing so might be advantageous on a Machiavellian-consequentialist assessment. To ascertain those things, one needs to go beyond looking at stated intentions. "Person/organization says nice-sounding thing, so they seem genuinely concerned about nice aims, therefore stop being so negative" is a really low bar and probably leads to massive over-updating in people who are prone to being too charitable.
I think the fact that it is now a four-person remote organization doing mostly research on science, as opposed to an often-live-in organization with dozens of employees doing intimate psychological experiments as well as pursuing various research paths, tells me that you are essentially a different organization, and that the only commonalities are the name and the fact that Geoff is still the leader.
If you hover over the karma counter, you can see that the comment is sitting at -2 with 12 votes, which means that there is a significant disagreement on how to judge it, not agreement that it should go away.
(It makes some sense to oppose somewhat useful things that aren't as useful as they should be, or as safe as they should be; I think that is the reason for this reaction. And then there is the harmful urge to punish people who don't punish others, or who might even dare suggest talking to them.)
I'd rather not say, for the sake of my anonymity - something which is important to me because this:
However, I would also like to note that Leverage 1.0 has historically been on the receiving end of substantial levels of bullying, harassment, needless cruelty, public ridicule, and more by people who were not engaged in any legitimate epistemic activity. I do not think this is OK. I intend to call out this behavior directly when I see it. I would ask that others do so as well.
is a real concern. I've seen it firsthand - people associated with Leverage being ostracized, bullied, and made to feel very unwelcome and uncomfortable at social events and in online spaces by people in nearby communities, including this one.
It seems like a real risk to me that any amount of personal information I give will be used to discover my identity, and I'll be subject to the same.
Which, by the way, is despicable, and I find it alarming that only one person (besides Kerry) in this thread has acknowledged this behavior pattern.
I said in another comment that I didn't make an alt to come here and "defend Leverage" - this instance is the exception to that. These people are human beings.
(quote from Kerry's co...
If people are being bullied, that's extremely bad, and if you see that and call it out you're doing a noble thing.
But all I've seen in this thread -- I can't comment on e.g. what happens in person in the Bay Area, since that's thousands of miles away from where I am -- is people saying negative things about Leverage Research itself and not about individuals associated with it, with the single exception of the person in charge of Leverage, who fairly credibly deserves some criticism if the negative things being said about the organization are correct.
Bullying people is cruel and harmful. I'm not so sure there's anything wrong with "bullying" an organization. Especially if that organization is doing harm, or if there is good reason to think it is likely to do harm in the future.
I've seen someone from a different org, but with a similar valence in the community, get treated quite poorly at a party when they let their association be known. It was like the questioner stopped seeing them as a person with feelings and only treated them as an extension of the organization. I felt gross watching it and regret not saying anything at the time.
It seems overwhelmingly likely to me that Leveragers faced the same thing, and also that some members lumped some legitimate criticisms or refusals to play along in with this unacceptable treatment, because that's a human thing to do.
ETA: I talked to the person in question and they don't remember this, so apparently it made a bigger emotional impression on me than them (they remembered a different convo at the same event that seemed like the same kind of thing, but didn't report it being particularly unpleasant). I maintain that if I were regularly subject to what I saw it would have been quite painful, and imagine that to be true for at least some other people.
I'm not so sure there's anything wrong with "bullying" an organization.
There's a pragmatic question of building a reliable theory of what's going on, which requires access to the facts. Even a trivial inconvenience in communicating those facts, for the people who have them, does serious damage to this process's ability to understand what's going on.
The most valuable facts are those that contradict the established narrative of the theory; they can actually be relevant for improving it, for there is no improvement without change. Seeing a narrative that contradicts the facts someone has is already disheartening, so everything else that could possibly be done to make sharing easier, and not harder, should absolutely be done.
No comment on your larger point but
Saying that someone is in a cult (though I note that most people have been pretty careful not to use quite that terminology) isn't an accusation. Not at the person in question, anyway.
"You are in a cult" is absolutely an accusation directed at the person. I can understand moral reasons why someone might wish for a world in which people assigned blame differently, and technical reasons why this feature of the discourse makes purely descriptive discussions unhelpfully fraught, but none of that changes the empirical fact that "You are in a cult" functions as an accusation in practice, especially when delivered in a public forum. I expect you'll agree if you recall specific conversations-besides-this-one where you've heard someone claim that another participant is in a cult.
If anyone is aware of harms or abuses that have taken place involving staff at Leverage Research, please email me, in confidence, at larissa@leverageresearch.org.
Bullshit. This is not how you prevent abuse of power. This is how you cover it up.
These concerns are widely shared for research in the hard sciences (e.g., nuclear physics), but are valid as well for the social sciences.
Social science infohazards are not a thing, because they must be implemented by an organization to work, and organizations leak like a sieve. Even nuclear secrets leak. This demand for secrecy is a blatant excuse used to obstruct oversight and to prevent peer review. What you're doing is the opposite of science.
I want to draw attention to the fact that "Kerry Vaughan" is a brand new account that has made exactly three comments, all of them on this thread. "Kerry Vaughan" is associated with Leverage. "Kerry Vaughan"'s use of "they" to describe Leverage is deliberately misleading.
I'm not hiding my connection to Leverage which is why I used my real name, mentioned that I work at Leverage in other comments, and used "we" in connection with a link to Leverage's case studies. I used "they" to refer to Leverage 1.0 since I didn't work at Leverage during that time.
I want to draw attention to the fact that "Kerry Vaughan" is a brand new account that has made exactly three comments, all of them on this thread. "Kerry Vaughan" is associated with Leverage. "Kerry Vaughan"'s use of "they" to describe Leverage is deliberately misleading.
To be fair, KV was open about that association in both previous comments, using 'we' in the first and including this disclaimer in the second --
(I currently work at Leverage research but did not work at Leverage during Leverage 1.0 (although I interacted with Leverage 1.0 and know many of the people involved). Before working at Leverage I did EA community building at CEA between Summer 2014 and early 2019.)
-- which also seems to explain the use of 'they' in KV's third comment, which referred specifically to "Leverage 1.0".
(I hope this goes without saying on LW, but I don't mean this as a general defense of Leverage or of KV's opinions. I know nothing about either beyond what I've read here, and I haven't even read all the relevant comments. Personally I wouldn't get involved with an organisation like Leverage.)
When I hear that a few people within Leverage ended up with serious negative consequences because of charting, it's unclear to me from the outside what that means.
It's my understanding that Leverage did a lot of experiments. It could be that some experiments ended up messing up some of the participants. It could also be that "normal charting" without any experiments messed the people up.
I would offer that "normal charting" as offered to external clients was being done in a different incentive landscape than "normal charting" as conducted on trainees within the organization. I mean both incentives on the trainer, and incentives on the trainee.
Concretely, incentives-wise:
Hi all,
During my years in the Bay I spent some of my time as an employee of Paradigm, a Leverage 1.0 affiliate. I also spent a good amount of time living and hanging out at the Leverage house/offices.
I'm writing here from a coffeeshop in Berlin because...why? I think because I get frustrated by the balance of coverage that Leverage gets. When I consider what sorts of things produce value, they tend to start off being very high-variance. They tend to have very weird-seeming history.
For instance: Whispers have it that – before AI X-Risk was a respectable, well-known cause backed by people like Elon Musk – a high school drop-out named "Eliezer Yudkowsky" wrote a Harry Potter fan-fiction to bring hundreds of people into a rationalist movement that might someday save the world from runaway algorithms. Did you know Trump-supporter Peter Thiel was an early funder of one of its main organizations?! Did you know that many rationalists have become affiliated with Neoreaction, an alt-right group with members that support authoritarianism?!! Don't get me started on a different now-respectable org – one staffed by many rationalists – that bootstrapped itself in part through astroturf...
Follow-up: I wanted to acknowledge that some other people who spent time at Leverage had much worse experiences than I did. I don't want to downplay that. My experience may have been unique since I focused on building an external company and since my social circle in the Bay Area was mostly non-Leveragers.
All that said, I still stand by what I wrote above. I was reacting mainly to the original post wearing a guise of objectivity. I think I would have no gripe with it if the title was, "I have beef with Leverage and so here are some biasing facts I'd like to highlight about them" – though, to be fair, that's a really long title, and also I could be projecting.
I think Leverage is worthy of deep criticisms (and thought so even before yesterday's Medium post) but also what you say about "guise of objectivity" is something that bothered me about this post and I'm glad you voiced it.
oh ps, I'm sure this has already been mentioned in the 100+ comments I haven't read, but it's weird to call Leverage a "high-demand" group since – during my time there – people were regularly complaining about basically having too much freedom. I can't remember a single day there that anyone demanded I do anything, in the way a manager makes demands of employees or a guru makes demands of disciples. (Actually there might have been a few where, e.g., there was a mandatory meeting for us all to install a good password manager so we wouldn't get hacked. But often people missed these "mandatory" meetings.) Most days I just did whatever I wanted. Often people felt like they were floating and wanted *more* directives.
My current model is that this changed around 2017 or so. At least my sense was that people from before that time often had tons of side-projects and free time, and afterwards people seemed to have really full schedules and were constantly busy.
Update from Leverage Research: a reminder about our AMA & other ways to get updates
For anyone in this thread who still has questions about Leverage Research, I just wanted to remind you about the AMA we are running at our virtual office tomorrow (Saturday, October 2, at 12 PM PT).
The event is open to anyone interested in our work and is designed to allow people to ask questions about our history, current work, and future plans. See this comment for further details.
Beyond that, we're currently exploring different ways to ensure we hear from people who were part of the Leverage 1.0 ecosystem about their experiences, especially before we release some of our psychology tools and as we write our FAQ on our history (see this post for more details on these two initiatives). This includes looking into neutral third-party moderators and ways of gathering anonymous feedback. If you want to stay up to date on the steps we're taking, or our current work in general, subscribe to our quarterly newsletter or follow us on Twitter.
How do you say "this is a cult" without literally saying the words "this is a cult"? (In the common colloquial sense of the word "cult", as opposed to the historical academic sense of the word.)
I've never heard of this organization until now and I'd be happy never to hear about them in the future. (This isn't a criticism of OP.)
It's been helpful for me to think of three signs of a cult:
1. Transformational experiences that can only be gotten through The Way.
2. Leader has Ring of Power that gives access to The Way.
3. Clear In-Group vs. Out-Group dynamics to achieve The Way.
Leverage 1.0 seems to have all three:
1. Transformational experiences through charting and bodywork.
2. Geoff as Powerful Theorist.
3. In-group with Leverage as "only powerful group".
Given this, I'm most curious about what Geoff has done to reflect/improve and what the ~rationalist community would want to see from h...
I think the way the term cult (or euphemisms like “high-demand group”) has been used by the OP and by many commenters in this thread is extremely unhelpful and, I suspect, not in keeping with the epistemic standards of this community.
At its core, labeling a group as a cult is an out-grouping power move used to distance the audience from that group’s perspective. You don’t need to understand their thoughts, explain their behavior, form a judgment on their merits. They’re a cult.
This might be easier to see when you consider how, from an outside perspective, many behaviors of the Rationality community that are, in fact, fine might seem cultish. Consider, for example, the numerous group houses, hero-worship of Eliezer, the tendency among Rationalists to hang out only with other Rationalists, the literal take over the world plan (AI), the prevalence of unusual psychological techniques (e.g., rationality training, circling), and the large number of other unusual cultural practices that are common in this community. To the outside world, these are cult-like behaviors. They do not seem cultish to Rationalists because the Rationality community is a well-liked ingroup and not a distrusted outgroup.
I think there's actually been a whole lot of discourse and thought about Are Rationalists A Cult, focusing on some of this same stuff? I think the most reasonable and true answers to this are generally along the lines of "the word 'cult' bundles together some weird but neutral stuff and some legitimately concerning stuff and some actually horrifying stuff, and rationalists-as-a-whole do some of the weird neutral stuff and occasionally (possibly more often tha...
There is a huge difference between "tendency to hang out with other Rationalists" and having mandatory therapy sessions with your supervisor or having to ask for permission to write a personal blog.
Yeah, 'cult' is a vague term often overused. Yeah, a lot of rationality norms can be viewed as cultish.
How would you suggest referring to an 'actual' cult - or, if you prefer not to use that term at all, how would you suggest we describe something like scientology or nxivm? Obviously those are quite extreme, but I'm wondering if there is 'any' degree of group-controlling traits that you would be comfortable assigning the word cult to? Or if I refer to scientology as a cult, do you consider this an out-grouping power move used to distance people from scientology's perspective?
I think the way the term cult (or euphemisms like “high-demand group”) has been used by the OP and by many commenters in this thread is extremely unhelpful and, I suspect, not in keeping with the epistemic standards of this community.
No. As demonstrated by this comment by Viliam, the word "cult" refers to a well-defined set of practices used to break people's ability to think rationally. Leverage does not deny using these practices. To the contrary, it appears flagrantly indifferent to the abuse potential. Cult techniques of brainwashing are an attractor of human social behavior. Eliezer Yudkowsky warned about this attractor. Your attempt to redefine cult more broadly is a signal that you're bullshitting us.
I learned belief reporting from a person who attended a Leverage workshop and haven't had any direct face-to-face exposure to Leverage.
Belief reporting is a debugging technique. You have a personal issue you want to address. Then you look at related beliefs.
Leverage found that if someone sets an intention of "I will tell the truth" and then speaks out a belief like "I'm a capable person" that they don't believe (at a deep level), they will have a physical sensation of resistance.
Afterwards, there's an attempt to trace the belief to its roots. The person can then speak out various forms of "I'm not a capable person because X" and "I'm not a capable person because Y". The process is then applied recursively to find the root. Often that allows uncovering some confusion at the base of the belief, and after the confusion has been uncovered it's possible to work back up the tree, get rid of the "I'm not a capable person" belief, and switch it into "I'm a capable person".
This often leads to discovering that one holds beliefs at a deep level that one's System 2 considers silly, but that still form the base of other beliefs and affect one's actions.
Thanks for the description!
In my opinion, this sounds interesting as a confidential voluntary therapy, but Orwellian when:
Members who were on payroll were expected to undergo charting/debugging sessions with a supervisory "trainer", and to "train" other members. The role of trainer is something like "manager + therapist": that is, both "is evaluating your job performance" and "is doing therapy on you".
So, your supervisor is debugging your beliefs, possibly related to your job performance, and you are supposed to not only tell the truth, but also "seek for the root"... and yet, in your opinion, this does not imply "having to confess violation of the rules or committed sins"?
What exactly happens when you start having doubts about the organization or the leader, and as a result your job performance drops, and then you are having the session with your manager? Would you admit, truthfully, "you know, recently I started having some doubts about whether we are really doing our best to improve the world, or just using the effective altruist community as a fishing pond for people who are idealistic and willing to sacrifice... and I guess these thoughts distract me from my tasks", and then your therapist/manager is going to say... what?
you're the one with the new account made only to defend Leverage
The social pressure against defending Leverage is in the air, so anonymity shouldn't be held against someone who does that, it's already bad enough that there is a reason for anonymity.
I've spoken to people recently who were unaware of some basic facts about Leverage Research 1.0; facts that are more-or-less "common knowledge" among people who spent time socially adjacent to Leverage, and are not particularly secret or surprising in Leverage-adjacent circles, but aren't attested publicly in one place anywhere.
Today, Geoff Anders and Leverage 2.0 are moving into the "Progress Studies" space, and seeking funding in this area (see: Geoff recently got a small grant from Emergent Ventures). This seems like an important time to contribute to common knowledge about Leverage 1.0.
You might conclude that I'm trying to discredit people who were involved, but that's not my aim here. My friends who were involved in Leverage 1.0 are people who I respect greatly. Rather, I just keep being surprised that people haven't heard certain specific, more-or-less legible facts about the past, that seem well-known or obvious to me, and that I feel should be taken into account when evaluating Leverage as a player in the current landscape. I would like to create here a publicly-linkable document containing these statements.
Facts that are common knowledge among people I know:
Members of Leverage 1.0 lived and worked in the same Leverage-run building, an apartment complex near Lake Merritt. (Living there was not required, but perhaps half the members did, and new members were particularly encouraged to.)
Participation in the project involved secrecy / privacy / information-management agreements. People were asked to sign an agreement that prohibited publishing almost anything (for example, in one case someone I know received a stern reprimand for starting a personal blog on unrelated topics without permission).
Geoff developed a therapy technique, "charting". He says he developed it based on his novel and complete theory of psychology, called "Connection Theory". In my estimation, "charting" is in the same rough family of psychotherapy techniques as Internal Family Systems, Coherence Therapy, Core Transformation, and similar. Like those techniques, it leads to shifts in clients' beliefs and moods. I know people from outside Leverage who did charting sessions with a "coach" from Paradigm Academy, and reported it helped them greatly. I've also heard people who did lots of charting within Leverage report that it led to dissociation and fragmentation, that they have found difficult to reverse.
Members who were on payroll were expected to undergo charting/debugging sessions with a supervisory "trainer", and to "train" other members. The role of trainer is something like "manager + therapist": that is, both "is evaluating your job performance" and "is doing therapy on you".
Another type of practice done at the organization, and offered to some people outside the organization, was "bodywork", which involved physical contact between the trainer and the trainee. "Bodywork" could in other contexts be a synonym for "massage", but that's not what's meant here; descriptions I heard of sessions sounded to me more like "energy work". People I've spoken to say it was reported to produce deeper and less legible change.
Using psychological techniques to experiment on one another, and on the "sociology" of the group itself, was a main purpose of the group. It was understood among members that they were signing up to be guinea pigs for experiments in introspection, altering one's belief structure, and experimental group dynamics.
The stated purpose of the group was to discover more theories of human behavior and civilization by "theorizing", while building power, and then literally take over US and/or global governance (the vibe was "take over the world"). The purpose of gaining global power was to lead to better coordination and better outcomes for humanity.
The narrative within the group was that they were the only organization with a plan that could possibly work, and the only real shot at saving the world; that there could be no possibility of success at one's goal of saving the world outside the organization.
Many in the group felt that Geoff was among the best and most powerful "theorists" in the world. Geoff's power and prowess as leader was a central theme.
Paradigm Academy is a for-profit entity, and Leverage is a non-profit entity. Both were part of “the ecosystem”, which was the Geoff-led project behind Paradigm and Leverage. Reserve (a cryptocurrency) was founded by ecosystem members, with a goal of raising money for Leverage/Paradigm.
[substantial edits, moved to end of list] Geoff, as the leader of the organization, dated employees/subordinates. I'm aware of 3 women over the course of 10 years he had a sexual or non-platonic relationship with. I have no reason to believe these were non-consensual; I view these as questionable management decisions, not necessarily tangible harms. I refer people to Larissa's comment. The specific section on "Dating policies" is clear, stated by a formal spokesperson for the organization, and accords with my understanding. I do not have evidence of any further pattern of non-platonic interactions with employees. I am glad that the nonexistence of any policy on dating within the reporting chain of the organization is now a matter of official record.
Why these particular facts?
One reason I feel it is important to make these particular facts more legibly known is because these pertain to the characteristics of a "high-demand group" (which is a more specific term than "cult", as people claim all kinds of subcultures and ideologies are a "cult").
You can compare some of the above bullets with the ICSA checklist of characteristics: https://www.icsahome.com/articles/characteristics.
There are many good reasons to structure groups in ways that have some of these characteristics, and to get involved in groups that have these characteristics. But it alarms me if the presence of these characteristics is simply not known by people interacting with Geoff or with Leverage 2.0 in its new and updated mission, and so this information is not taken into account in an evaluation.
How I know these things
Between 2016 and 2018 I became friends with a few Leverage members. I do not feel I was harmed by Leverage in any substantive way. None of the facts above are things that I got from a single point-of-contact; everything I state above is largely already known among people who were socially adjacent to Leverage when I was around.
Focus on structural properties, not impacts or on-net "worth-it-ness".
I try to focus my points above on structural facts about how the organization was set up, rather than what the result was.
I know former members who feel severely harmed by their participation in Leverage 1.0. I also know former members who view Leverage 1.0 as having been a deeply worthwhile experiment in world-improving. I don't think it's even remotely clear how "good" or "bad" the on-net impact of Leverage 1.0 was, and I don't aim here to speak to that. Nor do I aim to judge whether that organization structure was, or was not, "worth trying" because of the potential of "enormous upside".
I do worry about "ends justify the means" reasoning when evaluating whether a person or project was or wasn't "good for the world" or "worth supporting". This seems especially likely when using an effective-altruism-flavored lens on which only a few people/organizations/interventions will matter orders of magnitude more than others. If one believes that a project is one of very few projects that could possibly matter, and that the future of humanity is at stake - and also believes the project is doing something new and experimental that current civilization is inadequate for - there is a risk of using that belief to extend unwarranted tolerance toward structurally-unsound organizational decisions, including those typical of "high-demand groups" (such as use of psychological techniques to increase member loyalty, living and working in the same place, non-platonic relationships with subordinates, secrecy, and so on), without proportionate concern for the risks of structuring an organization in that way.
Going forward
I'm posting this anonymously because, at the moment, this is all I have to say and I don't want to discuss the topic at length. Also, I don't want to become known as someone saying things this organization might find unflattering. If you happen to know who wrote this post, please don't spread that knowledge. I have asked in advance for a LW moderator to vouch in a comment that I'm someone known to them, who they broadly trust to be epistemically reasonable, and to have written good posts in the past.
If anyone would like to share other information about Leverage 1.0, feel free to do so in the comments section.