All of BayAreaHuman's Comments + Replies

Here is an example:

  • Zoe's report says of the information-sharing agreement "I am the only person from Leverage who did not sign this, according to Geoff who asked me at least three times to do so, mentioning each time that everyone else had (which read to me like an attempt to pressure me into signing)."

  • I have spoken to another Leverage member who was asked to sign, and did not.

  • The email from Matt Fallshaw says the document "was only signed by just over half of you". Note the recipients list includes people (such as Kerry Vaughan) who were probably

...

I am more confident that what I heard was "Geoff exhibits willingness to lie". I also wouldn't be surprised if what I heard was "Geoff reports being willing to lie". I didn't tag the information very carefully.

Based on broad-strokes summaries said to me by ex-Leveragers (though admittedly not first-hand experience), I would say that the statement "Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers" rings true to what I have heard.

Some things mentioned to me by Leverage people as typical/archetypal of Geoff's attitude include being willing to lie to people outside Leverage, feeling attacked or at risk of being attacked, and viewing adjacent non-Leverage groups within the broader EA sphere as enemies.

Thanks! To check: did one or more of the ex-Leveragers say Geoff said he was willing to lie? Do you have any detail you can add there? The lying one surprises me more than the others, and is something I'd want to know.

Thanks, this all helps. At the time, I felt that writing this with the meta-disclosures you're describing would've been a tactical error. But I'll think on this more; I appreciate the input, it lands better this time.

I did write both "I know former members who feel severely harmed" and "I don't want to become known as someone saying things this organization might find unflattering". But those are both very, very understated, and purposefully de-emphasized.

I appreciate this invitation. I'll re-link to some things I already said on my own stance: https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QKKnepsMoZmmhGSe

Beyond what I laid out there:

  • It was challenging being aware of multiple stories of harm, and feeling compelled to warn people interacting with Geoff, but not wanting to go public with surprising new claims of harm. (I did mention awareness of severe harm very understatedly in the post. I chose instead to focus on "already known" properties that

...

Since it sounds like just-upvotes might not be as strong a signal of endorsement as positive engagement...

I want to say that I really appreciate and respect that you were willing to come forward with facts that were broadly known in your social graph but had been systematically excluded from most people's models.

And you were willing to do this, in a pretty adversarial environment! You had to deal with a small invisible intellectual cold war that ensued, almost alone, without backing down. This counts for even more.


I do have a little bit of sensitive ...

TekhneMakre
I don't have anything to add, but I just want to say I felt a pronounced pang of warmth/empathy towards you reading this part. Not sure why, something about fear/bravery/aloneness/fog-of-war.

I originally chose LessWrong, instead of some other venue, to host the Common Knowledge post primarily because (1) I wanted to create a publicly-linkable document pseudonymously, and (2) I expected high-quality continuation of information-sharing and collaborative sense-making in the comments.

I appreciate hearing clearly what you'd prefer to engage with.

I also feel that this response doesn't adequately acknowledge how tactically adversarial this context is, and how hard it is to navigate people's desire for privacy.

( ... which makes me feel sad, discouraged, and frustrated. It comes across as "why didn't you just say X", when there are in fact strong reasons why I couldn't "just" say X.)

By "tactically adversarial", I mean that Geoff has an incredibly strong incentive to suppress clarity, and make life harder for people contributing to clarity. ... (read more)

Ruby

I'm very sorry. Despite trying to closely follow this thread, I missed your reply until now.

I also feel that this response doesn't adequately acknowledge how tactically adversarial this context is, and how hard it is to navigate people's desire for privacy.

You're right, it doesn't. I wasn't that aware or thinking about those elements as much as I could have been. Sorry for that.

It was very difficult for me to create a document that I felt comfortable making public...

It makes sense now that this is the document you ended up writing. I do appreciate you went...

Thanks for this. I think these distinctions are important.

Let me clarify: In this post when I say "Common knowledge among people who spent time socially adjacent to Leverage", what I mean is:

  • I heard these directly from multiple different Leverage members.
  • When I said these to others, they shared they had also heard the same things directly from other Leverage members, including members other than the ones I had spoken to.
  • I was in groups of people where we all discussed that we had all heard these things directly from Leverage members. Some of these disc
...

Completely fair. I've removed "facts" from the title, and changed the sub-heading "Facts I'd like to be common knowledge" (which in retrospect is too pushy a framing) to "Facts that are common knowledge among people I know".

I totally and completely endorse and co-sign "if people bring forward their personal impressions as different to the OP, this should in large part be treated as more data, and not a challenge."

Ruby
It feels like the "common knowledge" framing is functioning as some form of evidence claim? "Evidence for the truth of these statements is that lots of people believe them". And if it's true that lots of people believe them, that is legitimate Bayesian evidence. At the same time, it's kind of hard to engage with, and I think saying "everyone knows" makes it feel harder to argue with. A framing I like (although I'm not sure it entirely helps here with ease of engagement) is the "this is what I believe and how I came to believe it" approach, as advocated here. So you'd start off with "I believe Leverage Research 1.0 has many of the properties of a high-demand group such as", proceeding to "I believe this because of X things I observed and Y things that I heard and were corroborated by groups A and B", etc.

Appreciate you editing the post, that seems like an improvement to me.

Thank you for this.

In retrospect, I could've done more in my post to emphasize:

  1. Different members report very different experiences of Leverage.

  2. Just because these bullets enumerate what is "known" (and "we all know that we all know") among "people who were socially adjacent to Leverage when I was around", does not mean it is 100% accurate or complete. People can "all collectively know" something that ends up being incomplete, misleading, or even basically false.

I think my experience really mismatched the picture of Leverage described by OP.

I ful...

I don't advocate for an epistemic standard in which a single person, doing anything less than a singlehanded investigative journalistic dive, is expected to do more than that, epistemic-verification-wise, before sharing their current understanding publicly and soliciting more information in the comments.

Sure, but you called the post “Common Knowledge Facts”. If you’d called the post “Me and my friends’ beliefs about Leverage 1.0” or “Basic claims I believe about Leverage 1.0” then that would IMO be a better match for the content and less so claim to uni...

I have now made an even more substantial edit to that bullet point.

Hi Larissa -

Dangers and harms from psychological practices

Please consider that the people who most experienced harms from psychological practices at Leverage may not feel comfortable emailing that information to you. Given what they experienced, they might reasonably expect the organization to use any provided information primarily for its own reputational defense, and to discredit the harmed parties.

Dating policies

Thank you for the clarity here.

Charting/debugging was always optional

This is not my understanding. My impression is that a strong e...

It seems that Leverage is currently planning to publish a bunch of their techniques, and from Leverage's point of view there are considerations that releasing the techniques could be dangerous for people using them. To me that does suggest a sincere desire to use provided information in a useful way.

See from https://www.lesswrong.com/posts/3GgoJ2nCj8PiD4FSi/updates-from-leverage-research-history-and-recent-progress :

If you are interested in being involved in the beta testing of the starter pack, or if you have experienced negative effects from psychol

...

I would offer that "normal charting" as offered to external clients was being done in a different incentive landscape than "normal charting" as conducted on trainees within the organization. I mean both incentives on the trainer, and incentives on the trainee.

Concretely, incentives-wise:

  • The organization has an interest in ensuring that the trainee updates their mind and beliefs to accord with what the organization thinks is right/good/true, what the organization thinks makes a person "effective", and what the organization needs from the member.
  • The trainee may reasonably believe they could be de-funded, or at least reduced in status/power in the org, if they do not go along.

I added a sub-bullet to the main post, to clarify my epistemic status on that point.

I have now made an even more substantial edit to that bullet point.

This is also a useful resource, and the pingbacks link to other resources.

I want to gesture at "The Plan", linked from Gregory Lewis's comment (https://forum.effectivealtruism.org/posts/qYbqX3jX4JnTtHA5f/leverage-research-reviewing-the-basic-facts?commentId=8goitqWAZfEmEDrBT), as supporting evidence for the explicit "take over the world" vibe, in terms of how exactly beneficial outcomes for humanity were meant to result from the project, best viewable as PDF.