Review

Over the past seven months, I've been working part-time on an investigation of Nonlinear, culminating in last week's post. As I'm wrapping up this project, I want to share my personal perspective and some final thoughts.

This post mostly contains thoughts and context that didn't fit into the previous post. I also want to set expectations accurately: I'm not working on this investigation any more.

Why I Got Into Doing an Investigation

From literally the very first day, my goal has been to openly share some credible allegations I had heard, so as to contribute to a communal epistemic accounting. 

On the Tuesday of the week Kat Woods first visited (March 7th), someone in the office contacted me with concerns about their presence (the second person in good standing to do so). I replied proposing to post the following one-paragraph draft in a public Lightcone Offices Slack channel.

I have heard anonymized reports from prior employees that they felt very much taken advantage of while working at Nonlinear under Kat. I can't vouch for them personally, I don't know the people, but I take them pretty seriously and think it's more likely than not that something seriously bad happened. I don't think uncheckable anonymized reports should be sufficient to boot someone from community spaces, especially when they've invested a bunch into this ecosystem and seems to me to plausibly be doing pretty good work, so I'm still inviting them here, but I would feel bad not warning people that working with them might go pretty badly.

(Note that I don't think the above is a great message; nonetheless I'm sharing it here as info about my thinking at the time.)

That would not have represented any particular vendetta against Nonlinear. It would not have been an especially unusual act, or even much of a call-out. Rather, it was intended as the kind of normal sharing of information that I would expect from any member of an epistemic community that is trying to collectively figure out what's true.

But the person who shared the concerns with me recommended that I not post that, because it could trigger severe repercussions for Alice and Chloe. They responded as follows.

Person A: I'm trying to formulate my thoughts on this, but something about this makes me very uncomfortable.

...

Person A: In the time that I have been involved in EA spaces I have gotten the sense that unless abuse is extremely public and well documented nothing much gets done about it. I understand the "innocent until proven guilty" mentality, and I'm not disagreeing with that, but the result of this is a strong bias toward letting the perpetrators of abuse off the hook, and continue to take advantage of what should be safe spaces. I don't think that we should condemn people on the basis of hearsay, but I think we have a responsibility to counteract this bias in every other way possible. It is very scary to be a victim, when the perpetrator has status and influence and can so easily destroy your career and reputation (especially given that they have directly threatened one of my friends with this).

Could you please not speak to Kat directly? One of my friends is very worried about direct reprisal.

BP: I'm afraid I can't do that, insofar as I'm considering uninviting her, I want to talk to her and give her a space to say her piece to me. Also I already brought up these concerns with her when I told her she was invited.

I am not going to name you or anyone else who raised concerns to me, and I don't plan to give any info that isn't essentially already in the EA Forum thread. I don't know who the people are who are starting this info.

This first instance is an example of a general dynamic: at virtually every step of this process, I wanted to share, publicly, what information I had, but there kept being (in my opinion legitimate) reasons why I couldn't.

(I've added a few more example chat logs in the footnotes here[1][2][3][4][5][6][7][8][9].) 

Eventually, after getting to talk with Alice and Chloe, it seemed to me Alice and Chloe would be satisfied to share a post containing accusations that were received as credible. They expected that the default trajectory, if someone wrote up a post, was that the community wouldn't take any serious action, that Nonlinear would be angry for "bad-mouthing" them, and quietly retaliate against them (by, for instance, reaching out to their employer and recommending firing them, and confidentially sharing very negative stories). They wanted to be confident that any accusations made would be strong enough that people wouldn't just shrug and move on with their lives. If that happened, the main effect would be to hurt them further and drive them out of the ecosystem.

It seemed to me that I could not personally vouch for any of the claims (at the time), but also that if I did vouch for them, then people would take them seriously. I didn't know either Alice or Chloe before, and I didn't know Nonlinear, so I needed to do a relatively effortful investigation to get a better picture of what Nonlinear was like, in order to share the accusations that I had heard.

I did not work on this post because it was easy. I worked on it because I thought it would be easy. I kept wanting to just share what I'd learned. I ended up spending ~320 hours (two months of full-time work), over the span of six calendar months, to get to a place where I was personally confident of the basic dynamics (even though I expect I have some of the details wrong) and where Alice and Chloe felt comfortable with my publishing.

On June 15th I completed the first draft of the post, which I'd roughly say had ~40% content overlap with the final post. On Wednesday August 30th, after several more edits, I received private written consent from both Alice and Chloe to publish. A week later I published.

I worked on this for far too long. Had I been correctly calibrated at the beginning about how much work this would be, I likely wouldn't have pursued it. But once I got started I couldn't see a way to share what I knew without finishing, and I didn't want to let down Alice and Chloe.

My goal here was not to punish Nonlinear, per se. My goal was to get to the point where the accusations I'd found credible could be discussed, openly, at all. 

When I saw on Monday that Chloe had decided to write a comment on the post, I felt a sense of "Ah, the job is done." That's all I wanted: for both sides to be able to share their perspective openly without getting dismissed, and for others to be able to come to their own conclusions.

I have no plans to do more investigations of this sort. I am not investigating Nonlinear further. If someone else wants to pick it up, well, now you know a lot of what I know!

Please don't think that because I took the time to follow up on these accusations on this occasion, there is "a lifeguard on duty", or that either bad behavior or info suppression will be reliably noticed or called out. We've shut down the Lightcone Offices, I've no plans to do this again, and I don't particularly want to.

My sense is that there are a good number more injustices and predators in the EA ecosystem, most of which do not look exactly like this case. But it is not my job to uncover them and I am not making it my job. If you want to have an immune system that ferrets out bad behavior, you'll have to take responsibility for building that.

Assorted Closing Thoughts

Some final thoughts about Nonlinear

  • I’ve still got a lot of genuine uncertainty about who did what and how responsible the core Nonlinear team are for all the horrible experiences Alice and Chloe had. I just wanted to get the information out in a state where Nonlinear weren't in a position to simply attack their former employees' characters and push the post away. I hope for Nonlinear's sake that they are able to show that they're not as culpable for the harms as it seems. I’ve had to work pretty hard to be confident that the harms won't be inappropriately swept under the rug.
  • For the record, a bunch of the things that Nonlinear tried seem forgivable to me if they were to apologize for them, and not obviously norm-violating ex ante. Traveling around the world in a small group sounds fun (though after seeing how it went down here I'd now be much more worried about it). I have been very financially dependent on my cofounder in the past, and have worked without a legal structure. I think it's generally quite hard to have a personal assistant who actually solves your personal problems and stays out of your way without a bunch of friction and a bit of a strange power dynamic. I think all of these things went quite badly wrong here, and they should've tried to make that up to the ex-employees; but I don't think these things should never be tried again (though not all at once), and if they had made it up to them, that would've been okay.
  • The primary thing that really isn't okay according to my ethical norms is silencing and intimidating people who were harmed and who disagree with you about why. That's why I tried so hard to communicate Alice and Chloe's perspective here, so that it won't happen.
  • In general, I think it's fine for teams to try really weird things. But I think Nonlinear in particular needs to credibly signal that, if someone works with them and feels burned afterward, or gets into some other conflict, they will be free to share openly that they feel that way and why, without fearing retaliation, professional or otherwise.
  • (Also everyone involved should write things down more! I think things go better when people jot down verbal agreements in writing. Makes it much easier months later to check in on what expectations were set.)
  • To be clear, I think there’s a good chance that Kat and Emerson are very straightforwardly responsible for basically all the messed up things that happened here, and that their best response is to stop trying to manage people, admit to themselves that they have major character flaws that are not easily patched, and focus on projects that don’t involve having much power over other people or paying people tiny or no salaries. And most people's best response is to keep a safe distance from them.
  • Kat and Emerson seem to me to be in denial. Most of their comments have seemed to me to sustain a narrative that this is all just malicious lies from Alice and Chloe. At no point in either conversation that I had with them did I feel that they could see the harms I was worried about. I hope they can see now. Then they can actually respond to that, and grow/change.
  • By default, when ex-employees criticize an organization, I don't think the ex-employees have a right to anonymity. However, in this instance my opinion is that Kat and Emerson have erred way too far in the direction of signaling that they will be retributive, and I think that if they want to be trusted around this ecosystem in future years, right now they clearly should avoid actions that seem obviously retributive. As I said, the personal costs of working at Nonlinear have haunted Alice and Chloe for 1.5 years, and I would consider it an exceedingly inappropriate escalation for Nonlinear to dox them in response to my post, even if they have valid criticisms.
  • Sometimes I'm concerned that I portrayed Nonlinear in an overly unpleasant light, given that I don't know a lot of details and am painting a broad picture. Sometimes I re-read my many interview notes and remember pretty concerning things I didn't include (for reasons like privacy (on all sides) or because they came from a third-hand report), and I start forming a hunch that if all were revealed, their actions would turn out to have been much worse than what I show in the post. (What I'm saying is that I still have a lot of uncertainty in both directions.)

Some final thoughts about this investigation

  • One of the hard things for me was being respectful of Alice and Chloe whilst also trying to work with them on something I knew was painful for them. My relationship to them in this whole thing has felt pretty confusing to me. From one perspective I'm just a stranger showing up in their life, repeatedly interviewing them about terrible things that happened to them, and saying I'm gonna try to do something about it. I was generally pretty confused about the boundaries of what sort of input from them made sense to ask for — is it appropriate to ask them to spend much time searching through texts and emails to answer some questions about what happened? I'll admit to also having some concerns about them not being the best at asserting their boundaries. I moved more slowly and carefully on that account. My guess is that had I gotten it all done much faster, the process could have been more painful, but overall they’d have gotten past it faster, and that would’ve been better for them. It would also have increased the risk of them regretting ever talking to me, which I was pretty worried about. I'm pretty sure I made some notable mistake here, but I still don't know precisely what I wish I'd done differently.
  • One guess is that I should've said something like "I am willing to spend N hours working with you to make a serious case here; if I believe it at that point, I'll publish it, and if I don't, I'm going to move on", and then let them decide how much effort they wanted to put into that, and if it wasn't worth it, move on. But man, it felt wrong to have serious and credible accusations and not be able to let other people know. I didn't really feel I could let it go.
  • New people have started giving me more surprising information about Kat/Emerson that suggests other bad situations have occurred, but I'm not doing this job any more. And anyway, I think my last post gives people most of what they need to know.
  • Another sign to me that it was right to do this was that many of the people I interviewed said things like "I have felt ethical concerns about Nonlinear but I didn't know what to do about them" and reported feeling relieved that they could share their thoughts with someone (me).
  • Generally, everyone I spoke with or got references from seemed honest and open. But, of course, it may eventually come out that there is someone I was mistaken to put my trust in.
  • In my last post, I advised people not to bother Alice and Chloe about this situation. I would like to revise this: while I wouldn't want people to bother them about their experiences with Nonlinear, I think it’d be pretty nice for people who are friendly with them to send them messages of warmth, friendship, and support. I got a fair few of those when I wrote the post, and that was helpful for me (sorry I didn't reply to most of them).

On the CEA Community Health Team (and the EA ecosystem in general)

  • [Edit: Oops, I've edited the first few bullets out from this section, I'll check some things privately and come back to edit this in the next couple days. I think it'll probably be fine, but worth checking. Sorry for the confusion, I'll leave a comment saying so when I've returned them.]
  • I think the CEA Community Health team is much more like an institutionalized whisper network than it is like the police: lots of people will quietly give it sensitive information, but it mostly isn't in a position to use it, and on the rare occasions that it does, it's not via an accountable and inspectable procedure. I think that everyone should be very clear that CEA Community Health basically doesn't police the EA ecosystem, in the sense of reliably investigating and prosecuting credible accusations of wrongdoing or injustice. There is a swath of well-intentioned people in the EA ecosystem, but I think it's pretty clear there is no reliable justice system for when things go wrong.
  • Relatedly, four interviewees who gave me some pretty helpful info would only talk to me on the condition that I not share my info with the CEA Community Health team. They didn't trust (what I'm calling) the "institutionalized whisper network" to respect them, and some expected that sharing any info with it would hurt their ability to get funding.
  • My current impression is that many people in the EA ecosystem feel a false sense of safety from the existence of CEA Community Health, hoping that it will pursue justice for them, when (to a first approximation) it will not. While I respect many people on the team and consider some of them friends, my current sense is that the world would probably be better if the CEA Community Health team was disbanded and it was transparent that there is little-to-no institutional protection from bullies in the EA ecosystem, so that more people do not get burned by assuming or hoping that it will play that role.

Going forward, for me, personally

I’m basically finished winding down my investigator sub-process, and plan to get back to other work starting Monday. 

As I mentioned above, I have had a few calls with other people about some strongly negative experiences with some of the relevant Nonlinear team. I don’t plan to investigate those stories or any of the other people in them, though they did give me some more Bayesian evidence that the dynamics I'd written about are accurate.

Perhaps Kat and Emerson will be able to provide helpful evidence that changes how their time with Alice and Chloe reflects on them. I hope so. But either way it’s a part of their reputation now, and that seems right to me.

If Nonlinear writes up their account of things, or a critique of my post, I'll probably read it, but I'm not committing to any substantial engagement.

I don't really want to do more of this kind of work. Our civilization is hurtling toward extinction by building increasingly capable, general, and unalignable ML systems, and I hope to do something about that. Still, I'm open to trades, and my guess is that if you wanted to pay Lightcone around $800k/year, it would be worth it to continue having someone (e.g. me) do this kind of work full-time. I guess if anyone thinks that that's a good trade, they should email me.

Right now, I'm getting back to working on LessWrong.com, after a long detour into office spaces in Berkeley, hotel renovations, and a little investigative work.

  1. ^

    Meta: The footnote editor kept crashing due to length, so I've included 5 chat logs spread over 9 footnotes.

    March 7th 

    Person A: I just wanted to flag a concern I have about some of guests currently at Lightcone. Yesterday and today I saw both Drew Spartz and Kat Woods using the Lightcone spaces, and this worries me a lot. Their company Nonlinear has a history of illegal and unethical behavior, where they will attract young and naive people to come work for them, and subject them to inhumane working conditions when they arrive, fail to pay them what was promised, and ask them to do illegal things as a part of their internship. I personally know two people who went through this, and they are scared to speak out due to the threat of reprisal, specifically by Kat Woods and Emerson Spartz. Someone took initiative and posted this comment to the EA Forum: https://forum.effectivealtruism.org/posts/L4S2NCysoJxgCBuB6/?commentId=5P75dFuKLo894MQFf

    From my friends who worked there, I know that the abuse went far beyond what is detailed in this comment. I'm worried about them being here. I'm worried that more people will have the experiences that my friends had. I'm worried about not taking seriously the damage that bad actors can do (especially given everything that has happened in the last 6 months in EA). I know this is not a lot to go on, but I would not have been happy with myself if I didn't say something.

    Thanks, [name]

    BP: Pretty reasonable! I was planning to post publicly about this in one of the slack channels that I'd heard this, to let other people know too.

    My current plan is to say something like

    > "I have heard anonymized reports from prior employees that they felt very much taken advantage of while working at Nonlinear under Kat. I can't vouch for them personally, I don't know the people, but I take them pretty seriously and think it's more likely than not that something seriously bad happened. I don't think uncheckable anonymized reports should be sufficient to boot someone from community spaces, especially when they've invested a bunch into this ecosystem and seems to me to plausibly be doing pretty good work, so I'm still inviting them here, but I would feel bad not warning people that working with them might go pretty badly."

    Person A: I'm trying to formulate my thoughts on this, but something about this makes me very uncomfortable.

  2. ^

    BP: Yeah, interested in hearing more.

    Can also hop on an audio call if that's easier to talk on!

    Am interested what to you seems bad about it, e.g.:

    1) Giving up too much info about the people reporting on Kat
    2) I'm making the wrong call given the info I have
    3) I'm being overly aggressive to Kat by talking about this openly

    (I think prolly I will/would actually chat with Kat first, to get her take, before posting.)

    Person A: In the time that I have been involved in EA spaces I have gotten the sense that unless abuse is extremely public and well documented nothing much gets done about it. I understand the "innocent until proven guilty" mentality, and I'm not disagreeing with that, but the result of this is a strong bias toward letting the perpetrators of abuse off the hook, and continue to take advantage of what should be safe spaces. I don't think that we should condemn people on the basis of hearsay, but I think we have a responsibility to counteract this bias in every other way possible. It is very scary to be a victim, when the perpetrator has status and influence and can so easily destroy your career and reputation (especially given that they have directly threatened one of my friends with this).

    Could you please not speak to Kat directly? One of my friends is very worried about direct reprisal.

  3. ^

    BP: I'm afraid I can't do that, insofar as I'm considering uninviting her, I want to talk to her and give her a space to say her piece to me. Also I already brought up these concerns with her when I told her she was invited.

    I am not going to name you or anyone else who raised concerns to me, and I don't plan to give any info that isn't essentially already in the EA Forum thread. I don't know who the people are who are starting this info.

  4. ^

    March 10th 

    BP: Babble of next steps:

    • Post in the announcements channel that I'm disinviting Non-Linear from Lightcone and other spaces that we'll be hosting, and that I'm happy to chat about why, and give some basic reasoning in the slack.
      • [redacted]
      • Mention that there's confidential info here but that I'm happy to be pinged about this to give more specific takes if someone needs to make a decision.
      • Maybe share some probabilities of mine on certain statements, to give a shape of my views.
    • Chat with Emerson to hear his side of the story.
      • Honestly confused about what questions to ask given confidentiality, that could give them a fair shake.
    • Maybe later see if any of the employees are open to me saying certain things with slightly more info, such as there being multiple employees who are no longer willing to speak with Nonlinear and who consider their time there to be quite traumatic, and also to explain the compensation setup and general working dynamics.
  5. ^

    Person B: Please don’t do anything without consulting me / the people who’s experiences reported

    [One of them] tells me that writing and sharing that causes her to relive it all, feel paralyzed, and unable to sleep. [The other of them] reported worse

    BP: Not planning to do anything right now.

    Person B: I think having read their docs, it’d be good for you to chat before making any public statement and before offering to share info downstream of the docs with other people

    BP: Definitely down to chat with either of them (or indeed any former employees).

    Person B: [Chloe] and [Alice] are at the stage of having Lightcone/CEA health do investigation but not necessarily want all the details spread widely publicly (might eventually be okay with that, but I think they need to prepare themselves)

    Person C: a not-great-but-okay option is to just have a call with Emerson similar to with Kat, i.e. "I've heard some concerning things [cite public comments], do you want to talk about Nonlinear's employment practices from your perspective."

  6. ^

    BP: Yeah, I guess that's the default.

    Person B: I think this situation is a case where we ought to figure out how to work with victims/survivors who are kind of traumatized

    And figure out how to get justice in a way that doesn’t punish (cause harm to them) them for speaking up in a way that just makes other people wary of speaking up

    I think part of that is being careful with how you use information they provide, not sharing it in ways the victims might feel really uncomfortable with. Yes, hella annoying. But they’ve already been so reluctant and scared.

    BP: I think the problem is that the thing has gotten sufficiently bad that the former employees are both (a) very hurt and (b) want to not have the bad things that happened to them widely known or discussed.

    Person B: I think they’re open it to eventually. They considered just making a public post. It’s more that I think we ought to check with them on how the info gets used

    I think what they really don’t want is to be taken by surprise.

    BP: When you f*ck up hard enough that the other party won't openly talk about what happened, it gets much harder to sort things out.

  7. ^

    April 3rd 

    BP: Current plan I'm thinking about:

    —Talk with both about a whistleblower payout of [redacted range]

    —Then do some standard investigating, talk to both sides, check the facts, talk to more interns / former employees, etc

    —Then publish my takeaways along with statements from all involved

  8. ^

    April 12th 

    On this day there was a thread on LW about Nonlinear.

    BP: I was thinking of writing this:

    > It is not clear to me that Nonlinear's work has been executed especially poorly; the audio library seems worthwhile, and I would be quite interested to know how many and which projects were funded through the Emergency Fund project.

    > That said, I've chatted with a number of former staff/interns about their experiences for about 10 hours, and I would strongly advise future employees to agree on salary and financial agreements ahead of time, in written contract, and not do any work for free. It also seems to me that Nonlinear hasn't been very competent at managing the legal details of running a non-profit (and indeed lost their non-profit status at some point in the last few years due to not filing basic paperwork), and I would be concerned about them managing the finances of other prizes if the money was actually handed to Nonlinear at any point.

    Person B: I think that comment is tantamount to publishing conclusions of your investigation before you actually publish your conclusions. Also I predict that it will attract a lot more attention than you are maybe thinking. I’d hold off, but perhaps try to be quick about, getting your actually verdict.

  9. ^

    April 16th 

    BP: My guess is that this isn't as hard as I'm making it out to be. I think the single goal is to make it so that

    1) [Chloe] and [Alice] are open that they had a strongly negative experience with Nonlinear and are critical of it, with a bunch of details public

    2) Nonlinear is not in a position to retaliate in an underhanded way

    I think that's my main proposal, is that I get a basic public statement from them that addresses the overall details, and that I can check-in with Nonlinear about. Just get it to be out in the open.

Comments

it seemed to me Alice and Chloe would be satisfied to share a post containing accusations that were received as credible.


This is a horrible constraint to put on an epistemic process. You cannot, ever, guarantee the reaction to these claims, right? Isn't this a little like writing the bottom line first? 

If it were me in this position, I would have been like: 

Sorry Alice & Chloe, but the goal of an investigation like this is not to guarantee a positive reaction for your POV, from the public. The goal is to reveal what is actually true about the situation. And if you aren't willing to share your story with the public in that case, then that is your choice, and I respect that. But know that this may have negative consequences as well, for instance, on future people who Nonlinear works with. But if it turns out that your side of the story is false or exaggerated or complicated by other factors (such as the quality of your character), then it would be best for everyone if I could make that clear as well. It would not serve the truth to go into this process by having already 'chosen a winner' or 'trying to make sure people care enough' or something like this. 

There are basically three possible outcomes to Ben investigating the story of Alice and Chloe:

  • Ben concludes that the accusations against Nonlinear are true
  • Ben concludes that the accusations against Nonlinear are false
  • Ben decides that he doesn't have enough evidence to make a conclusion (but can share the data)

You are talking about the first two options, but it seems quite clear to me that the third option is the thing Alice and Chloe actually worry about. (A&C know whether they are telling the truth or lying, but they can't predict whether Ben will be sufficiently convinced by the evidence or not.) What they want is for Ben not to publish the story if the third option happens, because the predictable outcome is that Nonlinear would take revenge against them.

Ben also wants to avoid the third option, but he can't really promise it. Maybe there simply is not enough evidence either way; or maybe there is, but collecting it would take more time than Ben is willing to spend.

if it turns out that your side of the story is false or exaggerated or complicated by other factors (such as the quality of your character), then it would be best for everyone if I could make that clear as well.

Certainly! I think I did do this. I mentioned that two people came away with a false impression of how much money Alice received, and that some people involved questioned her reliability a bunch. Sometimes I think the stories she'd share with me were a bit fuzzy, and when I asked her for primary sources they were slightly out of line with her recollection (though overall quite similar).

I think the thing I'm attempting to point out is:

If I hold myself to satisfying A&C's criterion here, I am basically:

a) strangleholding myself on how to share information about Nonlinear in public
b) possibly overcommitting myself to a certain level of work that may not be worth it or desirable
c) implicitly biasing the process towards coming out with a strong case against Nonlinear (with a lower-level quality of evidence, or evidence to the contrary, being biased against) 

I would update if it turned out A&C were actually fine with Ben coming to the (open, public) conclusion that A&C's claims were inaccurate, unfounded, or overblown, but it didn't sound like that was okay with them based on the article above, and they weren't open to that sort of conclusion. It sounded like they needed the outcome to be a pretty airtight case against Nonlinear.

Anyway that's ... probably all I will say on this point. 

I am grateful for you, Ben, and the effort you put into this, as it shows your care, and I do think the community will benefit from the work. I am concerned about your well-being and health and time expenditure, but it seems like you have a sense for how to handle things going forward. 

I am into setting firm boundaries and believe it's a good skill to cultivate. I get that it is not always a popular option and may cause people to not like me. :P 

I affirm that there was a bias toward the process coming out against Nonlinear. I think this would normally be unjustified and unfair, but it was done here due to the IMO credible threat of retaliation — otherwise I would have just shared my info as I wanted to on day one. I have tried to be open about the algorithm I followed so that people can update on the filtering. Insofar as the concern about retaliation was essentially ungrounded, I think that doing this was wrong and I made a fairly serious mistake. I think it will be hard to know with certainty, given how much of the stuff was verbal, but overall I am quite confident that it was a justified concern.

To clarify, A&C didn't ask me to make a "credible" post; I myself thought that was what I should do.

If I investigated and thought that the fears and harms were false, then my guess is that I would have shared a low-detail version of that. ("I have looked into these concerns about treatment of employees a fair bit and basically do not buy them.") These accusations were having effects for Nonlinear and I would have wanted to counteract that.

plex

I was asked to comment by Ben earlier, but have been juggling more directly impactful projects and retreats. I have been somewhat close to parts of the unfolding situation, including spending some time in person with Alice, Chloe, and (separately) the Nonlinear team, and communicating online on-and-off with most parties.

I can confirm some of the patterns Alice complained about, specifically not reliably remembering or following through on financial and roles agreements, and Emerson being difficult to talk to about some things. I do not feel notably harmed by these, and was able to work them out with Drew and Kat without much difficulty, but it does back up my perception that there were real grievances which would have been harmful to someone in a less stable position. I also think they've done some excellent work, and would like to see that continue, ideally with clear and well-known steps to mitigate the kinds of harms which set this in motion.

I have consistently attempted to shift Nonlinear away from what appears to me a wholly counterproductive adversarial emotional stance, with limited results. I understand that they feel defected against, especially Emerson, but they were in the position of power and failed to make sure those they were working with did not come out harmed, and their responses to the initial implosion continued to generate harm and distraction for the community. I am unsettled by the threat of legal action towards Lightcone and the focus on controlling the narrative rather than repairing damage.

Emerson: You once said one of the main large failure modes you were concerned about becoming was Stalin's mistake: breaking the networks of information around you so you were unaware things were going so badly wrong. My read is you've been doing this in a way which is a bit more subtle than the gulags, by the intensity of your personality shaping the fragments of mind around you to not give you evidence that in fact you made some large mistakes here. I felt the effects of this indirectly, as well as directly. I hope you can halt, melt, and catch fire, and return to the effort as someone who does not make this magnitude of unforced error.

You can't just push someone who is deeply good out of a movement with the kind of co-protective nature ours has, in the way you merely shouldn't in some parts of the world; if there's intense conflict, call in a mediator and try to heal the damage.

Edit: To clarify, this is not intended as a blanket endorsement of mediation, or of avoiding other forms of handling conflict. I do think that going into a process where the parties genuinely try and understand each other's worlds much earlier would have been much less costly for everyone involved as well as the wider community in this case, but I can imagine mediation is often mishandled or forced in ways which are also counterproductive.

For what it's worth, I think mediation in situations like this seems like a naive and terrible strategy.

I do not want to aim for an end state of "the relationships have been mended". That seems outside of the scope of what we can reasonably commit to in the majority of wrongdoings. And it's just not what I think you should aim for when trying to get justice after someone has been hurt.

When a crime like theft or damages is committed, courts would not order that the defendant engage in "100 hours of mediation"; I think they'd be like "go to prison for 6 months" or "pay back $100k to the plaintiffs".

I think if the wrongdoing has been established as having happened (e.g. by courts or by however the local social environment figures that sort of thing out), the wrongdoer should pay a cost that makes sure the initial act was not worth it to them (ex ante), and then everyone can move on, and the parties involved can figure out whatever relationship they want after that (including no relationship).

It's far easier to look past former wrongs after debts have been paid or bad deeds punished, and it's good to avoid having the victim and the perpetrator talk about it at length. That is often both unproductive and painful for the victim.

Edit: Oops! I wrote 'meditation' instead of 'mediation'. Fixed.

Forced or badly done mediation does indeed seem terrible; entering into a conversation facilitated by someone skilled, with an intent to genuinely understand the harms caused and make sure you correct the underlying patterns, seems much less bad than the actual way the situation played out.

I agree with that statement as worded, but you still seem to be presupposing a view of ‘mediation is good-by-default in this sort of situation’ that I at least don’t think you’ve argued for.

That's fair, I've added a note to the bottom of the post to clarify my intended meaning. I am not arguing for it in a well-backed up way, just stating the output of my models from being fairly close to the situation and having watched a different successful mediation.

My sense is that there are a good number more injustices and predators in the EA ecosystem, most of which do not look exactly like this case. But it is not my job to uncover them and I am not making it my job. If you want to have an immune system that ferrets out bad behavior, you'll have to take responsibility for building that.

I have been thinking about creating an institution to work on this kind of thing. If anyone reading this is interested, please contact me and/or join the following Discord server: Rationalist/EA Court

Why the downvotes?

  1. the median outcome for projects like this is doing far more harm than good
  2. you haven't given any indication you understand the risks, much less are likely to beat them. 

I probably don't understand the risks! Like I have some similar example institutions in mind, but none that are super similar to what I'm doing or which have consequences as bad as you are implying, so I assume maybe there's a significant history that you are aware of and which I have missed. What examples do you have in mind?

I plan to write up my idea in much greater detail before actually launching it. More information about the risks now could be useful if it shows that this is a waste of time to pursue; alternatively, if I disagree, I might address the disagreements in later writeups.

I probably don't understand the risks!

One risk is that similar-sounding institutions can and do occasionally get taken over precisely by the people they're setup to prevent, and then those people have institutional backing and are even harder to dislodge.

E.g. see the section on Legible Signals from a podcast / interview with habryka from early this year:

Like in the context of the FTX situation, a proposal that I've discussed with a number of people is, "Fuck, man, why did people trust Sam? I didn't trust Sam." "What we should have done is, we should have just created a number, like, there's 10 cards and these are the 'actually trustworthy people' and 'high-integrity people' cards, and we should have given them to 10 people in the EA community who we actually think are [highly] trustworthy and high-integrity people, so that it actually makes sense for you to trust them. And we should just have [high-legibility] signals of trust and judgment that are clearly legible to the world."

To which my response was "Lol, that would have made this whole situation much worse." I can guarantee you that if you [had] handed a number of people - in this ecosystem or any other ecosystem - the "This person has definitely good judgment and you should trust what they say." [card. Then] in the moment somebody has that card and has that official role in the ecosystem, of course they will be [under a] shitton of adversarial pressure, for [them to] now endorse people who really care about getting additional resources, who really care about stuff.

...

And then I'm like, "Well, there are two worlds. Either nobody cares about who you think is high-integrity and trustworthy, or people *do* care and now you've made the lives of everyone who you gave a high-integrity / trustworthy card a lot worse. Because now they're just an obvious giant target, that if you successfully get one of the people of the high-integrity, high-trustworthy cards to endorse you, you have free reign and now challenging you becomes equivalent to challenging the “high-integrity, [high-trust] people” institution. Which sure seems like one of the hardest institutions to object to.

And I think we've seen this in a number of other places... There was a specific board set up by the Center for Applied Rationality, [where] CFAR kept being asked to navigate various community disputes, and they were like, "Look, man, we would like to run workshops, can we please do anything else?" And then they set up a board to be like, "Look, if you have community disputes in the Bay Area, go to this board. They will maybe do some investigative stuff, and then they will try to figure out what should happen, like, do mediation, maybe [speak about] who was actually in the right, who was in the wrong."

And approximately the first thing that happened is that, like, one of the people who I consider most abusive in the EA community basically just captured that board, and [got] all the board members to endorse him quite strongly. And then when a bunch of people who were hurt by him came out, the board was like, "Oh, we definitely don't think these [people who were abused] are saying anything correct. We trust the guy who abused everyone."

Which is a specific example of, if you have an institution that is being given the power to [blame] and speak judgment on people, and try to create common knowledge about [what] is trustworthy and what is non-trustworthy, that institution is under a lot of pressure...

[We] can see similar things happening with HR departments all around the world. Where the official [purpose] of the HR department is to, you know, somehow make your staff happy and give them [ways to] escalate to management if their manager is bad. But most HR departments around the world are actually, like, a trap, where if you go to the HR department, probably the person you complained about is one of the first people to find out, and then you can be kicked out of the organization before your complaint can travel anywhere else.

It's not true in all HR departments but it's a common enough occurrence in HR departments that if you look at Hacker News and are like, "Should I complain to HR about my problems?", like half of the commenters will be like, "Never talk to HR." HR is the single most corrupt part of any organization. And I think this is the case because it also tends to be the place where hiring and firing [decisions get] made and therefore is under a lot of pressure.

This is definitely something I've thought about and have multiple layers of plans to reduce, though my plans are admittedly of questionable strength, so there are pretty legitimate reasons to doubt my idea. I will probably research this some more before writing it up. That said, my idea is very different from just handing out "you can trust this person" cards.

I think if you want to pursue a project in the "justice" space, you should write up the problems you see and your planned incentive structures in a way that is legible. Then people can decide if they should trust your justice procedure.

I plan on doing that.

I think that Habryka podcast has a lot of potential for projects; it just needs a wide variety of people to build off of it.

Haven't downvoted but was considering it.

My crux around this is something like; conflict resolution is an iterated and adversarial game, where failure can cost a lot. "Creating an institution to work on [uncovering injustices and predators]" looks to me a lot like "Creating an institution to keep secrets on computers connected to the internet." You don't just have to outsmart the basic problem, you also need to outsmart everyone who has an incentive to subvert your system. I don't think that's an impossible challenge but it's harder than it looks, and it's harder than it looks in part because some people are actively trying to obscure the ways in which it's hard. The system can even look like it's working fine right up until it tries to tackle something important, in the same way that a substitution cypher will look like it's working fine right up until I try to store a bunch of bank account information with it.

It seems like you aren't noticing some skulls.

There's also a dynamic of something like... this is one of those issues where being too interested in the problem is correlated with being bad at solving it.  Obviously you have to compromise on this a little or these kinds of things never get done, but if someone's only qualification is interest I think the EV is very negative.

That sounds really discouraging, so I want to tell @tailcalled: I think it's great you care about people and want to prevent them from being hurt. I think the easiest, least risky way to do that is to create abundance so people have less dependence on any one entity and are thus less vulnerable. The more parties being thrown by people who aren't creeps (or harboring creeps), the easier it is to avoid the parties that are. So I'd encourage you to start by building socially, rather than investigating.

Same with my comment. :-/ Maybe the downvoters want to point out the risk of this turning into some denunciation/witchhunting/revolution eating her own children/cancel culture scenario. I'm worried about these dangers too (which is why I mentioned autoimmune disorders), but didn't want to turn my comment into an essay exploring pros and cons and risks and benefits and negative attractor states and ways to avoid them.

Of course, I would appreciate some explanation from the downvoters. My policy is to only downvote if I also take the time to comment.

Like ProgramCrafter, I neither downvoted nor upvoted your comment.

Maybe the downvoters want to point out the risk of this turning into some denunciation/witchhunting/revolution eating her own children/cancel culture scenario.

Could be what they are worried about.

My current model of how witch-hunts/cancel-culture occurs is that when there is no legitimate way to get justice, people sometimes manage to arrange vigilante justice based on social alliances, but that vigilante justice is by its very nature going to be more chaotic and less accurate than a proper system.

So one consequence of my idea, if it works properly, is that it would reduce witchhunts by providing victims with effective means of achieving justice.

Possibly. I would expect it to be very difficult to build a legitimate, independent and just institution for that. There is a reason we have checks and balances in government.

I think that this idea came from a thought process that generally generates good ideas (e.g. an LLM API that predicts whether someone has read the Sequences), but this time, due to bad luck, it ended up outputting an extremely bad idea. (I didn't downvote)

I guess that people who downvoted this would like to see more details on why this "court" would work and how it would avoid being sued when it misjudges (and the more cases there are, the higher the probability of a misjudgment).

(meta: I neither downvoted nor upvoted the proposal)

Ah.

I briefly mentioned this in the discord:

For a while I have been thinking about how one can best come to useful truth with respect to controversial subjects. I have developed a theoretical framework that I need to write up in detail, but for now here's the short version:

  • There is no executive which can deal out punishment or fines to pay for damages, so neither the accuser nor the accused has sufficient motivation to unravel the truth; instead most of the value will have to come from informing the community, and we suffer a commons problem because each individual community member is not sufficiently motivated.
  • Different parties have different questions that they are most interested in. One of the biggest jobs of the court is to distinguish all of the relevant questions so that things don't get falsely generalized.
  • Most people do not have time to work through the details, so the court needs to provide an easy-to-digest summary.
  • The court needs to dig up novel evidence, because most evidence is not publicly available, and it needs to come up with novel theory of social interactions to understand the significance of the events, making use of rationalist skills.

I'm of course open to input for how to do things differently than this, though you should expect some pushback because I do currently have some theory and observations backing the above strategy, so the discussion will have to clarify the alternatives.

The good news is I've strongly upvoted this back to positive territory.

I guess I should say, feel encouraged to join both if you want to help making the court or if you have some conflict you want the court to investigate.

Meta: Once again, here is a link to the crosspost on the EA Forum.

trevor

my current sense is that the world would probably be better if the CEA Community Health team was disbanded and it was transparent that there is little-to-no institutional protection from bullies in the EA ecosystem, so that more people do not get burned by assuming or hoping that it will play that role.

I'm not sure about this; even if the Community Health And Special Projects team did a bad job on this specific case, that doesn't indicate much about whether they're valuable for odd jobs, such as handling external threats to EA. Their website mentions 4 key categories of work, with two of the four being "Reducing risks related to sensitive projects, like work in policy and politics" and "Finding specialists to work on specific problems, for example, improving public communications around EA or risk-reduction in areas with high geopolitical risk". I've met some of the people working on those matters, which absolutely should have a dedicated org, and they're very professional and serious about at least being available as consultants who can handle sensitive situations and work in volatile environments, e.g. journalism.

In addition to geopolitical circumstances, they also could do things like run prediction markets on people researching S-risk, to forecast the odds that they end up going crazy and trying to maximize damage to the AI safety community, like Ziz or Emile Torres, rather than the traditional approach, which is going into anaphylactic shock and purging anyone who "seems potentially crazy" (including, e.g., people who take things seriously but aren't good at doing it charismatically).

There are also issues like epistemic-hijacking attacks that focus on key strategic targets in a group or movement, and external threats like that straight-up require a centralized body with broad powers just to have the slightest chance of mounting any deterrence and countermeasures. With slow takeoff, every passing year will make overwhelmingly powerful adversaries and black swan events into a greater and greater hazard to the very survival of the AI safety movement itself.

My takeaway from this is that it seems like the more EA-adjacent people are just not on the same level as Rat-adjacent people at handling power dynamic disputes, where the people making up all of the sides in the dispute instinctively become driven to win at feuds because humans are primates. That is an unsurprising conclusion given that EA-adjacents optimized more for resources/money and rat-adjacents optimized more for mind/skill. 

But that doesn't mean the whole Community Health and Special Projects team should be disbanded.

That sounds like:

The PR purpose of the Community Health team is to be able to prevent misbehavior within EA. The real purpose is to protect powerful people within EA from outside attacks.

So they are like an HR department of a company. Simply getting rid of the HR department likely isn't good for the average company. 

Maybe the solution isn't to get rid of them but to somehow be more honest about them being like a normal HR department?

I generally agree with this assessment, but I'm not sure about the HR department analogy, and I don't think that "protecting powerful people within EA from outside attacks" is the right framing; I was thinking more along the lines of weak points and points of failure. It just so happens that important people are also points of failure; for example, if you persuade people that Yudkowsky is evil, then they don't read the Sequences. And even for people who already benefitted from reading the Sequences, it could possibly de-emphasize the cognitive habits that they gained from reading them, thus making them more predictably manipulable (and in an easily measured way).

Basically, such an organization would act to prevent any scandals involving important people from coming to light which is roughly the opposite of trying to create transparency and consequences for people who engage in scandalous behavior. 

That also matches the behavior when Guzey asked them for confidentiality for his criticism of William MacAskill and they broke their confidentiality promise. 

If their purpose is to protect people like William MacAskill then breaking such confidentiality promises to help Will better defend against attacks makes sense. 

I was thinking more along the lines of SIGINT than HUMINT, and more along the lines of external threats than internal threats. I suppose that along the HUMINT lines, if Facebook AI Labs hires private investigators to follow Yudkowsky around to find dirt on him, or the NSA starts contaminating people's mail/packages/food deliveries with IQ-reducing neurotoxins, then yes, I'm definitely saying there should be someone to immediately take strategic action. We're the people who predicted Slow Takeoff, that the entire world (and, by extension, the world's intelligence agencies, whose various operations will all be interrupted by the revelation that all things revolve around AI alignment) would come apart at the seams, decades in advance; and we also don't have the option of giving up on being extraordinary and significant, so we should expect things to get really serious at some point or another. On issues more along traditional community health lines, that's not my division.

I don't think that the CEA Community Health Team as it exists is an actor that would do a lot in either of those scenarios and be very helpful for dealing with them. 

"they also could do things like run prediction markets on people researching S-risk, to forecast the odds that they end up going crazy "  


If this is a real concern, we should check whether fear of hell often drove people crazy.

Every retrospective I know of has shown them to do a terrible job. Note that the failures are not even obviously ideological. They have protected the normal sort of abuser. But they also protected Kathy Forth, who was a serial false accuser (yes, they banned her from some events, but she was still active in EA spaces until her suicide).

Would you expect to see retrospectives of cases where they did a good job? If an investigation concludes that "X made these accusations about Y but we determined them to be meritless", then there are good reasons for neither CEA nor X to bring further attention to those accusations by including them in a public retrospective. Or in cases where accusations are determined to have merit, it may still be that the victims don't want the case to be discussed in public any more than strictly necessary. Or there may be concerns of a libel suit from the wrongdoer, limiting what can be said openly.

I am extremely, extremely against disposing of important-sounding EA institutions based off of popular will, as this is a vulnerability that is extremely exploitable by outsiders and we should not create precedent/incentives to exploit that vulnerability.

If I'm wrong about this specific case, it's because of idiosyncratic details and because I didn't do as much research on this particular org relative to other people here. If I was wrong in this specific case, it would be a very weak update against my stance that EA orgs should be robust against sudden public backlash, due to features specific to the Community health team, not a strong update that I'm wrong about the vulnerabilities. The vulnerability that I'm researching is a technical issue, which remains an external threat regardless of specific details about who-did-what, and all I can say about it here is that this internal conflict is a lot more dangerous than it appears to any of the participants initiating it.

This is a specific claim about what specific people should do

We're basically doomed to continue talking past each other here. You don't seem to be willing to give tons of detail here about how, exactly, the Community Health And Special Projects team is too corrupt to function. I'm not willing to give tons of detail here about external threats that are vastly more significant than any internal drama within EA, which means I can't explain the details demonstrating why external threats to EA actually dominate the calculus of how important the Community Health And Special Projects team is or whether it should be disbanded.

For at least five years I have been telling victims* that there is no obvious advice on what they should do, but that they should absolutely avoid any formal community process. In particular, totally avoid the community health team.

I assume the community health team is mostly or entirely staffed by good, well-intentioned people. But I have personally treated it as way too corrupt in function. Whenever someone from the community health team messages me, I just ignore them. Once you think an institution is corrupt, the SAFEST response is to minimize contact. By contrast I'm happy to discuss community issues, which I have non-trivially often ended up involved in, with seemingly well-intentioned normal community members.

Official dispute processes tend to get captured by people who are either overt bad actors or, more commonly, just very internally aligned with power/status/money. You should not trust these processes in general. But especially do not trust them in the rationality community.


*Technically, people claiming to me to be victims (afaik in all cases these are the same thing).

What community health team are you talking about?

I would not consider CEA to be part of the rationality community.

Thank you for doing this investigation! It must have been a strenuous undertaking in terms of time, thought and emotion.

It would be good to have some sort of community immune system, as you call it, (although one would have to be wary of autoimmune disorders) but it's very understandable not to want to do that part-time. I started an investigation much smaller than yours last year, but gave up eventually because it was too much work in addition to my job and other things I want to do in my life. (Although a bit of sleuthing is fun, too. My investigation didn't involve the personal safety/public interest dilemmas that yours did.)

Added 2023-09-17: After skimming the comments and seeing the down- and disagreement votes, I have to note: While it would be nice to have a working immune system (or just more investigations as circumspect and comprehensive as Ben's), attempting to build it would be even riskier and more difficult than I had thought. Probably near-impossible.