Disclaimer: I am not writing this message in connection with my employer, my institution, or any third party. This is a personal judgment call, made solely in my own capacity.

Summary

Over the past few months, I have been involved in supporting insider reports about misconduct in AGI frontier labs. In particular, I’ve been supporting the victim of a crime perpetrated by an AGI frontier lab leader.

I am reaching out to the AI safety and governance community for support regarding their legal case, which has significant implications for AI development.

Details

I have known the crime victim well for many years, and they have earned my highest trust. After intensive discussions with them, I can attest that they have approached this lawsuit with serious consideration and for compelling reasons.

A reputable law firm has agreed to take their case on contingency. I have reviewed the case in detail. Frankly, I am disturbed and unsettled by its contents. I also fear the implications for the rest of the industry.

Further, I have spent significant time in Silicon Valley AI communities and have enough context to recognize troubling practices. I believe their legal case will bring to light structural problems with current AI industry leadership, which will shift the course of AGI development for the better.

With proper legal protection, the victim may be able to speak more freely about what they have witnessed, which will be valuable information for the AI industry as a whole. 

Doing nothing would leave little chance of public discovery or of correcting bad practices. The follow-on effects of this lack of accountability at the highest level of leadership in AI frontier labs are most likely negative.

I would like to discuss the entire chain of reasoning, but doing so would currently interfere with the legal process. I hope to one day also speak more freely. The lawsuit, if it were to become public, would speed up the date when that could happen.

The plaintiff cares deeply about getting AI right. In addition to taking on the mental cost of the lawsuit, the plaintiff has pledged to give 10% of the potential upside to the AI safety and governance community.

There are two main obstacles to the lawsuit. The first is funding. The second is that it will be taxing (and maybe even dangerous) for the plaintiff. They are willing to proceed in order to support lab leader accountability for the benefit of the broader AI safety and governance community. Dealing with this crime is already taking up the majority of the plaintiff’s time.

They have found a lawyer who will work on contingency, meaning the lawyer only gets paid if the case succeeds; in that event, the law firm will receive a substantial share of the upside (i.e. the lawyer will be paid out of damages awarded to the plaintiff). The plaintiff will still need to pay expenses regardless of the outcome of the case.

Expenses include everything except the lawyer’s hourly billing. For example:

  • depositions
  • expert witnesses
  • cybersecurity for the victim and key witnesses
  • physical security, such as bodyguards, if needed
  • media training / publicist, if needed
  • moving to a safer location, if needed
  • filing fees
  • any expenses for the law firm, such as travel
  • counseling
  • unforeseen events

Being able to cover these expenses will dramatically increase the likelihood of this case going ahead successfully. There is a shrinking window to file this case.

If the costs of the lawsuit end up lower than expected (for instance, if it doesn’t proceed to a jury trial), we will return the funds to you or donate them to an org supporting insider reports in AI frontier labs. 

Call to Action

Litigate-for-impact is an underexplored path toward developing safe AGI. I believe this opportunity is both low cost and high impact in the context of AI safety.

It is low cost because the main legal fees are already covered. The plaintiff only needs to cover the expenses listed above, which is why we are reaching out to the AI safety community for financial support.

It is high impact because it is a chance to uncover the recklessness of an AGI frontier lab.

If you are an individual who wants to help, please get in touch with me. If you’re at an org that would consider fiscal sponsorship for this project, please contact me, too.

Comments

FWIW, if anyone is interested in my take, my guess is that it doesn't make sense to support this (and I mild-downvoted the post).

I am pretty worried that some of your past reporting/activism in this space somewhat intentionally conflated broader Bay Area VC and tech culture with the "EA community", in a way that IMO ended up being more misleading than informative (and you then promoted media articles that I think were misleading, even though I believe many people pointed this out).

People can form their own opinions on this: https://forum.effectivealtruism.org/posts/JCyX29F77Jak5gbwq/ea-sexual-harassment-and-abuse?commentId=DAxFgmWe3acigvTfi 

I might also be wrong here, and I don't feel super confident, but I at least have some of my flags firing and would have a prior that lawsuits in the space, driven by the people who currently seem involved, would be bad. I think it's reasonable for people to have very different takes on this. 

I am obviously generally quite in favor of people sharing bad experiences they had, but would currently make bets that most people on LW would regret getting involved with this (but am also open to argument and don't feel super robust in this).

Hi habryka,

Thank you for your comment. It contains a few assumptions that are not quite accurate. I am not sure the comment section here is the best place to address them; in-person diplomacy may be wise. I would be down to get coffee the next time we are in the same city and discuss this in more detail.

Sure, happy to chat sometime. 

I haven't looked into the things I mentioned in a ton of detail (though I have spent a few hours on it), but I have learned to err on the side of sharing my takes here: even if they are wrong, it seems better to have them in the open so that people can correct them and track what I believe, even if they think it's dumb/wrong.

Ok, thank you for your openness. I find that in-person conversations about sensitive matters like these are easier, as tone, facial expression, and body language matter a great deal. It is possible that my past comments on EA that you refer to came across as more hostile than intended because of the text-based medium.

Fwiw, the contents of this original post actually have nothing to do with EA itself, or the past articles that mentioned me.

Makes sense. My experience has been that in-person conversations are helpful for getting on the same page, but they also often come with confidentiality requests that then make it very hard for information to propagate back out into the broader social fabric, and that often makes those conversations more costly than beneficial. But I do think it's a good starting point if you don't do the very costly confidentiality stuff.

> Fwiw, the contents of this original post actually have nothing to do with EA itself, or the past articles that mentioned me.

Yep, that makes sense. I wasn't trying to imply that it was (but still seems good to clarify).

(The crosspost link isn't working)

Apologies, the post is still awaiting approval on the EA Forum, as I've never posted there under this account.