Hey all, Anthropic cofounder here. I wanted to clarify Anthropic's position on non-disparagement agreements:
In other words: we're not here to play games with AI safety using legal contracts. Anthropic's whole reason for existing is to increase the chance that AI goes well, and to spur a race to the top on AI safety.
Some other examples of standard corporate boilerplate we've adjusted to fit our mission: (1) replacing standard shareholder governance with the Long-Term Benefit Trust, and (2) supplementing standard risk management with the Responsible Scaling Policy. Internally, we also have an anonymous RSP non-compliance reporting line, so any employee can raise concerns about issues like this without fear of retaliation.
Please keep up the pressure on us and other AI developers: standard corporate best practices won't cut it when the stakes are this high. Our goal is to set a new standard for governance in AI development, which includes fostering open dialogue, prioritizing long-term safety, making our safety practices transparent, and continuously refining our approach to align with our mission.
We're not claiming that Anthropic never offered a confidential non-disparagement agreement. What we are saying is that everyone is now free to talk about having signed a non-disparagement agreement with us, regardless of whether a non-disclosure previously prevented it. (We will of course continue to honor all of Anthropic's own non-disparagement and non-disclosure obligations, e.g. those from mutual agreements.)
If you've signed one of these agreements and have concerns about it, please email hr@anthropic.com.