L'Ésswrong, c'est moi.
I agree in general, but think the force of this is weaker in this specific instance because NonLinear seems like a really small org. Most of the issues raised seem to be associated with in-person work, and I would be surprised if NonLinear ever went above 10 in-person employees. So this seems like a difference of at most one order of magnitude. Clearly the case is different for major corporations or orgs that directly interact with many more people.
I think there will be some degree to which clearly demonstrating that false accusations were made will ripple out into the social graph naturally (even with the anonymization), and will have consequences. I also think there are some ways to privately reach out to some smaller subset of people who might have a particularly good reason to know about this.
If this is an acceptable resolution, why didn't you just let the problems with NonLinear ripple out into the social graph naturally?
If most firms have these clauses, one firm doesn't, and most people don't understand this, it seems possible that most people would end up with a less accurate impression of their relative merits than if all firms had been subject to equivalent evidence filtering effects.
In particular, it seems like this might matter for Wave if most of their hiring is from non-EA/LW people who are comparing them against random other normal companies.
I would typically aim for mid-December, in time for the American charitable giving season.
After having written an annual review of AI safety organisations for six years, I intend to stop this year. I'm sharing this in case someone else wants to take it up in my stead.
Reasons
Hopefully it was helpful to people over the years. If you have any questions feel free to reach out.
Thanks!
Alignment research: 30
Could you share a breakdown of what these people work on? Does this include things like the 'anti-bias' prompt engineering?
This post seems like it was quite influential. This is basically a trivial review to allow the post to be voted on.