Seems like we have two kinds of criticism here, reflected by the two articles:

- criticism of how Nonlinear treated its employees;
- criticism of how much good Nonlinear actually produced.
These are somewhat independent (someone could do anti-malaria research while abusing their employees; someone could provide dream jobs while producing little output), but also somewhat connected, in the sense that cutting corners is sometimes excused as trying to optimize for the greater good as fast as possible.
So the entire discussion could basically be divided into answering 3 questions:

- Is Nonlinear a good place to work?
- Does Nonlinear actually produce a lot of good?
- Assuming an organization produces a lot of good, does that excuse it being a bad place to work?
And it seems to me that the conclusions are mostly "no", "no", and "no".
The first "no" doesn't mean that Nonlinear is the worst in the world, or criminal; it's just that most of us (whether EA or not) probably expect much better from our employers. It's like a negative review at Glassdoor.
The second and third "no" are two parallel reasons why we should not accept "but they are trying to do effective altruism" as an excuse for the former conclusion.
I do think cutting corners should be tolerated in EA? Everything in moderation and all that. Most very effective organizations cut corners.
> Most very effective organizations cut corners.
Technically true, but gods help us all if organizations start cutting corners as a way to signal greater effectiveness, and we keep responding to this signal positively (until the moment when things predictably blow up).
Yes, a new point. Basically: "effective organizations cut corners" is a mild infohazard.
Yes, sometimes it is necessary to cut corners to achieve a greater good, but such things should be done with caution. That means: if you cut too many corners, or you keep cutting them on unimportant things, you have gone too far. And as soon as the necessity to cut corners passes, you should try to get things back to normal.
But when this meme becomes popular, it motivates organizations to get sloppy, excusing the sloppiness by "as you see, we care about effectiveness so much that we don't have any time left for the stupid concerns of lesser minds". And then... people get hurt, because it turns out that some of the rules actually existed for a reason (usually as a reaction to people getting hurt in the past).
Cutting corners should be seen as a bad thing that is sometimes necessary, not as a good thing that should be celebrated. Otherwise bad actors (especially) will pass our tests with flying colors.
> Yes, a new point. Basically: "effective organizations cut corners" is a mild infohazard.
I do not in fact know the right amount of corner-cutting to do, which is strong evidence that this is not in fact an infohazard! I'd like to at least see some numbers before you declare something immoral and dangerous to discuss! I'm tempted to strong-downvote such a premature comment, but instead I will strong disagree.
> But when this meme becomes popular, it motivates organizations to get sloppy, excusing the sloppiness by "as you see, we care about effectiveness so much that we don't have any time left for the stupid concerns of lesser minds". And then... people get hurt, because it turns out that some of the rules actually existed for a reason (usually as a reaction to people getting hurt in the past).
Is this true? Maybe. It definitely causes orgs to get sloppy, but sloppiness in certain areas so you can focus on the areas that matter is exactly what triage is. Obviously the quote you gave is the wrong mindset to have, but why not just talk about how that's a stupid mindset? Mindsets of the form "Yes, our legal paperwork is not in fact in order, because it would cost us $180k/year for a full-time lawyer, this paperwork is never actually checked, and we're OK with the risk", or "Berkeley has terrible building regulations, so we're going to build a nice shack; this is illegal, but it's not visible from the street, and we expect the shack to be really cool, so we'll build it anyway", seem smart to me, to list a couple of clear and obvious examples.
> Cutting corners should be seen as a bad thing that is sometimes necessary, not as a good thing that should be celebrated. Otherwise bad actors (especially) will pass our tests with flying colors.
I don't think there's a single rule you can apply to all instances of cutting corners. Sometimes it's the right decision, sometimes not. When it's the right decision it should be praised; when it's not, it shouldn't.
> I'd like to at least see some numbers before you declare something immoral and dangerous to discuss!
Discussing hypothetical dangers shouldn't require numbers. And discussing hypothetical dangers is probably not itself so dangerous that it must be avoided whenever numbers are missing.
This is correct in general, and it may be right for this particular discussion too. Numbers may be too strong a requirement for changing my mind; at the least, a Fermi estimate would be nice, and some kind of evidence, even personal, supporting Viliam's assertions will definitely be required.
The important part isn't the assertions (which honestly I don't see here); it's asking the question. It's like with advice: taken as a command without an argument it's useless, but taken as framing it asks whether you should be doing a thing more or less than you normally do, and that can be valuable just by drawing attention to the question, even when the original advice is the opposite of what makes sense.
With discussion of potential issues of any kind, norms that call for avoiding such discussion, or for burdening it with rigor requirements, make it go away, and so the useful question of what the correct takes are goes unexplored.
I think it would be proper to provide a specific prediction, so here is one:
Assuming that we could somehow quantify "good done" and "cutting corners", I expect a negative correlation between these two among the organizations in EA environment.
I'm glad for the attempted prediction! It seems not very cruxy to me, though. Something more cruxy: I imagine that people are capable of moderating themselves to an appropriate level of "cutting corners", so I expect a continuum of corner-cutting levels. But you expect that small amounts of cutting corners quickly snowball into large amounts. So you should expect a pretty bimodal distribution.
[edit] A way this would not change my mind: if we saw a uni-, bi-, or multimodal distribution, but each of the peaks corresponded to a different cause area. Then I would say we're picking up different baseline levels of corner-cutting from the several different areas people may work in.
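To make the crux concrete, here is a minimal sketch (Python, with entirely made-up numbers and a hypothetical 0-to-1 "corner-cutting score" per organization) of what the two hypotheses would predict about the distribution across orgs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # hypothetical number of organizations

# Hypothesis A ("continuity"): orgs moderate themselves toward an
# appropriate level, so corner-cutting scores cluster around one mode.
continuity = np.clip(rng.normal(loc=0.4, scale=0.15, size=n), 0, 1)

# Hypothesis B ("snowball"): small amounts of corner-cutting grow until
# orgs end up either disciplined or very sloppy -- a bimodal mixture.
is_sloppy = rng.random(n) < 0.5
snowball = np.clip(np.where(is_sloppy,
                            rng.normal(0.8, 0.10, n),
                            rng.normal(0.1, 0.05, n)), 0, 1)

# Crude check: histogram the scores; two separated peaks support B.
for name, scores in [("continuity", continuity), ("snowball", snowball)]:
    counts, _ = np.histogram(scores, bins=10, range=(0, 1))
    print(f"{name:10s}", counts)
```

Under the continuity hypothesis the histogram shows a single central peak; under the snowball hypothesis it shows two separated peaks, which is the bimodality the prediction turns on.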
> But you expect that small amounts of cutting corners quickly snowball into large amounts.
I don't expect the existing organizations to get more sloppy.
I expect more sloppy organizations to join the EA ecosystem... and to be welcomed as they waste resources and burn out people (without producing much actual value in return), because their red flags will be misinterpreted as a sign of being awesome.
I am not sure whether this will result in a bimodal distribution, but I expect that there will be some boring organizations that do their accounting properly and also cure malaria, and some exciting organizations that do a lot of yachting and hot-tub karaoke parties. And when things blow up, no one will be able to figure out how many employees the exciting ones actually had, or whether they were paid according to the contract, which doesn't even exist on paper... because everyone was like "wow, these guys are thinking and acting so much out of the box that they must be the geniuses who will save the world", when actually they were just some charismatic guys who probably meant well but didn't think too hard about it.
I'd expect that to depend heavily on the definitions of "good done" and "cutting corners". For some definitions I'd expect a positive correlation, and for others a negative one.
I am not sure I would characterise 2 and 4 like that.
I think I'd say Ozy's criticisms were more like:
I agree that Ozy made these recommendations and that I didn't emphasize them in my summary. What I summarized were the problems Ozy pointed at. The recommendations are meant to address these problems, but I suspect there are underlying dynamics that generate the problems (at least all except the 3rd one), so I don't think EA will listen to the recommendations well enough to fix them. That's why I think the problems are the more relevant things to list: they show the future of EA. But of course this is a subjective editorial choice, and I think one could reasonably have done otherwise.
These 4 beefs are different from, and less serious than, the original accusations, or at least they feel that way to me. Is this retconning a motte after the bailey is lost? That said, they're reasonable beefs for someone to have.
These 4 beefs aren't about the original accusations; Ozy's previous post was about those. Rather, they are concerns that Ozy already had about Effective Altruism in general, which the drama around Nonlinear ended up highlighting as a side effect.
Because these beefs are more general, they don't capture as specifically the ways Alice and Chloe were harmed. However, on a community level these 4 dynamics should arguably be a bigger concern than the specific abuse Alice and Chloe faced, because they seem to some extent self-reinforcing: e.g. "Do It For The Gram" will attract and reward a certain kind of person who isn't going to be effectively altruistic.
I guess I should also say: see this post for the LW discussion of Ozy's review of the original accusations: https://www.lesswrong.com/posts/wNqufsqkicMNxabZz/practically-a-book-review-appendix-to-nonlinear-s-evidence
Against Nonlinear is a followup that Ozy wrote to Practically A Book Review: Appendix to "Nonlinear's Evidence: Debunking False and Misleading Claims". In it, they lay out 4 trends in Effective Altruism where Nonlinear was especially bad: