I'm very happy to see effective altruism community members write public posts about EA organizations, where they point out errors, discuss questionable choices, and ask hard questions. I'd like to see more cases where someone digs deeply into an org's work and writes about what they find; a simple "this checked out" and "critical feedback that helps the org improve" are both good outcomes. Even "this org is completely incompetent" is a good outcome: I'd rather orgs be competent, of course, but in the cases where they aren't we want to know so we can explore alternatives.

When posting critical things publicly, however, unless it's very time-sensitive we should generally let orgs review a draft first. This allows the org to prepare a response if they want, which they can post right when your post goes out, usually as a comment. It's very common that there are important additional details you don't have as someone outside the org, and it's good for people to be able to review those details alongside your post. If you don't give the org a heads-up they need to choose between:

  • Scrambling to respond as soon as possible, including working on weekends or after hours and potentially dropping other commitments, or

  • Accepting that with a late reply many people will see your post, some will downgrade their view of the org, and most will never see the follow-up.

If you're thinking tactically, surprise is to your advantage; but within the community we're working toward the same goals: you're not trying to win a fight, you're trying to help us all get closer to the truth.

In general I think a week is a good amount of time to give for review. I often say something like "I was planning to publish this on Tuesday, but let me know if you'd like another day or two to review?" If a key person is out I think it's polite to wait a bit longer (and this likely gets you a more knowledgeable response) but if the org keeps trying to push the date out you've done your part and it's fine to say no.

Sometimes orgs will respond with requests for changes, or try to engage you in private back-and-forth. While you're welcome to make edits in response to what you learn from them, you don't have an obligation to: it's fine to just say "I'm planning to publish this as-is, and I'd be happy to discuss your concerns publicly in the comments."

[EDIT: I'm not advocating this for cases where you're worried that the org will retaliate or otherwise behave badly if you give them advance warning, or for cases where you've had a bad experience with an org and don't want any further interaction. For example, I expect Curzi didn't give Leverage an opportunity to prepare a response to My Experience with Leverage Research, and that's fine.]

For orgs, when someone does do this it's good to thank them in your response. Not only is it polite to acknowledge it when someone does you a favor, it also helps remind people that sharing drafts is good practice.

As a positive example, I think the recent critical post Why I don't agree with HLI's estimate of household spillovers from therapy handled this well: if James had published it publicly on a Sunday night with no warning then HLI would have been scrambling to put together a response. Instead, James shared it in advance and we got a much more detailed response from HLI, published at the same time as the rest of the piece, which was really helpful for outsiders trying to make sense of the situation.

The biggest risk here, as Ben points out, is that, faced with the burden of sharing a draft and waiting for a response, some good posts won't happen. To some people this sounds a bit silly (if you have something important to say and it's not time-sensitive, is it really so bad to send a draft and set a reminder to publish in a bit?) but not to me. I think this depends a lot on how people's brains work, but for some of us a short (or no!) gap between writing and publishing is an incredibly strong motivator. I'm writing this post in one sitting, and while I think I'd still be able to write it up if I knew I had to wait a week, I know from experience this isn't always the case.

This is a strong reason to keep reviews low-friction: orgs should not be guilting people into making changes, or (in the typical case) pushing for more time. Even if the process is as frictionless as possible, there's the unavoidable issue of delay being unpleasant, and I expect this norm does lose us a few good posts. Given how stressful it is to rush out responses, however, and the lower quality of such responses, I think it's a good norm on balance.

Comments:

For the record I don't think anyone needs to check with Lightcone before criticizing any of our work.

RobertM:
To the extent that this is saying that you expect Lightcone to pay fewer (unjustified) costs than most EA orgs from people publishing criticisms without pinging us first, I think that's probably true. Those costs are obviously not zero, and it's not obvious to me that the "correct" policy for any given individual who might want to publish such a criticism is to skip that step by default, if they're optimizing for the propagation of true information. The reason most people should at least consider skipping that step is that, as Jeff points out, this expectation is a non-trivial cost that will sometimes cause the publication to not happen. However, if you're a person for whom the cost of that kind of check-in is low, the improved output you'd get from the check-in may often outweigh that cost.

I'm not saying it won't improve someone's post to get direct feedback from us, and I'm not saying it might not reduce some amount of effort from someone on the Lightcone team to respond to things people are wrong about. But my current model is that, for people to have justified belief in their model of the work an org does, they should believe they would have heard negative info about us if it existed, and so I ought to encourage people to be openly and severely critical, and push back against demands to not write their criticism for a pretty large swathe of possible reasons.

Concretely, public criticism of CFAR and MIRI has made me feel more confident in my model of how bad things have been in those places (and how bad they haven't). Even if the criticism itself was costly to respond to, the cost was worth it in terms of people being able to come to accurate beliefs about those places.

jefftk:
Do you read my post as a demand that people not write their criticism?

I wasn't referring specifically to the OP when I wrote that; I meant that I ought to push back against a pretty wide swath of possible arguments and pressures against publishing criticism. Nonetheless I want to answer your question.

My answer is more "yes" than "no". If someone is to publish a critique of an EA org and hasn't shown it to the org, people can say "But you didn't check it with the org, which we agreed is a norm around here, so you don't seem to be willing to play by the rules, and I now suspect the rest of this post is in bad faith." Yet I think it's a pretty bad norm in many important instances. If the writer feels personally harmed or tricked by the org, they often will feel that their relationship with the org is rocky, and may choose to say nothing rather than talk to the org. I am reminded of a time at university when I felt so scrutinized and disliked by my professors that after completing my assigned problem set, I couldn't bring myself to hand it in and face them directly. Somewhat more strongly, I think requiring people to talk to whoever they're saying negative things about would have made it much more costly for the authors of both of the posts I linked above…

gwern:

(Also Zoe Curzi and Leverage. Really there’s a lot of examples.)

Also examples on the other side, I would note. Without a healthy skepticism of anonymous or other kinds of reduced-accountability reports, one would've been led around by the nose by Ziz's attempts.

TekhneMakre:
Aha. Now it seems to me that my reading of the OP and my reaction, as well as others' reading of my comments, have both followed this pattern:

  1. P1 implicitly proposes to call on some social machinery in some way (in jefftk's case, the machinery of norm-setting; in my case, the machinery of group epistemology).

  2. P2 objects to a wide swath of proposals to call on that machinery (you, me, and others pushing back on this norm-setting; jefftk and others pushing back against trusting group epistemology).

  3. P1 is confused by the response to some other proposal, or by the imputation of claims not made.

In both cases I think that the most salient thing should be: this social machinery is broken. The norm-setting/enforcing machine is broken, and the group-epistemology machine is broken.
jefftk:
In cases where you're worried about bad behavior by an org, or have had a bad experience with them and don't want to interact with them (including the examples you described above), I agree it's fine to go ahead without sending it to them. On the other hand, I think this is only rarely the case for critical posts? The larger category, where this doesn't apply, is what I was trying to address here. I should edit the post to include this, though I need to think about the wording and don't have time to make the change right now.

Right. I suspect we still have some disagreement but happy to leave it here. 

(To briefly leave a pointer, with no expectation, Jeff, that you respond to it: I think this sort of dynamic extends further into lots of other criticism, where even if your criticism isn't about bad behavior you can still be pretty unsure whether the org will respond well, and it can be very stressful to engage directly yet still pro-social to publish criticism.)

jefftk:
Edited to add something covering this, though I suspect it doesn't go as far as you'd prefer? (Also curious what you think of Ray's argument.)
Ben Pace:
I actually think your caveat helps a lot.

What's wrong with an org posting a response later? Why do they have to scramble? This isn't a rhetorical question. I imagine two of the reasons are:

  1. If there's a misconception caused by the post, then until the misconception is corrected, there could be unwarranted harm to the org. This seems like a reasonable concern in theory, but is it really a concern in practice? What are some examples of this being a real concern (if not for point 2 below)?

  2. If there's a misconception caused by the post, then the misconception might become ingrained in the social space and be more difficult for the org to correct.

If point 2 is necessary for your argument, then this seems like a problem of people forming ingrained opinions that can't later be corrected. Why does that happen? Are the processes driven by people with really ingrained opinions actually processes that good-doing orgs want to be a part of? I expect people to shrug and say "yeah, but look, that's just how the world works, people get ingrained opinions, we have to work around that". My response is: why on earth are you forfeiting victory without thoroughly discussing the problem?

A more convincing argument would have to discuss how orgs are using time-sensitive PR to deceive the social space.

jp:

This comment feels like wishful thinking to me. Like, I think our communities are broadly some of the more truth-seeking communities out there. And yet, they have flaws common to human communities, such as both 1 and 2. And yet, I want to engage with these communities, and to cooperate with them. That cooperation is made much harder if actors blithely ignore these dynamics by:

  1. Publishing criticism that could wait
  2. Pretending that they can continue working on that strategy doc they were working on, while there's an important discussion centered on their organization's moral character happening in public

I have long experience of watching conversations about orgs evolve. I advise my colleagues to reply urgently. I don't think this is an attempt to manipulate anyone.

TekhneMakre:
What are your ideas for attenuating the anti-epistemic effects of belief lock-in, groupthink, and information cascades?
jefftk:

What are some examples of this being a real concern

I didn't give this example in the original because it could look like calling out a specific individual author, which would be harsh. But it does seem like this post needs an example, so I've linked it here. Please be nice; I'm not giving them as an example of a bad person, just someone whose post triggered bad dynamics.

This is something I've been thinking about for a while, but it was prompted by the recent On what basis did Founder's Pledge disperse $1.6 mil. to Qvist Consulting from its Climate Change Fund? It reads as a corruption exposé, and I think Founders Pledge judged correctly that if they didn't get their response out quickly a lot of people would have shifted their views in ways that (the original poster agrees) would have been wrong.

The problem of people not getting the right information here seems hard to solve. For example, if you see just the initial post and none of the follow-up I think it's probably right to be somewhat more skeptical of FP. After the follow-up we have (a) people who saw only the original, (b) people who saw only the follow-up, and (c) people who saw both. And even if the follow-up gets a…

TekhneMakre:
I appreciate you sharing the example. I read the post, and I'm confused. It seems fine to me. Like, I'd guess (though could be wrong) that if I read this without context, I'd sort of shrug and be like "huh, I guess FP doesn't write super detailed grant reports". It doesn't read like a corruption exposé to me.

If someone is lying or distorting, then that can cause unjustified harm. If such a person could be convinced to run things by orgs beforehand, that would perhaps be good, though not obviously. If someone is NOT lying or distorting, then I think it's good for them to share in an efficient way, i.e. post without friction from running things by orgs. If there's harm, it's not directional. If people are just randomly not getting information, then that's bad, but it doesn't imply sharing less information would be good. If there's lock-in and info cascades, that's bad, but the answer isn't "don't share information".

You wrote:

Would you also call for positive posts to be run by an org's biggest critics? I could see that as a reasonable position.

It's something I'd worry they would do if they were investing in PR this way. It's something I worry they currently do, because the EA social dynamics have tended in the past to motivate lying: https://srconstantin.github.io/2017/01/17/ea-has-a-lying-problem.html It's something I worry EAs would coordinate on to silence discussion of, by shunning and divesting from people who bring it up.
jefftk:
I think some people will read the article as "they should have given more details publicly", but if that was what the author was trying to say they could have written a pretty different post. Something like:

Instead, they walk the reader through a series of investigative steps in a way that reads like someone uncovering a corrupt grant.

I think this would be positive, but putting it into practice is hard. If I'm writing something about Founders Pledge I don't know who their biggest critics are, so who do I share the post with? If that were the only problem I could imagine a system where each org has a notifications mailing list where anyone can post "here's a draft of something about you I'm thinking of publishing" and anyone interested in forthcoming posts can subscribe. But while I would trust Founders Pledge to behave honorably with my draft (not share it, not scoop me, etc.) I have significantly less trust for large unvetted groups. If you had a proposal for how to do this, though, I'd be pretty interested!

I didn't find that post very convincing when it came out, and still don't. I think the Forum discussion was pretty good, especially @Raemon's comments. And Sarah's followup is also worth reading.
TekhneMakre:
Huh. It just sounded to me like "I thought I'd find some information and then I didn't". Maybe I'm just being tone-deaf. Like, it sounded like a (boring and result-less) stack trace of some investigation.

Ok. Yeah, I don't see an obvious implementation; I was mainly trying to understand your position, though maybe it would actually be good. Thanks for the links.
jefftk:
I do think it is literally that, and I think that's probably how the author intended it. But I think many people won't read it that way?
TekhneMakre:
You may well be right. What's important to me here is that it be highlighted that the cost comes from this property of readers. Like, I don't like the norm proposal, but if it were "made a norm" (whatever that means), I'd want it to be emphatically tagged with "... and this norm is only here because of the general and alarming state of affairs where people will read things for tone in addition to content, do groupthink, do info cascades, and take things as adversarial moves in a group conflict calling for side-choosing".
Yair Halberstadt:
Simple example: Person posts something negative about GiveWell. I read it, shrug, and say I guess I'll donate somewhere else in the future. Two days later a rebuttal appears, which, if I ever read it, would change my mind. But I never do read it, because I've moved on, and GiveWell isn't hugely important to me.
TekhneMakre:
Is this a fictional example? You imagine that this happens, but I'm skeptical. Who would see just one post on LessWrong or the EA forum about GiveWell, and thenceforth not see or update on any further info about GiveWell?
DirectedEvolution:
I don't know about Yair's example, but it's possible they just miss the rebuttal. They'd see the criticism, not happen to log onto the EA Forum on the days when GiveWell's response is at the top of the forum, and update only on the criticism, because by a week or two later many people probably just have a couple main points and a background negative feeling left in their brains.
TekhneMakre:
It's concerning if, as seems to be the case, people are making important decisions based on "a couple main points and a background negative feeling left in their brains".
Yair Halberstadt:
But people do make decisions like this, which is a well-known psychological result. We've got to live with it, and it's not GiveWell's fault that people are like this.
TekhneMakre:
As I said in my (unedited) top-level comment:
M. Y. Zuo:
The latter point does not seem to follow from the former.
Yair Halberstadt:
I do not know of any achievable plan to make a majority of the world's inhabitants rational.
Yair Halberstadt:
A fictional example.
Phib:
Anecdote about me (not super rationalist-practiced, also just at times dumb): I sometimes discover that stuff I briefly took to be true in passing is false later. Feels like there's an edge of truths/falsehoods that we investigate pretty loosely but still tag with a valence of true/false, maybe a bit too liberally at times.
TekhneMakre:
What happens when you have to make a decision that would depend on stuff like that?
Phib:
I am unaware of those decisions at the time. I imagine people are to some degree 'making decisions under uncertainty', even if that uncertainty could be resolved by info somewhere out there. Perhaps there's some optimization of how much time you spend looking into something versus how right you could expect to be?
TekhneMakre:
Yeah, there are always going to be tradeoffs. I'd just think that if someone was going to allocate $100,000 of donations, or decide where to work, based on something they saw in a blog post, then they'd e.g. go and recheck the blog post to see if someone responded with a convincing counterargument.
jefftk:
A lot of it is more subtle and continuous than that: when someone is trying to decide where to give, do I point them towards Founders Pledge's writeups? This depends a lot on what I think of Founders Pledge overall: I've seen some things that make me positive on them, some that make me negative, and like most orgs I have a complicated view. I don't have full records on the provenance of all my views, and even if I did, checking hundreds of posts for updates before giving an informal recommendation would be all out of proportion.
TekhneMakre:
Okay, but like, it sounds like you're saying: we should package information together into packets so that if someone randomly selects one packet, it's a low-variance estimate of the truth; that way, people who spread opinions based on viewing a very small number of packets still end up near the truth. This seems like a really, really low benefit for a significant cost, so it's a bad norm.
jefftk:
That's not what I'm saying. Most information is not consumed by randomly selecting packets, so optimizing for that kind of consumption is pretty useless. In writing a comment here it's fine to assume people have read the original post and the chain of parent comments, and generally fine to assume they've read the rest of the comments. On the other hand, "top level" things are often read individually, and there I do think putting more thought into how a piece stands on its own is worth it.

Even setting aside the epistemic benefits of making it more likely that someone will see both the original post and the response, though, the social benefits are significant. I think a heads-up is still worth it even if we only consider how the org will be under strong pressure to respond immediately once the criticism is public, and the negative effects that has on the people responsible for producing that response.

I dunno, man. If I imagine someone who's sort of peripheral to EA but knows a lot about X, and they see an EA org doing silly stuff with X, and they write a detailed post, only to have it downvoted due to the norm... I expect that to cut off useful information far more than prevent {misconceptions among people who would have otherwise had usefully true and detailed models}.

jefftk:
I agree that would be pretty bad. This is a norm I'm pushing for people within the EA community, though, and I don't think we should be applying it to external criticism? For example, when a Swedish newspaper reported that FLI had committed to fund a pro-Nazi group, this was cross-posted to the forum. I don't think downvoting that discussion on the basis that FLI hadn't had a chance to respond yet would have been reasonable at all.

I also don't think downvoting is a good way of handling violations of this norm. Instead I want to build the norm positively, by people including at the ends of their posts "I sent a draft to org for review" and orgs saying "thanks for giving us time to prepare a response" in their responses. To the extent that there's any negative enforcement, I like Jason's suggestion that posts where the subject didn't get advance notice could get a pinned mod comment.
Garrett Baker:
I expect much of the harm comes from people updating an appropriate amount from the post, not seeing the org/person's reply because they never had to make any important decisions on the subject, then noticing later that many others have updated similarly, and subsequently doing a groupthink. Then the person/org is considered really very bad by the community, so other orgs don't want to associate with them, and Open Phil no longer wants to fund them because they're all scaredy cats who care about their social status.

To my knowledge this hasn't actually happened, though possibly this is because nobody wants to be talking about the relevant death-spiraled orgs. Seems more likely the opposite is at play with many EA orgs like Open Phil or Anthropic (Edit: in the sense that imo many are over-enthusiastic about them. Not necessarily to the same degree, and possibly for reasons orthogonal to the particular policy being discussed here), so I share your confusion about why orgs would force their employees to work over the weekend to correct misconceptions about them. I think most just want to seem professional and correct to others, and this value isn't directly related to the core altruistic mission (unless you buy the signaling hypothesis of altruism).

Yeah, doing a groupthink seems to increase this cost. (And of course the groupthink is the problem here, and playing to the groupthink is some sort of corruption, it seems to me.)

Garrett Baker:
I don’t understand this part of your response. Can you expand?

Suppose that it actually were the case that OP and so on would shun orgs based on groupthink rather than on real reasons. Now, what should an org do, if faced with the possibility of groupthink deciding the org is bad? An obvious response is to try to avoid that. But I'm saying that this response is a sort of corruption. A better response would be to say: Okay, bye! An even better response would be to try to call out these dynamics, in the hopes of redeeming the groupthinkers. The way the first response is corruption is:

  1. You're wasting time on trying to get people to like you, but those people have neutered their ability to get good stuff done, by engaging in this enforced groupthink.
  2. You're distorting your thoughts, confusing yourself between real reality and social reality.
  3. You're signaling capitulation to everyone else, saying, "Yes, even people as originally well-intentioned as we were, even such people will eventually see the dark truth, that all must be sacrificed to the will of groupthink". This also applies internally to the org.
lsanders:
I don’t have a clear opinion on the original proposal… but is it really possible to completely avoid groupthink that decides an org is bad?  (I assume that “bad” in this context means something like “not worth supporting”.) I would say that some degree of group-level weeding out of unworthy organizations seems like a transparently necessary step given the sheer number of organizations that exist.  I would also agree with you that delegating all evaluation to the group level has obvious downsides. If we accept both of those points, I think the question is more a matter of how to most productively scope the manner and degree to which individuals delegate their evaluations to a broader group, rather than a binary choice to wholly avoid (or support) such delegation.
TekhneMakre:
I'm not saying don't use group-level reasoning. I'm saying that, based on how people are advocating behaving, it seems like people expect the group-level reasoning that we currently actually have, to be hopelessly deranged. If that expectation is accurate, then this is a far worse problem than almost anything else, and we should be focusing on that. No one seems to get what I'm saying though.
lsanders:
Do you disagree that “some degree of group-level weeding out of unworthy organizations seems like a transparently necessary step given the sheer number of organizations that exist”?  If not, how does that dynamic differ from “shun[ning] orgs based on groupthink rather than based on real reasons”?
TekhneMakre:
Because groups can in theory compute real reasons. "Group-level weeding out" sounds like an action that a group can take. One can in principle decide which actions to take based on reasons. Groupthink refers to making decisions based not on real reasons, but rather on emergent processes that don't particularly track truth and instead e.g. propagate social pressures or whatever. As an example: https://en.wikipedia.org/wiki/Information_cascade
lsanders:
For that distinction to be relevant, individuals need to be able to distinguish whether a particular conclusion of the group is groupthink or whether it's principled. If the information being propagated in both cases is primarily the judgment, how does the individual group member determine which judgments are based on real reasons and which are not? If the premise is that this very communication style is the problem, then how does one fix that without re-creating much of the original burden on the individual that our group-level coordination was trying to avoid?

If folks try to square this circle through a mechanism like random spot checks on rationales, then things may become eventually consistent, but in many cases I think the time lag for propagating updates may be considerable. Most people would not spot-check any particular decision, by definition. Anything that requires folks to repeatedly look at the group's conclusions for all of their discarded ideas ends up being burdensome IMO. So I have trouble seeing an obvious mechanism for folks to promptly notice that the group reverted its decision that a particular org is not worth supporting. The only possibilities I can think of involve more rigorously centralized coordination than I believe (as a loosely-informed outsider) currently exists in EA.
TekhneMakre:
The broken group-level process doesn't solve anything; it's broken. I don't know how to fix it, but a first step would be thinking about the problem at all, rather than ignoring it or dismissing it as intractable before trying.
lsanders:
Okay, so you're defining the problem as groups transmitting too little information? Then I think a natural first step when thinking about the problem is to determine an upper bound on how much information can be effectively transmitted. My intuition is that the realistic answer for many recipients would turn out to be "not a lot more than is already being transmitted". If I'm right about that (which is a big "if"), then we might not need much thinking beyond that point to rule out this particular framing of the problem as intractable.
TekhneMakre:
I think you're very, very wrong about that.
lsanders:
Fair enough. Thanks for the conversation!

~~Remember: if an authority is doing something you don't like, make sure to ask them before you criticize them. By being an org, they are more important than you, and should be respected. Make sure to respect your betters.~~

More seriously, it doesn't seem totally unreasonable to ask people you're criticizing if they'd like to reply. And I do believe in trying to be respectful to those you disagree with to a significant degree most of the time. But, there's something extremely yikes about this being the default.

~~Remember: if an authority is doing something you don't like, make sure to ask them before you criticize them. By being an org, they are more important than you, and should be respected. Make sure to respect your betters.~~

I'm not sure if you changed your mind or kinda-sorta still mean this. But I also think that it would be best to have a norm of giving individual people a week to read and respond to a critical post, unless you have reason to think they'd use the time to behave in a tactical/adversarial manner. Same for orgs. If you think an organization would just use the week to write something dishonest or undermine your reputation, then go right ahead and post immediately. But if you're criticizing somebody or an org who you genuinely think will respond in good faith, then a week of response time seems like a great norm to me - it's what I would want if I was on the receiving end.

the gears to ascension:
It's struck through because I'm being sarcastic, and therefore never actually meant it; in fact I mean the opposite.
gjm:

I don't think anyone ever thought you might actually think what you wrote in struck-out text. But what's not so clear is whether you actually think that's a fair paraphrase of the real meaning of what jefftk wrote.

I think it's plainly not a fair paraphrase of the real meaning of what jefftk wrote, and that there is no reason to think that Jeff's actual opinions or intentions at all resemble those in your struck-out text.

Jeff explicitly says that someone writing a critical piece about an organization shouldn't feel any pressure to let them influence what it says, let alone to let them stop it being published. He doesn't say anything at all like "they are more important than you" or "they are better than you" or "you owe them respect".

And I don't think there's anything at all "yikes" about Jeff's suggestion. I think that on the whole it is likely to produce more useful criticism and more useful discussion.

Ben:
I am really curious what the "disagreement votes" on this comment actually mean. Gears said "I used this formatting to show sarcasm." What does it mean to disagree with that?
jefftk:
I think "disagree" means something like "I think you actually did sort of mean it"?
the gears to ascension:
yeah no idea either. here's the version of the original comment I endorse without hesitation:
TekhneMakre:
(I upvoted this, and then when I saw it was downvoted I strong upvoted. It's not exactly a deep analysis but the vibe should be brought into the light.)
jefftk:
Can you say more about what seems 'yikes' about defaulting to giving orgs (or individuals) you're criticizing a few days to prepare a response? (It's hard for me to know how to take your stricken initial paragraph, since you said below that you didn't mean it, but it sort of seems like you do mean it?)
the gears to ascension:
Sorry for ambiguity - here's the version I mean to be endorsing by using strikethrough to denote sarcasm: https://www.lesswrong.com/posts/Hsix7D2rHyumLAAys/run-posts-by-orgs?commentId=5gGyLeDK4b6nWfbCt - the strikethrough is simply, in my head, the CSS style that a sarcastic paragraph should get. one should not leave paragraphs meant to be inverted un-struck, imo.

I don't mind the idea of "hey, I'm publishing this, feel free to comment on it. I can add your comments." I do mind the idea of "Hey, is there anything about this document you'd like me to edit before I criticize you?"
jefftk:
I am 100% not advocating that! Giving the organization an opportunity to prepare a response is not the same thing as letting them decide or influence what your post says.
the gears to ascension:
Hmm. I see.

I'm toying with a summary of "giving notice and a preview is value-creating when the org is genuinely trying but value-destroying when it's not.  A post author can easily be uncertain about this and pushing them to decide ahead of time destroys more value." 

I think the effect can be changed on the margin by how the org responds, and would be really interested in a companion piece to this about what orgs owe potential writers of critical posts. 

I disagree with this post. At the very least, I feel like there should be some kind of caveat or limit regarding the size of the organization or distance that one has from the organization. For example, if I'm writing a post or comment about some poor experience I had with Amazon, do I have a moral obligation to run that post by Amazon's PR beforehand? No. Amazon is a huge company, and I'm not really connected to them in any way, so I do not and should not feel any obligation towards them prior to sharing my experiences with their products or services.

jefftk:
I'm only proposing here that EA community members let EA organizations review drafts before publishing. I think this probably also applies to other similar communities, but not without that cooperative relationship.

Isn’t whether there is, in fact, a cooperative relationship likely to be precisely the issue at hand, in many cases of criticism of EA orgs?

jefftk:
If you don't have a cooperative relationship with the org then I wouldn't apply this rule, no. But most org criticism I see where someone didn't run it by the org is in cases where they have either no preexisting relationship with the org (beyond being within the EA community) or one that's sufficiently cooperative that sharing would have been fine.

Why only orgs? Why not people? Is there some reason why this shouldn’t apply to all criticism, of anyone?

(Also, did you intend to limit this suggestion to only EA orgs, or any orgs?)

jefftk:
Public criticism of people is rare enough that I wasn't thinking about it, but yes, I think the same arguments apply. I think the case is much weaker when (a) you don't think they would want to respond or (b) you don't trust them to behave honorably with the information.
Said Achmiz:
Alright, makes sense; certainly a consistent view. I am definitely opposed to it in all cases: I do not think that criticism of, or commentary on, the public actions of individuals or organizations should be run by those being criticized prior to publication. (My views on the matter are, in essence, similar to what Ben says in his comments on this post. See also my comments on Sarah Constantin's related post for elaboration.)
jefftk:
In response to Ben's comments I've edited my post to clarify additional situations in which I don't think giving an org a heads-up is needed:

Not sure how much this addresses your concerns? Reading your comments on Sarah's post, it sounds like you're objecting to a norm where someone criticizing is expected to address private feedback they get from an org before publishing? I'm not advocating that -- as I wrote above:
Said Achmiz:
Almost not at all. That's definitely not the only thing I'm objecting to.

This norm should not exist at all, because it will inevitably reduce the probability that true and/or correct criticism reaches the public. The downsides you point out are to the organization (or person) themselves. But I do not think that members of the public have any obligation to consider the org's interests in such cases. Indeed, it would be wrong to do so, to whatever extent that consideration of the org's interests (and any actions, or perceived obligation for actions, that result from such consideration) may tip the scales toward not publishing the criticism.

It seems to me that it's morally acceptable to consider the org's interests only insofar as they have an effect on the public (construed here identically to "the public" in "learning true facts and correct criticism of an org benefits the public"). And even in those cases, disclosure of true information and correct criticism must be weighted much more strongly than some other purported effects (in accordance with the principle of non-paternalism).

In short, the author of a critical post owes the target of the criticism no consideration of consequences, so long as obligations of honesty, accuracy, legality, appropriateness, etc. are met. The author may owe the post's audience (a.k.a. the public) more than that, or may not; that may be argued. But to the target: no.