It seems like you're imagining a context that isn't particularly conducive to making intellectual progress. Otherwise, why would it be the case that John feels the need to regularly argue for veganism? If it's not obvious to the others that John's not worth engaging with, they should double-crux and be done with it. The "needs" framing feels like a tell that talking, in this context, is mainly about showing that you have broadcast rights, rather than about informing others.
The main case I can imagine where a truth-tracking group should be rationing attention like this is an emergency where there's a time-sensitive question that needs to be answered, and things without an immediate bearing on it need to be suppressed for the duration.
The "needs" framing feels like a tell that talking, in this context, is mainly about showing that you have broadcast rights, rather than about informing others.
That's because lots of talking is mainly about broadcast rights. Any double-crux on this situation has to include both John's explicit argument AND his need for broadcast rights, or it won't actually solve the underlying issue. He'll fail to update, or choose another thing to continually bring up.
Pretending humans are only optimizing for truth is a recipe for spending lots of time having arguments that pretend to be about one thing when they're actually about broadcast rights or traumas.
The dialogue portrays an organization that's just realizing that the naive model ("more time spent on object-level truths leads to more truth") is wrong.
In the fully evolved form of the organization, someone (maybe even John himself) would have realized he had this need the first or second time it happened, and gone meta to address it. Then in the future, when it comes up, people could point out when it's derailing the conversation in a way that puts John's need above the need of the group to get to the truth. The organization would
...Overall great post, thanks! Much I agree with, but a few things stick out.
By bringing this topic up so much, you're putting your needs above the needs of the others you're interacting with and of the group; bringing it up less frequently would place everyone's needs on equal ground.
The competing needs frame feels off to me. I think this is why (but I haven't thought about it at length):
In many cases (including the rationality community/LW), the point is to come together towards some joint objective. Raemon would call this building a product together. When you're building a product, it's not about my needs vs your needs, it's about which actions will actually lead to a successful product.
I think it's quite importantly about both. In her review of An Everyone Culture, Sarah describes deliberately developmental organizations as
creating a culture where everyone talks about mistakes and improvements, and where the personal/professional boundaries are broken down.
That second one may seem nuts, but as she points out in her review of Moral Mazes in the same post, there are really good reasons to bring our needs to work: If we don't, we end up pretending to have conversations about Product that are actually about our needs. This should terrify us as people who actually care about the product, because it might mean that your fighting for website minimalism is actually about your need to be heard, and has nothing to do with creating a better reading experience, however eloquently you argue for it.
Once you accept that much work at traditional organizations is actually about
...I feel like the elephant in the room is that convincing logical arguments are often only weak to moderate evidence for something.
In the 'keep the organization from being overrun' sense, see also sealioning. The search space of worthwhile things is very large and idiosyncratically explored by well-meaning, intelligent people. Aggressive, value-laden 'logical arguments' often point to a tacit value of having everyone converge on the same set of metaheuristics. This is because the person doing this has a strong need for internal consistency that they are externalizing onto their social space. And there's nothing wrong with wanting internal consistency. But if pressed hard, it is anti-truth-seeking as an aggregate strategy, because you lose out on the consilience of having different people pursuing different search methods. Epistemology is a team sport. The objection would be 'but if we don't then argue about what we've discovered, what's the point?' The point is that adversarial processes, as part of the truth-seeking process, need to be consensual. This applies doubly when you aren't in a 101 space and people might be sick of a dynamic where simple-seeming questions with complicated answers make newer members feel entitled to the effort needed to explain said complicated answers. This is one of the reasons well-written blog posts that can be referenced by name can be so helpful for community discourse.
I like this post by the way and my comment wasn't an objection to it.
our norm of radical transparency means that this conversation and all similar conversations I have will be recorded and shared with everyone, and any such political moves by me will be laughably transparent.
And the decision algorithm that your brain uses to decide who to sit down is also recorded, one imagines? In accordance with our norm of radical transparency.
The general rule is that people should give equal weight to their own needs, the needs of the people they're interacting with, and the needs of the organization as a whole.
I'm terribly sorry, but I'm afraid I'm having a little bit of trouble working out the details of exactly how this rule would be applied in practice—could you, perhaps, possibly, help me understand?
Suppose Jill comes to Jezebel and says, "Jezebel, by mentioning the hidden Bayesian structure of language and cognition so often, you're putting your own needs above the needs of those you're interacting with, and those of the organization as a whole."
Jezebel says, "Thanks, I really value your opinion! However, I've already taken everyone's needs into account, and I'm very confident I'm already doing the right thing."
What happens?
Jill sits down with everyone, listens to all the points of view, and does her best to understand all the arguments. In the end, she determines Jezebel was correct.
What if, instead, Jill determines that Jezebel was wrong—but Jezebel still disagrees?
She sits down with each of them and explains why, based on the values, she decided as she did.
What if all said people are not satisfied with Jill’s explanation?
or they're doing some sort of Socratic move (in the latter case, this is a style of conversation I'd rather not have on my posts)
Very well. I will endeavor to be more direct.
there are clear answers to them if you spend a few minutes steelmanning how the aforementioned organization would work well
The fourth virtue is evenness! If you first write at the bottom of a sheet of paper, "And therefore, the aforementioned organization would work well!", it doesn't matter what arguments you write above it afterward—the evidential entanglement between your position and whatever features-of-the-world actually determine organizational success, was fixed the moment you determined your conclusion. After-the-fact steelmanning that selectively searches for arguments supporting that conclusion can't help you design better organizations unless they have the power to change the conclusion. Yes requires the possibility of no.
they're looking for impossible certainty in an obviously context specific and highly variable situation
We're looking for a decision procedure. "It's context-specific; it depends" is a good start, but a useful proposal needs to say more about what it depends on.
A simple ex
...Doing the "strong opinions weakly held" thing can make it hard to know when I've updated, so I want to list a few updates I've made from discussing this post with people on LW and in person:
One of the major things I didn't realize about the models I was using in this post is when they do and don't apply. In particular, the models related to radical transparency and applying the values to everyone work better in a private space with strong vetting, and the models related to "balancing needs" work better in a public space with weaker vetting. If I were to write the post again, this is the biggest change I would focus on making.
I am now more skeptical of radical transparency and wary of some of its psychological effects, especially in the context of a public space, but even in private organizations with strong vetting.
I still think the "people's needs are equal with the product of the space" model is basically correct for a public space, but now think that there are multiple ways that could look. One of the ways it could look is like here, but another way it could be implemented is one in which everyone is "responsible" for their own feelings. That is, people can treat thei
I just reread your post and have a couple more comments.
Jill: The problem is twofold. Firstly, people find it annoying to retread the same conversation over and over. More importantly, this topic usually leads to demon conversations, and I fear that continued discussion of the topic at the rate it's currently discussed could lead to a schism. Both of these outcomes go against our value of being a premier community that attracts the smartest people, as they're actually driving these people away!
Jill: Yes, truthseeking is very important. However,...
I liked this post a lot and loved the additional comment about "Feeling and truth-seeking norms" you wrote here.
As a small data point: there have been at least three instances in the past ~three months where I was explicitly noticing certain norm-promoting behavior in the rationalist community (and LessWrong in particular) that I found off-putting, and "truth-seeking over everything else" captures it really well.
Treating things as sacred can lead to infectiousness where items in the vicinity of the thing are treated as sacred too, even ...
Something that I'm maybe able to put into words now:
The classic example of "sacred values run amok" in my mind is when you ask people how much money a hospital should spend on a heart transplant for a dying child. People try to dodge the question, avoiding trading off a sacred value for a mundane value, despite the fact that money can buy hospital equipment that saves other lives.
It's plausible that a hospital should hold "keeping people healthy and alive" as an overall sacred value, which it never trades off against. This might forbid some paths where resources are spent on things that weren't necessary to keep people healthy and alive. But it doesn't tell you what the best strategies for going about it are. You're allowed to sacrifice a boy's life to buy hospital equipment. You're even allowed to sacrifice a boy's life to make sure your employees are well rested and not overly stressed. Running a hospital is a marathon, not a sprint.
Over the past couple years, I have updated to "yes, LessWrong should be the place focused on truthseeking." I think I came to believe that right around the time I wrote Tensions in...
I definitely agree that there could exist perverse situations where there are instrumental tradeoffs to be made in truthseeking of the kind I and others have been suspicious of. For lack of a better term, let me call these "instrumentally epistemic" arguments: claims of the form, "X is true, but the consequences of saying it will actually result in less knowledge on net." I can totally believe that some instrumentally epistemic arguments might hold. There's nothing in my understanding of how the universe works that would prevent that kind of scenario from happening.
But in practice, with humans, I expect that a solid supermajority of real-world attempts to explicitly advocate for norm changes on "instrumentally epistemic" grounds are going to be utterly facile rationalizations with the (typically unconscious) motivation of justifying cowardice, intellectual dishonesty, ego-protection, &c.
I (somewhat apologetically) made an "instrumentally epistemic" argument in a private email thread recently, and Ben seemed super pissed in his reply (bold italics, incredulous tone, "?!?!?!?!?!" punctuation). But the thing is—even if I might conceivably go on to defend a modified form of my orig
...I'm not advocating lying here, I'm advocating learning the communication skills necessary to a) actually get people to understand your point (which they'll have a harder time with if they're defensive), and b) not waste dozens of hours unnecessarily (which could be better spent on figuring other things out).
[and to be clear, I also advocate gaining the courage to speak the truth even if your voice trembles, and be willing to fight for it when it's important. Just, those aren't the only skills a rationalist or a rationalist space needs. Listening, communicating clearly, avoiding triggering people's "use language as politics mode", and modeling minds and frames different from your own are key skills too]
I find myself wanting to question your side more
Thanks, I appreciate it a lot! You should be questioning my "side" as harshly as you see fit, because if you ask questions I can't satisfactorily answer, then maybe my side is wrong, and I should be informed of this in order to become less wrong.
Why do you think this prior is right?
The mechanism by which saying true things leads to more knowledge is at least straightforward: you present arguments and evidence, and other people evaluate those arguments and evidence using the same general rules of reasoning that they use for everything else, and hopefully they learn stuff.
In order for saying true things to lead to less knowledge, we need to postulate some more complicated failure mode where some side-effect of speech disrupts the ordinary process of learning. I can totally believe that such failure modes exist, and even that they're common. But lately I seem to be seeing a lot of arguments of the form, "Ah, but we need to coordinate in order to create norms that make everyone feel Safe, and only then can we seek truth." And I just ... really have trouble taking this seriously as a good faith argument rather than an attempt to col
...When you look at the question using that native architecture, it becomes relatively simple to find a reasonable answer.
I don't think "reasonable" is the correct word here. You keep assuming away the possibility of conflict. It's easy to find a peaceful answer by simulating other people using empathy, if there's nothing anyone cares about more than not rocking the boat. But what about the least convenient possible world where one party has Something to Protect which the other party doesn't think is "reasonable"?
The shared values and culture serve to make sure those heuristics are calibrated similarly between people.
Riiiight, about that. The OP is about robust organizations in general without mentioning any specific organization, but given the three mentions of "truthseeking", I'd like to talk about the special case of this website, and set it in the context of a previous discussion we've had.
I don't think the OP is compatible with the shared values and culture established in Sequences-era Overcoming Bias and Less Wrong. I was there (first comment December 22, 2007). If the Less Wrong and "rationalist" brand names are now largely being held by a different culture with differen
...I don't think "reasonable" is the correct word here. You keep assuming away the possibility of conflict. It's easy to find a peaceful answer by simulating other people using empathy, if there's nothing anyone cares about more than not rocking the boat. But what about the least convenient possible world where one party has Something to Protect which the other party doesn't think is "reasonable"?
Yes, if someone has values that are in fact incompatible with the culture of the organization, they shouldn't be joining that organization. I thought that was clear in my previous statements, but it may in fact have not been. If their own values are at odds with what's best for the organization, given its values, every damn time, that's an incompatible difference. They should either find a different organization, or try the archipelago model. There are such things as irreconcilable value differences.
I don't think the OP is compatible with the shared values and culture established in Sequences-era Overcoming Bias and Less Wrong.
I agree. I think when that culture was established, the community was missing important concepts abou...
I think when that culture was established, the community was missing important concepts about motivated reasoning and truth seeking
Can you be more specific? Can you name three specific concepts about motivated reasoning and truthseeking that you know, but Sequences-era Overcoming Bias/Less Wrong didn't?
I think many of those norms originally caused the site to decline and people to go elsewhere.
I mean, that's one hypothesis. In contrast, my model has been that communities congregate around predictable sources of high-quality writing, and people who can produce high-quality content in high volume are very rare. Thus, once Eliezer Yudkowsky stopped being active, and Yvain a.k.a. the immortal Scott Alexander moved to Slate Star Codex (in part so that he could write about politics, which we've traditionally avoided), all the "intellectual energy" followed Scott to SSC.
Can you think of any testable predictions (or retrodictions) that would distinguish my model from your model?
I also got that this is a subject you care a lot about.
Yes. Thanks for listening.
When you look at the question using that native architecture, it becomes relatively simple to find a reasonable answer. This is the same way that we regularly find solutions to complex negotiations between multiple parties, or plan complex situations with multiple constraints, even though many of those tasks are naively uncomputable.
I'm not confident that it does. I perhaps expect people doing this using the native architecture to feel like they've found a reasonable answer. But I would expect them to actually be prioritising their own feelings, in most cases. (Though some people will underweight their own feelings. And perhaps some people will get it right.)
Perhaps they will get close enough for the answer to still count as "reasonable"?
If someone attempts to give equal weight to their own needs, the needs of their interlocutor, and the needs of the forum as a whole - how do we know whether they've got a reasonable answer? Does that just have to be left to moderator discretion, or?
I think Said thinks that individuals bear full responsibility for their feelings of safety, and that it's actively harmful to make these something the group space has to worry about.
Well, this is certainly not an egregious strawman by any stretch of the imagination—it’s a reasonable first approximation, really—but I would prefer to be somewhat more precise/nuanced. I would say this:
Individuals bear full responsibility for having their feelings (of safety, yes, and any other relevant propositional attitudes) match the reality as it in fact (objectively/intersubjectively verifiably) presents itself to them.[1]
This, essentially, transforms complaints of “feeling unsafe” into complaints of “being unsafe”; and that is something that we (whoever it is who constitute the “we” in any given case) can consider, and judge. If you’re actually made unsafe by some circumstance, well, maybe we want to do something about that, or prevent it. (Or maybe we don’t, of course. Likely it would depend on the details!) If you’re perfectly safe but you feel unsafe… that’s your own business; deal with it yourself![2]
...I think Said might even believe that “social safety” isn’t even important for the space, i.
I hereby proclaim that "feelings of safety" be shortened to "fafety." The domain of worrying about fafety is now "fafety concerns."
Problem solved. All in a day's work.
Just, the sort of thing that you should say 'ah, that makes sense. I will work on that' for the future.
It's actually not clear to me that I should work on that. As a professional hazard of my other career, I'm pretty used to people trying to use "You would be more persuasive if you were nicer" as an attempted silencing tactic; if I just believed everyone who told me that, I would never get anything done.
If you suppress a signal, it's hard to know how representative it is of the whole population. If people stop expressing when their feelings are hurt, it becomes next to impossible to keep a representative statistic. Why vote when my vote is one among thousands and very unlikely to be a swing vote? If a speech act in fact impacts a big portion, but each person believes they are a single-person minority, you get more suppression than desired. There's also the general danger of revolving around common denominators.
There's a lot I like about this post (I was mulling over a similar sort of post, spelling out what collection of norms I think would actually work best for a dedicated truthseeking space).
There are two crystallizations here that I like, which I'd been struggling to articulate: over the past year I've updated harder towards "yes, it's really important for LessWrong's highest value to be truthseeking, and not to make any tradeoffs for other things." But something about that still felt nagging to me. I grappled a bi...
What would go horribly wrong?
I'm not sure what effect having your every word recorded and freely available would have on in-person conversations. Or having Crocker's Rules instituted.
Epistemic Status: Strong opinions weakly held. Mostly trying to bring some things into the discourse that I think are too often ignored.
Some updates I've made based on the discussion in this post are here.
Introduction
Jessicata's Dialogue on Appeals to Consequences is an expansion of a response that she wrote to me a few months ago, arguing a particular point that I agree with: namely, if you have an object-level thing you want in the world, it's almost never worth lying or withholding information about that thing, because doing so breaks meta-level norms about truthseeking that are much more important to accomplishing object-level goals in general. However, there's a slightly more interesting and much murkier case that the original comment was pointing to. That is, what if your truthseeking norms are in tension with OTHER meta-level norms that are important? In general, how do you deal with instances where tensions between two important values leave you not knowing what to do?
Dialogue
Let's imagine John and Jill are discussing John's behavior in a private space. Jill is a leader of the space, and John is someone who frequently attends the space and has lively discussions trying to get to the truth.
Jill: John, I've had several complaints about your tendency to steer conversations towards the divisive claim that everyone should be a vegan, and I'm going to ask you to tone it down a bit when you're in our main space.
John: Are people saying that I'm making arguments that are false?
Jill: No, no one is saying that you're making false arguments.
John: Are people saying that I'm derailing the conversation? I think you'll find that every instance I brought up veganism was highly relevant to the conversation.
Jill: Yes, some people have said that, but I happen to believe you when you say that you've only brought it up in contexts that were relevant for you.
John: Then what's the problem? I'm stating relevant true beliefs that add to the totality of the conversation and steer it in conversationally relevant directions.
Jill: The problem is twofold. Firstly, people find it annoying to retread the same conversation over and over. More importantly, this topic usually leads to demon conversations, and I fear that continued discussion of the topic at the rate it's currently discussed could lead to a schism. Both of these outcomes go against our value of being a premier community that attracts the smartest people, as they're actually driving these people away!
John: Excuse me for saying so, but this is a clear appeal to consequences!
Jill: Is it? I'm not saying that the negative consequences to the community mean that what you're saying is false - that would be a clear logical fallacy. Instead I'm just asking you to bring up this argument less often because I think it will lead to bad outcomes.
John: Ok, maybe it's not a logical fallacy, but it is dangerous. This community is built on a foundation of truth seeking, and once we start abandoning that because of people's feelings, we devolve into tribal dynamics and tone arguments!
Jill: Yes, truthseeking is very important. However, it's clear that just choosing one value as sacred, and not allowing for tradeoffs, can lead to very dysfunctional belief systems. I believe you've pointed at a clear tension in our values as they're currently stated: the tension between freedom of speech and truth on one hand, and the value of making a space where people actually want to have intellectual discussions on the other.
John: You're saying there's a tension, but to me there's a clear and obvious winner. Under your proposed rules, anyone will be able to silence anything simply by saying they don't like it!
Jill: If I find someone trying to silence good arguments through that tactic, I'll sit them down and have a similar conversation to the one we're having now.
John: That's even worse! That means that instead of putting the allowed conversation topics up to a vote, we're putting them in the hands of one person, you! You can silence any conversation you want.
Jill: I can see how it would seem that way, but I believe we've cultivated some great cultural norms that make it harder for me to play political games like that. Firstly, our norm of radical transparency means that this conversation and all similar conversations I have will be recorded and shared with everyone, and any such political moves by me will be laughably transparent.
John: That makes sense. Also, Hi Mom!
Jill: Second, our organization allows anyone to apply the values to anyone else, so if you see ME not following the values in any of my talks, you can call me out on it and I'll comply.
John: Sure, you say that now, but because of your role you can just defy that rule whenever you want!
Jill: That's true, and it's one of the reasons I've worked to cultivate integrity as a leader. Has there been any instance of my behavior that makes you think I would actually do that?
John: No, I suppose not. Are there any other cultural norms preventing you from using the arbitrary nature of decisions for your own gain?
Jill: There's one more. Our organization has a clear set of values, and as the leader, one of my roles is to spearhead changes to the values in clear ways when there's tension between them. So I'm not just going to talk to you; I'm actually going to suggest to the organization that we clarify our values such that they tell us what to do in these relatively common situations, and I'm going to have you help me.
John: I think that makes sense. We can probably make a list of topics that people are allowed to taboo, and a list of topics people are not allowed to taboo, and then I'll always know what it's ok to "appeal to consequences" on.
Jill: I'm afraid that particular rule would be unwise. I think there are practically unlimited scissor statements that could cause schisms in our community, and a skilled adversary could easily find one that's not on our list of approved topics. No, I'm afraid we'll need to make a general value that can cover these situations in the general case.
John: Oh, so insisting on avoiding appeals to consequences can actually be used by someone looking to harm our community? That's interesting! But it's not clear to me that there is a general rule that can cover all the cases.
Jill: There is. The general rule is that people should give equal weight to their own needs, the needs of the people they're interacting with, and the needs of the organization as a whole.
John: I'm not sure I get it.
Jill: Well, you have a need to express that everyone should be a vegan. It's clearly very important to you, or you wouldn't bring it up so much. At the same time, many of the people in our community have a need to have variety in their conversation, and you should be aware of this when talking with them. Finally, our organization has a need to not experience or discuss scissor statements too often, in order to remain healthy and avoid frequent schisms. By bringing this topic up so much, you're putting your needs above the needs of the others you're interacting with and of the group; bringing it up less frequently would place everyone's needs on equal ground.
John: That makes sense. I suppose by the same token, if there's a really interesting topic that's helpful for the group to know about, and that lots of people want to talk about, it would be putting your own needs above others' needs if you said it hurt your feelings so people couldn't talk about it.
Jill: Exactly!
John: So this rule seems plausible to me, and I'm sure it would be great for many people, but I have to admit it's not for me. I'd much prefer a space where people are allowed to say anything they want to me, and I can say anything I want to them in return.
Jill: I agree that this may not be the best rule for everybody. That's why next week we're going to start experimenting with The Archipelago Model. As I said, I want you to tone it down in the main room, which follows the Maturity value mentioned above. However, we've designated a side room that instead follows Crocker's Rules. You're allowed to go to either room, but while in a room, you must follow its stated values. And most importantly, all conversations are recorded and can be listened to by anyone in the community!
John: Cool, that seems worthwhile, but very messy and likely to have numerous hidden failure modes...
Jill: I agree, but it at least seems worth a shot!
Commentary
So you probably noticed already, but this post wasn't really about Appeal to Consequences at all. Instead, it's a meditation on how good organizations deal with tensions in their values and avoid being overrun by skilled sociopaths. A lot of these suggestions and ideas come from the work I've been doing over the past year or so to figure out what makes great organizations and communities. I'd be particularly interested in people's inner sim of how the organization described by John and Jill above would go horribly wrong, and counter ideas about what could be done to fix THOSE issues.
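As a purely illustrative addendum (not part of the original dialogue): if you wanted to cash out Jill's "equal weight" rule as something closer to the explicit decision procedure the comments above ask for, one toy sketch might look like the following. Everything here is made up for illustration, including the names, the scoring scale, and the threshold; the actual rule in the post relies on human judgment, not arithmetic.

```python
# Toy sketch of Jill's "equal weight" rule as a decision procedure.
# All scores, names, and thresholds are hypothetical and purely
# illustrative; they are not part of the original post.

from dataclasses import dataclass


@dataclass
class NeedsAssessment:
    """How much a proposed action (e.g. 'bring up veganism again') serves
    each party's needs, on an arbitrary -1.0 .. 1.0 scale."""
    speaker: float        # the person who wants to raise the topic
    interlocutors: float  # the people they're directly talking with
    organization: float   # the space/community as a whole


def equal_weight_score(a: NeedsAssessment) -> float:
    # "Equal weight" means no party's needs get a larger coefficient.
    return (a.speaker + a.interlocutors + a.organization) / 3


def should_proceed(a: NeedsAssessment, threshold: float = 0.0) -> bool:
    # Proceed only if the action is net-positive across all three parties.
    return equal_weight_score(a) > threshold


# John raising veganism yet again: good for him, mildly bad for his
# interlocutors, bad for the organization (schism risk).
again = NeedsAssessment(speaker=0.8, interlocutors=-0.4, organization=-0.7)

# John raising it occasionally, in clearly relevant contexts.
occasionally = NeedsAssessment(speaker=0.5, interlocutors=0.1, organization=0.0)

if __name__ == "__main__":
    print(should_proceed(again))         # False
    print(should_proceed(occasionally))  # True
```

Of course, the scores are doing all the work here, and they're exactly the contested part; the hard question raised in the comments is who assigns them and how, not the arithmetic.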