Assumption: Most people are not truthseeking.
Therefore, a rational truthseeking person's priors would still be that the person they are debating with is optimizing for something else, such as creating an alliance, or competing for status.
Collaborative truthseeking would then be what happens when all participants trust each other to care about truth: not only does each of them care about truth privately, but this value is also common knowledge.
If I believe that the other person genuinely cares about truth, then I will take their arguments more seriously, and if I am surprised, I will be more likely to ask for more info.
If "collaborative" is qualifying truth-seeking, perhaps we can see it more easily by contrast with non-collaborative truthseeking. So what might that look like?
This suggests collaborative truthseeking is done 1) for the benefit of both parties, 2) in a way that builds trust and mutual understanding, and 3) in a way that uses that trust and mutual understanding as a foundation.
There's another relevant contrast, where we could look at collaborative non-truthseeking, or contrast "collaborative truthseeking" as a procedure with other procedures that could be used (like "allocating blame"), but this one seems most related to what you're driving at.
I share Richard Kennaway's feeling that this is a rather strange question because the answer seems so obvious; perhaps I'm missing something important. But:
"Collaborative" just means "working together". Collaborative truthseeking means multiple people working together in order to distinguish truth from error. They might do this for a number of reasons, such as these:
There is a sense in which collaborative truth-seeking is built out of individual truth-seeking. It just happens that sometimes the most effective way for an individual to find what's true in a particular area involves working together with other individuals who also want to do that.
Collaborative truth-seeking may involve activities that individual truth-seeking (at least if that's interpreted rather strictly) doesn't because they fundamentally require multiple people, such as adversarial debate or double-cruxing.
Being "collaborative" isn't a thing that in itself brings benefits. It's a name for a variety of things people do that bring benefits. Speech-induced state changes don't result in better predictions because they're "collaborative"; engaging in the sort of speech whose induced state changes seem likely to result in better predictions is collaboration.
And yes, there are circumstances in which collaboration could be counterproductive. E.g., it might be easier to fall into groupthink. Sufficiently smart collaboration might be able to avoid this by explicitly pushing the participants to explore more diverse positions, but empirically it doesn't look as if that usually happens.
Related: collaborative money-seeking, where people join together to form a "company" or "business" that pools their work in order to produce goods or services that they can sell for profit, more effectively than they could if not working together. Collaborative sex-seeking, where people join together to form a "marriage" or "relationship" or "orgy" from which they can derive more pleasure than they could individually. Collaborative good-doing, where people join together to form a "charity" which helps other people more effectively than the individuals could do it on their own. Etc.
(Of course businesses, marriages, charities, etc., may have other purposes besides the ones listed above, and often do; so might groups of people getting together to seek the truth.)
There are two cultures in this particular trade-off: collaborative and adversarial.
I pitch collaborative as, "let's work together to find the answer (truth)" and I pitch adversarial as, "let's work against each other to find the answer (truth)".
Internally the stance is different. For collaborative, it might look something like, "I need to consider the other argument and then offer my alternative view". For adversarial, it might look something like, "I need to advocate harder for my view because I'm right". (not quite a balanced description)
Collaborative: "I don't know if that's true, what about x" Adversarial "you're wrong because of x".
Culturally, 99% of either is fine as long as all parties agree on the culture and act accordingly. The two cultures do overlap with each other at least partially.
Bad collaboration is being unwilling to question the other's position; bad adversarialism is being unwilling to question one's own position and blindly advocating for it.
I see adversarial conversations as going downhill in quality faster, because it's harder to keep a healthy separation between "you are wrong" and "and you should feel bad (or dumb) about it", or "only an idiot would have an idea like that".
In a collaborative process, the other person is not an idiot, because there's an assumption that we are working together. If an adversarial process cuts to the depth of beliefs about our interlocutor, then from my perspective it gets ugly very quickly. (Although skilled scientists use both all the time and keep a clean separation between the person and the idea.)
In an adversarial environment, I've known some brains to take the feedback "you are wrong because x" and translate it into "I am bad, I should give up, I failed" rather than "I should advocate for my idea better".
At the end of an adversarial argument there is a very strong flip, Popperian style: "I guess I was wrong, so I take your side".
The end of a collaborative process is when I find myself taking sides; up until that point, it's not always clear what my position is. And even at the end of a collaborative process, I might be internally resting on the best outcome of the collaboration so far, while tomorrow that might change.
At each step of a collaboration, I see the possibility of being comfortable saying, "thank you for adding something here". In an adversarial culture, saying so comes with much more friction.
I advocate for collaborative over adversarial culture because of the bleed-through from epistemics to inherent interpersonal beliefs. If humans were perfect arguers, it would not matter so much. But because we are playing with brains, mixing the territory of belief with that of interpersonal relationships, I prefer collaborative to adversarial, though I could see a counterargument that emphasised the value of the opposite position.
I can also see that it doesn't matter which culture one is in, so long as there is clarity around it being one and not the other.
Collaborative: "I don't know if that's true, what about x" Adversarial "you're wrong because of x".
Culturally, 99% of either is fine as long as all parties agree on the culture and act accordingly.
Okay, but those mean different things. "I don't know if that's true, what about x" is expressing uncertainty about one's interlocutor's claim, and entreating them to consider x as an alternative. "You're wrong because of x" is a denial of one's interlocutor's claim for a specific reason.
I find myself needing to say both of these things, but in different situations, each of which probably occurs more than 1% of the time. This would seem to contradict the claim that 99% of either is fine!
A culture that expects me to refrain from saying "You're wrong because of x" even if someone is in fact wrong because of x (because telling the truth about this wouldn't be "collaborative") is trying to decrease the expressive power of language and is unworthy of the "rationalist" brand name.
I advocate for collaborative over adversarial culture because of the bleed-through from epistemics to inherent interpersonal beliefs.
I advocate for a culture that discourages bleed-through from epistemics to inherent interpersonal beliefs.
...if Kevin really manages to be wrong about everything, you'd be able to get the right answer just by taking his conclusions and inverting them
That only works for true-or-false questions. In larger answer spaces, he'd need to be wrong in some specific way such that there exists some simple algorithm (the analogue of "inverting") to compute the right answers from those wrong ones.
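To spell that out with a toy example (my own construction, not from the thread; the function names are just for illustration): inverting works when the answer space has two elements, but in a larger space an always-wrong oracle merely eliminates one candidate per question.

```python
# Toy illustration: negating an always-wrong oracle recovers the truth for
# yes/no questions, but "always wrong" over n options only rules out one.

def invert_binary(wrong_answer: bool) -> bool:
    # Two options: knowing the wrong one pins down the right one exactly.
    return not wrong_answer

def remaining_candidates(wrong_answer: str, answer_space: set) -> set:
    # n options: being wrong eliminates only one candidate. To recover the
    # truth, the oracle's errors would need to follow a known bijection from
    # wrong answers to right ones -- the analogue of "inverting".
    return answer_space - {wrong_answer}

print(invert_binary(False))                                       # True
print(remaining_candidates("Rome", {"Paris", "London", "Rome"}))  # 2 candidates remain
```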
If multiple parties engage in adversarial interactions (e.g., debate, criminal trial, ...) with the shared goal of arriving at the truth then as far as I'm concerned that's still an instance of collaborative truth-seeking.
On the other hand, if at least one party is aiming to win rather than to arrive at the truth then I don't think they're engaging in truth-seeking at all. (Though maybe it might sometimes be effective to have a bunch of adversaries all just trying to win, and then some other people, who had better be extremely smart and...
What I currently call “Collaborative Truthseeking” typically makes sense when two people are building a product together on a team. It’s not very useful to say “you’re wrong because X”, because the goal is not to prove ideas wrong, it’s to build a product. “You’re wrong because X, but Y might work instead” is more useful, because it actually moves you closer to a working model. It can also do a fairly complex thing of reaffirming trust, such that people remain truthseeking rather than trying to win.
What if we’re building a product together, and I think you’re wrong about something, but I don’t know what might work instead? What should I say to you?
(See, e.g., this exchange, and pretend that cousin_it and I were members of a project team, building some sort of web app or forum software together.)
I roughly endorse this description. (I specifically think the "99% of either is fine" is a significant overstatement, but I probably endorse the weaker claim of "both styles can generally work if people are trying to do the same thing")
I'm a big fan of collaborative truth-seeking, so lemme try to explain what distinction I'd be communicating with it:
In an idealized individual world, you would be individually truth-seeking. This would include observing information and using the information to update your model. You might also talk to others, which for various reasons (e.g. to help your allies, or to fit in with social norms about honesty and information-sharing, or ...) might include telling them (to the best of your ability) the sorts of true, relevant information that you would usually use in your own decision-making.
However, the above scenario runs into some problems, mostly because the true, relevant information that you'd usually use in your own decision-making may be simplified in various ways: for instance, rather than concerning your observations directly, it may concern latent-variable inferences that you've made on the basis of those observations. These latent variables are shaped by your particular capacity to observe and your particular capacity to make decisions, so it can be difficult for others to apply them. In particular:
There's also the issue that everyone involved might have far less evidence than could be collected if one went out and systematically collected it.
If one simply optimizes one's model for one's own purposes and then dumps the content of the model into collective discourse, the above problems tend to make the discourse get stuck: nobody is ready to deeply change anybody else's mind, only to make minor corrections to others who are engaging from basically the same perspective.
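To make the latent-variable problem concrete, here is a toy sketch (my own construction; the coin example, the `BetaPosterior` class, and all the numbers are hypothetical, not from the comment). Sharing a compressed inference transfers less than sharing the observations behind it:

```python
# Two agents each see coin flips and want a posterior over the coin's bias.
from dataclasses import dataclass

@dataclass
class BetaPosterior:
    heads: int = 1  # Beta(1, 1) uniform prior over the coin's bias
    tails: int = 1

    def update(self, heads: int, tails: int) -> None:
        self.heads += heads
        self.tails += tails

    @property
    def mean(self) -> float:
        return self.heads / (self.heads + self.tails)

a, b = BetaPosterior(), BetaPosterior()
a.update(8, 2)  # Agent A sees 8 heads, 2 tails
b.update(3, 7)  # Agent B sees 3 heads, 7 tails
b_alone = b.mean

# If A shares only the latent inference ("my best guess for the bias is
# 0.75"), B has no principled way to merge it with their own evidence.
# If A shares the raw counts instead, B can simply keep updating:
b.update(8, 2)
print(f"A alone: {a.mean:.2f}")           # 0.75
print(f"B alone: {b_alone:.2f}")          # 0.33
print(f"B after A's data: {b.mean:.2f}")  # 0.55
```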
These problems don't seem inevitable though. If both parties agree to set a bunch of time aside to dive in deep, they could work to fix it. For instance:
Basically, collaborative truth-seeking involves modifying one's map so that it becomes easier to resolve disputes by collecting likelihood ratios, and then going out to collect those likelihood ratios.
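As a minimal illustration of that last sentence (my own sketch, assuming the standard Bayesian odds formulation; the specific likelihood ratios and priors are made up):

```python
# "Resolving disputes by collecting likelihood ratios": each agreed-on piece
# of evidence multiplies the odds on hypothesis H by P(e | H) / P(e | not-H).

def posterior_odds(prior_odds: float, likelihood_ratios: list) -> float:
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Two parties start far apart: prior odds of 1:4 versus 4:1 on H. If they
# agree in advance on what each observation would be worth (the LRs), then
# jointly collected evidence pulls both toward the same answer.
shared_evidence = [3.0, 2.0, 0.5, 4.0]  # hypothetical agreed-on likelihood ratios

for prior in (0.25, 4.0):
    odds = posterior_odds(prior, shared_evidence)
    print(f"prior odds {prior}: posterior odds {odds:.1f}, P(H) = {odds / (1 + odds):.2f}")
```

The point is that once the parties agree on what each observation is worth, the dispute reduces to going out and collecting the observations.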
One thing that I've come to think is that a major factor to clarify is one's "perspective" or "Cartesian frame": which sources of information one is getting, and which actions one can take within the subject matter. This perspective influences many of the other issues, and is therefore an efficient thing to talk about if one wants to understand them.
I don't hear this phrase much, so I suspect it's heavily context-specific in its usage. If I were to use it at work, it'd probably be ironic, as a euphemism for "let me correct your thinking".
I can imagine it being used as a way to explicitly agree that the participants in a discussion are there to each change their minds, or to understand and improve their models, by comparing and exchanging beliefs with each other. Truth-seeking is the intent to change your beliefs; collaborative truth-seeking is the shared intent to change the group members' beliefs.
People coming together to work on a common goal can typically accomplish more than if they worked separately. This is such a familiar thing that I am unclear where your perplexity lies.
What conditions must obtain for an interaction between people to constitute “coming together to work on a common goal”? How commonly do said conditions obtain? Are they in effect in all, most, some, or none of the interactions between commenters on Less Wrong?
These are non-trivial questions.
What conditions must obtain for an interaction between people to constitute “coming together to work on a common goal”?
That people have a common goal, and that they come together to work on it. Ok, I'm being deliberately tautologous there, but these are ordinary English words that we all know the meanings of, put together in plain sentences. I am not seeing what is being asked by your question, or by Zack's. Examples of the phenomenon are everywhere (as are examples of its failure).
As for how to do real work as a group (an expression meaning the same as "coming together to work on a common goal"), and how much of it is going on at any particular place and time, these are non-trivial questions. They have received non-trivial quantities of answers. To consider just LW and the rationalsphere, see for example various criticisms of LessWrong as being no more than a place to idly hang out (a common purpose, but a rather trifling one compared with some people's desires for the place); MIRI; CFAR; FHI; rationalist houses; meetups; and so on. In another sphere, the book "Moral Mazes" (recently discussed here) illustrates some failures of collaboration.
I do not see how the OP gives any entry into these questions, but I look forward to seeing other people's responses to it.
[Off topic] Data point: the repeated "(respectively I/you)" at the beginning of the post made that paragraph several times harder to read for me than it otherwise would have been.
(Publication history note: lightly adapted from a 4 May 2017 Facebook status update. I pulled the text out of the JSON-blob I got from exporting my Facebook data, but I'm not sure how to navigate to the status update itself without the permalink or pressing the Page Down key too many times, so I don't remember whether I got any good answers from my Facebook friends at the time.)
Here's the link, found from my account by searching for "collaborative truthseeking". There is a "Posts from Anyone/You/Your Friends" radio control on the left of the search page, so it should probably work on your own posts as well.
I keep hearing this phrase, "collaborative truthseeking." Question: what kind of epistemic work is the word "collaborative" doing?
Like, when you (respectively I) say a thing and I (respectively you) hear it, that's going to result in some kind of state change in my (respectively your) brain. If that state change results in me (respectively you) making better predictions than I (respectively you) would have in the absence of the speech, then that's evidence for the hypothesis that at least one of us is "truthseeking."
But what's this "collaborative" thing about? How do speech-induced state changes result in better predictions if the speaker and listener are "collaborative" with each other? Are there any circumstances in which the speaker and listener being "collaborative" might result in worse predictions?