I think you need a proposal that is a lot clearer before it stands any substantial chance of becoming a social norm. Surely we can't expect every comment here (or anywhere) to end with a full explicit probability distribution over all related issues. Nor can we demand that everyone reply to any reply to them. So you need some easily implementable and verifiable standards on who is expected to reply to what, and who is expected to summarize their opinions, when, and on what. I'm not that optimistic about this approach, but I'll keep my mind open.
Plausible deniability all over again. If you don't reply, it can always be seen as if you forgot, left on vacation, or got eaten by a giant squid. If you do reply, you signal with the quality of your reply, and so if you don't do your best in the context of the conversation at that point, it's a negative signal relative to what's expected. A sloppy reply, or a "this is what I believe, though not why, and I won't continue this conversation", will even signal that you don't respect your interlocutor enough to give a proper response and explain your disagreement, an inference worse than if you hadn't replied at all. Deciding to reply carries a sentence of having to reply well, and probably of having to reply to the follow-ups too. Not replying at all is the only way out.
We need a norm not for stating your last position, which is a burden on the person who has to reply, but for accepting irresponsible declarations as last words in a conversation (along with some way of signaling that a comment is in "last word" mode). This is somewhat in conflict with mental hygiene: you shouldn't normally expose yourself to statements you can't sufficiently verify for yourself, but for this particular problem the balance seems to tip in the other direction.
In other words, you have a hypothesis that the purpose of a "LessWrong"-type site is to provide a space for debate, and the data disagrees with your hypothesis. Whereupon you've decided to challenge the data...
I'm a relative newcomer to LW, having stumbled across it a couple months ago. Take it from this newbie: it's entirely unclear at the start what you are supposed to do in top-level posts and comments. This community operates on norms that (as far as I could tell) are tacit rather than explicit, and the post above suggests that these norms are unclear to the community itself.
For instance, at one point I thought that comments provided an opportunity for readers to offer constructive feedback to authors. In this view, a comment isn't intended to signal agreement or disagreement; rather, its job is to help authors improve their exposition of the point they wanted to make, so as to be as clear, informative, and convincing as possible.
My advice: authors of top-level posts could clarify what kind of response they expect (constructive feedback, agreement/disagreement, having their mistakes pointed out and corrected, elaboration of their theses by people who agree, etc.). Authors would then take responsibility for declaring cloture of this process, if appropriate.
If posting on a blog required committing to spend time in the future answering replies, then I wouldn't post there. I treat blogging as a leisure activity, which means that I should be able to stop doing it at any time and for any duration without consequences. I think most non-prominent posters feel the same way.
I'm actually divided about this. On the one hand, it's a really good point - this isn't something which should be a commitment. On the other hand, it makes sense as a norm - it's worth a lot to see an argument end in a satisfactory fashion, even if that fashion is, "I don't see any value in continuing this debate".
On the gripping hand, though, people will drop out of arguments without conclusion - through burning out, getting slammed with other commitments, or even through the simple decision that the community is not worth their time. And all these are legitimate reasons, even if "I can't win this argument otherwise" is not.
"Encourage it" is about where I'm at, now.
I agree about the issue of unresolved arguments. Was agreement reached, and is that why the debate stopped? There's no way to tell.
In particular, the epic AI-foom debate between Robin and Eliezer on OB, over whether AI or brain simulations were more likely to dominate the next century, was never clearly resolved with updated probability estimates from the two participants. In fact, probability estimates were rare in general. Perhaps a step forward would be for disputants to publicize their probability estimates and update them as the conversation proceeds.
BTW sorry to see that linkrot continues to be a problem in the future.
One of Robin's comments made me think about some of the possible hidden costs of my proposal. Some have already been mentioned by others, so I'll just collect them here:
Are there any others that I've missed?
I think that this is a great idea. I often find myself ending a debate with someone important and rational without the sense that our disagreement has been made explicit, and without a good reason for why we still disagree.
I suspect that if we imposed a norm on LW that said: every time two people disagree, they have to write down, at the end, why they disagree, we would do better.
Unfortunately that is usually 'I said it all already and they just don't get it. They think all this crazy stuff instead.'
Just letting things go allows both parties to save face. This can increase the quality of discussion, because it reduces the pressure to advocate strongly in order to emerge as the clear winner once both sides make their closing statements.
I have been trying to figure out how to improve online conversations ever since I started (in 1992) to see a lot of potential in them. In parallel, through the writing of Nick Szabo I came to understand a little of how common law (the system used in the courts of the English-speaking countries) helped our civilization become as wealthy as it has. Well, about 5 or 6 years ago, I decided that judicial systems are the best metaphor or model for what is needed to realize the full potential of online conversations. In particular, I came...
I think about this kind of issue a lot myself. My conclusion is along the lines of Hanson's X isn't about X - debating isn't really about discovering truth, for most people in most forums (LWers might be able to do better).
Indeed, it's not even clear to me that debate ever works. In science, debate is useful mostly to clarify positions, the meaning of terms, and the points of disagreement. It is never relied upon to actually obtain truth - that's what experiments are for.
One problem that debates inevitably encounter is the failure to distinguish question...
I find that people sometimes misread my intent (perhaps I am not clear enough) or use words differently than I do. So continuing the discussion wouldn't increase their knowledge of the world, apart from the little bit that refers to me, which doesn't seem worthwhile.
I feel a forum where no argument is left unresolved would work better if there were a way of splitting people into groups with different viewpoints. Then anyone from a group could make arguments on its behalf.
Brilliant post Wei.
Historical examination of scientific progress shows much less a gradual ascent toward better understanding upon the presentation of a superior argument (Karl Popper's Logic of Scientific Discovery) and much more an irrational insistence on a set of assumptions as unquestionable dogma, until the dam finally bursts under the enormous pressures that keep building (Thomas Kuhn's Structure of Scientific Revolutions).
This is childish. You propose a norm, it doesn't make much of a splash, and so you adopt an incredulous persona to try to make us invoke an absurdity heuristic on our lack of norm-having? I liked the original idea - it'd be interesting, at least - but this is silly.
It was meant to be a light-hearted attempt to fight the Status Quo Institution Bias. Since you liked the original idea but don't like the way I went about trying to get it implemented, do you have any other suggestions?
My friend Sasha, the software archaeology major, informed me the other day that there was once a widely used operating system, which, when it encountered an error, would often get stuck in a loop and repeatedly present to its user the options Abort, Retry, and Ignore. I thought this was probably another one of her often incomprehensible jokes, and gave a nervous laugh. After all, what interface designer would present "Ignore" as a possible user response to a potentially catastrophic system error without any further explanation?
Sasha quickly assured me that she wasn't joking. She told me that early 21st century humans were quite different from us. Not only did they routinely create software like that, they could even ignore arguments that contradicted their positions or pointed out flaws in their ideas, and did so publicly without risking any negative social consequences. Discussions even among self-proclaimed truth-seekers would often conclude, not with a rational consensus, an agreement to mutually reassess positions and approaches, or even a unilateral claim that further debate would be unproductive, but with one party simply failing to respond to the arguments or questions of another, without giving any indication of the status of the disagreement.
At this point I was certain that she was just yanking my chain. Why, I asked, didn't the injured party invoke rationality arbitration and get a judgment against the offender for failing to respond to a disagreement in a timely fashion? Or publicize the affair and cause the ignorer to become a social outcast? Or, if neither of these mechanisms existed or provided sufficient reparation, challenge the ignorer to a duel to the death? For that matter, how could those humans, only a few generations removed from us, not feel an intense moral revulsion at the very idea of ignoring an argument?
At that, she launched into a long and convoluted explanation. I recognized some of the phrases she used, like "status signaling", "multiple equilibria", and "rationality-enhancing norms and institutions", from the Theory of Rationality class that I took a couple of quarters ago, but couldn't follow most of it. (I have to admit I didn't pay much attention in that class. I mean, we've had the "how" of rationality drummed into us since kindergarten, so what's the point of spending so much time on the "what" and "why" of it now?) I told her to stop showing off, and just give me some evidence that this actually happened, because my readers and I will want to see it for ourselves.
She said that there are plenty of examples in the back archives of Google Scholar, but most of them are probably still quarantined for me. As it happens, one of her class projects is to reverse engineer a recently discovered "blogging" site called "Less Wrong", and to build a proper search index for it. She promised that once she is done with that she will run some queries against the index and show me the uncensored historical data.
I still think this is just an elaborate joke, but I'm not so sure now. We're all familiar with the vastness of mindspace and have been warned against anthropomorphism and the mind projection fallacy, so I have no doubt that minds this alien could exist, in theory. But our own ancestors, as recently as the 21st century? My dear readers, what do you think? She's just kidding... right?
[Editor's note: I found this "blog" post sitting in my drafts folder today, perhaps the result of a temporal distortion caused by one of Sasha's reverse engineering tools. I have only replaced some of the hypertext links, which failed to resolve, for obvious reasons.]