My friend Sasha, the software archaeology major, informed me the other day that there was once a widely used operating system, which, when it encountered an error, would often get stuck in a loop and repeatedly present to its user the options Abort, Retry, and Ignore. I thought this was probably another one of her often incomprehensible jokes, and gave a nervous laugh. After all, what interface designer would present "Ignore" as a possible user response to a potentially catastrophic system error without any further explanation?
Sasha quickly assured me that she wasn't joking. She told me that early 21st century humans were quite different from us. Not only did they routinely create software like that, they could even ignore arguments that contradicted their positions or pointed out flaws in their ideas, and did so publicly without risking any negative social consequences. Discussions even among self-proclaimed truth-seekers would often conclude, not by reaching a rational consensus or an agreement to mutually reassess positions and approaches, or even by a unilateral claim that further debate would be unproductive, but with one party simply failing to respond to the arguments or questions of another, without giving any indication of the status of their disagreement.
At this point I was certain that she was just yanking my chain. Why, I asked, didn't the injured party invoke rationality arbitration and obtain a judgment against the offender for failing to respond to a disagreement in a timely fashion? Or publicize the affair and make the ignorer a social outcast? Or, if neither of these mechanisms existed or provided sufficient reparation, challenge the ignorer to a duel to the death? For that matter, how could those humans, only a few generations removed from us, not feel an intense moral revulsion at the very idea of ignoring an argument?
At that, she launched into a long and convoluted explanation. I recognized some of the phrases she used, like "status signaling", "multiple equilibria", and "rationality-enhancing norms and institutions", from the Theory of Rationality class that I took a couple of quarters ago, but couldn't follow most of it. (I have to admit I didn't pay much attention in that class. I mean, we've had the "how" of rationality drummed into us since kindergarten, so what's the point of spending so much time on the "what" and "why" of it now?) I told her to stop showing off and just give me some evidence that this actually happened, because my readers and I would want to see it for ourselves.
She said that there are plenty of examples in the back archives of Google Scholar, but most of them are probably still quarantined for me. As it happens, one of her class projects is to reverse engineer a recently discovered "blogging" site called "Less Wrong", and to build a proper search index for it. She promised that once she was done with that, she would run some queries against the index and show me the uncensored historical data.
I still think this is just an elaborate joke, but I'm not so sure now. We're all familiar with the vastness of mindspace and have been warned against anthropomorphism and the mind projection fallacy, so I have no doubt that minds this alien could exist, in theory. But our own ancestors, as recently as the 21st century? My dear readers, what do you think? She's just kidding... right?
[Editor's note: I found this "blog" post sitting in my drafts folder today, perhaps the result of a temporal distortion caused by one of Sasha's reverse engineering tools. I have only replaced some of the hypertext links, which failed to resolve, for obvious reasons.]
I think about this kind of issue a lot myself. My conclusion is along the lines of Hanson's "X isn't about X": debating isn't really about discovering truth, for most people in most forums (LWers might be able to do better).
Indeed, it's not even clear to me that debate ever works. In science, debate is useful mostly to clarify positions, the meaning of terms, and the points of disagreement. It is never relied upon to actually obtain truth - that's what experiments are for.
One problem that debates inevitably encounter is the failure to distinguish questions of "is" from questions of "ought". We can potentially come to an agreement about answers to is-questions. It will be harder to agree about ought-questions.
Almost all debates involve mixtures of is-questions and ought-questions. Ideally, we would lay out a system of terminal values (answers to basic ought-questions), then ask a bunch of is-questions, and figure out what policy leads to the best fulfillment of the values. Of course, people never do this, either because the answers to the is-questions can't be reliably obtained, or because debate isn't about finding truth.
To get better answers to policy questions, we should do something similar to what Wall St. types do when they need to evaluate a security. Build a big spreadsheet that expresses the relationship between the is-Q and ought-Q values, plug in values for the ought-Q numbers and estimates for the is-Q numbers, and see what comes out. The model should also be stress-tested under various assumptions for the is-Q numbers, to see how sensitive the conclusion is to each estimate.
Every rationalist should be willing to revise his support of any policy if new information about the is-Q numbers appears. Furthermore, he should be able to express what kind of new is-Q information would lead him to revise his policy support. For example, if you support international treaties to limit CO2 emissions, you should be able to say under what conditions you would reverse your support (the same is true if you don't support such treaties, of course).
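To make this concrete, here is a minimal sketch of such a model in Python rather than a spreadsheet. Everything in it is invented for illustration: the two policies, the value weights, and the is-Q estimates are hypothetical numbers, not real climate data. It shows the basic discipline: fix the ought-Q weights, plug in is-Q estimates, score the policies, and then vary the estimates to find the threshold at which your policy support would flip.

```python
# A toy "policy spreadsheet": ought-Q numbers are fixed value weights,
# is-Q numbers are empirical estimates that we can vary.

def policy_score(is_q, ought_q):
    """Score a policy as a weighted sum of its predicted outcomes.

    is_q:    dict of empirical estimates (answers to is-questions)
    ought_q: dict of value weights (answers to basic ought-questions)
    """
    return sum(ought_q[k] * is_q[k] for k in ought_q)

# Hypothetical is-Q estimates for two policies on CO2 treaties.
# The outcome names, units, and numbers are made up for illustration.
estimates = {
    "treaty": {"warming_averted": 0.8, "gdp_cost": -1.2},
    "no_treaty": {"warming_averted": 0.0, "gdp_cost": 0.0},
}

# Ought-Q: how much we care about each outcome (terminal values).
values = {"warming_averted": 3.0, "gdp_cost": 1.0}

# Score each policy under the baseline estimates.
for policy, is_q in estimates.items():
    print(policy, policy_score(is_q, values))

# Sensitivity analysis: at what estimate of warming averted does
# support for the treaty flip? This is the kind of threshold a
# rationalist should be able to state in advance.
for w in [0.0, 0.2, 0.4, 0.6, 0.8]:
    is_q = dict(estimates["treaty"], warming_averted=w)
    better = policy_score(is_q, values) > policy_score(estimates["no_treaty"], values)
    print(f"warming_averted={w}: support treaty? {better}")
```

With these made-up weights, support flips where 3.0 * warming_averted exceeds the 1.2 GDP cost, i.e. around warming_averted = 0.4. The point is not the numbers but that the flip condition falls out of the model, so you can state in advance exactly what new is-Q evidence would reverse your position.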