What's the googolplexth decimal digit of pi? I don't know, but I know that it's rational for me to assign each possible digit a probability of 1/10. So there's a sense in which I can rationally assign probabilities to mathematical facts or computation outcomes about which I'm uncertain. (Apparently this can be modeled with logically impossible possible worlds.)
When we debate the truth of some proposition, we may not be engaging in mathematics in the traditional sense, but we're still trying to learn more about a structure of necessary implications. If we can apply probabilities to logic, we can quantify logical information. More logical information is better. And this seems very relevant to a misunderpracticed sub-art of group rationality -- the art of responsible argumentation.
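As a rough sketch of what that quantification could look like (the bits framing here is an illustration, not a formalism the post commits to): treat the unknown digit as a uniform random variable and measure the missing logical information as Shannon entropy. Arguments then have a measurable value, namely the number of bits by which they shrink our uncertainty.

```python
import math

# Toy model of logical uncertainty: before doing any computation, we treat
# the googolplexth digit of pi as uniformly distributed over 0..9.
digit_probs = [1 / 10] * 10

# Shannon entropy in bits: the logical information we currently lack.
entropy_bits = -sum(p * math.log2(p) for p in digit_probs)
print(f"missing information: {entropy_bits:.2f} bits")  # ~3.32 bits

# A partial argument narrows the distribution and supplies some of those
# bits. Suppose (hypothetically) we could rule out the odd digits:
even_probs = [1 / 5] * 5  # now uniform over {0, 2, 4, 6, 8}
remaining_bits = -sum(p * math.log2(p) for p in even_probs)
print(f"after ruling out odd digits: {remaining_bits:.2f} bits")  # ~2.32 bits
print(f"that argument was worth {entropy_bits - remaining_bits:.2f} bits")  # 1.00
```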
There are a lot of common-sense guidelines for good argumentative practice. In case of doubt, we can take the logical information perspective and use probability theory to ground these guidelines. So let us now unearth a few example guidelines and other obvious insights, and not let the fact that we already knew them blunt the joy of discovery.
Every time we move from the issue at hand to some other, correlated issue, we lose informativeness. (We may, of course, care about the correlated issue for its own sake. Informativeness isn't the same thing as being on-topic.) The less the new issue is correlated with the one we care about, the more informativeness we lose. Relevance isn't black and white, and we want to aim for the lighter shades of gray -- to optimize and not just satisfice. When we move from the issue at hand to some issue correlated with some issue correlated with the issue at hand, we may even lose all informativeness! Relevance isn't transitive. If governments subsidized the eating of raspberries, would that make people happier? One way to find out is to think about whether it would make you happier. And one way to find out whether it would make you happier is to think about whether you're above-averagely fond of raspberries. But wait! Almost nobody is you. Having lost sight of our original target, we let all relevance slip away.
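The non-transitivity is easy to exhibit numerically if we let plain correlation stand in for relevance. A minimal sketch with made-up variables: let x be the issue at hand and z an issue two steps removed, independent of x, with y = x + z as the intermediate issue that correlates with both.

```python
import math
import random

def corr(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

random.seed(0)
n = 100_000
x = [random.gauss(0, 1) for _ in range(n)]  # the issue at hand
z = [random.gauss(0, 1) for _ in range(n)]  # an issue two steps away, independent of x
y = [a + b for a, b in zip(x, z)]           # the intermediate issue, correlated with both

print(f"corr(x, y) = {corr(x, y):.2f}")  # ~0.71: one step keeps relevance
print(f"corr(y, z) = {corr(y, z):.2f}")  # ~0.71: so does the other step
print(f"corr(x, z) = {corr(x, z):.2f}")  # ~0.00: chain them and relevance is gone
```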
When we repeat ourselves, when we focus our attention on points misunderstood by a few loud people rather than by many silent people, when we invent clever verbose restatements of the sentence "I'm right and you're wrong", when we refute views that nobody holds, when we spend more time on stupid arguments than on smart ones, when we make each other provide citations for, or plug holes in, arguments for positions no one truly doubts, when we discuss the authority of sources we weren't taking on faith anyway, when we introduce dubious analogies, we waste space, time, and energy on uninformative talk.
It takes only one weak thought to ruin an argument, so a bad argument may be made mostly of good, usable thoughts. Interpretive charity is a good thing -- what was said is often less interesting than what should have been said.
Incomplete logical information creates moral hazard problems. Logical information that decays creates even more moral hazard problems. You may have heard of "God", a hideous shapeshifter from beyond the universe. He always turns out to be located, and to obviously always have been located, in the part of hypothesis space where your last few arguments didn't hunt such creatures to extinction. And when you then make some different arguments to clean out that part of hypothesis space, he turns out to be located, and to obviously always have been located, in some other part of hypothesis space, patrolled by the ineffectual ghosts of arguments now forgotten. (I believe theoreticians call this "whack-a-mole".)
The bigger a group of rationalists, the more its average member should focus on looking for obscure arguments that seem insane or taboo. There's a natural division of labor between especially smart people who look for novel insights, and especially rational people who can integrate them and be authorities.
My main recommendation: make a conscious effort to keep feeling your original curiosity, and let your statements flow from there, not from a habit of reacting passively to whatever bothers you most out of what has been said. Don't just speak under the constraint of having to reach a minimum usefulness threshold; try to build a sense of what, at each point in an argument, would be the most useful thing for the group to know next.
Consider a hilariously unrealistic alternate universe where everything that people argue about on the internet matters. I daresay that even there people could train themselves to mine the same amount of truth with less than half of the effort. In spite of the recent escape of the mindkill fairy, can we do especially well on LessWrong? I hope so!