Rationalism is most often characterized as an epistemological position. On this view, to be a rationalist requires at least one of the following: (1) a privileging of reason and intuition over sensation and experience, (2) regarding all or most ideas as innate rather than adventitious, (3) an emphasis on certain rather than merely probable knowledge as the goal of enquiry. -- The Stanford Encyclopedia of Philosophy on Continental Rationalism.
By now, there are some things which most Less Wrong readers will agree on. One of them is that beliefs must be fueled by evidence gathered from the environment. A belief must correlate with reality, and an important part of that is whether it can be tested - if a belief produces no anticipation of experience, it is nearly worthless. We can never confirm a theory, only test it.
And yet, we seem to have no problem coming up with theories that are either untestable or that we have no intention of testing, such as evolutionary psychological explanations for the underdog effect.
I'm being a bit unfair here. Those posts were well thought out and reasonably argued, and Roko's post actually made testable predictions. Yvain even made a good try at solving the puzzle, and when he couldn't, he reasonably concluded that he was stumped and asked for help. That sounds like a proper use of humility to me.
But the way that ev-psych explanations get rapidly manufactured and carelessly flung around on OB and LW has always been a bit of a pet peeve for me, as that's exactly how bad ev-psych gets done. The best evolutionary psychology takes biological and evolutionary facts, applies those to humans, and then makes testable predictions, which it goes on to verify. It doesn't take existing behaviors and then try to come up with some nice-sounding rationalization for them, blind to whether or not the rationalization can be tested. Not every behavior needs to have an evolutionary explanation - it could have arisen via genetic drift, or be a pure side-effect of some actual adaptation. If we set out by trying to find an evolutionary reason for some behavior, we are assuming from the start that there must be one, when it isn't a given that there is. And even a good theory need not explain every observation.
Obviously I'm not saying that we should never come up with such theories - be wary of those who speak of being open-minded and modestly confess their ignorance. But we should avoid giving untested theories excess weight, and instead assign them very broad confidence intervals. "This seems to contradict the claim that the human mind is well adapted to its EEA. Is evolutionary psychology wrong? Maybe the creationists are correct after all", writes Roko, implying that it is crucial for us to come up with an explanation (yes, I do know that this is probably just a dramatic exaggeration on Roko's part, but it made such a good example that I couldn't help but use it). But regardless of whether or not we do come up with an explanation, that explanation doesn't carry much weight if it doesn't provide testable predictions. And even if it did provide such predictions, we'd need to find confirming evidence first, before lending it much credence.
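The point about withholding credence can be put in Bayesian terms: an explanation that predicts nothing leaves your posterior exactly where your prior was, while a risky prediction that survives testing actually moves it. Here is a toy sketch of that update - all the probability numbers are purely illustrative assumptions, not anything from the posts under discussion:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Modest prior for a plausible-sounding but untested hypothesis.
prior = 0.3

# An untestable story predicts nothing: every observation is equally
# likely whether it's true or false, so the posterior never moves.
assert abs(posterior(prior, 0.5, 0.5) - prior) < 1e-9

# A testable hypothesis whose risky prediction comes true (evidence
# assumed 4x likelier if the hypothesis holds) earns real credence.
updated = posterior(prior, 0.8, 0.2)
print(round(updated, 3))  # prints 0.632
```

Nothing here depends on the exact numbers; the structural point is that the likelihood ratio is what does the work, and a hypothesis that makes no testable predictions has a likelihood ratio of one.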
I suspect that we rationalists may have a tendency towards rationalism, in the sense quoted above. In order to learn how to think, we study math and probability theory. We consider different fallacies, and find out how to dismantle broken reasoning, both that of others and our own. We learn to downplay the role of our personal experiences, recognizing that those may be just the result of a random effect and a small sample size. But learning to think more like a mathematician, whose empiricism resides in the realm of pure thought, does not predispose us to more readily go collect evidence from the real world. Neither does the downplaying of our personal experiences. Many of us are computer science majors, used to the comfortable position of being able to test our hypotheses without needing to leave the office. It is, then, an easy temptation to come up with a nice-sounding theory which happens to explain the facts, and then consider the question solved. Reason must reign supreme, must it not?
But if we really do so, we are endangering our ability to find the truth in the future. Our existing preconceptions constrain our creativity, and if we believe untested hypotheses too uncritically, the true ones may never even occur to us. If we believe in one falsehood, then everything that we build on top of it will also be flawed.
This isn't to say that all tests would necessarily have to involve going out of your room to dig for fossils. A hypothesis does get some validation from simply being compatible with existing knowledge - that's how it passes the initial "does this make sense" test in the first place. Certainly, a scholarly article citing several studies and theories in its support is already drawing on considerable supporting evidence. It often happens that a conclusion, built on top of previous knowledge, is so obvious that you don't even need to test it. Roko's post, while not yet in this category, drew on already established arguments relating to the Near-Far distinction and other things, and I do in fact find it rather plausible. Unless contradictory evidence comes in, I'll consider it the best explanation of the underdog phenomenon, one which I can build further hypotheses on. But I do keep in mind that none of its predictions have been tested yet, and that it might still be wrong.
This is why I say: by all means, come up with all kinds of hypotheses - but if they haven't been tested, be careful not to believe in them too much.