Less Wrong is a community blog devoted to refining the art of human rationality.
The criticism is sometimes leveled against rationalists: "The Inquisition thought they had the truth! Clearly this 'truth' business is dangerous."
There are many obvious responses, such as "If you think that possessing the truth would license you to torture and kill, you're making a mistake that has nothing to do with epistemology." Or, "So that historical statement you just made about the Inquisition—is it true?"
Reversed stupidity is not intelligence: "If your current computer stops working, you can't conclude that everything about the current system is wrong and that you need a new system without an AMD processor, an ATI video card... even though your current system has all these things and it doesn't work. Maybe you just need a new power cord." To arrive at a poor conclusion requires only one wrong step, not every step wrong. The Inquisitors believed that 2 + 2 = 4, but that wasn't the source of their madness. Maybe epistemological realism wasn't the problem either?
It does seem plausible that if the Inquisition had been made up of relativists, professing that nothing was true and nothing mattered, they would have mustered less enthusiasm for their torture. They would also have been less enthusiastic if lobotomized. I think that's a fair analogy.
And yet... I think the Inquisition's attitude toward truth played a role. The Inquisition believed that there was such a thing as truth, and that it was important; well, likewise Richard Feynman. But the Inquisitors were not Truth-Seekers. They were Truth-Guardians.
I once read an argument (can't find source) that a key component of a zeitgeist is whether it locates its ideals in its future or its past. Nearly all cultures before the Enlightenment believed in a Fall from Grace—that things had once been perfect in the distant past, but then catastrophe had struck, and everything had slowly run downhill since then:
"In the age when life on Earth was full... They loved each other and did not know that this was 'love of neighbor'. They deceived no one yet they did not know that they were 'men to be trusted'. They were reliable and did not know that this was 'good faith'. They lived freely together giving and taking, and did not know that they were generous. For this reason their deeds have not been narrated. They made no history."
—The Way of Chuang Tzu, trans. Thomas Merton
The perfect age of the past, according to our best anthropological evidence, never existed. But a culture that sees life running inexorably downward is very different from a culture in which you can reach unprecedented heights.
(I say "culture", and not "society", because you can have more than one subculture in a society.)
You could say that the difference between e.g. Richard Feynman and the Inquisition was that the Inquisition believed they had truth, while Richard Feynman sought truth. This isn't quite defensible, though, because there were undoubtedly some truths that Richard Feynman thought he had as well. "The sky is blue," for example, or "2 + 2 = 4".
Yes, there are effectively certain truths of science. General Relativity may be overturned by some future physics—albeit not in any way that predicts the Sun will orbit Jupiter; the new theory must steal the successful predictions of the old theory, not contradict them. But evolutionary theory takes place on a higher level of organization than atoms, and nothing we discover about quarks is going to throw out Darwinism, or the cell theory of biology, or the atomic theory of chemistry, or a hundred other brilliant innovations whose truth is now established beyond reasonable doubt.
Are these "absolute truths"? Not in the sense of possessing a probability of literally 1.0. But they are cases where science basically thinks it's got the truth.
And yet scientists don't torture people who question the atomic theory of chemistry. Why not? Because they don't believe that certainty licenses torture? Well, yes, that's the surface difference; but why don't scientists believe this?
Because chemistry asserts no supernatural penalty of eternal torture for disbelieving in the atomic theory of chemistry? But again we recurse and ask the question, "Why?" Why don't chemists believe that you go to hell if you disbelieve in the atomic theory?
Because journals won't publish your paper until you get a solid experimental observation of Hell? But all too many scientists can suppress their skeptical reflex at will. Why don't chemists have a private cult which argues that nonchemists go to hell, given that many are Christians anyway?
Questions like that don't have neat single-factor answers. But I would argue that one of the factors has to do with assuming a defensive posture toward the truth, versus a productive posture toward the truth.
When you are the Guardian of the Truth, you've got nothing useful to contribute to the Truth but your guardianship of it. When you're trying to win the Nobel Prize in chemistry by discovering the next benzene or buckyball, someone who challenges the atomic theory isn't so much a threat to your worldview as a waste of your time.
When you are a Guardian of the Truth, all you can do is try to stave off the inevitable slide into entropy by zapping anything that departs from the Truth. If there's some way to pump against entropy, generate new true beliefs along with a little waste heat, that same pump can keep the truth alive without secret police. In chemistry you can replicate experiments and see for yourself—and that keeps the precious truth alive without need of violence.
And it's not such a terrible threat if we make one mistake somewhere—end up believing a little untruth for a little while—because tomorrow we can recover the lost ground.
But this whole trick only works because the experimental method is a "criterion of goodness" which is not a mere "criterion of comparison". Because experiments can recover the truth without need of authority, they can also override authority and create new true beliefs where none existed before.
Where there are criteria of goodness that are not criteria of comparison, there can exist changes which are improvements, rather than threats. Where there are only criteria of comparison, where there's no way to move past authority, there's also no way to resolve a disagreement between authorities. Except extermination. The bigger guns win.
I don't mean to provide a grand overarching single-factor view of history. I do mean to point out a deep psychological difference between seeing your grand cause in life as protecting, guarding, preserving, versus discovering, creating, improving. Does the "up" direction of time point to the past or the future? It's a distinction that shades everything, casts tendrils everywhere.
This is why I've always insisted, for example, that if you're going to start talking about "AI ethics", you had better be talking about how you are going to improve on the current situation using AI, rather than just keeping various things from going wrong. Once you adopt criteria of mere comparison, you start losing track of your ideals—lose sight of wrong and right, and start seeing simply "different" and "same".
I would also argue that this basic psychological difference is one of the reasons why an academic field that stops making active progress tends to turn mean. (At least by the refined standards of science. Reputational assassination is tame by historical standards; most defensive-posture belief systems went for the real thing.) If major shakeups don't arrive often enough to regularly promote young scientists based on merit rather than conformity, the field stops resisting the standard degeneration into authority. When there aren't many discoveries being made, there's nothing left to do all day but witch-hunt the heretics.
To get the best mental health benefits of the discover/create/improve posture, you've got to actually be making progress, not just hoping for it.