Yesterday I said: "Rationality is systematized winning."
"But," you protest, "the reasonable person doesn't always win!"
What do you mean by this? Do you mean that every week or two, someone who bought a lottery ticket with negative expected value, wins the lottery and becomes much richer than you? That is not a systematic loss; it is selective reporting by the media. From a statistical standpoint, lottery winners don't exist—you would never encounter one in your lifetime, if it weren't for the selective reporting.
Even perfectly rational agents can lose. They just can't know in advance that they'll lose. They can't expect to underperform any other performable strategy, or they would simply perform it.
"No," you say, "I'm talking about how startup founders strike it rich by believing in themselves and their ideas more strongly than any reasonable person would. I'm talking about how religious people are happier—"
Ah. Well, here's the thing: An incremental step in the direction of rationality, if the result is still irrational in other ways, does not have to yield incrementally more winning.
The optimality theorems that we have for probability theory and decision theory are for perfect probability theory and decision theory. There is no companion theorem which says that, starting from some flawed initial form, every incremental modification of the algorithm that takes the structure closer to the ideal must yield an incremental improvement in performance. This has not yet been proven, because it is not, in fact, true.
"So," you say, "what point is there then in striving to be more rational? We won't reach the perfect ideal. So we have no guarantee that our steps forward are helping."
You have no guarantee that a step backward will help you win, either. Guarantees don't exist in the world of flesh; but contrary to popular misconceptions, judgment under uncertainty is what rationality is all about.
"But we have several cases where, based on either vaguely plausible-sounding reasoning, or survey data, it looks like an incremental step forward in rationality is going to make us worse off. If it's really all about winning—if you have something to protect more important than any ritual of cognition—then why take that step?"
Ah, and now we come to the meat of it.
I can't necessarily answer for everyone, but...
My first reason is that, on a professional basis, I deal with deeply confused problems that make huge demands on precision of thought. One small mistake can lead you astray for years, and there are worse penalties waiting in the wings. An unimproved level of performance isn't enough; my choice is to try to do better, or give up and go home.
"But that's just you. Not all of us lead that kind of life. What if you're just trying some ordinary human task like an Internet startup?"
My second reason is that I am trying to push some aspects of my art further than I have seen done. I don't know where these improvements lead. The loss of failing to take a step forward is not that one step, it is all the other steps forward you could have taken, beyond that point. Robin Hanson has a saying: The problem with slipping on the stairs is not falling the height of the first step, it is that falling one step leads to falling another step. In the same way, refusing to climb one step up forfeits not the height of that step but the height of the staircase.
"But again—that's just you. Not all of us are trying to push the art into uncharted territory."
My third reason is that once I realize I have been deceived, I can't just shut my eyes and pretend I haven't seen it. I have already taken that step forward; what use to deny it to myself? I couldn't believe in God if I tried, any more than I could believe the sky above me was green while looking straight at it. If you know everything you need to know in order to know that you are better off deceiving yourself, it's much too late to deceive yourself.
"But that realization is unusual; other people have an easier time of doublethink because they don't realize it's impossible. You go around trying to actively sponsor the collapse of doublethink. You, from a higher vantage point, may know enough to expect that this will make them unhappier. So is this out of a sadistic desire to hurt your readers, or what?"
Then I finally reply that my experience so far—even in this realm of merely human possibility—does seem to indicate that, once you sort yourself out a bit and you aren't doing quite so many other things wrong, striving for more rationality actually will make you better off. The long road leads out of the valley and higher than before, even in the human lands.
The more I know about some particular facet of the Art, the more I can see this is so. As I've previously remarked, my essays may be unreflective of what a true martial art of rationality would be like, because I have only focused on answering confusing questions—not fighting akrasia, coordinating groups, or being happy. In the field of answering confusing questions—the area where I have most intensely practiced the Art—it now seems massively obvious that anyone who thought they were better off "staying optimistic about solving the problem" would get stomped into the ground. By a casual student.
When it comes to keeping motivated, or being happy, I can't guarantee that someone who loses their illusions will be better off—because my knowledge of these facets of rationality is still crude. If these parts of the Art have been developed systematically, I do not know of it. But even here I have gone to some considerable pains to dispel half-rational half-mistaken ideas that could get in a beginner's way, like the idea that rationality opposes feeling, or the idea that rationality opposes value, or the idea that sophisticated thinkers should be angsty and cynical.
And if, as I hope, someone goes on to develop the art of fighting akrasia or achieving mental well-being as thoroughly as I have developed the art of answering impossible questions, I do fully expect that those who wrap themselves in their illusions will not begin to compete. Meanwhile—others may do better than I, if happiness is their dearest desire, for I myself have invested little effort here.
I find it hard to believe that the optimally motivated individual, the strongest entrepreneur a human being can become, is still wrapped up in a blanket of comforting overconfidence. I think they've probably thrown that blanket out the window and organized their mind a little differently. I find it hard to believe that the happiest we can possibly live, even in the realms of human possibility, involves a tiny awareness lurking in the corner of your mind that it's all a lie. I'd rather stake my hopes on neurofeedback or Zen meditation, though I've tried neither.
But it cannot be denied that this is a very real issue in very real life. Consider this pair of comments from Less Wrong:
I'll be honest —my life has taken a sharp downturn since I deconverted. My theist girlfriend, with whom I was very much in love, couldn't deal with this change in me, and after six months of painful vacillation, she left me for a co-worker. That was another six months ago, and I have been heartbroken, miserable, unfocused, and extremely ineffective since.
Perhaps this is an example of the valley of bad rationality of which PhilGoetz spoke, but I still hold my current situation higher in my preference ranking than happiness with false beliefs.
And:
My empathies: that happened to me about 6 years ago (though thankfully without as much visible vacillation).
My sister, who had some Cognitive Behaviour Therapy training, reminded me that relationships are forming and breaking all the time, and given I wasn't unattractive and hadn't retreated into monastic seclusion, it wasn't rational to think I'd be alone for the rest of my life (she turned out to be right). That was helpful at the times when my feelings hadn't completely got the better of me.
So—in practice, in real life, in sober fact—those first steps can, in fact, be painful. And then things can, in fact, get better. And there is, in fact, no guarantee that you'll end up higher than before. Even if in principle the path must go further, there is no guarantee that any given person will get that far.
If you don't prefer truth to happiness with false beliefs...
Well... and if you are not doing anything especially precarious or confusing... and if you are not buying lottery tickets... and if you're already signed up for cryonics, a sudden ultra-high-stakes confusing acid test of rationality that illustrates the Black Swan quality of trying to bet on ignorance in ignorance...
Then it's not guaranteed that taking all the incremental steps toward rationality that you can find, will leave you better off. But the vaguely plausible-sounding arguments against losing your illusions, generally do consider just one single step, without postulating any further steps, without suggesting any attempt to regain everything that was lost and go it one better. Even the surveys are comparing the average religious person to the average atheist, not the most advanced theologians to the most advanced rationalists.
But if you don't care about the truth—and you have nothing to protect—and you're not attracted to the thought of pushing your art as far as it can go—and your current life seems to be going fine—and you have a sense that your mental well-being depends on illusions you'd rather not think about—
Then you're probably not reading this. But if you are, then, I guess... well... (a) sign up for cryonics, and then (b) stop reading Less Wrong before your illusions collapse! RUN AWAY!
I think that it is very important to look at how much work the commenter put into their comment.
One thing that kills discussion boards is that the conversations become too cliched. Mr. A makes the standard comment. Mr. B makes the standard rebuttal. Mr. A makes the standard defence. Mr. B makes the traditional follow-up.
When Mr. A makes the standard comment, is that for real, or is it just trolling? Tough question. I think that there comes a point at which one has to get tough and do drive-by downvoting on valid, on-topic comments, because they are commonplace and threaten to destroy the discussion by making it too familiar, swamping it with the banal.
The other side to this is if Mr. A makes a three-paragraph comment: 1) his point, 2) the standard rebuttal, 3) why he thinks his point survives the standard rebuttal. At this point we know that Mr. A is not a troll. He has put in too much work to count coup on getting a bite. He is making an effort to move the discussion on briskly so that it can reach unbroken ground. He has earned an explanation of why his comment is crap, and I would say that he has earned the right to an actual typed-in criticism instead of a downvote.
There are other kinds of work worthy of respect. It is easy to make a long general response, either by being a fast typist and rattling it off, or by use of cut and paste. A comment is worthy of respect if the commenter has taken the time to tailor it so that it is clear how the general point applies to the particular case under discussion. Gathering up and checking relevant links eats time. If someone has gone to the trouble of decorating his comment with relevant links, that should earn him immunity from drive-by downvoting.
On the other hand, there is discussion in the blogosphere of turning off comments altogether. Some people say that if the comments are there they feel obliged to read them, but that the comments are mostly the same-old-same-old and a waste of time. The reader ends up feeling that reading the blog is a waste of time and gives up altogether. Short, mildly entertaining, chitchatty comments that fill the fleeting hour with work not done will eventually kill Less Wrong. I think readers should be very free with downvotes for lightweight comments.
IAWYC.