Rationalists should beware rationalism

27 Kaj_Sotala 06 April 2009 02:16PM

Rationalism is most often characterized as an epistemological position. On this view, to be a rationalist requires at least one of the following: (1) a privileging of reason and intuition over sensation and experience, (2) regarding all or most ideas as innate rather than adventitious, (3) an emphasis on certain rather than merely probable knowledge as the goal of enquiry. -- The Stanford Encyclopedia of Philosophy on Continental Rationalism.

By now, there are some things which most Less Wrong readers will agree on. One of them is that beliefs must be fueled by evidence gathered from the environment. A belief must correlate with reality, and an important part of that is whether or not it can be tested: if a belief produces no anticipation of experience, it is nearly worthless. We can never seek to confirm a theory, only to test it.

And yet, we seem to have no problem coming up with theories that are either untestable or that we have no intention of testing, such as evolutionary-psychological explanations for the underdog effect.

I'm being a bit unfair here. Those posts were well thought out and reasonably argued, and Roko's post actually made testable predictions. Yvain even made a good try at solving the puzzle, and when he couldn't, he reasonably concluded that he was stumped and asked for help. That sounds like a proper use of humility to me.

But the way that ev-psych explanations get rapidly manufactured and carelessly flung around on OB and LW has always been a bit of a pet peeve of mine, as that's exactly how bad ev-psych gets done. The best evolutionary psychology takes biological and evolutionary facts, applies them to humans, and then makes testable predictions, which it goes on to verify. It doesn't take existing behaviors and then try to come up with some nice-sounding rationalization for them, blind to whether or not the rationalization can be tested. Not every behavior needs to have an evolutionary explanation: it could have evolved via genetic drift, or be a pure side effect of some actual adaptation. If we set out to find an evolutionary reason for some behavior, we are assuming from the start that there must be one, when it isn't a given that there is. And even a good theory need not explain every observation.

continue reading »

Cached Selves

172 AnnaSalamon 22 March 2009 07:34PM

by Anna Salamon and Steve Rayhawk (joint authorship)

Related to: Beware identity

A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, "any random thing that happens to you can hijack your judgment and personality for the next few minutes."

Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do, in the absence of obvious outside pressure, can hijack your self-concept for the medium- to long-term future.

To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing.  So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards.  Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.

For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself.  If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends.  If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.

All familiar phenomena, right?  You probably already discount other people’s views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas.  But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena.  And so you might not realize how much arbitrary influence commitment and consistency is having on your own beliefs, or how you can reduce that influence.  (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)

continue reading »

The Apologist and the Revolutionary

159 Yvain 11 March 2009 09:39PM

Rationalists complain that most people are too willing to make excuses for their positions, and too unwilling to abandon those positions for ones that better fit the evidence. And most people really are pretty bad at this. But certain stroke victims called anosognosiacs are much, much worse.

Anosognosia is the condition of not being aware of your own disabilities. To be clear, we're not talking minor disabilities here, the sort that only show up during a comprehensive clinical exam. We're talking paralysis or even blindness. Things that should be pretty hard to miss.

Take the example of the woman discussed in Lishman's Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but persistently denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm; it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".

Why won't these patients admit they're paralyzed, and what are the implications for neurotypical humans? Dr. Vilayanur Ramachandran, a leading neuroscientist and current holder of the world land-speed record for hypothesis generation, has a theory.

continue reading »

Adversarial System Hats

8 Johnicholas 11 March 2009 04:56PM

In Reply to: Rationalization, Epistemic Handwashing, Selective Processes

Eliezer Yudkowsky wrote about scientists defending pet hypotheses, and prosecutors and defenders as examples of clever rationalization. His primary focus was advice to the well-intentioned individual rationalist, which is excellent as far as it goes. But Anna Salamon and Steve Rayhawk ask how a social system should be structured for group rationality.

The adversarial system is widely used in criminal justice. In the legal world, roles such as Prosecution, Defense, and Judge are all guaranteed to be filled, with roughly the same amount of human effort applied to each side. Now suppose individuals chose their own roles. One role might well turn out more popular than the others, and once unequal effort is applied to the different sides, selecting the position with the strongest arguments no longer selects much for positions that are true, as the sketch below illustrates.
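
Here is a minimal sketch of that selection effect (my illustration, with made-up parameters, not anything from the post): each side's best argument is its best draw from random noise over some number of attempts, plus a fixed bonus for whichever side happens to be arguing the true position.

```python
import random

def debate(effort_a, effort_b, truth_bonus=1.0, trials=10_000):
    """Fraction of debates in which the winning side is also the true one.

    Each side keeps the best of `effort` argument-strength draws (standard
    normal noise); whichever side happens to argue the true position gets
    `truth_bonus` added to its best argument. The stronger argument wins.
    """
    wins_for_truth = 0
    for _ in range(trials):
        true_side = random.choice("AB")
        best_a = max(random.gauss(0, 1) for _ in range(effort_a))
        best_b = max(random.gauss(0, 1) for _ in range(effort_b))
        if true_side == "A":
            best_a += truth_bonus
        else:
            best_b += truth_bonus
        winner = "A" if best_a > best_b else "B"
        wins_for_truth += (winner == true_side)
    return wins_for_truth / trials

print(debate(3, 3))   # equal effort: winners mostly track truth
print(debate(10, 1))  # lopsided effort: A's head start swamps the truth bonus
```

With equal effort, the truth bonus usually decides the winner; with lopsided effort, the better-staffed side's best draw swamps it, and winning arguments stop tracking truth.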

continue reading »

Disputing Definitions

48 Eliezer_Yudkowsky 12 February 2008 12:15AM

Followup to: How An Algorithm Feels From Inside

I have watched more than one conversation—even conversations supposedly about cognitive science—go the route of disputing over definitions.  Taking the classic example to be "If a tree falls in a forest, and no one hears it, does it make a sound?", the dispute often follows a course like this:

If a tree falls in the forest, and no one hears it, does it make a sound?

Albert:  "Of course it does.  What kind of silly question is that?  Every time I've listened to a tree fall, it made a sound, so I'll guess that other trees falling also make sounds.  I don't believe the world changes around when I'm not looking."

Barry:  "Wait a minute.  If no one hears it, how can it be a sound?"

In this example, Barry is arguing with Albert because of a genuinely different intuition about what constitutes a sound.  But there's more than one way the Standard Dispute can start.  Barry could have a motive for rejecting Albert's conclusion.  Or Barry could be a skeptic who, upon hearing Albert's argument, reflexively scrutinized it for possible logical flaws; and then, on finding a counterargument, automatically accepted it without applying a second layer of search for a counter-counterargument; thereby arguing himself into the opposite position.  This doesn't require that Barry's prior intuition—the intuition Barry would have had, if we'd asked him before Albert spoke—have differed from Albert's.

Well, if Barry didn't have a differing intuition before, he sure has one now.

continue reading »

Rationalization

21 Eliezer_Yudkowsky 30 September 2007 07:29PM

Followup to: The Bottom Line, What Evidence Filtered Evidence?

In "The Bottom Line", I presented the dilemma of two boxes only one of which contains a diamond, with various signs and portents as evidence.  I dichotomized the curious inquirer and the clever arguer.  The curious inquirer writes down all the signs and portents, and processes them, and finally writes down "Therefore, I estimate an 85% probability that box B contains the diamond."  The clever arguer works for the highest bidder, and begins by writing, "Therefore, box B contains the diamond", and then selects favorable signs and portents to list on the lines above.

The first procedure is rationality.  The second procedure is generally known as "rationalization".

"Rationalization."  What a curious term.  I would call it a wrong word.  You cannot "rationalize" what is not already rational.  It is as if "lying" were called "truthization".

continue reading »

What Evidence Filtered Evidence?

43 Eliezer_Yudkowsky 29 September 2007 11:10PM

Yesterday I discussed the dilemma of the clever arguer, hired to sell you a box that may or may not contain a diamond.  The clever arguer points out to you that the box has a blue stamp, and it is a valid known fact that diamond-containing boxes are more likely than empty boxes to bear a blue stamp.  What happens at this point, from a Bayesian perspective?  Must you helplessly update your probabilities, as the clever arguer wishes?

If you can look at the box yourself, you can add up all the signs yourself.  What if you can't look?  What if the only evidence you have is the word of the clever arguer, who is legally constrained to make only true statements, but does not tell you everything he knows?  Each statement that he makes is valid evidence—how could you not update your probabilities?  Has it ceased to be true that, in such-and-such a proportion of Everett branches or Tegmark duplicates in which box B has a blue stamp, box B contains a diamond?  According to Jaynes, a Bayesian must always condition on all known evidence, on pain of paradox.  But then the clever arguer can make you believe anything he chooses, if there is a sufficient variety of signs to selectively report.  That doesn't sound right.
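
One way to make the tension concrete (my sketch, not the post's resolution; all numbers are made-up assumptions): compare updating on a sign you inspected yourself against updating on the fact that a selective reporter found some favorable sign to cite.

```python
# Toy contrast between inspecting a sign yourself and hearing it from a
# selective reporter. Illustrative assumptions: an even prior, and N
# independent signs, each favorable with probability 0.8 if the box
# holds the diamond and 0.4 if it is empty.

P_SIGN_GIVEN_DIAMOND = 0.8
P_SIGN_GIVEN_EMPTY = 0.4
PRIOR = 0.5
N_SIGNS = 10

def posterior_direct():
    """You inspect one pre-specified sign yourself and find it favorable."""
    num = P_SIGN_GIVEN_DIAMOND * PRIOR
    den = num + P_SIGN_GIVEN_EMPTY * (1 - PRIOR)
    return num / den

def posterior_filtered():
    """The clever arguer reports a favorable sign. What you actually
    observed is 'he found at least one favorable sign among N to cite',
    which is near-certain whether or not there is a diamond."""
    p_any_diamond = 1 - (1 - P_SIGN_GIVEN_DIAMOND) ** N_SIGNS
    p_any_empty = 1 - (1 - P_SIGN_GIVEN_EMPTY) ** N_SIGNS
    num = p_any_diamond * PRIOR
    den = num + p_any_empty * (1 - PRIOR)
    return num / den

print(f"inspected it yourself:    P(diamond) = {posterior_direct():.3f}")    # 0.667
print(f"heard it from the arguer: P(diamond) = {posterior_filtered():.3f}")  # ~0.502
```

Seen directly, the blue stamp moves you from 50% to 67%; filtered through an arguer who could have cited any of ten signs, the same words leave you barely above your prior.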

continue reading »

The Bottom Line

49 Eliezer_Yudkowsky 28 September 2007 05:47PM

There are two sealed boxes up for auction, box A and box B.  One and only one of these boxes contains a valuable diamond.  There are all manner of signs and portents indicating whether a box contains a diamond; but I have no sign which I know to be perfectly reliable.  There is a blue stamp on one box, for example, and I know that boxes which contain diamonds are more likely than empty boxes to show a blue stamp.  Or one box has a shiny surface, and I have a suspicion—I am not sure—that no diamond-containing box is ever shiny.

Now suppose there is a clever arguer, holding a sheet of paper, and he says to the owners of box A and box B:  "Bid for my services, and whoever wins my services, I shall argue that their box contains the diamond, so that the box will receive a higher price."  So the box-owners bid, and box B's owner bids higher, winning the services of the clever arguer.

The clever arguer begins to organize his thoughts.  First, he writes, "And therefore, box B contains the diamond!" at the bottom of his sheet of paper.  Then, at the top of the paper, he writes, "Box B shows a blue stamp," and beneath it, "Box A is shiny", and then, "Box B is lighter than box A", and so on through many signs and portents; yet the clever arguer neglects all those signs which might argue in favor of box A.  And then the clever arguer comes to me and recites from his sheet of paper:  "Box B shows a blue stamp, and box A is shiny," and so on, until he reaches:  "And therefore, box B contains the diamond."

continue reading »
