Are You Anosognosic?

17 Eliezer_Yudkowsky 19 July 2009 04:35AM

Followup to: The Strangest Thing An AI Could Tell You

Brain-damage patients with anosognosia are incapable of considering, noticing, admitting, or realizing, even after being argued with, that their left arm, left leg, or the left side of their body is paralyzed.  Again I'll quote Yvain's summary:

After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".

A brief search didn't turn up a base-rate frequency in the population for left-arm paralysis with anosognosia, but let's say the base rate is 1 in 10,000,000 individuals (so around 670 individuals worldwide).

Supposing this to be the prior, what is your estimated probability that your left arm is currently paralyzed?
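The force of the question is that introspection provides almost no evidence here: an anosognosic would report "my arm moves fine" just as confidently as you would. A minimal sketch of the Bayesian arithmetic (the 1-in-10,000,000 base rate is from the text above; the likelihoods are my illustrative assumptions):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' theorem."""
    joint_h = prior * p_evidence_given_h
    joint_not_h = (1 - prior) * p_evidence_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Base rate of left-arm paralysis with anosognosia, as stipulated above.
prior = 1 / 10_000_000

# An anosognosic reports "my arm is fine" with (roughly) the same
# probability as a healthy person, so the likelihood ratio of that
# introspective report is about 1, and the posterior stays at the prior.
p = posterior(prior, p_evidence_given_h=1.0, p_evidence_given_not_h=1.0)
print(p)  # equal to the prior: introspection alone can't move the estimate
```

Only evidence that anosognosia cannot fake, such as another person watching your arm, carries a likelihood ratio far from 1 and can move the estimate.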


Escaping Your Past

24 Z_M_Davis 22 April 2009 09:15PM

Followup to: Sunk Cost Fallacy

Related to: Rebelling Against Nature, Shut Up and Do the Impossible!

(expanded from my comment)

"The world is weary of the past—
O might it die or rest at last!"
— Percy Bysshe Shelley, from "Hellas"

Probability theory and decision theory push us in opposite directions. Induction demands that you cannot forget your past; the sunk cost fallacy demands that you must. Let me explain.

An important part of epistemic rationality is learning to be at home in a material universe. You are not a magical fount of originality and free will; you are a physical system: the same laws that bind the planets in their orbits also bind you; the same sorts of regularities in these laws that govern the lives of rabbits or aphids also govern human societies. Indeed, in the last analysis, free will as traditionally conceived is but a confusion—and bind and govern are misleading metaphors at best: what is bound as by ropes can be unbound with, say, a good knife; what is "bound" by "nature"—well, I can hardly finish the sentence, the phrasing being so absurd!

Epistemic rationality alone might be well enough for those of us who simply love truth (who love truthseeking, I mean; the truth itself is usually an abomination), but some of my friends tell me there should be some sort of payoff for all this work of inference. And indeed there should be: if you know how something works, you might be able to make it work better. Enter instrumental rationality, the art of doing better. We all want to do better, and we all believe that we can do better...

But we should also all know that beliefs require evidence.

Suppose you're an employer interviewing a jobseeker for a position you have open. Examining the jobseeker's application, you see that she was expelled from four schools, was fired from her last three jobs, and was convicted of two felonies. You say, "Given your record, I regret having let you enter the building. Why on Earth should I hire you?"

And the jobseeker replies, "But all those transgressions are in the past. Sunk costs can't play into my decision theory—it would hardly be helping for me to go sulk in a gutter somewhere. I can only seek to maximize expected utility now, and right now that means working ever so hard for you, O dearest future boss! Tsuyoku naritai!"

And you say, "Why should I believe you?"

And then—oh, wait. Just a moment, I've gotten my notes mixed up—oh, dear. I've been telling this scenario all wrong. You're not the employer. You're the jobseeker.
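The employer's skepticism is just induction: a past record is evidence about future behavior, whatever the jobseeker's decision theory says. A minimal sketch using Laplace's rule of succession (my choice of estimator, and hypothetical numbers, not anything from the post):

```python
def rule_of_succession(failures, trials):
    """Laplace's rule of succession: estimated probability that the
    next trial is a failure, given `failures` out of `trials` so far."""
    return (failures + 1) / (trials + 2)

# Hypothetical tally: the jobseeker washed out of all 7 prior
# institutions (4 expulsions + 3 firings).
p_next_failure = rule_of_succession(7, 7)
print(round(p_next_failure, 2))  # 0.89
```

The point is not the exact estimator but that the probability stays high no matter how sincerely the jobseeker vows to maximize expected utility from now on; only new evidence, not a speech, shifts it.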


Tarski Statements as Rationalist Exercise

11 Vladimir_Nesov 17 March 2009 07:47PM

Related to: Dissolving the Question, The Second Law of Thermodynamics, and Engines of Cognition, The Meditation on Curiosity.

The sentence "snow is white" is true if, and only if, snow is white.

-- A. Tarski

Several days ago I spent a couple of hours trying to teach my 15-year-old brother how to properly construct Tarski statements. It's quite nontrivial to get right. Learning to place facts and representations in separate mental buckets is one of the fundamental tools for a rationalist. In our model of the world, information propagates from object to object, from mind to mind. To ascertain the validity of your belief, you need to trace the whole network of factors that led you to attain the belief. The simplest relation is between a fact and its representation, idealized as correct or incorrect only, without yet worrying about probabilities. The same object or the same property can be interpreted to mean different things in different relations and contexts, indicating the truth of one statement or another, and it's important not to conflate those.
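The fact/representation split can be made concrete in a toy model. Here "world" plays the territory and the belief table plays the map; the names and example sentences are invented for illustration:

```python
# The territory: what is actually the case.
world = {"snow is white": True, "grass is red": False}

def is_true(sentence):
    """Truth of a sentence is evaluated against the world,
    not against anyone's beliefs about the world."""
    return world[sentence]

# The map: one agent's representations, kept in a separate bucket.
beliefs = {"snow is white": True, "grass is red": True}  # one belief is wrong

# The Tarski schema: the quoted sentence is true iff the unquoted fact holds.
assert is_true("snow is white") == world["snow is white"]

# Because map and territory are separate objects, a belief can
# disagree with the fact it represents -- and we can say exactly where.
mistaken = [s for s in beliefs if beliefs[s] != world[s]]
print(mistaken)  # ['grass is red']
```

Conflating the two buckets would amount to defining `is_true` by looking up `beliefs` instead of `world`, at which point no belief could ever come out mistaken.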

Let's say you are watching news on TV and the next item is an interview with a sasquatch. The sasquatch answers the questions about his family in decent English, with a slight British accent.

What do you actually observe, and how should you interpret the data? Did you "see a sasquatch"? Did you learn facts about the sasquatch's family? Is there a fact of the matter as to whether the sasquatch's daughter is 5 years old, as opposed to 4 or 6?


Striving to Accept

33 Eliezer_Yudkowsky 09 March 2009 11:29PM

Reply to: The Mystery of the Haunted Rationalist
Followup to: Don't Believe You'll Self-Deceive

Should a rationalist ever find themselves trying hard to believe something?

You may be tempted to answer "No", because "trying to believe" sounds so stereotypical of Dark Side Epistemology.  You may be tempted to reply, "Surely, if you have to try hard to believe something, it isn't worth believing."

But Yvain tells us that - even though he knows damn well, on one level, that spirits and other supernatural things are not to be found in the causal closure we name "reality" - and even though he'd bet $100 against $10,000 that an examination would find no spirits in a haunted house - he's pretty sure he's still scared of haunted houses.

Maybe it's okay for Yvain to try a little harder to accept that there are no ghosts, since he already knows that there are no ghosts?

In my very early childhood I was lucky enough to read a book from the children's section of a branch library, called "The Mystery of Something Hill" or something.  In which one of the characters says, roughly:  "There are two ways to believe in ghosts.  One way is to fully believe in ghosts, to look for them and talk about them.  But the other way is to half-believe - to make fun of the idea of ghosts, and talk scornfully of ghosts; but to break into a cold sweat when you hear a bump in the night, or be afraid to enter a graveyard."


Don't Believe You'll Self-Deceive

15 Eliezer_Yudkowsky 09 March 2009 08:03AM

Followup to: Moore's Paradox, Doublethink

I don't mean to seem like I'm picking on Kurige, but I think you have to expect a certain amount of questioning if you show up on Less Wrong and say:

One thing I've come to realize that helps to explain the disparity I feel when I talk with most other Christians is the fact that somewhere along the way my world-view took a major shift away from blind faith and landed somewhere in the vicinity of Orwellian double-think.

"If you know it's double-think...

...how can you still believe it?" I helplessly want to say.

Or:

I chose to believe in the existence of God—deliberately and consciously. This decision, however, has absolutely zero effect on the actual existence of God.

If you know your belief isn't correlated to reality, how can you still believe it?

Shouldn't the gut-level realization, "Oh, wait, the sky really isn't green" follow from the realization "My map that says 'the sky is green' has no reason to be correlated with the territory"?

Well... apparently not.

One part of this puzzle may be my explanation of Moore's Paradox ("It's raining, but I don't believe it is")—that people introspectively mistake positive affect attached to a quoted belief for actual credulity.

But another part of it may just be that—contrary to the indignation I initially wanted to put forward—it's actually quite easy not to make the jump from "The map that reflects the territory would say 'X'" to actually believing "X".  It takes some work to explain the ideas of minds as map-territory correspondence builders, and even then, it may take more work to get the implications on a gut level.


Moore's Paradox

47 Eliezer_Yudkowsky 08 March 2009 02:27AM

Followup to: Belief in Self-Deception

Moore's Paradox is the standard term for saying "It's raining outside but I don't believe that it is."  HT to painquale on MetaFilter.

I think I understand Moore's Paradox a bit better now, after reading some of the comments on Less Wrong.  Jimrandomh suggests:

Many people cannot distinguish between levels of indirection. To them, "I believe X" and "X" are the same thing, and therefore, reasons why it is beneficial to believe X are also reasons why X is true.

I don't think this is correct—relatively young children can understand the concept of having a false belief, which requires separate mental buckets for the map and the territory.  But it points in the direction of a similar idea:

Many people may not consciously distinguish between believing something and endorsing it.

After all—"I believe in democracy" means, colloquially, that you endorse the concept of democracy, not that you believe democracy exists.  The word "belief", then, has more than one meaning.  We could be looking at a confused word that causes confused thinking (or maybe it just reflects pre-existing confusion).

So: in the original example, "I believe people are nicer than they are", she came up with some reasons why it would be good to believe people are nice—health benefits and such—and since she now had some warm affect on "believing people are nice", she introspected on this warm affect and concluded, "I believe people are nice". That is, she mistook the positive affect attached to the quoted belief for actual belief in the proposition. At the same time, the world itself made it seem that people weren't so nice. So she said, "I believe people are nicer than they are."
