Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Are You Anosognosic?

17 Eliezer_Yudkowsky 19 July 2009 04:35AM

Followup to: The Strangest Thing An AI Could Tell You

Brain-damage patients with anosognosia are incapable of considering, noticing, admitting, or realizing, even after being argued with, that their left arm, left leg, or left side of the body is paralyzed.  Again I'll quote Yvain's summary:

After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".

A brief search didn't turn up a base-rate frequency in the population for left-arm paralysis with anosognosia, but let's say the base rate is 1 in 10,000,000 individuals (so around 670 individuals worldwide).

Supposing this to be the prior, what is your estimated probability that your left arm is currently paralyzed?
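The update the question is asking for can be sketched numerically. This is a minimal illustration, not from the post: the likelihoods are assumptions, chosen to reflect that an anosognosic sincerely believes their arm is fine, so the introspective evidence "my arm seems to move" is produced almost equally under both hypotheses and barely moves the prior.

```python
# Hedged sketch of the Bayesian update on "my left arm is paralyzed,
# with anosognosia", using the post's assumed base rate of 1 in
# 10,000,000. The likelihoods below are illustrative assumptions.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

prior = 1e-7  # assumed base rate from the text

# "I sincerely believe my arm moves" is emitted with near-certainty both
# by healthy people and, by definition, by anosognosics, so it carries
# almost no information about which hypothesis is true.
p = posterior(prior, p_evidence_given_h=0.99, p_evidence_given_not_h=0.99)
print(p)  # essentially unchanged from the prior
```

When the two likelihoods are equal, the posterior equals the prior exactly, which is the uncomfortable point of the question: the introspective check you would naturally perform is exactly the check an anosognosic would also pass.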


Dissolving the Question

44 Eliezer_Yudkowsky 08 March 2008 03:17AM

Followup to: How an Algorithm Feels From the Inside, Feel the Meaning, Replace the Symbol with the Substance

"If a tree falls in the forest, but no one hears it, does it make a sound?"

I didn't answer that question.  I didn't pick a position, "Yes!" or "No!", and defend it.  Instead I went off and deconstructed the human algorithm for processing words, even going so far as to sketch an illustration of a neural network.  At the end, I hope, there was no question left—not even the feeling of a question.

Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct:  If you give them a question, they try to answer it.

Like, say, "Do we have free will?"

The dangerous instinct of philosophy is to marshal the arguments in favor, and marshal the arguments against, and weigh them up, and publish them in a prestigious journal of philosophy, and so finally conclude:  "Yes, we must have free will," or "No, we cannot possibly have free will."

Some philosophers are wise enough to recall the warning that most philosophical disputes are really disputes over the meaning of a word, or confusions generated by using different meanings for the same word in different places.  So they try to define very precisely what they mean by "free will", and then ask again, "Do we have free will?  Yes or no?"

A philosopher wiser yet may suspect that the confusion about "free will" shows the notion itself is flawed.  So they pursue the Traditional Rationalist course:  They argue that "free will" is inherently self-contradictory, or meaningless because it has no testable consequences.  And then they publish these devastating observations in a prestigious philosophy journal.

But proving that you are confused may not make you feel any less confused.  Proving that a question is meaningless may not help you any more than answering it.


The Modesty Argument

28 Eliezer_Yudkowsky 10 December 2006 09:42PM

The Modesty Argument states that when two or more human beings have common knowledge that they disagree about a question of simple fact, they should each adjust their probability estimates in the direction of the others'.  (For example, they might adopt the common mean of their probability distributions.  If we use the logarithmic scoring rule, then the score of the average of a set of probability distributions is better than the average of the scores of the individual distributions, by Jensen's inequality.)

Put more simply:  When you disagree with someone, even after talking over your reasons, the Modesty Argument claims that you should each adjust your probability estimates toward the other's, and keep doing this until you agree.  The Modesty Argument is inspired by Aumann's Agreement Theorem, a very famous and oft-generalized result which shows that genuine Bayesians literally cannot agree to disagree; if genuine Bayesians have common knowledge of their individual probability estimates, they must all have the same probability estimate.  ("Common knowledge" means that I know you disagree, you know I know you disagree, etc.)
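The "keep adjusting until you agree" procedure can be sketched as a toy model. This is my own illustration, not Aumann's machinery: each agent moves a fixed fraction of the way toward the other's estimate, which preserves their common mean and shrinks the disagreement geometrically until it vanishes.

```python
# Hedged toy model of the modesty procedure described above: two agents
# repeatedly shift their probability estimates toward each other until
# they agree. The step size and starting estimates are illustrative.

def modesty_update(a, b, step=0.3, tolerance=1e-9):
    """Each agent moves `step` of the way toward the other's estimate.

    The sum a + b is preserved at every round, so both estimates
    converge to the original common mean (a + b) / 2.
    """
    while abs(a - b) > tolerance:
        a, b = a + step * (b - a), b + step * (a - b)
    return a, b

print(modesty_update(0.9, 0.3))  # both converge to 0.6, the common mean
```

With step = 0.5 the agents adopt the common mean in a single round, matching the simplest version of the argument; smaller steps converge to the same point more gradually.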

I've always been suspicious of the Modesty Argument.  It's been a long-running debate between myself and Robin Hanson.
