
Your Strength as a Rationalist

Post author: Eliezer_Yudkowsky 11 August 2007 12:21AM 60 points

(The following happened to me in an IRC chatroom, long enough ago that I was still hanging around in IRC chatrooms.  Time has fuzzed the memory and my report may be imprecise.)

So there I was, in an IRC chatroom, when someone reports that a friend of his needs medical advice.  His friend says that he's been having sudden chest pains, so he called an ambulance, and the ambulance showed up, but the paramedics told him it was nothing, and left, and now the chest pains are getting worse.  What should his friend do?

I was confused by this story.  I remembered reading about homeless people in New York who would call ambulances just to be taken someplace warm, and how the paramedics always had to take them to the emergency room, even on the 27th iteration.  Because if they didn't, the ambulance company could be sued for lots and lots of money.  Likewise, emergency rooms are legally obligated to treat anyone, regardless of ability to pay.  (And the hospital absorbs the costs, which are enormous, so hospitals are closing their emergency rooms...  It makes you wonder what's the point of having economists if we're just going to ignore them.)  So I didn't quite understand how the described events could have happened.  Anyone reporting sudden chest pains should have been hauled off by an ambulance instantly.

And this is where I fell down as a rationalist.  I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming.  And the Medical Establishment was always right.  Every single time.  I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack.  So I said into the IRC channel, "Well, if the paramedics told your friend it was nothing, it must really be nothing—they'd have hauled him off if there was the tiniest chance of serious trouble."

Thus I managed to explain the story within my existing model, though the fit still felt a little forced...

Later on, the fellow comes back into the IRC chatroom and says his friend made the whole thing up.  Evidently this was not one of his more reliable friends.

I should have realized, perhaps, that an unknown acquaintance of an acquaintance in an IRC channel might be less reliable than a published journal article.  Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.

So instead, by dint of mighty straining, I forced my model of reality to explain an anomaly that never actually happened.  And I knew how embarrassing this was.  I knew that the usefulness of a model is not what it can explain, but what it can't.  A hypothesis that forbids nothing permits everything, and thereby fails to constrain anticipation.

Your strength as a rationalist is your ability to be more confused by fiction than by reality.  If you are equally good at explaining any outcome, you have zero knowledge.
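
To make this concrete, here is a minimal sketch of a Bayesian update (Python; the numbers are invented for illustration and come from nowhere in the story). A hypothesis that assigns the same likelihood to every observation leaves the posterior exactly at the prior, while a hypothesis that forbids something can actually gain or lose credibility:

    # Posterior P(H|E) via Bayes' theorem:
    # P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        joint_h = prior * p_e_given_h
        joint_not_h = (1 - prior) * p_e_given_not_h
        return joint_h / (joint_h + joint_not_h)

    prior = 0.5

    # A hypothesis that "explains" every outcome equally well:
    # P(E|H) == P(E|~H), so the observation moves nothing.
    print(posterior(prior, 0.7, 0.7))  # 0.5 -- zero knowledge gained

    # A hypothesis that constrains anticipation: E is expected
    # under H and rare otherwise.
    print(posterior(prior, 0.9, 0.1))  # 0.9 -- belief actually moves

The felt sense of confusion is tracking the same arithmetic in reverse: an observation much more probable under "this story is wrong" than under "my model is right" should move belief toward the former.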

We are all weak, from time to time; the sad part is that I could have been stronger.  I had all the information I needed to arrive at the correct answer, I even noticed the problem, and then I ignored it.  My feeling of confusion was a Clue, and I threw my Clue away.

I should have paid more attention to that sensation of "still feels a little forced."  It's one of the most important feelings a truthseeker can have, a part of your strength as a rationalist.  It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading "EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."

 

Part of the sequence Mysterious Answers to Mysterious Questions

Next post: "Absence of Evidence Is Evidence of Absence"

Previous post: "The Virtue of Narrowness"

Comments (101)

Comment author: anon2 11 August 2007 01:50:21AM -1 points [-]

It's strange that it sounds like a rationalist is saying that he should have listened to his instincts. A true rationalist should be able to examine all the evidence without having to rely on feelings to make a judgment, or would be able to truly understand the source of his feelings, in which case it's more than just a feeling. The unfortunate thing is that people are more likely to remember the cases where the feelings they ignored turned out to be correct than all the times those feelings were wrong.

The "quiet strain in the back of your mind" is what drives some people to always expect the worst to happen, and every so often they are right which reinforces their confidence in their intuitions more than their confidence diminishes each time they are wrong.

In some cases, it might be possible for someone to have a rational response to a stimulus only to think that it is intuition because they don't quite understand or aren't able to fully rationalize the source of the feeling. From my own experiences, it seems that some people don't make a hard enough effort to search for the source... they either don't seem to think that there is a rational source, or don't care to take the effort... as long as they are able to ascertain what their feelings suggest they do, they really don't seem to care whether the source is rational or irrational.

A true rationalist would be able to determine the source and rationality of the feeling. The interesting question is: if he fails to rationally explain the feeling, should he ignore it, chalking it up to his own imperfection as a rationalist?

Since we are all human and cannot be perfectly rational, shouldn't a rationalist decide that a seemingly irrational feeling is just that: irrational? Is it not more rational to believe that a seemingly irrational feeling is the result of our own imperfection as humans?

Comment author: MrPineapple 24 January 2011 12:59:00AM 24 points [-]

A rationalist should acknowledge their irrationality; to do otherwise would be irrational.

Comment author: smallricochet 03 September 2011 10:23:36PM -2 points [-]

Wasn't it more that a rationalist should listen to their subconscious (comprising all their past experiences)?

That sounds simple enough. Unless we don't actually know that our subconscious is biased or something.

Comment author: Eliezer_Yudkowsky 11 August 2007 01:53:38AM 22 points [-]

Anon, see Why Truth?:

When people think of "emotion" and "rationality" as opposed, I suspect that they are really thinking of System 1 and System 2 - fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren't always true, and perceptual judgments aren't always false; so it is very important to distinguish that dichotomy from "rationality". Both systems can serve the goal of truth, or defeat it, according to how they are used.

Comment author: Chris_Hibbert 11 August 2007 02:34:34AM 13 points [-]

"I should have paid more attention to that sensation of still feels a little forced."

The force that you would have had to counter was the impetus to be polite. In order to boldly follow your models, you would have had to tell the person on the other end of the chat that you didn't believe his friend. You could have less boldly held your tongue, but that wouldn't have satisfied your drive to understand what was going on. Perhaps a compromise action would have been to point out the unlikelihood (which you did: "they'd have hauled him off if there was the tiniest chance of serious trouble"), and ask for a report on the eventual outcome.

Given the constraints of politeness, I don't know how you can do better. If you were talking to people who knew you better, and understood your viewpoint on rationality, you might expect to be forgiven for giving your bald assessment of the unlikeliness of the report.

Comment author: bigjeff5 28 January 2011 06:21:55AM 11 points [-]

Not necessarily.

You can assume the paramedics did not follow the proper procedure, and that his friend ought to go to the emergency room himself to verify that he is OK. People do make mistakes.

The paramedics are potentially unreliable as well, though given the litigious nature of our society I would fully expect the paramedics to be extremely reliable in taking people to the emergency room, which would still cast doubt on the friend.

Still, if you want to be polite, just say "if you are concerned, you should go to the emergency room anyway" and keep your doubts about the man's veracity to yourself. No doubt the truth would have come out at that point as well.

Comment author: michael_vassar3 11 August 2007 03:31:03AM 4 points [-]

In its strongest form, not believing System 1 amounts to not believing perceptions, hence not believing in empiricism. This is possibly the oldest of philosophical mistakes, made by Plato, possibly Siddhartha, and probably others even earlier.

Comment author: Tony 11 August 2007 06:16:35AM 3 points [-]

Sounds like good old cognitive dissonance. Your mental model was not matching the information being presented.

That feeling of cognitive dissonance is a piece of information to be considered in arriving at your decision. If something doesn't feel right, usually either the model or the facts are wrong or incomplete.

T

Comment author: Psy-Kosh 30 September 2007 08:33:43PM 17 points [-]

"And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, "Well, if the paramedics told your friend it was nothing, it must really be nothing - they'd have hauled him off if there was the tiniest chance of serious trouble.""

My own "hold on a second" detector is pinging mildly at that particular bit. Specifically, isn't there a touch of an observer selection effect there? If the docs had been wrong and you ended up dying as a result, you wouldn't have been around to make that deduction, so you're (Well, anyone is) effectively biased to retroactively observe outcomes in which if the doctor did say you're not in a life threatening situation, you're genuinely not?

Or am I way off here?

Comment author: Eliezer_Yudkowsky 30 September 2007 08:59:25PM 18 points [-]

A valid point, Psy-Kosh, but I've seen this happen to a friend too. She was walking along the streets one night when a strange blur appeared across her vision, with bright floating objects. Then she was struck by a massive headache. I had her write down what the blur looked like, and she put down strange half-circles missing their left sides.

That point was when I really started to get worried, because it looked like lateral neglect - something that I'd heard a lot about, in my studies of neurology, as a symptom of lateralized brain damage from strokes.

The funny thing was, nobody in the medical profession seemed to think this was a problem. The medical advice line from her health insurance said it was a "yellow light" for which she should see a doctor in the next day or two. Yellow light?! With a stroke, you have to get the right medication within the first three hours to prevent permanent brain damage! So we went to the emergency room - reluctantly, because California has enormously overloaded emergency rooms - and the nurse who signed us in certainly didn't seem to think those symptoms were very alarming.

The thing is, of course, that non-doctors are legally prohibited from making diagnoses. So neither the nurse on the advice line nor the nurse who signed us into the emergency room was allowed to say: "It's a migraine headache, you idiots."

You see, I'd heard the phrase "migraine headache", but I'd had no idea of what the symptoms of a "migraine headache" were. My studies in neurology told me about strokes and lateral brain damage, because those are very important to the study of functional neuroanatomy. So I knew about these super dangerous and rare killer events that seemed sort of like the symptoms we were encountering, but I didn't know about the common events that a doctor sees every day.

When you see symptoms, you think of lethal zebras, because those are what you read about in the newspapers. The doctor thinks of much less exciting horses. This is why the Medical Establishment has always been right, in my experience, every single time I'm alarmed and they're not.

But in answer to your question about selection effects, Psy-Kosh, I think I'd have noticed if my friend had actually had a stroke. In fact, it would have been much more likely to have been reported and repeated than the reverse case.

Comment author: Strilanc 15 October 2012 06:03:27AM *  0 points [-]

I had a similar experience with my girlfriend, except the symptoms were significantly more alarming. She was, among other things, unable to remember many common nouns. I would point and say, "What is that swinging room separator?" and she would be unable to figure out "door".

I was aware from the start that the symptoms might have been due to a migraine aura, having looked up the symptoms on Wikipedia, but was advised by 811 to take her to the hospital immediately. The symptoms were gone before we arrived. Five hours later (a strong hint that at least the triage people thought it wasn't an emergency), a doctor had diagnosed it as a silent migraine.

Comment author: Psy-Kosh 30 September 2007 10:02:52PM 1 point [-]

Okie, and yeah, I imagine you would have noticed.

Also, of course, docs who habitually misdiagnose would presumably be sued to oblivion, or worse, by friends and family of the deceased. I was just unsure about the actual strength of that one thing I mentioned.

Comment author: charon 05 January 2008 04:41:35PM 6 points [-]

I think one would be the closest to truth by replying: "I don't quite believe that your story is true, but if it is, you should... etc" because there is no way for you to surely know whether he was bluffing or not. You have to admit both cases are possible even if one of them is highly improbable.

Comment author: tel 29 October 2009 06:02:16AM 2 points [-]

Doesn't any model contain the possibility, however slight, of seeing the unexpected? Sure, this didn't fit with your model perfectly — and as I read the story and placed myself in your supposed mental state while trying to understand the situation, I felt a great deal of similar surprise — but jumping to the conclusion that someone was just totally fabricating deserves to be weighed against other explanations for this deviation from your model.

Your model states that under pretty much all circumstances an ambulance is going to pick up a patient. This is true to my knowledge as well, but what happens if the friend didn't report to you that, once the ambulance arrived, he called it off and refused to be transported? Or perhaps, at the same time his chest pains were being judged as not-so-severe, the ambulance got another call that a massive car pileup required their immediate presence.

Your strength as a rationalist must not be the rejection of things unlikely in your model but instead the act of providing appropriate levels of concern. Perhaps the best response is something along the lines of "Sounds like a pretty strange occurrence. Are you sure your friend told you everything?" Now we're starting to judge our level of confidence in the new information being valid.

Which is honestly a pretty difficult model to shake as well. So much of the information you build your world with comes from other people that I think it reasonably sensible to trust with some amount of abandon.

Comment author: Vladimir_Nesov 29 October 2009 11:04:14AM 0 points [-]
Comment author: tel 29 October 2009 11:49:18PM 1 point [-]

That's certainly sensible, and in But There's Still a Chance Eliezer gives examples where this seems strong. In the above example, it depends a whole lot on how much belief you have in people (or, rather, lines of IRC chat).

I think then that your strength as a rationalist comes in balancing that uncertainty against your prior trust in people. At which point, instead of predicting the negative, I'd seek more information.

Comment author: Dpar 14 January 2010 06:41:42PM *  0 points [-]

The level of "trust" you have in a person should be inversely proportional to the sensationalism of the claim that he's making.

If a person tells you he was abducted by a UFO, you demand evidence.

If a person tells you that on the way to work he slipped and fell down, and you have no concrete reason to doubt the story in particular or the person in general, you take that at face value. It is a reasonable assumption that a perfect stranger in all likelihood will NOT be delusional or a compulsive liar.

DP

Comment author: tel 24 January 2010 01:23:25AM 0 points [-]

That makes sense if you're only evaluating complete strangers. In other words, your uncertainty about the population-inferred trustworthiness of a person is pretty high and so instead the mere (Occam Factor style) complexity of their statement is the overruling component of your decision.

In the stated case, this isn't a totally random stranger. I feel quite justified in having a less-than-uninformative prior about trusting IRC ghosts. In this case, my rationally acquired prejudice overrules an inference about the truth of even somewhat ordinary tales.

Comment author: Dpar 11 May 2010 06:36:54AM *  0 points [-]

The author did not mention anything about an exceptionally high percentage of liars in IRC relative to the general population (which would be quite relevant to his statement); therefore, there's no reason to believe that such had been HIS experience in the past.

Given that, there is no reason for HIM to presume that the percentage of compulsive liars in IRC would differ from that in the general population. YOUR experiences may, of course, be drastically different, but they are not the subject of discussion here.

DP

Comment author: Dpar 14 January 2010 06:36:07PM *  6 points [-]

I don't see that you did anything at all irrational. You're talking to a complete stranger on the internet. He doesn't know you, and cannot have any possible interest in deceiving you. He tells you a fairly detailed story and asks for your advice. For him to make the whole thing up just for kicks would be an example of highly irrational and fairly unlikely behavior.

Conversely, a person's panicking over chest pains and calling the ambulance is a comparatively frequent occurrence. Your having read somewhere something about ambulance policies does not amount to having concrete, irrefutable knowledge that an ambulance crew cannot make an on-site determination that there's no need to take a person to the hospital. To a person without extensive medical knowledge there is nothing particularly unlikely about the story you were told.

Therefore, the situation is this -- you are told a perfectly believable story by a complete stranger who has no reason to lie to you. You have no concrete reason ("read something somewhere" does not qualify) to doubt either the story or the man's sanity. Thus there is nothing illogical about taking the story at face value. You did the perfectly rational thing.

Since there was no irrationality in your initial behavior, the conclusions that you arrive at further in your post are unfounded.

DP

Comment author: NancyLebovitz 04 April 2010 09:26:49AM *  9 points [-]

You're talking to a complete stranger on the internet. He doesn't know you, and cannot have any possible interest in deceiving you.

There's plenty of evidence that some people (a smallish minority, I think) will deceive strangers for the fun of it.

Comment author: Dpar 11 May 2010 06:20:12AM *  0 points [-]

Which, as I said later on in the same paragraph, is irrational and unlikely behavior. Therefore, when lacking any factual evidence, the reasonable presumption is that that's not the case.

DP

Comment author: RobinZ 11 May 2010 03:14:00PM 4 points [-]

I think many of us have actually encountered liars on the Internet. I'm not sure what you mean when you say "lacking any factual evidence".

Comment author: Dpar 07 June 2010 11:07:01AM *  2 points [-]

I presume that you have encountered liars in the real world as well. Do you, on that basis, habitually assume that a random stranger engaging in casual conversation with you is a liar?

My point is that pathological liars are a small minority. So if you're dealing with a person that you know absolutely nothing about, and who does not have any conceivable reason to lie to you, there is nothing unreasonable in assuming that he's telling you the truth, unless you have factual evidence (e.g. accurate, verifiable knowledge of ambulance policies) that contradicts what he's saying.

DP

Comment author: RobinZ 07 June 2010 12:06:04PM 4 points [-]

I think at this point the questions have become (a) "how many bits of evidence does it take to raise 'someone is lying' to prominence as a hypothesis?" and (b) "how many bits of evidence can I assign to 'someone is lying' after evaluating the probability of this story based on what I know?"

I believe your argument is that a > b (specifically, that a is large and b is small), where the post asserts that a < b. I'm not going to say that's unreasonable, given that all we know is what Eliezer Yudkowsky wrote, but often actual experience has much more detail than any feasible summary - I'm willing to grant him the benefit of the doubt, given that his tiny note of discord got the right answer in this instance.
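
To make the "bits" language concrete: evidence worth b bits multiplies the odds by 2^b, where b is the log-2 likelihood ratio. A short sketch (Python; every number below is a hypothetical stand-in, since the thread gives no real frequencies):

    import math

    # Bits of evidence = log2 of the likelihood ratio.
    def bits(p_e_given_h, p_e_given_not_h):
        return math.log2(p_e_given_h / p_e_given_not_h)

    # Hypothetical: a liar produces a story this anomalous 30% of the
    # time, an honest reporter 2% of the time.
    b = bits(0.30, 0.02)              # ~3.9 bits toward "lying"

    # Hypothetical prior: 1:99 odds that a given interlocutor is lying.
    log_odds = math.log2(1 / 99) + b  # ~ -6.6 + 3.9 = -2.7 bits
    odds = 2 ** log_odds
    print(odds / (1 + odds))          # ~0.13: worth raising to
                                      # attention, not yet the leader

On these invented numbers, the story alone lifts "someone is lying" from background noise to a hypothesis worth checking, which is one way of picturing the a-versus-b comparison above.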

Comment author: Dpar 09 August 2010 05:41:54PM *  1 point [-]

My argument is what I stated, nothing more. Namely that there is nothing unreasonable about assuming that a perfect stranger that you're having a casual conversation with is not trying to deceive you. I already laid out my reasoning for it. I'm not sure what more I can add.

DP

Comment author: persephonehazard 07 June 2011 10:33:45PM 3 points [-]

"Do you, on that basis, habitually assume that a random stranger engaging in casual conversation with you is a liar?"

Yes. Absolutely. Almost /everyone/ lies to complete strangers sometimes. Who among us has never given an enhanced and glamourfied story about who they are to a stranger they struck up a conversation with on a train?

Never? Really? Not even /once/?

Comment author: Alicorn 07 June 2011 10:43:55PM 0 points [-]

Yes. Absolutely. Almost /everyone/ lies to complete strangers sometimes. Who among us has never given an enhanced and glamourfied story about who they are to a stranger they struck up a conversation with on a train?

Never? Really? Not even /once/?

If everyone regularly talked to strangers on trains, and exactly once lied to such a stranger, it would still be pretty safe to assume that any given train-stranger is being honest with you.

Comment author: persephonehazard 08 June 2011 02:55:40AM 1 point [-]

Actually, yes, you're entirely right.

In conversations I've had about this with friends - good grief, there's a giant flashing anecdata alert if ever I did see one, but it's the best we've got to go off here - I would suspect that people do it often enough that it's a reasonable thing to consider in a situation like the one being discussed here, though.

Not that I think it's a bad thing that the person in question didn't, mind you. It would be a very easy option not to consider.

Comment author: Sniffnoy 04 April 2010 10:17:29AM 5 points [-]

("read something somewhere" does not qualify)

Wait, why not?

Comment author: Dpar 11 May 2010 06:22:01AM *  0 points [-]

I read somewhere that if I spin about and click my heels 3 times I will be transported to the land of Oz. Does that qualify as a concrete reason to believe that such a land does indeed exist?

DP

Comment author: thomblake 09 August 2010 05:49:31PM 7 points [-]

I read somewhere that if I spin about and click my heels 3 times I will be transported to the land of Oz. Does that qualify as a concrete reason to believe that such a land does indeed exist?

That indeed serves as evidence for that fact, though we have much stronger evidence to the contrary.

N.B. You do not need to sign your comments; your username appears above every one.

Comment author: wedrifid 09 August 2010 05:54:49PM *  8 points [-]

That indeed serves as evidence for that fact, though we have much stronger evidence to the contrary.

And not just because clicking the heels three times is more canonically (and more often) said to be the way to return to Kansas from Oz, and not the way to get to Oz.

Comment author: Dpar 09 August 2010 06:44:57PM -1 points [-]

So the fact that something was written somewhere is sufficient to meet your criteria for considering it evidence? I take it you have actually tried clicking your heels to check whether or not you would be teleported to Oz then?

Also, does my signing my comments offend you?

DP

Comment author: Vladimir_Nesov 09 August 2010 06:48:38PM *  10 points [-]

Also, does my signing my comments offend you?

It hurts aesthetically by disrupting uniformity of standard style.

Comment author: Dpar 09 August 2010 07:13:23PM 8 points [-]

Fair enough. It's a habit of mine that I'm not married to. If members of this board take issue with it, I can stop.

Comment author: wedrifid 09 August 2010 07:06:48PM 2 points [-]

So the fact that something was written somewhere is sufficient to meet your criteria for considering it evidence?

Yes. It's really sucky evidence.

I take it you have actually tried clicking your heels to check whether or not you would be teleported to Oz then?

This doesn't remotely follow and is far weaker evidence than other available sources. For a start, everyone knows that you get to Oz with tornadoes and concussions.

Also, does my signing my comments offend you?

It makes you look like an outsider who isn't able to follow simple social conventions and may have a tendency towards obstinacy. (Since you asked...)

Comment author: Dpar 09 August 2010 07:19:32PM *  0 points [-]

"This doesn't remotely follow and is far weaker evidence than other available sources. For a start, everyone knows that you get to Oz with tornadoes and concussions."

Let's not get bogged down in the specific procedure of getting to Oz. My point was that if you truly adopt merely seeing something written somewhere as your standard for evidence, you commit yourself to analyzing and weighing the merits of EVERYTHING you read about EVERYWHERE. Do you mean to tell me that when you read a fairy tale you truly consider whether or not what's written there is true? That you don't just dismiss it offhand without giving it a second thought?

"It makes you look like an outsider who isn't able to follow simple social conventions and may have a tendency towards obstinacy. (Since you asked...)"

Like I said above to Vladimir, it's not a big deal, but you're reading quite a bit into a simple habit.

Comment author: Vladimir_Nesov 09 August 2010 07:29:03PM 2 points [-]

The fact that something is really written is true; whether it implies that the written statements themselves are true is a separate theoretical question. Yes, ideally you'd want to take into account everything you observe in order to form an accurate idea of future expected events (observable or not). Of course, it's not quite possible, but not for the want of motivation.

Comment author: Dpar 09 August 2010 07:36:30PM 0 points [-]

Well I didn't think I needed to clarify that I'm not questioning whether or not something that's written is really written. Of course, I'm questioning the truthfulness of the actual statement.

Or not so much its truthfulness, but rather whether or not it can be considered evidence. Though I realize that you take issue with arguing over word definitions, to me the word evidence has a certain meaning that goes beyond every random written sentence, whisper or rumor that you encounter.

Comment author: Vladimir_Nesov 09 August 2010 07:39:17PM *  3 points [-]

The fact that something is written, or not written, is evidence about the way world is, and hence to some extent evidence about any hypothesis about the world. Whether it's strong evidence about a given hypothesis is a different question, and whether the statement written/not written is correct is yet another question.

(See also the links from this page.)

Comment author: Cyan 09 August 2010 07:43:10PM *  6 points [-]

Though I realize that you take issue with arguing over word definitions, to me the word evidence has a certain meaning that goes beyond every random written sentence, whisper or rumor that you encounter.

Around these parts, a claim that B is evidence for A is taken to be equivalent to claiming that B is more probable if A is true than if not-A is true. Something can be negligible evidence without being strictly zero evidence, as in your example of a fairy story.
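
In standard notation (for 0 < P(A) < 1), that definition is just Bayes' theorem rearranged:

    P(A \mid B) \;=\; \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid \lnot A)\,P(\lnot A)},
    \qquad
    P(A \mid B) > P(A) \;\iff\; P(B \mid A) > P(B \mid \lnot A).

When the two likelihoods are nearly equal, as with a fairy tale's testimony about Oz, the inequality may barely hold, and the update is real but negligible.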

Comment author: jimrandomh 09 August 2010 07:39:10PM *  7 points [-]

Let's not get bogged down in the specific procedure of getting to Oz. My point was that if you truly adopt merely seeing something written somewhere as your standard for evidence, you commit yourself to analyzing and weighing the merits of EVERYTHING you read about EVERYWHERE.

No, you can acknowledge that something is evidence while also believing that it's arbitrarily weak. Let's not confuse the practical question of how strong evidence has to be before it becomes worth the effort to use it ("standard of evidence") with the epistemic question of what things are evidence at all. Something being written down, even in a fairy tale, is evidence for its truth; it's just many orders of magnitude short of the evidential strength necessary for us to consider it likely.

Comment author: Dpar 09 August 2010 07:53:50PM 0 points [-]

Vladimir, Cyan, and jimrandomh, since you essentially said the same thing, consider this reply to be addressed to all three of you.

Answer me honestly, when reading a fairy tale, do you really stop to consider what's written there, qualify its worth as evidence, and compare it to everything else you know that might contradict it, before making the decision that the probability of the fairy tale being true is extremely low? Do you really not just dismiss it offhand as not true without a second thought?

Comment author: Cyan 09 August 2010 07:58:18PM *  3 points [-]

When I pick up a work of fiction, I do not spend time assessing its veracity. If I read a book of equally fantastic claims which purports to be true, I do spend a little time. You might want to peruse bounded rationality for an overview.

Comment author: Oligopsony 09 August 2010 08:00:05PM 2 points [-]

No, but only because that would be cognitively burdensome. We're boundedly rational.

Comment author: Vladimir_Nesov 09 August 2010 08:11:46PM 2 points [-]

Immediate observation is only that something is written. That it's also true is a theoretical hypothesis about that immediate observation. That what you are reading is a fairy tale is evidence against the things written there being true, so the theory that what's written in a fairy tale is true is weak. On the other hand, the fact that you observe the words of a given fairy tale is strong evidence that the person (author) whose name is printed on the cover really existed.

Comment deleted 10 August 2010 08:18:14AM [-]
Comment author: Dpar 10 August 2010 12:35:58PM -2 points [-]

Duly noted. God forbid I do something that annoys you. Won't be able to live with myself.

Comment author: ciphergoth 10 August 2010 12:41:33PM 5 points [-]

As always, I recommend against sarcasm, which can hide errors in reasoning that would be more obvious when you speak straightforwardly.

Comment author: JoshuaZ 10 August 2010 01:25:13PM 5 points [-]

Generally, when some minor formatting issue annoys a long-standing member of an internet community it is a good idea to listen to what they have to say. Many internet fora have standard rules about formatting and style that aren't explicitly expressed. These rules are convenient because they make reading easier for everyone. There's also a status/signaling aspect in that not using standard formatting signals someone is an outsider. Refusing to adopt standard format and styling signals an implicit lack of identification with a community. Even if one doesn't identify with a group, the effort it takes to conform to formatting norms is generally small enough that the overall gain is positive.

Comment author: xamdam 26 February 2010 09:19:58PM 1 point [-]

An alternative explanation? You put your energy into solving a practical problem with a large downside (minimizing the loss function in nerdese). Yes, to be perfectly rational you should have said: "the guy is probably lying, but if he is not then...".

Comment author: Amanojack 14 March 2010 03:38:26AM 3 points [-]

It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading "EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."

I wouldn't call it a flaw; blaring alarms can be a nuisance. Ideally you could adjust the sensitivity settings... hence the popularity of alcohol.

Comment author: Perplexed 23 July 2010 01:19:30AM *  8 points [-]

Thank you, Eliezer. Now I know how to dissolve Newcomb type problems. (http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/)

I simply recite, "I just do not believe what you have told me about this intergalactic superintelligence Omega".

And of course, since I do not believe, the hypothetical questions asked by Newcomb problem enthusiasts become beneath my notice; my forming a belief about how to act rationally in this contrary-to-fact hypothetical situation cannot pay the rent.

Comment author: Ezekiel 09 January 2012 12:32:33AM 2 points [-]

Fair enough (upvoted); but I'm pretty sure Parfit's Hitchhiker is analogous to Newcomb's Problem, and that's an absolutely possible real-world scenario. Eliezer presents it in chapter 7 of his TDT document.

Comment author: zero_call 17 August 2010 05:49:34PM 0 points [-]

This sort of brings to my mind Pirsig's discussions about problem solving in ZATAOMM. You get that feeling of confusion when you are looking at a new problem, but that feeling is actually a really natural, important part of the process. I think the strangest thing to me is that this feeling tends to occur in a kind of painful way -- there is some stress associated with the confusion. But as you say, and as Pirsig says, that stress is really a positive indication of the maturation of an understanding.

Comment author: JohnDavidBustard 01 September 2010 12:17:44PM 1 point [-]

I'm not sure that listening to one's intuitions is enough to cause accurate model changes. Perhaps it is not rational to hold a single model in your head, since your information is incomplete. Instead one can consciously examine the situation from multiple perspectives; in this way the nicer (simpler, more consistent, whatever your metric is) model's response can be applied. Alternatively you could legitimately assume that all the models you hold have merit and produce a response that balances their outcomes: e.g. if your model of the medical profession is wrong and they die from your advice, that is much worse than an unnecessary ambulance call (letting the medical profession address the balance of resources). This would lead a rational person to simultaneously believe many contradictory perspectives and act as if they were all potentially true. Does anyone know of any theory in this area? The modelling of models (and efficiently resolving multiple models) would be very useful in AI.
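
There is standard machinery for exactly this: keep a credence over the candidate models and choose the action that minimizes expected loss under the mixture (Bayesian model averaging, applied to decisions). A toy sketch in Python, where every credence and loss value is invented for illustration:

    # Toy decision over two contradictory models, weighted by credence.
    models = {
        "paramedics reliable":   {"credence": 0.95, "p_emergency": 0.001},
        "paramedics unreliable": {"credence": 0.05, "p_emergency": 0.20},
    }

    # loss[(action, world)]: made-up costs on an arbitrary scale.
    loss = {
        ("go to ER", "fine"):      1,     # wasted trip
        ("go to ER", "emergency"): 1,     # trip, but treated in time
        ("stay",     "fine"):      0,
        ("stay",     "emergency"): 1000,  # untreated heart attack
    }

    def expected_loss(action):
        total = 0.0
        for m in models.values():
            p = m["p_emergency"]
            total += m["credence"] * (p * loss[(action, "emergency")]
                                      + (1 - p) * loss[(action, "fine")])
        return total

    for action in ("go to ER", "stay"):
        print(action, expected_loss(action))
    # "stay" has expected loss ~10.95 versus 1.0 for "go to ER": even 5%
    # credence in the pessimistic model dominates the decision.

So one never has to collapse to a single model; acting under the mixture is itself a coherent policy. Relevant search terms: Bayesian model averaging, decision theory under model uncertainty.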

Comment author: Summerspeaker 07 November 2010 01:15:36AM 5 points [-]

Considering that medical errors apparently kill more people than car accidents each year in the United States, I suspect the establishment is not in fact infallible.

Comment author: tgb 04 April 2013 02:53:04AM *  2 points [-]

Citation needed? I know I'm coming to this rather late, but a quick check of the 2010 CDC report on deaths in the US gives "Complications of medical and surgical care" as causing 2,490 deaths, whereas transport accidents caused 37,961 deaths (35,332 of which were classified as 'motor vehicle deaths'). The only other thing I can see that might be medical errors put under a different heading is "Accidental poisoning and exposure to noxious substances" at 33,041, which combines to still fewer deaths than transport accidents even without removing those poisonings which are not medical errors. (This poisoning category appears to include a lot of recreational drug overdoses, judging by the way it sharply increases in the 15-24 age group then drops off after 54, whereas time-spent-in-hospital is presumably increasing with age.)

On the other hand, a 2012 New York Times Op-Ed claims 98,000 deaths from medical errors a year. This number is so much larger than what the CDC reports that I must be misreading something. That would mean about 1 in 20 people who die in the US die due to medical error. (Original source from 1999.) Actually checking that source, 98,000 deaths/year is the upper-bound number given (lower bound of 44,000 deaths/year). The report also recommends a 50% reduction in these deaths within 5 years (so by 2004) - and Wikipedia mentions a 2006 study claiming that they successfully prevented 120,000 deaths in an 18-month time period, but I can't find this study. A 2001 followup here appears to focus on suggestions for improvements rather than on giving new data on our question. 3 minutes on Google Scholar didn't turn up any recent estimates. This entire sub-field appears to rely very heavily upon that one source - at least in the US.

Also of interest is "Actual Causes of Death in the US" which classifies deaths by 'mistake made' (so to speak) - the top killer being tobacco use, then poor diet/low exercise, alcohol, microbial agents, toxic agents, car accidents, firearms, sexual behaviors, and illicit drug use. Medical errors didn't show high up on this list, despite it being the only source in the Wikipedia article on the original article.

Edit: also some places that cite the 1999 study accuse the CDC of not reporting these deaths as their own category. This appears to have changed given the category I reported above. The fact that there has been substantial uproar about medical error since the 1999 article and a corresponding increase in funding for studying it makes me unsurprised that the CDC would start reporting.

Comment author: OrphanWilde 04 April 2013 03:49:45AM 2 points [-]

If a doctor makes a mistake treating a patient from a vehicle accident, what heading does it get reported under?

(I ask the question in earnest, to anybody who might know the answer - because depending on what the answer is, it could explain the discrepancy.)

Comment author: gwern 22 April 2011 02:12:31PM 12 points [-]

From TvTropes:

"According to legend, one night the students of Baron Cuvier (one of the founders of modern paleontology and comparative anatomy) decided to play a trick on their instructor. They fashioned a medley of skins, skulls and other animal parts (including the head and legs of a deer) into a credibly monstrous costume. One brave fellow then donned the chimeric assemblage, crept into the Baron's bedroom when he was asleep and growled "Cuvier, wake up! I am going to eat you!" Cuvier woke up, took one look at the deer parts that formed part of the costume and sniffed "Impossible! You have horns and hooves!" (one would think "what sort of animals have horns and hooves" is common knowledge).

More likely he was saying "Impossible! You have horns and hooves (and are therefore not a predator)." The prank is more commonly reported as: "Cuvier, wake up! I am the Devil! I am going to eat you!" His response was "Divided hoof; graminivorous! It cannot be done." Apparently Satan is vegan. Don't comment that some deer have been seen eating meat or entrails; I occasionally grab the last slice of my bud's pizza but that doesn't classify me as a scavenger."

Comment author: smallricochet 03 September 2011 10:28:50PM 1 point [-]

How do you face this situation as a rationalist?

There are four answers on a multiple-choice question: A, B, C, and D. Using prior study knowledge, you immediately dismiss B and D. Problem! A and C sound equally plausible; your first urge is A, for some reason, but after some examination, C seems more plausible to your mind.

Results notwithstanding, how would you think of this as a rationalist? Is it even relevant?

Comment author: lessdazed 03 September 2011 10:49:51PM 1 point [-]

I think more context is necessary. Sorry.

Comment author: Vaniver 03 September 2011 10:52:33PM 4 points [-]

I believe the evidence is that the initial urge of A is more credible than the rationalization of C. That is, when students change answers on multiple-choice tests, they are more likely to turn a right answer into a wrong answer than a wrong answer into a right answer. (I don't know if that generalizes to a true-false setting.)

Comment author: Rixie 04 April 2013 05:56:59AM -1 points [-]

It matters why "B sounds more plausible to your mind." If it's because you remembered a new fact, or if you reworked the problem and came out with B, change the answer (after checking that your work was correct and everything.) The many multiple choice tests are written so that there is one right answer, one wrong answer, and two plausible-sounding answers, so you shouldn't change an answer just because B is starting to sound plausible.

Comment author: Vaniver 04 April 2013 11:13:06PM *  1 point [-]

There are two modes of reasoning that are useful that I'd like to briefly discuss: inside view, and outside view.

Inside view uses models with small reach / high specificity. Outside view uses models with large reach / high generality. Inside view arguments are typically easier to articulate, and thus often more convincing, but there are often many reasons to prefer outside view arguments. (Generally speaking, there are classes of decisions where inside view estimates are likely to be systematically biased, and so using the outside view is better.)

When wondering whether to switch an answer, the inside view recommends estimating which answer is better. The outside view recommends looking at the situation you're in- "when people have switched answers in the past, has it generally helped or hurt?".

There are times when switching leads to the better result. But the trouble is that you need to know that ahead of time- and so, as you suggest, there may be reasons to switch that you can identify as strong reasons. But the decision whether to apply the inside or outside view (or whether you collect enough data to increase the specificity of your outside view approach) is itself a decision you have to make correctly, which you probably want to use the outside view to track, rather than just trusting your internal assessment at the time.

Comment author: a_mshri 20 September 2011 10:58:10PM *  2 points [-]

I feel really uncomfortable with this idea: "EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."

I think this statement suffers from the same limitations of propositional logic; consequently, it is not applicable to many real life situations.

Most of the time, our model contains rules of this type (at least if we are rationalists): event A occurs in situation B with probability C, where C is not 0 or 1. Also, life experiences teach us that we should update the probabilities in our model over time. So besides the uncertainty caused by the probability C, there is also uncertainty resulting from our degree of belief in the correctness of the rule itself. The situation becomes more complicated when the problem is cost-sensitive.

I got your point (I hope so) and I'm definitely not trying to say "IT IS WRONG," but I think it is true to some degree.

Comment author: FeatherlessBiped 17 January 2012 03:47:59AM 1 point [-]

Your strength as a rationalist is your ability to be more confused by fiction than by reality.

Yet, when a person of even moderate cleverness wishes to deceive you, this "strength" can be turned against you. Context is everything.

As Donald Gause asks in "Are Your Lights On?", WHO is it that is bringing me this problem?

Comment author: gwern 09 August 2012 01:30:41AM *  2 points [-]

Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.

Looking through Google Scholar for citations of Gilbert 1990 and Gilbert 1993, I see 2 replications which question the original effect.

(While looking for those, I found some good citations for my fiction-biases section, though.)

Comment author: Serious_Shenanigans 11 July 2013 10:29:22AM *  1 point [-]

Eliezer's model:

The Medical Establishment is always right.

Information given:

  • Person is feeling chest pain.
  • Paramedics say hospitalization is unnecessary.

Possible scenarios mentioned in the story:

  1. Person is feeling chest pain and is having a heart attack.
  2. Person is feeling chest pain but does not need to be hospitalized.
  3. Person is lying.

Between the model and the information given, only Scenario 1 can be ruled false; Scenarios 2 and 3 are both possible. If Eliezer is going to beat himself up for not knowing better, it should be because Scenario 3 did not occur to him -- not that Scenario 3 is the logical reality.

Comment author: falenas108 10 February 2014 02:24:41PM 0 points [-]

The way you phrase it hides the crucial part of the story. Rephrasing:

  1. Person is telling the truth.
     a.) They are having a heart attack, but the paramedics judged wrongly, dismissed it, and didn't take him to the hospital.
     b.) They are not having a heart attack; the paramedics judged rightly, dismissed it, and didn't take him to the hospital.
  2. Person is lying.

Eliezer is saying that he should have known scenario 1 is wrong because regardless of whether or not the paramedics think it's legit, they would have taken the person to the hospital anyway. So, 1a and 1b must be wrong, leaving 2.

Or, if I were going to add to your model, I would add "The Medical Establishment always takes the person in the ambulance if they call for a medical reason." Then, when the information given is "Paramedics say hospitalization is unnecessary," there would have been a direct conflict between model and information, where Eliezer had to choose between rejecting the model and rejecting the information.

Comment author: KnaveOfAllTrades 03 March 2014 01:03:31AM *  1 point [-]

I see two senses (or perhaps not-actually-qualitatively-different-but-still-useful-to-distinguish cases?) of 'I notice I'm confused':

(1) Noticing factual confusion, as in the example in this post. (2) Noticing confusion when trying to understand a concept or phenomenon, or to apply a concept.

Example of (2): (A) "Hrm, I thought I understood what 'Colorless green ideas sleep furiously' means when I first heard it; the words seemed to form a meaningful whole based on the way they fell together. But when I actually try to concretise what that could possibly mean, I find myself unable to, and notice that characteristic pang of confusion."

Example of (2): (B) "Hrm, I thought I understood how flight works because I could form words into intelligent-sounding sentences about things like 'lift' and 'Newton's third law'. But then when I tried to explain why a plane goes up instead of down, my word soup explained both equally well, and I noticed I was confused." (Compare, from the post: "I knew that the usefulness of a model is not what it can explain, but what it can't. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.")

Comment author: KnaveOfAllTrades 07 June 2014 02:56:29AM *  1 point [-]

It might be useful to identify a third type:

(3) Noticing argumentative confusion. Example of (3): "Hrm, those fringe ideas seem convincing after reading the arguments for them on this LessWrong website. But I still feel a lingering hesitation to adopt the ideas as strongly as lots of these people seem to have, though I'm not sure why." (Confusion as pointer to epistemic learned helplessness)

As in the parent to this comment, (3) is not necessarily qualitatively distinct (e.g. argumentative confusion could be recast as factual confusion: "Hrm, I'm confused by this hesitation I observe in myself to fully endorse these fringe ideas after seeing such seemingly-decisive arguments. Maybe this means something."). Observations of internal reaction are still observations about which one can be factually confused.