It's strange to hear a rationalist say that he should have listened to his instincts. A true rationalist should be able to examine all the evidence without having to rely on feelings to make a judgment, or else would truly understand the source of his feelings, in which case it's more than just a feeling. The unfortunate thing is that people are more likely to remember the cases where they didn't listen to their feelings and the feelings turned out to be correct, than all the times when the feelings were wrong.
The "quiet strain in the b...
Anon, see Why Truth?:
When people think of "emotion" and "rationality" as opposed, I suspect that they are really thinking of System 1 and System 2 - fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren't always true, and perceptual judgments aren't always false; so it is very important to distinguish that dichotomy from "rationality". Both systems can serve the goal of truth, or defeat it, according to how they are used.
"I should have paid more attention to that sensation of still feels a little forced."
The force that you would have had to counter was the impetus to be polite. In order to boldly follow your models, you would have had to tell the person on the other end of the chat that you didn't believe his friend. You could have less boldly held your tongue, but that wouldn't have satisfied your drive to understand what was going on. Perhaps a compromise action would have been to point out the unlikelihood (which you did: "they'd have hauled him off if there was the tiniest chance of serious trouble") and ask for a report on the eventual outcome.
Given the constraints of politeness, I don't know how you can do better. If you were talking to people who knew you better, and understood your viewpoint on rationality, you might expect to be forgiven for giving your bald assessment of the unlikeliness of the report.
Not necessarily.
You can assume the paramedics did not follow the proper procedure, and that his friend ought to go to the emergency room himself to verify that he is OK. People do make mistakes.
The paramedics are potentially unreliable as well, though given the litigious nature of our society I would fully expect them to be extremely reliable about taking people to the emergency room, which would still cast doubt on the friend.
Still, if you want to be polite, just say "if you are concerned, you should go to the emergency room anyway" and keep your doubts about the man's veracity to yourself. No doubt the truth would have come out at that point as well.
Reminds me of a family dinner where the topic of the credit union my grandparents had started came up.
According to my grandmother, the state auditor was a horribly sexist fellow. He came and audited their books every single month, telling everyone who would listen that it was because he "didn't think a woman could be a successful credit union manager."
This, of course, got my new-agey aunts and cousins all up-in-arms about how horrible it was that that kind of sexism was allowed back in the 60s and 70s. They really wanted to make sure everyone knew they didn't approve, so the conversation dragged on and on...
And about the time everyone was thoroughly riled up and angry from the stories of the mean, vindictive things this auditor had done because the credit union was run by a woman, my grandfather decided to get in on the ruckus and told his story about the auditor...
Seems like the very first time the auditor had come through, the auditor spent several hours going over the books and couldn't make it all balance correctly. He was all-fired sure this brand new credit union was up to something shady. Finally, my grandfather (who was the credit union accountant...
In its strongest form, not believing System 1 amounts to not believing perceptions, hence not believing in empiricism. This is possibly the oldest of philosophical mistakes, made by Plato, possibly Siddhartha, and probably others even earlier.
Sounds like good old cognitive dissonance. Your mental model was not matching the information being presented.
That feeling of cognitive dissonance is a piece of information to be considered in arriving at your decision. If something doesn't feel right, usually either the model or the facts are wrong or incomplete.
"And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, "Well, if the paramedics told your friend it was nothing, it must really be nothing - they'd have hauled him off if there was the tiniest chance of serious trouble.""
My own "hold on a second" detector is pinging mildly at that particular bit. Specifically, isn't there a touch of an observer selection effect there? If the docs had been wrong and you ended up dying as a result, you wouldn't have been around to make that deduction, so you're (Well, anyone is) effectively biased to retroactively observe outcomes in which if the doctor did say you're not in a life threatening situation, you're genuinely not?
Or am I way off here?
A valid point, Psy-Kosh, but I've seen this happen to a friend too. She was walking along the streets one night when a strange blur appeared across her vision, with bright floating objects. Then she was struck by a massive headache. I had her write down what the blur looked like, and she put down strange half-circles missing their left sides.
That point was when I really started to get worried, because it looked like lateral neglect - something that I'd heard a lot about, in my studies of neurology, as a symptom of lateralized brain damage from strokes.
The funny thing was, nobody in the medical profession seemed to think this was a problem. The medical advice line from her health insurance said it was a "yellow light" for which she should see a doctor in the next day or two. Yellow light?! With a stroke, you have to get the right medication within the first three hours to prevent permanent brain damage! So we went to the emergency room - reluctantly, because California has enormously overloaded emergency rooms - and the nurse who signed us in certainly didn't seem to think those symptoms were very alarming.
The thing is, of course, that non-doctors are legally prohib...
Okie, and yeah, I imagine you would have noticed.
Also, of course, docs that habitually misdiagnose would presumably be sued or worse to oblivion by friends and family of the deceased. I was just unsure about the actual strength of that one thing I mentioned.
I think one would be closest to the truth by replying: "I don't quite believe that your story is true, but if it is, you should... etc.", because there is no way for you to know for sure whether he was bluffing or not. You have to admit both cases are possible even if one of them is highly improbable.
Doesn't any model contain the possibility, however slight, of seeing the unexpected? Sure this didn't fit with your model perfectly — and as I read the story and placed myself in your supposed mental state while trying to understand the situation, I felt a great deal of similar surprise — but jumping to the conclusion that someone was just totally fabricating is something that deserves to be weighed against other explanations for this deviation from your model.
Your model states that pretty much under all circumstances an ambulance is going to pick up a pat...
I don't see that you did anything at all irrational. You're talking to a complete stranger on the internet. He doesn't know you, and cannot have any possible interest in deceiving you. He tells you a fairly detailed story and asks for your advice. For him to make the whole thing up just for kicks is an example of highly irrational and fairly unlikely behavior.
Conversely, a person's panicking over chest pains and calling the ambulance is a comparatively frequent occurrence. Your having read somewhere something about ambulance policies does not amount to hav...
You're talking to a complete stranger on the internet. He doesn't know you, and cannot have any possible interest in deceiving you.
There's plenty of evidence that some people (a smallish minority, I think) will deceive strangers for the fun of it.
I read somewhere that if I spin about and click my heels 3 times I will be transported to the land of Oz. Does that qualify as a concrete reason to believe that such a land does indeed exist?
That indeed serves as evidence for that fact, though we have much stronger evidence to the contrary.
N.B. You do not need to sign your comments; your username appears above every one.
That indeed serves as evidence for that fact, though we have much stronger evidence to the contrary.
And not just because clicking the heels three times is more canonically (and more often) said to be the way to return to Kansas from Oz, not the way to get to Oz.
Though I realize that you take issue with arguing over word definitions, to me the word "evidence" has a certain meaning that goes beyond every random written sentence, whisper, or rumor that you encounter.
Around these parts, a claim that B is evidence for A is taken to be equivalent to claiming that B is more probable if A is true than if not-A is true. Something can be negligible evidence without being strictly zero evidence, as in your example of a fairy story.
Let's not get bogged down in the specific procedure for getting to Oz. My point was that if you truly adopt merely seeing something written somewhere as your standard for evidence, you commit yourself to analyzing and weighing the merits of EVERYTHING you read about EVERYWHERE.
No, you can acknowledge that something is evidence while also believing that it's arbitrarily weak. Let's not confuse the practical question of how strong evidence has to be before it becomes worth the effort to use it ("standard of evidence") with the epistemic question of what things are evidence at all. Something being written down, even in a fairy tale, is evidence for its truth; it's just many orders of magnitude short of the evidential strength necessary for us to consider it likely.
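To make the sense of "evidence" used in the last two comments concrete, here is a minimal sketch in the standard odds form of Bayes' theorem (the symbols A and B are generic placeholders, not anything specific to this thread):

```latex
% Odds form of Bayes' theorem: B is evidence for A exactly when the
% likelihood ratio P(B|A) / P(B|not-A) exceeds 1.
\[
\underbrace{\frac{P(A \mid B)}{P(\neg A \mid B)}}_{\text{posterior odds}}
  \;=\;
\underbrace{\frac{P(A)}{P(\neg A)}}_{\text{prior odds}}
  \times
\underbrace{\frac{P(B \mid A)}{P(B \mid \neg A)}}_{\text{likelihood ratio}}
\]
% If P(B|A) is only barely larger than P(B|not-A), the likelihood ratio is
% barely above 1: B is still evidence for A, just evidence of negligible
% strength -- the fairy-tale case discussed above.
```

On this reading, "is it evidence at all?" asks whether the likelihood ratio differs from 1, while "does it meet a standard of evidence?" asks whether that ratio is large enough to be worth acting on.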
As always, I recommend against sarcasm, which can hide errors in reasoning that would be more obvious when you speak straightforwardly.
An alternative explanation? You put your energy into solving a practical problem with a large downside (minimizing the loss function in nerdese). Yes, to be perfectly rational you should have said: "the guy is probably lying, but if he is not then...".
It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading "EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."
I wouldn't call it a flaw; blaring alarms can be a nuisance. Ideally you could adjust the sensitivity settings... hence the popularity of alcohol.
Thank you, Eliezer. Now I know how to dissolve Newcomb type problems. (http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/)
I simply recite, "I just do not believe what you have told me about this intergalactic superintelligence Omega".
And of course, since I do not believe, the hypothetical questions asked by Newcomb problem enthusiasts become beneath my notice; my forming a belief about how to act rationally in this contrary-to-fact hypothetical situation cannot pay the rent.
This sort of brings to my mind Pirsig's discussions about problem solving in ZATAOMM. You get that feeling of confusion when you are looking at a new problem, but that feeling is actually a really natural, important part of the process. I think the strangest thing to me is that this feeling tends to occur in a kind of painful way -- there is some stress associated with the confusion. But as you say, and as Pirsig says, that stress is really a positive indication of the maturation of an understanding.
I'm not sure that listening to one's intuitions is enough to cause accurate model changes. Perhaps it is not rational to hold a single model in your head, since your information is incomplete. Instead one can consciously examine the situation from multiple perspectives; in this way the nicer (simpler, more consistent, whatever your metric is) model's response can be applied. Alternatively, you could legitimately assume that all the models you hold have merit and produce a response that balances their outcomes, e.g. if your model of the medical profession is wrong ...
Considering that medical errors apparently kill more people than car accidents each year in the United States, I suspect the establishment is not in fact infallible.
From TvTropes:
"According to legend, one night the students of Baron Cuvier (one of the founders of modern paleontology and comparative anatomy) decided to play a trick on their instructor. They fashioned a medley of skins, skulls and other animal parts (including the head and legs of a deer) into a credibly monstrous costume. One brave fellow then donned the chimeric assemblage, crept into the Baron's bedroom when he was asleep and growled "Cuvier, wake up! I am going to eat you!" Cuvier woke up, took one look at the deer parts that formed part of the costume and sniffed "Impossible! You have horns and hooves!" (one would think "what sort of animals have horns and hooves" is common knowledge).
More likely he was saying "Impossible! You have horns and hooves (and are therefore not a predator)." The prank is more commonly reported as: "Cuvier, wake up! I am the Devil! I am going to eat you!" His response was "Divided hoof; graminivorous! It cannot be done." Apparently Satan is vegan. Don't comment that some deer have been seen eating meat or entrails; I occasionally grab the last slice of my bud's pizza but that doesn't classify me as a scavenger."
I feel really uncomfortable with this idea: "EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."
I think this statement suffers from the same limitations as propositional logic; consequently, it is not applicable to many real-life situations.
Most of the time, our model contains rules of this type (at least if we are rationalists): event A occurs in situation B with probability C, where C is not 0 or 1. Also, life experiences teach us that we should update the probabilities in our model over time. So besides the uncertainty caused by the probabili...
This post frustrated me for a while, because it seems right but not helpful. Saying to myself, "I should be confused by fiction" doesn't influence my present decision.
First, concretize. Let's say I have a high-level world model. A few of them, perhaps, to reduce the chance that one bad example results in a bad principle.
"My shower produces hot water in the morning." "I have fresh milk to last the next two days." "The roads are no longer slippery."
What do these models exclude? "The water will be cold", "t...
Your strength as a rationalist is your ability to be more confused by fiction than by reality.
Yet, when a person of even moderate cleverness wishes to deceive you, this "strength" can be turned against you. Context is everything.
As Donald DeMarco asks in "Are Your Lights On?", WHO is it that is bringing me this problem?
Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.
Looking through Google Scholar for citations of Gilbert 1990 and Gilbert 1993, I see 2 replications which question the original effect:
Eliezer's model:
The Medical Establishment is always right.
Information given:
Possible scenarios mentioned in the story:
Between the model and the information given, only Scenario 1 can be ruled false; Scenarios 2 and 3 are both possible. If Eliezer is going to beat himself up for not knowing better, it should be because Scenario 3 did n...
I see two senses (or perhaps not-actually-qualitatively-different-but-still-useful-to-distinguish cases?) of 'I notice I'm confused':
(1) Noticing factual confusion, as in the example in this post. (2) Noticing confusion when trying to understand a concept or phenomenon, or to apply a concept.
Example of (2): (A) "Hrm, I thought I understood what "Colorless green ideas sleep furiously" means when I first heard it; the words seemed to form a meaningful whole based on the way they fell together. But when I actually try to concretise what that co...
Was a mistake really made in this instance? Is it not correct to conclude 'there was no problem'? Yes, the author did not realise the story was fictional; but what, among the things he concluded, implied the story was not fictional?
Furthermore, is it good to berate oneself because one does not immediately realise something? In this case, the author did not immediately realise the story was fictional. But evidently the author was already working toward that conclusion by throwing doubt on parts of the story. And the evidence the author had was obviously inconclusive;...
This looks like an instance of the Dunning-Kruger effect to me. Despite your own previous failures in diagnosis, you still felt competent to give medical advice to a stranger in a potentially life-threatening situation.
In this case, the "right answer" is not an analysis of the reliability of your friend's account; it is "get a second opinion, stat". This is especially true given that you believed the description you gave above.
If a paramedic tells me "it's nothing", I complain to his or her superiors, because that is not a ...
Of course, it's also possible to overdo it. If you hear something odd or confusing, and it conflicts with a belief that you are emotionally attached to, the natural reaction is to ignore the evidence that doesn't fit your worldview, thus missing an opportunity to correct a mistaken belief.
On the other hand, if you hear something odd or confusing, and it conflicts with a belief or assumption that you aren't emotionally attached to, then you shouldn't forget about the prior evidence in light of the new evidence. The state of confusion should act as a trigger telling you to tally up all the evidence and decide which piece doesn't fit.
It is a design flaw in human cognition...
Since I think evolution makes us quite fit for our current environment, I don't think cognitive biases are design flaws. In the above example you imply that even though you had the information available to guess the truth, your guess was another one and it was false; therefore you experienced a flaw in your cognition.
My hypothesis is that reaching the truth or communicating it in the IRC channel may not have been the end objective of your cognitive process, in this case just to dismiss the issue as something that was not impor...
Is EY saying that if something doesn't feel right, it isn't? I've been working on this rationalist koan for weeks and can't figure out something more believable! I feel like a doofus!
This article actually made me question "Wait, is this even true?" when I read an article with weird claims; then I research whether the source is trustworthy, and sometimes it turns out that it isn't.
Trying to understand this.
I *knew* that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.
I think what Yud means there is that a good model will break quickly. It only explains a very small set of things, because the universe is very specific. So it's good that it doesn't explain many, many things.
It's a bit like David Deutsch arguing that models should be sensitive to small changes. All of their elements should be important.
(The following happened to me in an IRC chatroom, long enough ago that I was still hanging around in IRC chatrooms. Time has fuzzed the memory and my report may be imprecise.)
So there I was, in an IRC chatroom, when someone reports that a friend of his needs medical advice. His friend says that he's been having sudden chest pains, so he called an ambulance, and the ambulance showed up, but the paramedics told him it was nothing, and left, and now the chest pains are getting worse. What should his friend do?
I was confused by this story. I remembered reading about homeless people in New York who would call ambulances just to be taken someplace warm, and how the paramedics always had to take them to the emergency room, even on the 27th iteration. Because if they didn't, the ambulance company could be sued for lots and lots of money. Likewise, emergency rooms are legally obligated to treat anyone, regardless of ability to pay. (And the hospital absorbs the costs, which are enormous, so hospitals are closing their emergency rooms... It makes you wonder what's the point of having economists if we're just going to ignore them.) So I didn't quite understand how the described events could have happened. Anyone reporting sudden chest pains should have been hauled off by an ambulance instantly.
And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, "Well, if the paramedics told your friend it was nothing, it must really be nothing—they'd have hauled him off if there was the tiniest chance of serious trouble."
Thus I managed to explain the story within my existing model, though the fit still felt a little forced...
Later on, the fellow comes back into the IRC chatroom and says his friend made the whole thing up. Evidently this was not one of his more reliable friends.
I should have realized, perhaps, that an unknown acquaintance of an acquaintance in an IRC channel might be less reliable than a published journal article. Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.
So instead, by dint of mighty straining, I forced my model of reality to explain an anomaly that never actually happened. And I knew how embarrassing this was. I knew that the usefulness of a model is not what it can explain, but what it can't. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.
Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
We are all weak, from time to time; the sad part is that I could have been stronger. I had all the information I needed to arrive at the correct answer, I even noticed the problem, and then I ignored it. My feeling of confusion was a Clue, and I threw my Clue away.
I should have paid more attention to that sensation of still feels a little forced. It's one of the most important feelings a truthseeker can have, a part of your strength as a rationalist. It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading "EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."