An interesting thing happened to me yesterday, probably related to what happens with anosognosia. I was in my room at night, sitting on the opposite side of the room from my computer, which was turned on. Suddenly, the light went off. I looked around and noticed that the indicator lights on the computer were still on. "The circuit breaker must've overloaded on one of the lines, turning the light off but not the computer", I thought. Then I heard the characteristic noise of the CRT monitor turning off. "Interesting coincidence", I thought, "exactly 15 minutes must've passed since I last touched the keyboard, just when the circuit breaker overloaded on another line". I went to a light switch and flipped it absent-mindedly. The lights went on. "Strange", I thought, "the switch must be to blame; this has never happened before." All of this took place in the span of a few seconds.
Then it hit me: the lights were never on.
The room was illuminated only by the monitor, so when it switched off after 15 minutes of inactivity, the room went dark. My mind mistook this single event for the light turning off, and then produced a whole sequence of complex thoughts around this single confusion, all of them relying on this fact being true. An inability to convince yourself that an equally simple fact is false must result in similarly complex justifications. There is nothing unnatural about the justifications being long and detailed; the point of failure is where a fact can't be accepted, not where it just can't be noticed.
I've had similar experiences, especially when sleepy. The interesting thing is that, at least in my case, it's often difficult to remember the subjective experience of it--once the correction kicks in, the earlier rationalizations seem to be subject to the same fading effect that makes dreams tough to remember, especially when I haven't acted on the original confusion (as you did by turning on the lights).
Also, this is why it's probably reasonable for all of us to be confident that our left arms are not, in fact, paralyzed--because we have evidence of the anti-confabulation systems in our brain working as intended (if a bit slow to catch up, on occasion).
I do this all the time when I blink and wonder why the lights flickered. I used to verbalize my confusion and ask why the lights flickered. No one else saw them flicker, and it took me a while to realize that the flicker was me blinking. In addition, when the lights flicker, I usually blink. (To adjust for new light levels? You tell me...) Now, when the lights flicker or I blink, I am stuck wondering which came first.
I'll bet that even though argument doesn't move anosognosics, they get argued with anyway, at least a little bit initially. The fact that no one has tried to convince me my arm is paralyzed is sufficient evidence for me to proclaim that the odds of it actually being paralyzed are significantly lower than 1 in 10 million.
The prior probability of 1 in 10^7 is already lower than I can handle in a well-calibrated manner, and it would only get smaller after accounting for the evidence: I observe my left arm moving when I try to move it, and I am successfully using it to type right now. From the descriptions I have read, anosognosiacs do not claim to make these observations; they make excuses for the lack of such observations.
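As a rough illustration of how that update would go (a minimal sketch with a made-up likelihood, not a measured one): suppose there is only a 1-in-100 chance that an anosognosiac would sincerely report these observations rather than excuses, while a healthy person would report them essentially always. Then

$$P(\text{paralyzed} \mid \text{observations}) = \frac{0.01 \times 10^{-7}}{0.01 \times 10^{-7} + 1 \times (1 - 10^{-7})} \approx 10^{-9}.$$

So even a modest likelihood ratio pushes the already tiny prior well below the range where my calibration means anything.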
In any event, it is not something I am too concerned about.
I enjoy fencing, which only requires the use of one arm. That seems like pretty strong evidence of anosognosia; if I could use both of my arms, I would have picked a sport that engages me more fully.
I wonder if the paradoxical undressing effect is a form of anosognosia.
Paradoxical undressing is when a person in a state of hypothermia removes warm clothing, which further increases heat loss. Because they are suffering from the cold and their body temperature is below the safe threshold, it seems incongruous that they would remove warm clothing and make themselves even colder.
To be sure, the phenomenon of paradoxical undressing is an enigma, running counter to expected behavior. With their core temperature below 90 degrees F, hypothermia sufferers frequently undress themselves. Urban victims of hypothermia who are found in a state of undress are often thought to be victims of assault.
Paradoxical undressers remove their warm clothing even as they are freezing to death.
Suppose I put in a nerve block to artificially paralyze my left arm, and I notice that my left arm is indeed paralyzed. I then remove the nerve block and notice that my left arm is no longer paralyzed. Would that be sufficient to prove (to myself) that I don't have left-arm paralysis with anosognosia?
BTW, I'm not sure I understand the current fascination on this blog with anosognosia. Where is this train of thought going?
My estimated probability is extremely low because, unlike the patient with anosognosia, I observe movement in my left arm. The patient did not observe motion, and invoked absolute denial in inventing explanations for that lack.
Still, I'm not a diagnostician so I'll assume patients with anosognosia think they can actually observe motion in their left arms. In this case, it seems the popular proof is in doing something that requires both arms, like JulianMorrison did. Still, it seems that if you can deceive yourself about your arm moving, you can deceive yourself about such an experiment. It strikes me that one way to prove your left arm still works is to disable your right arm. After all, you have no illusions about your right side - if you disable your right arm, you'll know it. Therefore, I shall tape my right arm to the arm of the chair I'm sitting in, and finish typing this post with my left. There. It is currently 10:52 EST. The post timestamp will agree that this post was submitted before I released my right arm.
In retrospect, I advise against repeating this experiment, especially with duct tape, if you a) have hairy arms and b) have recently suffered a bad sunburn.
I'd expect that for a rationalist afflicted with anosognosia, denial would manifest itself in the form of absurdly low base rate estimates, maybe one in a thousand or rarer.
Ignoring, for the moment, the deeper metaphorical question of how much of us is any given brain failure, does anyone know whether anosognosics actually think that they're using their paralyzed arm? Because I have a very strong sense of using my arms, and I suspect from the earlier description that anosognosics deny their arm being paralyzed, but wouldn't claim that they are actually typing with two hands, for example. Anybody know more on that?
Clearly, they don't think they use their arm - until someone points out to them that only people with paralyzed arms don't use them, at which point they fabricate the relevant memories.
D:
If anosognosics by definition place a probability of zero on their arm being paralyzed, and thus on being anosognosics...
I'm kicking myself for not realizing this, but you're right. A probability of zero on their left arm being paralyzed only comes from people with anosognosia and people who are not paralyzed. Therefore, a non-zero probability only comes from people who are paralyzed and do not have anosognosia, in which case their probability is 1.
Estimating between zero and 1 by definition means you cannot be anosognosic. However, it also means you are not paralyzed, because only anosognosic paralytics place a non-1 probability on their condition. Therefore, if you are not certain you are paralyzed, you must be certain you are not. I am subsequently forced to place a probability of zero on my left arm being currently paralyzed.
I strongly suspect that a Less Wrong reader with anosognosia would at first reply, "Well, of course I'm not certain that my left arm isn't paralyzed - you can't be infinitely certain about anything." And they might well go on to say, "But the fact that I don't feel any absolute certainty along these lines is, in fact, evidence that I don't have anosognosia".
I have seen many would-be rationalists who say "I'm not certain" and then, secure in having proven their rationality as much as anyone could possibly ask, forge straight on without a second glance. See The Proper Use of Humility.
I'm still having trouble with this one. I don't know why this particularly morbid example popped into my head, but here it is: we have very strong survival instincts. These can be overcome by overdosing on pills or jumping off a bridge. However, they cannot be overcome by holding your head in a bucket of water and trying to drown. You may be determined to kill yourself, but every time you try, as soon as you start breathing water you're going to pull your head back out. Now, I can say that "I'm not certain" I can't kill myself this way -- but in reality, I know it's not possible. My brain has a very real physical process that just won't let me. Therefore, I don't feel that it is honest to say "I'm not certain" in this case. In fact, saying "I'm not certain" feels very much like saying "the sky is green" or "I like to eat glass", i.e. it feels like bullshit. Is that something a rationalist needs to overcome, if only so he can admit, "I could say I am not certain, but I am?" The argument against this seems to be saying, "I am not certain I am not a butterfly," even though it is not possible for a butterfly to have such a thought.
The Proper Use of Humility is one of my favorite articles of yours, by the way, and I do feel like I'm making progress in a worthy direction, even if it can seem like I'm random-walking on the way there.
However, they cannot be overcome by holding your head in a bucket of water and trying to drown. You may be determined to kill yourself, but every time you try, as soon as you start breathing water you're going to pull your head back out. Now, I can say that "I'm not certain" I can't kill myself this way -- but in reality, I know it's not possible. My brain has a very real physical process that just won't let me.
How do you know that? It doesn't seem intuitively obvious to me that you can't train to successfully drown this way. It'll take more than intuition, and I can't think of a way this could've been reliably studied, so I don't believe you can have a good reason to have this belief.
Anecdotal evidence: When I swim for distance underwater, and really push myself, I will often experience a strong compulsion to surface, even when I believe I can hold out for a few more feet and reach my goal. I am not even afraid of drowning, yet I consistently follow the compulsion to surface.
I can't think of a way to study this in an ethical controlled experiment, but data can be gathered from suicides and attempted suicides that would be relevant to the theory.
I have similar experiences when swimming underwater. I used to see if I could swim the length of the pool in one breath, and often would surface seemingly-prematurely out of a sudden strong desire to take a breath.
My old roommate reported having lots of trouble letting go of a handle when skydiving. He very much wanted to dive, and was not afraid of an unsafe landing, but instinct was very difficult to overcome.
Which reminds me that there are people who can hold their breath for insane amounts of time, so presumably they overcame this instinct, and start breathing only by intellectually deciding that they must do so to survive (and they likely know a lot about the properties of this danger).
I am with you in disagreeing with Eirenicon's assertion that self-drowning in a bucket is impossible with probability 1, though I believe with high probability that it is difficult beyond the ability of most people. I was mainly objecting to your assertion that this couldn't be studied.
Also, merely holding your breath is not dangerous. You would pass out before suffering any permanent damage, and breathe normally while unconscious. It would be dangerous in an environment, such as under water, in which you could breathe normally after passing out.
It could be apocryphal, and it doesn't help that it seems like something I heard about a long time ago, but as far as I know, when you start to drown, the best of your intentions are overcome by your instinct for self-preservation. However, Google turns up a result from the Telegraph about a recent case in which someone may indeed have drowned himself in a bucket of water, although there seems to be some confusion over the case. Thanks for calling me on it -- I really am now, in fact, not certain I couldn't.
I couldn't think of a better example at the time, though, so the spirit of the argument will have to stand in for its questionable veracity.
Even if no examples of this were available, it's not the kind of evidence that is enough to claim that something is impossible.
You're right, and I won't argue it. The idea of "not impossible" is one I have difficulty with, though. In my original post, replace [the drowning example] with [something my brain won't let me do], for lack of a better alternative. With anosognosia, that thing is "recognize left-arm paralysis". The reason I didn't stick with that is that I don't know whether I have anosognosia or not, which is another layer of uncertainty. Stripped down, though, this is what I'm saying: it seems I should be uncertain about things I know to be certain, and that seems dishonest. I understand the argument against infinite certainty, and that 0 And 1 Are Not Probabilities. Perhaps it's because, as EY suggests, people often say "I can't be certain" simply to establish themselves as rational rather than actually assessing probability. Perhaps it's simply because I dislike an infinitely uncertain universe. Of course, the universe isn't interested in what I like. The map, as ever, is not the territory.
You should say that something is impossible, without intending that to mean zero probability, if you can safely antipredict that event. Antiprediction means that you think of an event as if it can't happen. Intuition resulting from thinking of a sufficiently low-probability event as impossible is more accurate than intuition resulting from thinking of it as still possible.
Antiprediction is a very interesting suggestion. Your aggressive reasoning in this thread has changed the way I think about a few things. Well done, and thanks!
That was precisely my first reaction, and in fact I originally wrote that admitting you can't be certain your left arm isn't paralyzed may seem like a stronger defense against an accusation of anosognosia than claiming infinite certainty. However, I realized that saying you can't be absolutely certain your left arm is not paralyzed while absolutely denying that it is seems like a pretty obvious contradiction. The very reason we're talking about anosognosia is that it is unique: you aren't saying "I'm not certain, but I don't think I'm paralyzed," but "I'm not certain, but I completely reject any evidence that I'm paralyzed."
I don't fully understand the condition, though. Would a Less Wrong reader with anosognosia be able to realize he had it if you confronted him on the notion, not that he is paralyzed, but that he is totally rejecting the evidence instead of exhibiting real uncertainty? Difficult to wrap my head around.
I did study logic for a while, though, and it gave me an unfortunate predilection for resolving to certainty when I should at least be providing reasonable probability bounds.
You are arguing by definition (little can be learned that way), and throwing in infinite certainty. I doubt an anosognosic believes that it's impossible for them to be paralyzed more than I believe that 2+2=4, or that there is no God. Maybe that belief isn't even that strong, the only problem with it being that it won't go away in the face of counterevidence.
I doubt an anosognosic believes that it's impossible for them to be paralyzed more than I believe that 2+2=4, or that there is no God.
I don't know if that's correct, actually.
Anosognosia seems to be a symptom of a catastrophic failure of the brain's ability to reconsider current beliefs in light of new evidence; these systems are apparently localized to the right hemisphere, which is why you won't find anosognosiacs with paralyzed right arms, only left.
If a god descended from the heavens and spoke to you, personally, declaring existence and providing myriad demonstrations of divine power, I expect you would reconsider your belief at least a little bit. Anosognosiacs routinely deny equally compelling evidence for the paralysis of their arm!
If a god descended from the heavens and spoke to you, personally, declaring existence and providing myriad demonstrations of divine power, I expect you would reconsider your belief at least a little bit.
Given what you already know about the world (including the possibility of insanity and simulations), how much evidence should be necessary to convince you in that situation? A subjective year? One hundred? One thousand? More? Once you've already decided that you're insane or in a simulation with probability X, I can't see how any evidence of anything would be useful if you already assign less than probability X to that thing. It's a local minimum you can't escape from, as far as I can see. One reason I'm not especially anti-religion is that I think that at least some theists are in the same position: there's no evidence that is more likely to be real evidence that there is no god than it is to be evidence of testing by fallen angels, or whatever.
But maybe I've just missed the excellent discussion about this?
The ability to update on evidence is different from level of certainty. If I'm absolutely certain about something, I accept any bet on it being true. If I'm merely unable to update my belief but I'm not absolutely certain, I will only accept moderate bets, but I'll do that long after any reasonable trust in the statements should've been eliminated by the evidence.
Okay, true. I was thinking about it backwards; absolute certainty does, of course, lead to an inability to update (which is why we don't use 1 and 0 as probabilities).
Out of curiosity, which proposition do you have higher confidence in: "No being fitting the standard definitions of God exists" or "My left arm is not paralyzed"?
I don't know: these probabilities are not technically defined, so I'm unable to compute them, and they are too low for my intuition to compare.
To satisfy SoullessAutomaton's curiosity, I think phrasing it differently would have been better: which one would you bet on (say, $100) if you had to bet and could only pick one? (Assuming that both questions would get truthfully answered immediately after making the bet; this is just so that you wouldn't pick one of them merely because the question seems more interesting.)
This is a trivial transformation; I don't see how it could change the interpretation of the question.
Funny, I was just reading the arguing by definition article, then clicked the red envelope and saw your reply. I looked it up because the post I just made here reminded me of it as well. However, I feel justified in this instance because anosognosia is characterized by absolute denial. As far as I can tell, this is an unusual form of brain damage because it is so black and white; 100% of anosognosics will absolutely deny their left arm is paralyzed. If they do not, it by definition (oops) is not anosognosia, just as someone with a paralyzed left arm by definition cannot move it. Consequently, I don't see the fallacy. I genuinely appreciate the criticism, though.
In any case, I avoided arguing the question, which itself is predicated on anosognosics assigning zero probability to their left arm being paralyzed. If they don't, then there is nothing to base our probability estimate on, and the question is meaningless, like asking "There are a hundred trees, one in ten trees has an apple on it, how many apples are there? (Some apples are oranges)".
Incidentally, I assign a probability of 1 to 2+2=4 and don't understand why you would not. Can you explain?
It seems that if I were an anosognosic, I would make some observations only indirectly related to the affected limb that they don't seem impaired at making. For instance, even if I didn't notice that I was typing with one hand, I would probably remember having been to the hospital, or hearing a friend point out that one of my limbs looks awfully limp (even if I didn't know what they were talking about). Since I haven't made such observations, my probability that I'm anosognosic (at least about a limb or anything that observable) is lower than the prior.
1 in 10 million is almost low enough for me to feel annoyed that this article exists.
And people actually play the lottery!
I guess I can imagine someone being awakened by your analogy. It would be swell if this article actually had enough millions of readers that we might experience some spectacularly delusional one-handed comments.
Does the absence of people around me pointing at my arm and insisting it does not move, combined with my belief that I have done plenty of activities in which I used two arms, mean I am an extreme anosognosic, one who rewrites massive quantities of one-armed experiences into two-armed experiences on the fly?
Hrm... I wanted to say 10^-7, but then I realized I actually do have evidence that is different from the type of false justifications associated with anosognosia.
The latter, as I understand it, tends to be associated with making excuses for why one isn't moving the arm, etc... I notice though that I actually can move my arms. That is, I observe myself waving both arms freely, typing using both hands, etc.
I'd have to spend a bit more time to work out reasonable numbers, but these observations are different from "making excuses for why one can't move the arm in question".
See Added: "This interests me because it seems to be a special case of the same general issue discussed in The Modesty Argument and Robin's reply Sleepy Fools - when pathological minds roughly similar to yours update based on fabricated evidence to conclude they are not pathological, under what circumstances can you update on different-seeming evidence to conclude that you are not pathological?"
Given the odds of 1 in 10 million, I'm confident that not only am I free of this condition, but no one I've ever met has it either. I'm not even going to take a typing test to prove that I still type at 70wpm (which may be possible to do with one hand, but I don't think anyone could learn it without being conscious of having done so). I will reevaluate this conclusion if a doctor (or anyone, really) tells me they think my left arm is paralyzed.
EDIT: As for an exact number, I am > 99% confident about this, which is high enough that it is not worth it to calculate a better estimate.
Some evidence supporting me having both fully functional hands:
Can anosognosia patients remember being in a hospital, being told that they are paralyzed or failing at a simple two-handed task? I don't remember any of those things. I'm even writing this last sentence while holding my keyboard up in the air.
From the popularity of the "Strangest thing an AI could tell you" post, and anosognosia tidbits in general, this topic seems to fascinate many people here. I for one would find it freakishly interesting to discover that I had such an impairment. In other words, I'd have motivation to at least genuinely investigate the idea, and even accept it.
How I'd come to accept it would probably involve a method other than just "knowing it intuitively", like how I intuitively know the face of a relative to be that of a relative, or how I know with utter, gut-level certainty that I have three arms. Considering that we are, well, rationalists, shouldn't we be able to use methods other than our senses and intuitions to discover truth? Even if the truth is about ourselves, and contradicts our personal feelings?
After all, it's not as if people in the early 20th century had observed tiny pictures of atoms; they deduced their existence from relatively nonintuitive clues glued together into a sound theoretical framework. Observing nature and deducing its laws has often been akin to being blind, and yet managing to find your way around by indirect means.
If I had to guess, I'd still not be certain that being a rationalist who uses scientific methods and all those tools that help straighten chains of inference, and who finds anosognosia more of a treat than a pain to be rationalized away, would make it a sure bet that I don't still retain a blind spot.
Maybe the prospect of some missing things would be too horrid to behold, no matter how abstractly; or perhaps beholding them would require me to think in a way that's just too complicated, abstract, and alien for me to ever notice them as something salient, let alone comprehensible.
Still, that's really not what my intuition would lead me to believe, what with truth being entangled and so forth. And such a feeling, such an intuition, may be exactly part of the problem of why and how I'd fail to pay attention to such an impairment. Perhaps I just don't want to know the truth, and willingly look away each time I can see it. Then again, if we're talking about rationalization and lying to oneself, that has a particular feeling, and that is something one could be able to notice.
Evidence that I have anosognosia: my left arm doesn't seem to be paralysed to me; if it did, I'd be sure I don't have anosognosia.
Evidence that I don't have anosognosia: the title of this post (see Betteridge's law). :-)
My estimated probability is too low to consider alone without additional evidence. I like this probability. If you require a number, 1/10,000,000.
Now, I currently appear to be typing with both hands, and there is nobody else in sight. Does this evidence lower the probability that my left arm is paralyzed as much as you might expect? If my left arm is paralyzed, there's a good chance that this is just a rationalization.
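A small sketch of that worry in code (the likelihoods here are made-up assumptions, not data): if confabulation would usually produce the very same subjective report, the report barely shifts the posterior away from the base rate; only if anosognosiacs would rarely produce it does it count as strong evidence.

```python
# Bayes update for P(paralyzed & anosognosic | "I seem to be typing with both hands").
# All likelihoods below are illustrative assumptions.

def posterior(prior, p_report_if_paralyzed, p_report_if_healthy=1.0):
    """Posterior probability of being paralyzed given the subjective report."""
    joint_paralyzed = prior * p_report_if_paralyzed
    joint_healthy = (1.0 - prior) * p_report_if_healthy
    return joint_paralyzed / (joint_paralyzed + joint_healthy)

prior = 1e-7  # base rate suggested in the post

# If an anosognosiac would rarely confabulate this exact report, the update is strong:
print(posterior(prior, p_report_if_paralyzed=0.01))  # ~1e-9

# If confabulation would usually produce the very same report, it barely moves:
print(posterior(prior, p_report_if_paralyzed=0.9))   # ~9e-8
```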
See if you can think of a statement S that contradicts a statement of high probability such that no observable phenomenon is evidence against it. (Of course, nothing will be evidence for it, either.)
let's say the base rate is 1 in 10,000,000 individuals
Supposing this to be the prior, what is your estimated probability that your left arm is currently paralyzed?
1 in 10,000,000
Very low, because I can (and just did, to make sure) easily contrive situations where the visible present is determined by the recent past presence of two arms and where I would have been forced to strain credulity if I were justifying a negative result. In this case, the experiment was dropping two things at once causing two other objects to roll and meet in the middle (and not on one side, indicating an asymmetric drop).
Of course, it's absolutely inconceivable that, knowing what you expected, you turned toward the point where they met...
So extend the causal chain a little. After a certain number of consequences-of-consequences, it can't be a trick of the eye, not in an integrated and causal world, you would have to have a fully convincing daylight hallucination or be faced with explaining a negative result.
Someone with anosognosia would just launch into a contrived explanation. So a positive result is as conclusive a clean bill of health as any analysis using human senses.
Followup to: The Strangest Thing An AI Could Tell You
Brain damage patients with anosognosia are incapable of considering, noticing, admitting, or realizing even after being argued with, that their left arm, left leg, or left side of the body, is paralyzed. Again I'll quote Yvain's summary:
A brief search didn't turn up a base-rate frequency in the population for left-arm paralysis with anosognosia, but let's say the base rate is 1 in 10,000,000 individuals (so around 670 individuals worldwide).
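(As a quick check on that figure, assuming a world population of roughly 6.7 billion: 6.7×10^9 × 10^-7 ≈ 670.)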
Supposing this to be the prior, what is your estimated probability that your left arm is currently paralyzed?
Added: This interests me because it seems to be a special case of the same general issue discussed in The Modesty Argument and Robin's reply Sleepy Fools - when pathological minds roughly similar to yours update based on fabricated evidence to conclude they are not pathological, under what circumstances can you update on different-seeming evidence to conclude that you are not pathological?