
Bayes for Schizophrenics: Reasoning in Delusional Disorders

86 Post author: Yvain 13 August 2012 07:22PM

Related to: The Apologist and the Revolutionary, Dreams with Damaged Priors

Several years ago, I posted about V.S. Ramachandran's 1996 theory explaining anosognosia through an "apologist" and a "revolutionary".

Anosognosia, a condition in which extremely sick patients mysteriously deny their sickness, occurs during right-sided brain injury but not left-sided brain injury. It can be extraordinarily strange: for example, in one case, a woman whose left arm was paralyzed insisted she could move her left arm just fine, and when her doctor pointed out her immobile arm, she claimed that was her daughter's arm even though it was obviously attached to her own shoulder. Anosognosia can be temporarily alleviated by squirting cold water into the patient's left ear canal, after which the patient suddenly realizes her condition but later loses awareness again and reverts back to the bizarre excuses and confabulations.

Ramachandran suggested that the left brain is an "apologist", trying to justify existing theories, and the right brain is a "revolutionary" which changes existing theories when conditions warrant. If the right brain is damaged, patients are unable to change their beliefs; so when a patient's arm works fine until a right-brain stroke, the patient cannot discard the hypothesis that their arm is functional, and can only use the left brain to try to fit the facts to their belief.

In the almost twenty years since Ramachandran's theory was published, new research has kept some of the general outline while changing many of the specifics in the hopes of explaining a wider range of delusions in neurological and psychiatric patients. The newer model acknowledges the left-brain/right-brain divide, but adds some new twists based on the Mind Projection Fallacy and the brain as a Bayesian reasoner.


INTRODUCTION TO DELUSIONS

Strange as anosognosia is, it's only one of several types of delusions, which are broadly categorized into polythematic and monothematic. Patients with polythematic delusions have multiple unconnected odd ideas: for example, the famous schizophrenic game theorist John Nash believed that he was defending the Earth from alien attack, that he was the Emperor of Antarctica, and that he was the left foot of God. A patient with a monothematic delusion, on the other hand, usually only has one odd idea. Monothematic delusions vary less than polythematic ones: there are a few that are relatively common across multiple patients. For example:

In the Capgras delusion, the patient, usually a victim of brain injury but sometimes a schizophrenic, believes that one or more people close to her has been replaced by an identical imposter. For example, one male patient expressed the worry that his wife was actually someone else, who had somehow contrived to exactly copy his wife's appearance and mannerisms. This delusion sounds harmlessly hilarious, but it can get very ugly: in at least one case, a patient got so upset with the deceit that he murdered the hypothesized imposter - actually his wife.

The Fregoli delusion is the opposite: here the patient thinks that random strangers she meets are actually her friends and family members in disguise. Sometimes everyone may be the same person, who must be as masterful at quickly changing costumes as the famous Italian actor Fregoli (inspiring the condition's name).

In the Cotard delusion, the patient believes she is dead. Cotard patients will neglect personal hygiene, social relationships, and planning for the future - as the dead have no need to worry about such things. Occasionally they will be able to describe in detail the "decomposition" they believe they are undergoing.

Patients with all these types of delusions1 - as well as anosognosiacs - share a common feature: they usually have damage to the right frontal lobe of the brain (including in schizophrenia, where the brain damage is of unknown origin and usually generalized, but where it is still possible to analyze which areas are the most abnormal). It would be nice if a theory of anosognosia also offered us a place to start explaining these other conditions, but this is something Ramachandran's idea fails to do. He posits a problem with belief shift: going from the originally correct but now obsolete "my arm is healthy" to the updated "my arm is paralyzed". But these other delusions cannot be explained by simple failure to update: delusions like "the person who appears to be my wife is an identical imposter" never made sense. We will have to look harder.

ABNORMAL PERCEPTION: THE FIRST FACTOR

Coltheart, Langdon, and McKay posit what they call the "two-factor theory" of delusion. In the two-factor theory, one problem causes an abnormal perception, and a second problem causes the brain to come up with a bizarre instead of a reasonable explanation.

Abnormal perception has been best studied in the Capgras delusion. A series of experiments, including some by Ramachandran himself, demonstrate that Capgras patients lack a skin conductance response (usually used as a proxy for emotional reaction) to familiar faces. This meshes nicely with the brain damage pattern in Capgras, which seems to involve the connection between the face recognition areas in the temporal lobe and the emotional areas in the limbic system. So although the patient can recognize faces, and can feel emotions, the patient cannot feel emotions related to recognizing faces.

The older "one-factor" theories of delusion stopped here. The patient, they said, knows that his wife looks like his wife, but he doesn't feel any emotional reaction to her. If it was really his wife, he would feel something - love, irritation, whatever - but he feels only the same blankness that would accompany seeing a stranger. Therefore (the one-factor theory says) his brain gropes for an explanation and decides that she really is a stranger. Why does this stranger look like his wife? Well, she must be wearing a very good disguise.

One-factor theories also do a pretty good job of explaining many of the remaining monothematic delusions. A 1998 experiment shows that Cotard delusion sufferers have a globally decreased autonomic response: that is, nothing really makes them feel much of anything - a state consistent with being dead. And anosognosiacs have lost not only the nerve connections that would allow them to move their limbs, but the nerve connections that would send distress signals and even the connections that would send back "error messages" if the limb failed to move correctly - so the brain gets data that everything is fine.

The basic principle behind the first factor is "Assume that reality is such that my mental states are justified", a sort of Super Mind Projection Fallacy.

Although I have yet to find an official paper that says so, I think this same principle also explains many of the more typical schizophrenic delusions, of which two of the most common are delusions of grandeur and delusions of persecution. Delusions of grandeur are the belief that one is extremely important. In pop culture, they are typified by the psychiatric patient who believes he is Jesus or Napoleon - I've never met any Napoleons, but I know several Jesuses and recently worked with a man who thought he was Jesus and John Lennon at the same time. Here the first factor is probably an elevated mood (working through a miscalibrated sociometer). "Wow, I feel like I'm really awesome. In what case would I be justified in thinking so highly of myself? Only if I were Jesus and John Lennon at the same time!" A similar mechanism explains delusions of persecution, the classic "the CIA is after me" form of disease. We apply the Super Mind Projection Fallacy to a garden-variety anxiety disorder: "In what case would I be justified in feeling this anxious? Only if people were constantly watching me and plotting to kill me. Who could do that? The CIA."

But despite the explanatory power of the Super Mind Projection Fallacy, the one-factor model isn't enough.

ABNORMAL BELIEF EVALUATION: THE SECOND FACTOR

The one-factor model requires people to be really stupid. Many Capgras patients were normal intelligent people before their injuries. Surely they wouldn't leap straight from "I don't feel affection when I see my wife's face" to "And therefore this is a stranger who has managed to look exactly like my wife, sounds exactly like my wife, owns my wife's clothes and wedding ring and so on, and knows enough of my wife's secrets to answer any question I put to her exactly like my wife would." The lack of affection vaguely supports the stranger hypothesis, but the prior for the stranger hypothesis is so low that it should never even enter consideration (remember this phrasing: it will become important later.) Likewise, we've all felt really awesome at one point or another, but it's never occurred to most of us that maybe we are simultaneously Jesus and John Lennon.

Further, most psychiatric patients with the deficits involved don't develop delusions. People with damage to the ventromedial area suffer the same disconnection between face recognition and emotional processing as Capgras patients, but they don't draw any unreasonable conclusions from it. Most people who get paralyzed don't come down with anosognosia, and most people with mania or anxiety don't think they're Jesus or persecuted by the CIA. What's the difference between these people and the delusional patients?

The difference is the right dorsolateral prefrontal cortex, an area of the brain strongly associated with delusions. If whatever brain damage broke your emotional reactions to faces or paralyzed you or whatever spared the RDPC, you are unlikely to develop delusions. If your brain damage also damaged this area, you are correspondingly more likely to come up with a weird explanation.

In his first papers on the subject, Coltheart vaguely refers to the RDPC as a "belief evaluation" center. Later, he gets more specific and talks about its role in Bayesian updating. In his chronology, a person damages the connection between face recognition and emotion, and "rationally" concludes the Capgras hypothesis. In his model, even if there's only a 1% prior of your spouse being an imposter, if there's a 1000 times greater likelihood of you not feeling anything toward an imposter than to your real spouse, you can "rationally" come to believe in the delusion. In normal people, this rational belief then gets worn away by updating based on evidence: the imposter seems to know your spouse's personal details, her secrets, her email passwords. In most patients, this is sufficient to have them update back to the idea that it is really their spouse. In Capgras patients, the damage to the RDPC prevents updating on "exogenous evidence" (for some reason, the endogenous evidence of the lack of emotion itself still gets through) and so they maintain their delusion.
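Coltheart's illustrative numbers can be checked directly. Here is a minimal sketch in odds-form Bayes using only the figures mentioned above (the 1% prior and the 1000:1 likelihood ratio); the contrary-evidence numbers at the end are invented for illustration, and none of this code comes from Coltheart's papers:

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
# Numbers are the illustrative ones from the text: a 1% prior on
# "imposter", and an absent emotional response that is 1000x more
# likely given an imposter than given the real spouse.

def update(prob, likelihood_ratio):
    odds = prob / (1 - prob)
    odds *= likelihood_ratio
    return odds / (1 + odds)  # convert back to a probability

p = update(0.01, 1000)  # the endogenous evidence alone
print(round(p, 2))      # 0.91 - the delusion initially looks "rational"

# A healthy reasoner then updates on exogenous evidence (shared
# secrets, personal details), each piece being far likelier if she
# really is the spouse, and the belief washes back out:
for lr in (1 / 100, 1 / 100):  # two pieces of strong contrary evidence
    p = update(p, lr)
print(p < 0.01)  # True - now below even the original prior
```

On Coltheart's account, the Capgras patient performs the first update but, with a damaged RDPC, never performs the second.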

This theory has some trouble explaining why patients are still able to update about other situations, but Coltheart speculates that maybe the belief evaluation system is weakened but not totally broken, and can deal with anything except the ceaseless stream of contradictory endogenous information.

EXPLANATORY ADEQUACY BIAS

McKay makes an excellent critique of several questionable assumptions of this theory.

First, is the Capgras hypothesis ever plausible? Coltheart et al. pretend that the prior is 1/100, but this implies a base rate of your spouse being an imposter one out of every hundred times you see her (or perhaps that one out of every hundred people has a fake spouse), either of which is preposterous. No reasonable person could entertain the Capgras hypothesis even for a second, let alone for long enough that it becomes their working hypothesis and develops immunity to further updating from the broken RDPC.

Second, there's no evidence that the ventromedial patients - the ones who lose face-related emotions but don't develop the Capgras delusion - once had the Capgras delusion but then successfully updated their way out of it. They just never develop the delusion to begin with.

McKay keeps the Bayesian model, but for him the second factor is not a deficit in updating in general, but a deficit in the use of priors. He lists two important criteria for reasonable belief: "explanatory adequacy" (what standard Bayesians call the likelihood ratio; the new data must be more likely if the new belief is true than if it is false) and "doxastic conservatism" (what standard Bayesians call the prior; the new belief must be reasonably likely to begin with given everything else the patient knows about the world).

Delusional patients with damage to their RDPC lose their ability to work with priors and so abandon all doxastic conservatism, essentially falling into what we might term the Super Base Rate Fallacy. For them the only important criterion for a belief is explanatory adequacy. So when they notice their spouse's face no longer elicits any emotion, they decide that their spouse is not really their spouse at all. This does a great job of explaining the observed data - maybe the best job it's possible for an explanation to do. Its only minor problem is that it has a stupendously low prior - but this doesn't matter, because they are no longer able to take priors into account.
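McKay's two criteria can be phrased as a toy computation. In this sketch, all probabilities are invented; it simply contrasts a reasoner who weighs prior times likelihood against one who scores hypotheses on explanatory adequacy alone:

```python
# Toy contrast between a healthy Bayesian and a "Super Base Rate
# Fallacy" reasoner who scores hypotheses by likelihood alone.
# All numbers are made up for illustration.

hypotheses = {
    # name: (prior, P(no emotional response to spouse's face | hypothesis))
    "really my spouse":   (0.999999, 0.001),
    "identical imposter": (0.000001, 0.99),
}

def best_by_posterior(hs):
    # doxastic conservatism AND explanatory adequacy: prior * likelihood
    return max(hs, key=lambda h: hs[h][0] * hs[h][1])

def best_by_likelihood(hs):
    # explanatory adequacy only - priors are inaccessible
    return max(hs, key=lambda h: hs[h][1])

print(best_by_posterior(hypotheses))   # really my spouse
print(best_by_likelihood(hypotheses))  # identical imposter
```

The same evidence, scored two ways, yields the sane conclusion or the Capgras delusion depending only on whether the prior enters the computation.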

This also explains why the delusional belief is impervious to new evidence. Suppose the patient's spouse tells personal details of their honeymoon that no one else could possibly know. There are several possible explanations: the patient's spouse really is the patient's spouse, or (says the left-brain Apologist) the patient's spouse is an alien who was able to telepathically extract the relevant details from the patient's mind. The telepathic alien imposter hypothesis has great explanatory adequacy: it explains why the person looks like the spouse (the alien is a very good imposter), why the spouse produces no emotional response (it's not the spouse at all) and why the spouse knows the details of the honeymoon (the alien is telepathic). The "it's really your spouse" explanation only explains the first and the third observations. Of course, we as sane people know that the telepathic alien hypothesis has a very low base rate plausibility because of its high complexity and violation of Occam's Razor, but these are exactly the factors that the RDPC-damaged2 patient can't take into account. Therefore, the seemingly convincing new evidence of the spouse's apparent memories only suffices to help the delusional patient infer that the imposter is telepathic.

The Super Base Rate Fallacy can explain the other delusional states as well. I recently met a patient who was, indeed, convinced the CIA were after her; of note she also had extreme anxiety to the point where her arms were constantly shaking and she was hiding under the covers of her bed. CIA pursuit is probably the best possible reason to be anxious; the only reason we don't use it more often is how few people are really pursued by the CIA (well, as far as we know). My mentor warned me not to try to argue with the patient or convince her that the CIA wasn't really after her, as (she said from long experience) it would just make her think I was in on the conspiracy. This makes sense. "The CIA is after you and your doctor is in on it" explains both anxiety and the doctor's denial of the CIA very well; "The CIA is not after you" explains only the doctor's denial of the CIA. For anyone with a pathological inability to handle Occam's Razor, the best solution to a challenge to your hypothesis is always to make your hypothesis more elaborate.
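The elaboration dynamic can be sketched the same way for the CIA case: to a likelihood-only reasoner, the most elaborate conspiracy scores best, while anyone who can still use priors prefers the mundane hypothesis. Every probability below is invented for illustration:

```python
# Why arguing backfires: for a likelihood-only reasoner, elaborating
# the hypothesis ("...and the doctor is in on it") restores perfect
# explanatory adequacy at a prior cost she can no longer perceive.
# All numbers are invented.

evidence = ("extreme anxiety", "doctor denies the CIA is after me")

models = {
    "CIA is not after me":
        {"prior": 0.9999, "extreme anxiety": 0.05,
         "doctor denies the CIA is after me": 0.99},
    "CIA is after me":
        {"prior": 1e-4, "extreme anxiety": 0.9,
         "doctor denies the CIA is after me": 0.5},
    "CIA is after me AND the doctor is in on it":
        {"prior": 1e-7, "extreme anxiety": 0.9,
         "doctor denies the CIA is after me": 0.99},
}

def adequacy(h):          # likelihood of the evidence alone
    return models[h][evidence[0]] * models[h][evidence[1]]

def posterior_score(h):   # prior * likelihood
    return models[h]["prior"] * adequacy(h)

print(max(models, key=adequacy))         # the elaborated conspiracy
print(max(models, key=posterior_score))  # CIA is not after me
```

Each challenge can be met by another epicycle that keeps the likelihood near 1 while the prior collapses - a cost invisible to a reasoner who cannot use priors.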

OPEN QUESTIONS


Although I think McKay's model is a serious improvement over its predecessors, there are a few loose ends that continue to bother me.

"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?

Likewise, how come delusions are so specific? It's impossible to convince someone who thinks he is Napoleon that he's really just a random non-famous mental patient, but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried). But him being Alexander the Great is also consistent with his observed data and his deranged inference abilities. Why decide it's the CIA who's after you, and not the KGB or Bavarian Illuminati?

Why is the failure so often limited to failed inference from mental states? That is, if a Capgras patient sees it is raining outside, the same process of base rate avoidance that made her fall for the Capgras delusion ought to make her think she's been transported to the rainforest or something. This happens in polythematic delusion patients, where anything at all can generate a new delusion, but not in those with monothematic delusions like Capgras. There must be some fundamental difference between how one draws inferences from mental states versus everything else.

This work also raises the question of whether one can consciously use System II Bayesian reasoning to argue oneself out of a delusion. It seems improbable, but I recently heard about an n=1 personal experiment by a rationalist with schizophrenia who successfully used Bayes to convince themselves that a delusion (or possibly hallucination; the story was unclear) was false. I don't have their permission to post their story here, but I hope they'll appear in the comments.

FOOTNOTES


1: I left out discussion of the Alien Hand Syndrome, even though it was in my sources, because I believe it's more complicated than a simple delusion. There's some evidence that the alien hand actually does move independently; for example it will sometimes attempt to thwart tasks that the patient performs voluntarily with their good hand. Some sort of "split brain" issues seem like a better explanation than simple Mind Projection.

2: The right dorsolateral prefrontal cortex also shows up in dream research, where it tends to be one of the parts of the brain shut down during dreaming. This provides a reasonable explanation of why we don't notice our dreams' implausibility while we're dreaming them - and Eliezer specifically mentions he can't use priors correctly in his dreams. It also highlights some interesting parallels between dreams and the monothematic delusions. For example, the typical "And then I saw my mother, but she was also somehow my fourth grade teacher at the same time" effect seems sort of like Capgras and Fregoli. Even more interestingly, the RDPC gets switched on during lucid dreaming, providing an explanation of why lucid dreamers are able to reason normally in dreams. Because lucid dreaming also involves a sudden "switching on" of "awareness", this makes the RDPC a good target area for consciousness research.

Comments (152)

Comment author: Kawoomba 13 August 2012 06:47:42AM 24 points [-]

Reminded me of The Three Christs of Ypsilanti:

To study the basis for delusional belief systems, [psychologist] Rokeach brought together three men who each claimed to be Jesus Christ and confronted them with one another's conflicting claims, while encouraging them to interact personally as a support group. Rokeach also attempted to manipulate other aspects of their delusions by inventing messages from imaginary characters. He did not, as he had hoped, provoke any lessening of the patients' delusions, but did document a number of changes in their beliefs.

While initially the three patients quarreled over who was holier and reached the point of physical altercation, they eventually each explained away the other two as being mental patients in a hospital, or dead and being operated by machines.

Comment author: Risto_Saarelma 13 August 2012 07:33:11AM 15 points [-]

Would a neurologist who has thus far been immersed daily with the fact that all brains can fail in all sorts of interesting ways be hit just as bad with these delusions if given brain damage as someone who might have operated all their life under a sort of naive realism that makes no difference between reality and their brain's picture of it? What about a philosopher with no neurological experience but with a well-seated obsession with the map not being the territory?

Comment author: someonewrongonthenet 14 August 2012 03:11:52AM *  44 points [-]

Had to make an account to answer this one, since I can give unique insight

I'm an atypical case in that I had the Capgras Delusion (along with Reduplicative Paramnesia) in childhood, rather than as an adult. The delusions started sometime around 6-9 years of age. I hid it from others, partly because I halfway knew it was ridiculous, partly because I didn't want to let out that I was on to them...and it caused me quite a bit of anxiety, because I felt like I lost my loved ones and slipped into parallel universes every few days. I would try to keep my eyes on my loved ones, because as soon as I looked away and looked back the feeling that something was different would return.

Sometime around 12-14, I realized how implausible it was for any kind of impostor to conduct such a large-scale conspiracy, and how implausible it was that I was slipping into parallel universes. I told my parents what I was experiencing and admitted it was irrational. I forced myself to ignore the feeling every time it came (though it still bothered me). Eventually around 17 the feeling stopped bothering me altogether, although little twinges still occurred from time to time.

I'm currently in what I would consider to be above average mental health, and many years later I learned the name of the delusions that had plagued me as a child. Prior to identifying them as monothematic delusions, I had thought that imposters and parallel universes might simply be a gifted child's equivalent of monsters under the bed. My parents thought it was from reading/watching too much fiction. I never suspected a neurological disorder until years later.

I'm not sure if I was able to see past the delusion because I'm an atypical case (no known brain injury), because I was a child, because my brain healed via biological mechanism, or because I'm intelligent...but I can tell you that my memory of the event involves me figuring out that the delusion was improbable and consciously working to bring it to an end.

So unless my memories are false (it was a long time ago) or I am engaging in mis-attribution, the answer to your question is that yes, in some cases it would be possible for someone to use rational thinking to overcome this kind of disorder.

Comment author: Risto_Saarelma 14 August 2012 04:52:58AM 14 points [-]

This is yet again a different scenario, but very interesting, thanks! It does occur to me now that there might be adult trauma patients who can see through the delusion, and never get diagnosed with it, since they don't start raving about impostor family members but just go, whoa, brain seems messed, better go see the stroke doctor.

Comment author: Vulture 25 January 2014 09:40:52PM 0 points [-]

This raises the obvious question: Could training in Bayesian reasoning effectively increase the insight of delusional patients?

Comment author: smk 20 August 2012 02:39:50PM *  4 points [-]

Some strangely common childhood beliefs:
Everyone except you is a robot
Your life is like the Truman Show

Comment author: RomanDavis 20 August 2012 04:03:46PM 2 points [-]

I occasionally entertained ideas like that in the back of my mind. Truman Show, Teachers are aliens, Parents somehow know everything/everything about me and are just fucking with me in the way that Zeus would to test character, except over a much longer Santa Claus/Jesus-esque period of time, the mothman is watching me, there are invisible monsters/demons all around me and I need to be very sneaky not to be seen.

I'm not sure I believed them, exactly. Maybe I did. Maybe I didn't. I still do the same stuff sometimes, with equally weird things. Whenever I start halfway believing in god, or that a track of thought in my brain giving arbitrary commands is the voice of God, I just start doing experiments against the rest of reality till the shadow of belief goes away, since they never line up with testable reality. I've never had actual hallucinations, though, as far as I know.

Comment author: Rickasaurus 21 August 2012 04:05:05AM 1 point [-]

For me it was that I suspected I was the robot. Never told anyone though.

Comment author: someonewrongonthenet 20 August 2012 10:38:46PM 0 points [-]

Hehe...to be honest I half-believed those too...not that everyone was a robot, but that everyone was a philosophical zombie. It wasn't until high school that I figured out that for all intents and purposes, I'm a philosophical zombie too.

But in my opinion, those really ARE normal childhood beliefs that are not the result of any neuropathology... beliefs that many philosophers still entertain in the form of solipsism.

Comment author: RichardKennaway 20 August 2012 05:26:50PM 0 points [-]

Some strangely common childhood beliefs:
Everyone except you is a robot

Do those turn into these when they grow up?

Comment author: RichardKennaway 13 August 2012 12:13:48PM 5 points [-]

Jill Bolte Taylor has provided a case study. She is a neuroscientist who had a stroke. Her experience is recounted in her TED talk and her book.

Comment author: MaoShan 17 August 2012 03:51:18AM 8 points [-]

I have read the book (I recently received it from an elderly friend who hoarded books--I picked through about $20,000 worth of books and chose several hundred dollars worth), and it started off interesting, to hear of her personal experience of the stroke and its accompanying mind-states. She seems to have fought her way through various delusions, but not with any more success than other examples cited here. Yes, she is/was a neuroscientist. She also proudly proclaims that she tells her bowels "Good job! I am so thankful that you do exactly what you are meant to do!" every time she takes a dump, and concluded the book with some painfully New Age-y exhortations which gave me the same urge to roll around frothing at the mouth that I often experienced with clearly delusional Christian preachers in church.

Comment author: Risto_Saarelma 13 August 2012 04:22:30PM 1 point [-]

The Amazon page for the book doesn't describe her getting any of the sort of very specific delusions described in the OP though, just general debilitation and paradoxical feelings of euphoria.

Comment author: RichardKennaway 13 August 2012 05:08:34PM 0 points [-]

It's the closest we're likely to get, though, given the rarity of both neurologists and anosognosias.

Comment author: Risto_Saarelma 13 August 2012 06:35:37PM 1 point [-]

Well, neurologists are rare, but I think we do know how to create targeted brain lesions that can cause pretty specific symptoms.

Comment author: faul_sname 13 August 2012 08:09:10PM 6 points [-]

Any volunteers?

Comment author: [deleted] 14 August 2012 05:23:22PM *  1 point [-]

I might. Anybody got $20,000,000?

Comment author: faul_sname 14 August 2012 11:45:34PM 6 points [-]

Well, if we're going there I'll do it for $10M.

Comment author: MBlume 13 August 2012 06:02:38AM *  24 points [-]

Where are we on selectively/temporarily/safely de-activating brain regions? Magnetic field to the RDPC sounds like it'd be <s>fantastically fun at parties</s> extremely informative under the right circumstances.

Comment author: Cosmos 14 August 2012 09:41:51PM 6 points [-]

I had the exact same thought myself back in 2008, so I asked an experimental psych professor about this. At the time, he said that the TMS devices we had were somewhat wide-area and also induced considerable muscle activation. This doesn't matter very much when studying the occipital lobe, but for the prefrontal cortex you basically start scrunching up the person's face, which is fairly distracting. Maybe worth trying anyway.

I've wanted to get my hands on a TMS device for years. Building one at home does not seem particularly feasible, and the magnetism involved is probably dangerous for nearby metal/electronics...

Comment author: jimmy 15 August 2012 11:13:45PM 8 points [-]

Building one at home does not seem particularly feasible, and the magnetism involved is probably dangerous for nearby metal/electronics...

A few minutes on Google makes this seem very unlikely.

I'm scared as hell to induce currents in my brain without knowing the neurobiology of it, but I do understand the electrical engineering half, so if you want an electromagnet and driver, I'll help you build one.

Comment author: MaoShan 17 August 2012 03:55:20AM 1 point [-]

I had a very similar thought while reading this post. I have the Shakti system, maybe this weekend I'll target my RDPC with various frequencies and see what happens.

Comment author: MaoShan 24 August 2012 02:23:39AM 1 point [-]

Follow-up: I didn't experience anything outside of the typical Shakti effects for me (a feeling similar to a strong nicotine buzz); however, there are many variables to tweak before I declare it a wash. I'll continue to experiment and post the final results somewhere.

Comment author: Yvain 15 August 2012 02:35:49AM 1 point [-]

I don't know the technical differences between TMS and TDCS, but http://flowstateengaged.com/ looks promising.

Comment author: Cosmos 15 August 2012 04:08:06PM 3 points [-]

TDCS isn't depolarizing neurons with magnetism, it doesn't disable brain regions at all. Instead it is running a direct current across them. This appears to permanently increase or decrease its level of excitability. o_O

Comment author: Luke_A_Somers 13 August 2012 06:26:58PM 21 points [-]

Note to self: Do not attend any party organized by MBlume without making sure that all participants have signed an iron-clad NDA in advance.

Comment author: Kawoomba 13 August 2012 07:01:16PM 7 points [-]

Don't worry, what happens in la la land stays in la la land.

Comment author: magfrump 14 August 2012 09:30:36PM 6 points [-]

Note to self: Always sign NDAs associated to parties thrown by MBlume.

Comment author: Eugine_Nier 14 August 2012 10:44:22PM 0 points [-]

Personally, I'm wondering how to use these as brainwashing devices. And then use my brainwashed slaves to TAKE OVER THE WORLD. BWAHAHAHAHAHA.

Sorry, got carried away there for a second. In any case, do you know where I can get my hands on one of these things?

Comment author: scav 15 August 2012 09:13:45AM 1 point [-]

I think safely de-activating that part of your brain while you are still awake and able to act on your beliefs is a contradiction in terms. I'd want an experienced psychiatric nurse present, personally. And a million quid.

Comment author: handoflixue 22 August 2012 01:45:39AM 0 points [-]

"Magnetic field to the RDPC sounds like it'd be..."

... fairly similar to high doses of psychedelics...?

Comment author: CronoDAS 14 August 2012 02:12:12AM 8 points [-]

A similar mechanism explains delusions of persecution, the classic "the CIA is after me" form of disease. We apply the Super Mind Projection Fallacy to a garden-variety anxiety disorder: "In what case would I be justified in feeling this anxious? Only if people were constantly watching me and plotting to kill me. Who could do that? The CIA."

My mom (a psychiatrist) was listening to a continuing education program on schizophrenia, and the lecturer said that schizophrenia tends to develop slowly, and in stages; before a person ends up with delusions of persecution, they usually start out by feeling intense fear and anxiety that they can't come up with any explanations for.

Comment author: kentastic 15 August 2012 06:07:45AM 1 point [-]

Yes, it can develop slowly, but also fast as hell, depending on what pulled the trigger. It's pretty relative, and it varies from person to person.

Also, schizophrenia is not "one single" disease or diagnosis; it's really many diagnoses grouped under "schizophrenia". Very complicated and rare.

And just because you're delusional doesn't mean you're immediately schizophrenic.

Comment author: juliawise 24 August 2012 02:06:02AM 2 points [-]

Not that rare. ~1%.

Comment author: David_Gerard 13 August 2012 08:54:30AM *  8 points [-]

I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?

Off the top of my head: that people believe what their brain tells them above any outside evidence, cf. religious conversion originating from what, to the outside view, is clearly a personal delusion - but, from the inside view, is incontrovertible evidence of God.

It takes very good and clear thinking for the lens to actually see its flaws even when you don't have brain damage to the bit that evaluates evidence. I'm somewhat surprised a rationalist with schizophrenia actually managed this. Though TheOtherDave has mentioned being able to work out that a weird perception was almost certainly due to the stroke he was recovering from, and Eliezer mentions someone else managing it as well.

Comment author: CronoDAS 14 August 2012 02:19:24AM 8 points [-]

John Nash claimed that he recovered from schizophrenia because "he decided to think rationally" - but this only happened after he took medications, so...

Comment author: [deleted] 14 August 2012 05:26:15PM 2 points [-]

religious conversion from what to the outside view is clearly a personal delusion but from the inside view is incontrovertible evidence of God

For what it's worth, in order to understand the syntax of this phrase, I had to start over about five times.

Comment author: David_Gerard 14 August 2012 08:17:27PM 1 point [-]

Commas added!

Comment author: gwern 13 August 2012 01:56:47AM 7 points [-]

This provides a reasonable explanation of why we don't notice our dreams' implausibility while we're dreaming them - and Eliezer specifically mentions he can't use priors correctly in his dreams.

Have I ever mentioned my theory that it may be partially due to overloaded working memory?

Comment author: Gabriel 13 August 2012 01:44:57AM 7 points [-]

"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?

Maybe it's really hard to really get that you are a brain on an intuitive level. Human intuitions seem to be pretty dualistic (well, at least mine do). So 'you have brain damage' doesn't sound very explanatory unless you've spent a lot of time convincing yourself that it should.
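Gabriel's first point can be made explicit in Bayesian terms: if the impostor hypothesis and the brain-damage hypothesis predict the patient's strange experiences equally well, the likelihood terms cancel and the posterior ratio collapses to the prior ratio. A toy sketch, with every probability invented purely for illustration:

```python
# Two hypotheses with equal explanatory adequacy for the Capgras experience.
# With equal likelihoods, Bayes' rule reduces to comparing the priors.

priors = {"impostor": 1e-9, "brain damage": 1e-4}    # invented prior probabilities
likelihoods = {"impostor": 0.9, "brain damage": 0.9}  # equally good fits to the data

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

# The likelihoods cancel, so "brain damage" should win by the prior ratio,
# roughly 100,000 : 1 -- yet Capgras patients don't make this update.
```

The numbers don't matter; the point is that on any remotely sane priors the brain-damage hypothesis should dominate, which makes the patients' indifference (or hostility) to it the thing needing explanation.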

By the way, the last link is broken.

Comment author: Cosmos 14 August 2012 09:58:24PM *  6 points [-]

Yvain, it seems like some of this is potentially answered by how this interacts with other cognitive biases present.

Re: specific delusions, when you have an entire class of equally-explanatory hypotheses, how do you choose between them? The availability heuristic! These hypotheses do have to come from somewhere inside the neural network after all. You could argue that availability is a form of "priors", but these "priors" are formed on the level of neurons themselves and not a specific brain region: some connection strengths are stronger than others.

I would not wish brain damage on anyone, but should one of us have that unfortunate circumstance befall us I would be extremely inclined to go talk to them. I am so damn curious what this feels like from the inside! I am somewhat embarrassed to admit that the thought of having to build completely new neural connections to get around existing damage sounds like an insanely interesting challenge...

I also wonder about reasoning our way out of delusional states. The closest parallel that most people have access to would be the use of various psychoactives. I have heard multiple reports of people who have reasoned their way out of delusional conclusions on cannabinoid agonists and 5-HT2A agonists (and dopamine agonists, with lesser evidence).

The most difficult challenge would appear to be kappa opioid agonism, a dissociative state induced by the federally-legal herb Salvia divinorum. Most users report being unaware they ingested a substance at all, no awareness of having a body, and no concept of self-identity, coincident with extreme perceptual distortions. I am no longer clear what Bayesian reasoning would even look like for some points in mindspace.

Edit: I thought of another relevant state: delirium induced by anticholinergics. Unlike 5-HT2A agonists where people do not confuse perceptual distortions for reality, in delirious states people do routinely believe that what they are perceiving is actually occurring. Unfortunately these states are widely regarded as unpleasant, and no rationalist I know personally has experimented with sufficiently large doses of anticholinergics.

Comment author: MaoShan 17 August 2012 03:21:17AM 2 points [-]

I can give some personal anecdotes regarding salvia if you are interested. If I had to come up with a rationalist explanation of the experience, I would say that the affected consciousness accepts, without question, fantastically generated priors as absolute truth, largely ignores actual external sensory input, and even then modifies that input to fit the delusion.

Comment author: OnTheOtherHandle 19 August 2012 04:01:57AM *  2 points [-]

Do you think it's at all feasible for someone under the influence of salvia to record their thoughts as they occur? Would it help if they did so often? For example, I write a stream-of-consciousness monologue every day on 750words.com. Would I be competent enough to write down what I'm thinking while under the influence of the drug?

If you lose all sense of self, would you still be able to understand the concept of another person? For example, I wonder if it would be possible for someone under the influence of salvia to answer questions about their mental state.

Considering the description, I'd guess that even if you were physically capable of talking to someone or writing down your experiences, you probably wouldn't be inclined to, am I right? Or if you did speak, you wouldn't be aware of it.

Sorry if these questions are intrusive; I'm very curious about this sort of thing.

Comment author: MaoShan 20 August 2012 04:37:56AM 6 points [-]

Considering that I provoked the questions, I don't consider them intrusive. First of all, due to the extreme distortion of sense of time, the whole episode may occur in less time than it would take to have a useful conversation. However, I have very vivid memories of my stream of consciousness--maybe one of the main reasons that it makes such an impression is that one remembers the whole thing, even if it is difficult to put into words. I'll recount here a few such memories; this is from quite a few years back, but many facets of it changed my mindset.

First of all, I began to feel a little bit dizzy and noticed a kind of echoing effect in the ambient sounds around me. Soon afterward, I got the impression that I was sweating profusely from my temples, and I reached up to see if it was only a feeling, or if it was actually sweat, but could not reliably analyze my hands, due to a sort of increasing pulsation feeling, like when you get up from sleep and straight into a brightly lit bathroom, but involving all of my senses. I began having difficulty moving around, due to the sensation that "down" was now where "north" used to be, so I had to sit down on the floor to avoid falling out the back door (I use the word sensation in an objective sense; at the time I truly believed that gravity had turned ninety degrees). Continuing to sit on the floor while feeling like I was pressed to the floor by centrifugal force, I became aware of whispering sounds all around the room. I discovered that the room, and all of waking life, was filled with ghosts, whispering to each other and observing the living.

Two things to clarify here: The sweating-temples feeling happened consistently. In this and other descriptions, many times "to hear" something also implies "to see" something, and yet it was not photonically visible. The best I can describe it is like a subliminal HUD. Maybe more like what I imagine one might sense with echolocation. It's knowing the shape of something without seeing it. Also, I am completely aware that these impressions were hallucinations, but actually experiencing them changed my thinking even so.

Another time, I "saw" that all words were made of the word EGGERHEXE. It was as if the air were filled with a 3-dimensional crossword puzzle, and at least one letter of every printed word intersected with EGGERHEXE perpendicularly (as in, only the place where they intersected was visible, making up our dimension of words; all the EGGERHEXEs were in another spatial dimension). Even if a word did not have any letters contained in EGGERHEXE, the curves from the "G" or the straight lines from the "X" were the intersection point. And most special of all, of course, was the printed word EGGERHEXE, which was fully and exclusively made of the intersecting versions of itself, making it the "word within words", which entered an infinite recursion. The whole time I was observing this, I heard "EGGERHEXE" whispered constantly.

Comment author: Multiheaded 20 August 2012 08:43:04AM 3 points [-]

Another time, I "saw" that all words were made of the word EGGERHEXE... <snip>

Heh. This is a lot like how Erik Davis describes Jewish mystics viewing the Torah as a compressed encoding of all possible texts ever, and the Tetragrammaton, YHWH, as the source of all the words in the Torah.

Comment author: MaoShan 21 August 2012 03:23:54AM 2 points [-]

Now we know what they were smoking!

Comment author: Multiheaded 21 August 2012 08:46:46AM *  2 points [-]

Yeah, not exactly - Salvia divinorum is native to Mexico - but I've read scholars implying that the Middle Eastern mystics often used psychoactive mushrooms in addition to generic techniques like prayer and fasting.

Comment author: chaosmage 18 September 2012 09:21:46PM 1 point [-]

Isn't it much more likely they were brain-damaged in a more permanent way? Religious people who use psychoactives tend to openly praise their drugs much like they praise their gods (think soma, peyote, ayahuasca) - Middle Eastern mystics didn't do that. And with malnutrition, rampant child abuse and almost no health care, there's bound to have been enough brain damage around.

Comment author: MaoShan 22 August 2012 01:26:47AM 1 point [-]

That is also implied in The Transmigration of Timothy Archer. Or, maybe some of the Nephites returned to Jerusalem with a stash...

And of course, as Risto_Saarelma mentions in a comment further down, it may be possible to attain similar states through mental exercises without benefit of pharmaceutical remedies.

Comment author: OnTheOtherHandle 20 August 2012 04:49:10AM 1 point [-]

Thanks! That sounds fascinating, if scary. Did any of these experiences affect your beliefs and actions while sober? I've heard of people having life-changing revelations on LSD, for example, although I'd be skeptical of the accuracy of any beliefs suddenly revealed to people while tripping.

I can easily imagine more subtle and potentially helpful behavioral changes, though.

Comment author: MaoShan 21 August 2012 03:38:26AM 2 points [-]

That was something I failed to get across in my reply, I guess. I feel like I owe a part of my mental composition today to those experiences. I mean, imagining infinity is not the same as experiencing infinity, and even though it was internally generated, the memories and impressions and rewired synapses are very real. I was fully aware when the effects wore off that it was not "revealed knowledge", but it exposed me to viewpoints and thoughts that I might not otherwise have had access to. My description of the events was my flow of thoughts during the events, not my "usual" philosophy. On a side note, as a child I had the unfortunate combination of truth-seeking and logic, and a strong neurological tendency toward magical thinking. Perhaps my familiarity with walking the line between Spock and Q gave me the ability to interpret the otherworldly impressions with quiet detachment, while simultaneously benefiting from the sense of wonder they conveyed.

Comment author: evand 20 August 2012 05:29:26AM 3 points [-]

I have had mild but long-lasting effects from revelations under the influence of MDMA and 2C-E. The revelations were personal, not about the nature of reality. I would say that they could generally be described as resulting from a reduced avoidance of thinking about things that I already had plenty of information on, and had basically positive results. Both took some time to integrate afterwards, and the 2C-E trip was at times a somewhat unpleasant look at myself. The MDMA trip was unambiguously pleasant at the time, even considering that I spent time thinking about some fairly unpleasant stuff.

Comment author: Mitchell_Porter 20 August 2012 05:20:23AM 0 points [-]

LSD is a source of metaphysical spectacle and entertainment, not of edification. It will give you a lot to think about, but it's not a source of answers, and I mildly recommend against it if you value intellectual achievement.

Comment author: Risto_Saarelma 20 August 2012 06:43:46AM *  3 points [-]

I've understood the claims of LSD therapy to be mostly about fixing psychological hang-ups, like the recent research claim that it helps with alcoholism. This is mostly a separate direction from both entertainment and intellectual achievement. Of course psychological well-being can indirectly lead to more intellectual achievement, and an altered psychological outlook can change the set of hypotheses you will entertain as the starting point for intellectual work. No idea whether the post-LSD hypothesis pool will necessarily be better than the pre-LSD one. If it's larger, then it might help discover some unlikely ideas that actually do pan out when you take the time to think through them off-LSD.

Incidentally, there are some interesting anecdotes that deep meditative states achieved by long-term meditators resemble the states you end up in on LSD. At least MCTB alludes to this.

Comment author: evand 20 August 2012 05:24:51AM 1 point [-]

My personal experience with salvia is limited (2 times, one much more intense than the other), but here are my thoughts.

I don't think I would want to try to record a salvia experience while it was occurring. While I found the experience interesting, valuable, and rewarding, it was also scary, intimidating, and awe-inspiring. It is not something I would want interrupted by things like conversation or writing. The time dilation might well be too profound for that to even work well. Also, I found noises, light, and rapid changes in sense input to be distracting. Having other people move about the room was... scary. I did not experience the extreme disconnect with reality some people describe, but it was a different mindspace in a way that all other substances I've tried were not. Doing anything other than experiencing it to the fullest would seem inappropriate.

(It's possible many of these problems would fade with repeated use. I would consider such a result disappointing, and have no particular desire to attempt to produce it.)

In contrast, I would be happy to talk with anyone about the experience while on any of the other substances I've taken. Depending on mood, I might feel anxious or nervous about talking to someone who was sober, especially if they had no personal experience or were someone I did not know well. Some experiences I've had would make writing about them difficult, because of distractibility, visual distortion, a tendency to stop and stare at the beauty of the pencil eraser, etc. Others would be easier. DiPT might be easier to write about than talk about; auditory distortion makes conversation difficult / distracting.

Have you read any books on the subject? There are many good ones out there. I could recommend a couple if you'd be interested, though I haven't read much (or partaken of the substances) lately.

Comment author: OnTheOtherHandle 20 August 2012 05:45:46AM *  1 point [-]

Thanks for the response! :) As you can probably tell, I'm trying to decide whether it's worth my while to dabble in psychoactive drugs.

I'm not actually very curious about having the experience itself; it sounds scary and disorienting. I would, however, be willing to endure that scary experience if it's likely to teach me something interesting and important about myself, which is why I asked about recording it. I'm teaching myself lucid dreaming and meditation in the hopes that I'll become better aware of my own personal quirks/subconscious obsessions, for example. A sudden, massive shift in perception might help bring things to the forefront that I had avoided addressing before.

In your experience, do drugs as a whole actually help with that, given that I'm not all that interested in the experience for its own sake?

Edit: Actually, real science books would be even better, thanks. I previously avoided drugs because Drugs Are Bad, then because Drugs Are Dangerous, and now I figure I ought to do an accurate cost-benefit analysis. And because I'm biased to think drugs are awful things which awful people partake in, I should explicitly seek out some empirically supported benefits.

Comment author: evand 20 August 2012 02:03:00PM *  2 points [-]

You're welcome!

I would say my experiences with Salvia were somewhat scary and disorienting, but not problematically so. I'm not quite sure how to describe what I mean here, but "scary" should be a very minor part of the description. I certainly felt no need to do anything about it at the time, or surprise that I didn't need to after the fact. Think scary as in "I go rock climbing, and looking down makes me a bit nervous". Except without the adrenaline, and otherwise in a completely different emotional context. I hope that puts it in perspective. Anyway, personally, I would not recommend starting with salvia, though I know a couple people that did exactly that and had good things to say about it.

I would say that drugs can help with what you're asking about, but that it isn't guaranteed. Of course, I didn't go into it hoping for such results at all, so it's probably far more likely that you'll get what you're looking for than not, imho. Set and setting matter a lot. On a related note, if you go into your experience expecting it to be scary, well, you'll probably get what you wished for. Basically, I think you should do this because you're expecting to enjoy the experience, and I think that's an entirely reasonable expectation. I'd also add that my description of salvia as being slightly scary does not apply to any other substance I've taken.

For starters on reading, I would suggest Phenethylamines I Have Known And Loved (aka PiHKAL) by Alexander Shulgin, and its sequel TiHKAL (Tryptamines ...). Alexander Shulgin is a scientist and basically rational thinker, with a strong interest in the human mind. He's a synthetic organic chemist, and personally invented, synthesized, and took what might literally be a majority of the synthetic psychedelics known.

Comment author: Mitchell_Porter 20 August 2012 05:28:22AM 0 points [-]

s/salvia/saliva/g for fun.

Comment author: Yvain 15 August 2012 02:39:59AM 2 points [-]

Availability heuristic seems related, but still doesn't explain why delusions are so much more fixed than ordinary conclusions.

I think dreams are also a good parallel for psychosis, but it's hard to tell how good without having been psychotic.

Comment author: Cosmos 16 August 2012 12:53:19AM 1 point [-]

To continue with the bias theme, how about confirmation bias? They settle on the most available theory that fits all the facts, and then it becomes part of their identity and they begin to rally the soldiers. Is their delusion that they are Jesus really that much less sticky than someone's political party?

Comment author: prase 16 August 2012 07:48:04PM 1 point [-]

Seems unlikely. First, confirmation bias has its limits and normally is never capable of beating direct observational evidence. Second, people basing their identity on their being Jesus sounds like a plausible idea, but identity based on the fact that my arm isn't paralysed not that much. Third, it takes some time to associate one's own identity and status feelings with an idea - one doesn't become a political partisan overnight - while the anosognosic delusions emerge immediately after the brain is damaged (well, I suppose this is so, but I can easily be mistaken here).

Comment author: TheOtherDave 16 August 2012 08:02:25PM 5 points [-]

identity based on the fact that my arm isn't paralysed not that much

I dunno. During the period after my stroke where I was suffering from partial right-side paralysis, a lot of the emotional suffering I experienced could reasonably be described as caused by having my identity as a person whose arm wasn't paralyzed challenged. I would probably say "self-image" instead of "identity", granted, but I'm not sure the difference is crisp.

Comment author: prase 16 August 2012 08:21:43PM 1 point [-]

Interesting. Did thinking about the paralysis feel similar to (learning a good argument against your favourite political ideology / seeing your favourite sports team lose / listening to an offensive but true remark made by your enemy / any situation in which you fell victim to confirmation bias)?

Comment author: TheOtherDave 17 August 2012 12:23:37AM 0 points [-]

It did not feel especially similar to any of the examples you list.
The general case is harder to think about... I'm not sure.

Comment author: TruePath 14 August 2012 09:55:21PM 14 points [-]

All of the theories presented in this post seem to make the implausible assumption that the brain acts like a hypothetical ideally rational individual and that impairment breaks some aspect of this rationality.

However, there is a great deal of evidence that the brain works nothing like this. In contrast, it has many specific modules that are responsible for certain kinds of thought or behavior. These modules are not weighed by some rational actor that sifts through them; they are the brain. When these modules come into conflict, e.g., in the standard word/color (Stroop) test where the word "yellow" is printed in red, fairly simple conflict resolution methods are brought into play. When things go wrong in the brain, whether through an impairment in the conflict resolution mechanisms or in the underlying modules themselves, things will go wonky in specific (not general) ways.

Speaking from personal experience, being in a psychotic/paranoid state simply makes certain things seem super salient to you. You can be quite well aware of the rational arguments against the conclusion you are worrying about, but it's just so salient that it 'wins.' In other words, it feels like there is a failure in your ability to override certain misbehaving brain processes rather than some general inability to update appropriately. This is further supported by the fact that schizophrenics and others with delusions seem able to update largely appropriately in certain respects, e.g., about what answer is expected on a test, while still maintaining their delusional state.

Comment author: prase 16 August 2012 08:06:51PM *  3 points [-]

This is generally a good comment, but I think the views of the original post and your comment are actually pretty similar. For example, seeing the brain as a rational Bayesian agent is compatible with the modular view. One module might store beliefs, another might be responsible for forming new candidate beliefs on the basis of sensory input, another module may enforce consistency and weaken beliefs which don't fit in... The "rational actor that sifts through [the modules]" could easily be embodied by one or several of the modules themselves. Whether this is a good model is a more complicated question (it certainly isn't perfect since we know people diverge from the Bayesian ideal quite regularly), but it is not implausible.

Comment author: OnTheOtherHandle 19 August 2012 04:44:45AM 2 points [-]

However, even if there are modules that try to form accurate beliefs about some things or even most things (and there probably are), it's still true that taken in aggregate, your various brain modules push you to have beliefs that would be locally optimal in the evolutionary ancestral environment, not necessarily true. Many modules in our brain push us toward believing things that would be praised, avoiding things that would be condemned or ridiculed, etc.

It's too costly to be a perfect deceiver, so evolution hacked together a system where if it's consistently beneficial to your fitness for others to believe you believe X, most of the time you just go ahead and believe X.

In large realms of thought, especially far mode beliefs, political beliefs, and beliefs about the self, the net result of all your modules working together is that you're pushed toward status and social advantage, not truth. Maybe there aren't even any truth-seeking modules with respect to these classes of belief. Maybe we call it delusion when your near-mode, concrete anticipations start behaving like your far-mode, political beliefs.

Comment author: Vaniver 13 August 2012 06:23:33PM *  5 points [-]

It seems improbable, but I recently heard about an n=1 personal experiment of a rationalist with schizophrenia who successfully used Bayes to convince themselves that a delusion (or possibly hallucination; the story was unclear) was false. I don't have their permission to post their story here, but I hope they'll appear in the comments.

I was under the impression that learning to recognize hallucinations was a standard component of schizophrenia therapy.

Comment author: Yvain 14 August 2012 01:41:34AM 0 points [-]

Therapists can very very carefully try to talk patients out of their delusions, but I've always heard of it as a complicated long-term process and I've never before heard of Bayes being used directly.

Comment author: metaphysicist 19 August 2012 10:40:54PM *  1 point [-]

You seem to be conflating the original schizophrenic state with the residual state after the patients get antipsychotic medication: the latter may be readily amenable to reason; with the former, the therapist would breach rapport with the patient by challenging full-blown delusions.

Medication is part of the standard treatment for schizophrenia--usually, the major part. Drawing conclusions about delusions from the residual states following treatment seems to shield you from what would be obvious had you observed unmedicated patients. Delusions aren't failures of Bayesian rationality: typically, they involve accepting a few self-evident priors, and these are driven by intense affect.

Comment author: Marcy_Azraelle 15 August 2012 09:05:01AM 11 points [-]

It is embarrassing to admit but I used to think I really had dog ears and a tail until I was about 16.

Well, at least older students found it completely adorable when I made noises...and the school authorities thought I was like smart or something and didn't really care either.

I don't really know the cause; I don't remember knowing about kemonomimi until a bit later, but I had delusions not only of seeing these body parts on myself but also of feeling them. I thought I broke my tail once, for example.

Comment author: handoflixue 22 August 2012 01:43:28AM 4 points [-]

For what it's worth, the "Super Base Rate Fallacy" seems to line up with my own experiences, except that there's sometimes an independent part of my mind that can go "Okay, I have 99.999% confidence that the floor will eat us. But what are the actual odds of that confidence, and what evidence did I use to reach it?". While I can't just dismiss the absurd confidence value as absurd, I can still (sometimes) do a meta-evaluation of the precise confidence.

It's sort of like how if a friend says that global warming is 99.99% likely to be true, I can't simply rewrite my friend to have 50% confidence. But I can question the evidence and see how he reached his conclusion, and if it's just "oh, I read a newspaper article that said it was real", my actual confidence will be vastly lower.

I only recently figured out this trick (and suspect LessWrong probably helped me develop it), so I couldn't say why it sometimes works and sometimes doesn't. I can say it's much harder to ignore paranoia about people, and much easier to ignore anything that would be easily objectively checked ("the floor will eat me", step on to floor, "the floor failed to eat me. Falsified!")
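The meta-evaluation handoflixue describes, refusing to adopt a reported confidence directly and instead asking what evidence produced it, can be sketched as a Bayesian update in odds form. A minimal illustration, with the likelihood ratios invented for the example:

```python
def update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A friend reports 99.99% confidence in a claim. Rather than copying that
# number, ask how much likelier their report is if the claim is true than
# if it is false. "I read one newspaper article" is weak evidence: suppose
# such a report is only 3x likelier given a true claim.
weak = update(prior=0.5, likelihood_ratio=3)     # 0.75
# A report backed by a survey of the literature carries far more weight:
strong = update(prior=0.5, likelihood_ratio=99)  # 0.99
```

The point of the sketch is just that the same stated confidence gets very different weight depending on the process behind it; the hard part in practice is choosing the likelihood ratio honestly.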

Comment author: quen_tin 13 August 2012 08:41:40PM *  4 points [-]

I wonder if the same mechanisms could be involved in conspiracy theorists. Their way of thinking seems very similar. I also suspect a reinforcement mechanism: it becomes more and more difficult for the subject to deny his own beliefs, as it would require abandoning large parts of his present (and coherent) belief system, leaving him with almost nothing left.

This could explain why patients are reluctant to accept alternative explanations afterwards (such as "you have brain damage").

Comment author: scav 15 August 2012 08:56:36AM 3 points [-]

It seems to me that many people who believe extremely improbable conspiracy theories may well have undiagnosed brain damage. But you probably couldn't get most of them to agree to come in for a brain scan.

Comment author: Kaj_Sotala 14 August 2012 10:48:30AM *  3 points [-]

Feedback: I thought that this post was interesting and at times quite amusing. However, I didn't upvote (nor downvote) because I felt that the concerns you discussed under the open questions section were serious enough that this post could basically be summed up as "here are some theories which feel like they might be on the right track, but basically we're still clueless".

Comment author: orthonormal 19 August 2012 06:35:26PM 6 points [-]

I want to see more posts that explain the current state of knowledge of interesting rationality-related fields, and that explicitly state what questions are still troubling. Thus I upvoted the post.

Comment author: torekp 14 August 2012 12:40:56AM 3 points [-]

Do you have any evidence of brain damage in schizophrenia that isn't explainable by drug use (including antipsychotics especially) and is fairly common among schizophrenics?

Regarding arguing oneself out of delusion, cognitive therapy for schizophrenia has a decent track record. More info on request, after my wife gets home (she's a psychologist).

Comment author: Yvain 14 August 2012 01:38:34AM 2 points [-]

See for example http://www.schizophrenia.com/research/schiz.brain.htm on structural brain damage. For functional brain damage, read the above-linked paper by McKay where he starts talking about change in patterns of prediction error signal activation in the right prefrontal cortex.

Comment author: torekp 16 August 2012 10:18:54AM 4 points [-]

Here's a better source (PDF), link-chained from yours.

On brain changes due to drug use:

Medication-Matched Subjects. To address the possibility that neuroleptic exposure and/or lower IQ could have determined differential gray matter loss in the schizophrenics, we mapped 10 serially imaged subjects referred to the childhood schizophrenia study who did not meet diagnostic criteria for schizophrenia (labeled Psychosis NOS - Not Otherwise Specified - in DSM terms; (24)). These subjects received identical medication to the patients in this study through adolescence, primarily for control of aggressive outbursts, and at follow-up, none had progressed to schizophrenia (35) but all continued to exhibit chronic mood and behavior disturbance. While medication is unlikely to be responsible for a loss profile that moves across the brain, clozapine, for example, may increase Fos-immunoreactivity in the thalamus (36), and might, logically, modulate rates of cortical change. (In addition, brain regions important for motor function, including the basal ganglia, show increased volumes in response to some older, conventional neuroleptics, although these effects are renormalized after treatment with the atypical antipsychotics used in this study). As seen in Figure 6, while the non-schizophrenic group did show some subtle but significant tissue loss, this was much less marked than for the schizophrenics. Moreover, no temporal lobe deficits were observed in the PNOS group (Fig. 6), suggesting that the wave of disease progression into temporal cortices may be specific to schizophrenia, regardless of medication, and also regardless of gender or IQ. Intriguingly, the psychosis NOS subjects, who share some of the deficit symptoms but do not satisfy criteria for schizophrenia, exhibited significantly accelerated gray matter loss in frontal cortices relative to healthy controls, in approximately the same, but a less pervasive, region than schizophrenics (a significant loss of 1.9%±0.7%/yr. was detected in both left and right superior frontal gyri; p<0.03).

So the answer to my question appears to be that drugs may or may not be doing some brain damage, but not nearly as much as the whole change seen in schizophrenia.

Comment author: Will_Newsome 13 August 2012 09:31:14PM *  3 points [-]

(Well-written post. There are more interesting subjects in the general 'schizophrenic reasoning' space though. If anyone ends up writing more on the subject I'd like if they sent me a draft; I know quite a bit, both theoretically and experientially.)

Comment author: Spurlock 13 August 2012 12:37:53PM 3 points [-]

but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried).

At the very least (pretending that there are no ethical concerns), it seems that you ought to be able to exaggerate a patient's delusions. "We ran some tests, and it turns out that you're Jesus, John Lennon, and George Washington!".

To this same question, I can't help but notice that the brain damage being discussed is right-side brain aka "revolutionary" brain damage. So if it turns out that it isn't possible to get a paranoid patient to switch from FBI to KGB, it might simply be a case of inability to discard hypotheses (it seems like the original delusion, the CIA, wouldn't count because for most of us "the CIA isn't following me" isn't an explicit belief). But then, I am not a neurologist or psychologist, so the pool of data I'm working with is 100% limited to that which has been written about by Yvain on LW :-)

Comment author: Yvain 14 August 2012 01:44:33AM *  9 points [-]

The patient who believes he is Jesus and John Lennon will pretty much agree he is any famous figure you mention to him, but he never seems to make a big deal of it, whereas those two are the ones he's always going on about.

Comment author: Alicorn 14 August 2012 01:52:09AM 6 points [-]

Are random people allowed to visit harmless psych patients with those patients' consent? This sounds fascinating.

Comment author: Tuukka_Virtaperko 24 August 2012 03:01:21AM 2 points [-]

Hehe. I'm a psych patient and I'm allowed to visit LessWrong.

Comment author: Alicorn 24 August 2012 03:38:43AM 2 points [-]

Do you have fascinating delusions you would like to let us try to do Bayes to?

Comment author: DaFranker 24 August 2012 03:55:11AM 0 points [-]

A better phrasing might be to contextualize it from someone else's viewpoint. The person having the "delusions" might not perceive them as such, and might not find them particularly fascinating at all.

Comment author: Alicorn 24 August 2012 06:35:05AM 0 points [-]

I think it was a fair response in context. I did write it tongue-in-cheek.

Comment author: juliawise 24 August 2012 02:13:06AM *  0 points [-]

Can you think of a way to do this that would not feel like a freak show? Psych hospitals are full of staff who actually need to talk to the patients, plus students and interns and the patients' friends and family who visit. Almost all the patients get tired of being asked how they're doing, since they have to explain it so many times a day to a lot of near-strangers. Introducing tourists seems like a bad plan.

Comment author: DaFranker 24 August 2012 03:53:55AM 0 points [-]

I think the idea was to find psych patients willing to speak with one or more Bayesians about whatever interesting beliefs that got them there in the first place, and let them furiously jot down notes and do all kinds of arcane math in the process.

Comment author: duckduckMOO 13 August 2012 02:27:20AM *  3 points [-]

"Coltheart et al pretend that the prior is 1/100, but this implies that there is a base rate of your spouse being an imposter one out of every hundred times you see her (or perhaps one out of every hundred people has a fake spouse) either of which is preposterous."

What if their prior on not feeling anything upon seeing their wife is 0? What if most of the reason reasonable people's prior on this is much lower is that the belief is low status, instrumentally bad, etc., while their rational, sincere prior is close to 50/50? I notice you called the idea preposterous and something reasonable people wouldn't take seriously, both of which are quite status-ey. So if their aversion to instrumentally bad ideas, and/or their aversion to ideas people will think them crazy for, gets switched off, they can easily get the wrong answer. Perhaps a fear of being fooled, or a fight-or-flight paranoia spiral, could be what makes them think so.

I have no idea if any of that is true.

Comment author: selylindi 14 August 2012 05:10:49PM 5 points [-]

Similarly, I think Coltheart's criticism described here was flawed because it made the prior too specific. How often do you see a person at a distance or facing away and you "recognize" them as a loved one, but then the person comes closer or turns around and you realize you were wrong? It's not often, but it happens enough that we all know that feeling of sudden non-recognition. I often see it in children who come up to me expecting to find their father. The prior odds don't have to be for "my wife" versus "an imposter"; they could be for "my wife" versus "not my wife". If that is the case, then the brain-damaged person uses the imposter theory to explain the general "not my wife" endogenous evidence.
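The difference between the two framings can be made concrete with a toy Bayesian update. All numbers below are illustrative assumptions, not estimates from Coltheart or any study; the point is only how much the choice of hypothesis (and hence prior) matters:

```python
# Toy Bayes update for the Capgras scenario.
# Evidence E: "no affective response on seeing this face".
# All probabilities are made-up illustrative values.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem for a binary hypothesis."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Framing 1 (imposter-specific): H = "this is an imposter".
# Actual imposters are vanishingly rare, so the prior is tiny.
p_imposter = posterior(prior=1e-6,
                       p_e_given_h=0.9,       # strangers rarely trigger affect
                       p_e_given_not_h=0.01)  # one's wife almost always does

# Framing 2 (selylindi's): H = "this is not my wife".
# Momentary non-recognition of a loved one is uncommon but familiar,
# so the prior is far less extreme.
p_not_wife = posterior(prior=0.01,
                       p_e_given_h=0.9,
                       p_e_given_not_h=0.01)

print(p_imposter)  # remains close to zero: the evidence cannot rescue a tiny prior
print(p_not_wife)  # close to even odds, before any damaged inference machinery
```

With these numbers, the absent affective response barely moves the imposter-specific hypothesis, but pushes the broader "not my wife" hypothesis near 50/50, which a damaged explanation-selection step could then resolve badly.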

Comment author: Dr_Manhattan 27 August 2012 12:50:28PM 2 points [-]
Comment author: SilasBarta 14 August 2012 10:31:33PM *  5 points [-]

For example, one male patient expressed the worry that his wife was actually someone else, who had somehow contrived to exactly copy his wife's appearance and mannerisms. This delusion sounds harmlessly hilarious ...

It's harmless to claim that someone is observationally equivalent to his wife, but not his wife? When that kind of thing happens on a large scale, it's called "the debate about p-zombies".

Comment author: duckduckMOO 15 August 2012 01:20:49PM 4 points [-]

Isn't claimed actual equivalence the problem with p-zombies? Someone being observationally equivalent but different is merely extremely unlikely (maybe she has an identical twin, maybe aliens, etc.). P-zombies are supposed to be indistinguishable in principle, which is impossible/requires souls that aren't subject to testing for distinguishability.

Comment author: RomanDavis 23 August 2012 10:08:56AM 1 point [-]

I don't think P Zombie debates are a great sign of rationality either, but I think the debate itself probably does nearly zero harm, if you don't count wasted time.

Comment author: SilasBarta 24 August 2012 12:04:43AM -2 points [-]

"If you don't count wasted time"? Okay, but likewise, if you don't count her husband getting shot, Mrs. Lincoln really enjoyed the play...

Comment author: shokwave 24 August 2012 12:12:56AM 2 points [-]

That's not likewise.

Comment author: SilasBarta 24 August 2012 12:39:04AM -1 points [-]

How so? A bunch of philosophers blowing valuable time on a worthless debate is a major harm, almost as if they were forcibly held in unemployment but drew the same resources from society.

Comment author: Eugine_Nier 14 August 2012 12:03:52AM *  3 points [-]

There must be some fundamental difference between how one draws inferences from mental states versus everything else.

Talking about "drawing inferences from mental states" strikes me as a case of the homunculus fallacy, i.e., thinking that there's some kind of homunculus sitting inside our brains looking at the mental states and drawing inferences. Whereas in reality mental states are inferences.

Comment author: TruePath 15 August 2012 09:57:07AM 2 points [-]

This objection points largely in the right direction but I don't think it's fair to accuse the view of adopting the homunculus fallacy. After all, the very suggestion is that our brains have circuitry that (in effect) performs Bayesian updating and that neurological damage and psychiatric conditions can cause this circuitry to misbehave. This is a way the brain could have worked. If the view adopted the homunculus fallacy then the Bayesian updating machinery couldn't, itself, be broken. It could only receive bad input.

However, as I delineate in my comment, we have every reason to believe the brain doesn't have anything like a Bayesian updating module exercising control over all the other brain modules. Instead, the empirical evidence suggests a much simpler structure in which different brain regions vie to control our actions without any arbitration by some master Bayesian updating module. Otherwise, one couldn't explain our inclination to answer wrongly on tests that pit one part of the brain against another, e.g., our mistakes in naming the ink color of a word that spells the name of another color (the Stroop effect).

Also, to be pedantic, the mental states aren't inferences. The mental states merely determine behavior patterns that we can (sometimes) usefully describe as making certain inferences.

Comment author: Yvain 14 August 2012 01:40:29AM 1 point [-]

Really? I don't see that at all. The same mental state can be both an inference and a premise for the next inference. For example, "I feel really tired lately -> Maybe I'm sick" seems pretty straightforward, as does "I am a guy and feel really attracted to other guys -> maybe I'm gay".

Comment author: Eugine_Nier 14 August 2012 02:07:47AM 1 point [-]

You're thinking of the inference as "I don't feel affection when I see her face -> She's not my wife". Whereas, another way to think about it is "Her face looks like [insert description of wife's face here] -> She's not my wife".

Comment author: Kaj_Sotala 14 August 2012 07:35:30AM 1 point [-]

You can have a module in a certain state and another module which draws an inference from that. No homunculus needed.

Comment author: Eugine_Nier 14 August 2012 10:38:35PM -1 points [-]

Module A doesn't "draw an inference" from the state of module B, that would require module A to have a sub-module dedicated to drawing inferences from module B and evaluating their reliability. Module A simply treats the output of module B as an inference of similar weight to the one it itself makes.

Comment author: dlthomas 21 August 2012 05:25:57PM 0 points [-]

But one or more drawing-inferences-from-states-of-other-modules module could certainly exist, without invoking any separate homunculus. Whether they do and, if so, whether they are organized in a way that is relevant here are empirical questions that I lack the data to address.

Comment author: ialdabaoth 07 November 2012 02:27:31AM *  2 points [-]

Prefrontal cortex damage can be really weird. I'd really like to see how these different syndromes manifest in an fMRI.

Contextual preface: my own brand of crazy tends to interfere with getting helped by professionals, so I've done a lot of amateur-level neurobiology research on my own, trying to pin it down. An "inability to update priors" does seem to be a component of it, but it seems primarily triggered by emotional intensity.

Anyone who would like to prod me with Science is extremely welcome to do so.

Comment author: Strange7 07 November 2012 03:01:29AM 1 point [-]

By what mechanism does it interfere with professional assistance?

Comment author: ialdabaoth 07 November 2012 03:11:18AM *  2 points [-]

Twofold:

  1. I tend to display resistance to authority of all kind (ESPECIALLY therapy), because as much as I try to behave as a rationalist, I appear to actually behave as if I believed that most human beings are strategizing explicitly to inflict maximum emotional harm on me, and that any human being who is "playing friendly" has a deeply sinister game that will either inflict maximum harm on me by either playing on my trustfulness ("haha! you thought I was trying to help you!") or playing on my lack of trust ("haha! I tricked you into distrusting a genuine path to getting better!"). I appear to believe that the question of which human beings want to befriend me, and which ones only want to trick me to inflict harm, is only determined after I have chosen who to trust. (Yes, I realize this is absurd.)

  2. I tend to shut down whenever I attempt to motivate to help myself, because as much as I try to behave as a rationalist, I appear to actually behave as if I believed that every choice I make will ALWAYS turn out - retroactively - to be the worst choice I could have made. (Yes, I realize this is absurd.)

Comment author: TimS 07 November 2012 04:25:46AM *  1 point [-]

You might look to structured social interactions to help fit your emotional reactions to your intellectual beliefs about social interactions. For example, board games have relatively limited variation in social interaction between people who rate you a 6 and those that rate you a 4 on a 10-point likeability scale. It's a chance to gain additional data at low risk. Look to places like Meetup.com (I'm not sure that's international). Boardgamegeek.com is a chance to see what you might like.

Regarding therapy, keep in mind that good fit between therapist and patient is very important. If you haven't gotten good value from therapy but are still willing to try it, finding a new therapist might yield benefit.

Comment author: ialdabaoth 07 November 2012 04:44:25AM 1 point [-]

You might look to structured social interactions to help fit your emotional reactions to your intellectual beliefs about social interactions. For example, board games have relatively limited variation in social interaction between people who rate you a 6 and those that rate you a 4 on a 10-point likeability scale. It's a chance to gain additional data at low risk.

Well, board games (and card games, and the like) run into a problem where I'm perceived as focused, smart, and competent, so everyone tends to team up to eliminate me quickly - so I tend to get a lot of people actually reinforcing the idea that groups conspire against me.

Regarding therapy, keep in mind that good fit between therapist and patient is very important. If you haven't gotten good value from therapy but are still willing to try it, finding a new therapist might yield benefit.

Yeah, back when I had money for therapy, I shopped around a lot. Anymore, well... you get what you pay for.

Comment author: Strange7 09 November 2012 10:44:02PM 2 points [-]

I'd recommend finding a game where the players are working together against an automated hostile environment, such as Zombicide. If it seems like you have a workable plan, the other players will go along with it out of self-interest if nothing else. (D&D /can/ work like that, but there are a lot of other tricky factors when it's a GM rather than a program)

As for emotional intensity... try to find some little ritual that relaxes you, like sitting still with your eyes closed and breathing slowly in and out ten times, and start doing it at semi-random times during the day. Once that becomes habitual, focus on remembering to go through the ritual whenever you start to get excited or upset. There is no plausible mechanism by which following these instructions as intended could cause kidney failure.

If self-improvement fails, what sorts of things do motivate you to act?

Absurdity is a tricky thing. Have you ever tried constructing an explicit formulation of your inferred emotional beliefs and (temporarily) acting as if it was an accepted part of your intellectual beliefs, with the goal of seeing it torn down?

Comment author: ialdabaoth 10 November 2012 04:56:39AM *  2 points [-]

I'd recommend finding a game where the players are working together against an automated hostile environment, such as Zombicide. If it seems like you have a workable plan, the other players will go along with it out of self-interest if nothing else. (D&D /can/ work like that, but there are a lot of other tricky factors when it's a GM rather than a program)

I've done stuff like this; in some situations, that works reasonably well, but in others I wind up sending out flags that I'm too low-status to "deserve" being listened to, no matter how reasonable or workable my plans are.

If self-improvement fails, what sorts of things do motivate you to act?

For a very long time, fear motivated me to act, but that wore out. After that, shame motivated me to act, but that's almost fully eroded. I don't know what I'll have once shame runs out.

Have you ever tried constructing an explicit formulation of your inferred emotional beliefs and (temporarily) acting as if it was an accepted part of your intellectual beliefs, with the goal of seeing it torn down?

I have done exactly and explicitly this - I got the idea, weirdly enough, from Aleister Crowley via Robert Anton Wilson. Unfortunately, I'm VERY good at crafting mindsets / "reality tunnels" and following them - consciously embracing my inferred emotional beliefs tends to reinforce them, not tear them down. I can enter a sort of "1984" mode where holding onto my beliefs is explicitly more important than my own survival, and relish in the self-destructivity that the absurdity of my beliefs is inflicting upon me.

Comment author: Strange7 11 November 2012 07:25:02PM 1 point [-]

Aha! In that case, possibly what you need is a code of honor. Lay down some rules of constructive behavior (I'd recommend studying a variety of historical precedents first, particularly the ways in which they can go wrong... Bushido, Ms. Manners, etc.) and pretend to be the sort of person who thinks that following those rules is the Most Important Thing.

Done correctly, you can stop worrying about the uncertainty of whether some other choice would have had a better outcome, since in any given situation there is only one honorable course of action. Simply calculate what the correct action is, and follow by rote. Under some circumstances honor may compel you to trust someone who most people would not, pass up opportunities for personal gain, dive into a frozen lake to rescue a complete stranger, openly defy the law, or otherwise engage in heroically self-destructive behavior, but it is entirely possible for the gains (from following a calculated strategy, and from other people learning to trust and rely on your consistent behavior) to predominate.

This may be controversial, but I would recommend against keeping an explicit, external record of how honorable or dishonorable your behavior has been. A journal or blog can be useful in other ways, but the plan here is eternal striving toward an ideal, not 3% improvement over last month.

Comment author: ialdabaoth 11 November 2012 08:32:30PM *  3 points [-]

I actually have a code of honor, and operate explicitly as if those rules are the Most Important Thing.

Rule 0 is "Should does not imply can; should only implies must." - or, put another way, "Just because you cannot do something does not excuse you for not having done it."

Rule 1 is "Always fulfill other peoples' needs. If two people have mutually exclusive needs, failing to perfectly fulfill both is abject failure."

Rule 2 is "All successes are private, all failures are public."

Rule 3 is "Behave as if all negative criticisms of you were true; behave as if all compliments were empty flattery. Your worth is directly the lower of your adherence to these rules and your public image."

Past 3 the rule-sorting gets fuzzier, but somewhere around rule 5 or 6 is "always think the best of people", around rule 7 is "It's wrong to win a challenge", somewhere around rule 10 is "losers suck".

Comment author: Strange7 13 November 2012 04:00:32AM 7 points [-]

Every rule I see there seems to be you shooting yourself in the foot. I was thinking of something which would produce exactly one correct course of action under most reasonable circumstances, whereas you seem to have quite rigorously worked out a system with fewer correct courses of action than that.

How comfortable are you with arbitrarily redefining your code, voluntarily but with external prompting? I mean, given the ambient levels of doom already involved.

Comment author: army1987 13 November 2012 01:03:30PM 1 point [-]

Rule 0 is this one, and Rule 1 is a subcase of it, but rules 2 and (especially) 3 wouldn't work for me -- I seem to function better when my status and (especially) my self-esteem are high than when they're low. And I don't understand Rule 7.

Comment author: Kindly 07 November 2012 05:46:39AM 0 points [-]

Well, board games (and card games, and the like) run into a problem where I'm perceived as focused, smart, and competent, so everyone tends to team up to eliminate me quickly - so I tend to get a lot of people actually reinforcing the idea that groups conspire against me.

You could play games where this is not something people can really do. For example, Settlers of Catan would be a bad choice, but Apples to Apples would be a good one.

Comment author: ialdabaoth 07 November 2012 09:05:04AM 1 point [-]

You could play games where this is not something people can really do. For example, Settlers of Catan would be a bad choice, but Apples to Apples would be a good one.

Is there a good way to make such games enjoyable?

Comment author: TimS 07 November 2012 05:42:04PM 1 point [-]

Let's remember that the purpose of this activity is to give you a safe opportunity to have social interactions. Hopefully, this will help you be more comfortable with the idea that other people do not interact with you for the purpose of causing you distress. To that extent, beware trivial inconveniences.

Still, losing is no fun - you might not be able to force yourself to keep doing something that only might be helpful but is not enjoyable. Fortunately, games have a variety of mechanics for preventing attack-the-leader dynamics based solely on player reputation.

First, you can anonymize player input. That's what Apples to Apples does. But it is a light party game (not my cup of tea).

Second, you can restrict the player's ability to target specific other players. Dominion works that way - generally, attacks target everyone at the table equally.

Third, you can pick games with much higher complexity. One of my favorite games, Brass, is at least an order of magnitude more complex than a simple game like Monopoly. You are unlikely to find that others target you simply because you are smart and analytical when being smart and analytical is almost a prerequisite to play. In fact, it might be worth some time looking at Boardgamegeek (warning: potential time-sink) to find interesting-looking games where your analytic nature is unlikely to make you a target.

I really do think that practice in safe social interactions will prove helpful to you, both because it provides data to adjust your social predictions and because improving social skills will make you more effective at avoiding unpleasant social interactions.

Comment author: Kindly 07 November 2012 02:34:59PM 0 points [-]

I've never tried forcing myself to like a game, but why do you think that you need to?

There are very many games in which you win by doing better than other players and you can't really make specific other players do worse. Odds are you'll like some of them.

There's Dominion or Race for the Galaxy. There's trivia games. In general, many games classified as "party games" are good, but not all: Mafia, for example, would be a terrible choice. There's cooperative games like Pandemic.

There's also two-player games (like chess) in which you at least won't have a group teaming up against you, or team games (like spades) in which you'll have (at least) one person on your side.

Comment author: Strange7 20 November 2012 12:25:20AM 0 points [-]

Before I prod any further, what would your preferred outcome be?

Comment author: ialdabaoth 20 November 2012 01:03:20AM 1 point [-]

In the most abstract? Some way to demonstrate to people (including myself) that I'm a sapient being that deserves respect, and not a worthless, lazy, broken, scary parasite.

More concretely, some mechanistic description of why I've had trouble operating within existing social norms, and why I tend to operate under different base assumptions than others - preferably a description that might suggest methods of interacting with the human world that allows me to maintain my dignity and self-respect, without having to immediately acknowledge my abject worthlessness and helplessness as a unilateral precondition for requesting assistance.

It would be nice if someone could point at a bit of my brain, or a specific pattern of answers on behavioral tests, and say "you follow this descriptive pattern which we've labeled X, whereas most people follow this other descriptive pattern which we've labeled Y. There's a lot of research that shows that X does not interact well with Y", in a way that isn't an obvious attempt to reinforce their own social assumptions against a threatening Other.

Comment author: Epiphany 07 November 2012 02:29:36AM *  1 point [-]

Related Research:

Harvard did a study on LLI (Low latent inhibition. It means that you don't block as much stimulus and can mean having a lot more ideas to sort through) and discovered that people with high LLI and high IQs tend to be more creative whereas people with low IQs and high LLI are more likely to be schizophrenic. This may be because people with higher IQs are able to evaluate a larger number of ideas whereas those with lower IQs may find themselves overwhelmed trying to do so.

This suggests that schizophrenic people could benefit from assistance with processing their ideas. It also suggests that teaching reasoning skills all by itself might not be enough for many of them. If a key part of the problem turns out to be that they're generating more weird ideas than they can process, it may be more useful to have someone to talk it over with.

Then again, if Bayes is faster than whatever technique they're using, it could theoretically bring a lot of them over that "sanity waterline" threshold if it makes them able to judge ideas faster than they generate them.

Comment author: juliawise 27 August 2012 09:24:33PM 1 point [-]

In the hospital where I worked, there was a woman who was able to articulate that it was very unlikely that her neighbor could read her mind. But, she reasoned, there were a lot of people in the world, so surely someone could read minds. And she had the bad luck to live next door to that person.

So sometimes people are able to acknowledge that their beliefs are statistically unlikely but still believe them.

Comment author: MaoShan 17 August 2012 03:00:16AM 1 point [-]

I suspect that, especially in dreams, and to a lesser degree in déjà vu, the outputs of place cells can be combined in novel ways that would normally be rejected when fully conscious. I am not aware that anything similar has been discovered regarding familiar people, but if so, it would work in a surprisingly similar way ("Don't I know you from somewhere?"), and would accommodate the typical example. What the unconscious mind composes as a shorthand template for my mother is later detailed, but still contains the "my mother" flag; although my fourth grade teacher has many similar qualities, she has the "my fourth grade teacher" flag. Maybe the reasoning that the RDPC enables is the choosing between simultaneous data streams, and diminished or overactive capabilities of the RDPC can cause delusions accordingly.

Comment author: potato 11 September 2012 03:57:41AM *  0 points [-]

"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?"

IMHO All human psychologies have a hard time updating to believe they're poorly built. We are by nature arrogant. Do not forget that common folk often "choose" what to believe after they think about how it feels to believe it.

(Brilliant article btw)

(edit): "Likewise, how come delusions are so specific? It's impossible to convince someone who thinks he is Napoleon that he's really just a random non-famous mental patient, but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried). But him being Alexander the Great is also consistent with his observed data and his deranged inference abilities. Why decide it's the CIA who's after you, and not the KGB or Bavarian Illuminati?"

IMHO I think there are plenty of cognitive biases that can explain that sort of behavior in healthy patients. Confirmation bias and the affect heuristic are the first to come to mind.

Comment author: TimS 11 September 2012 04:31:42AM 0 points [-]

"You have brain damage" is also a theory with perfect explanatory adequacy.

If you don't have the right understanding of how the brain works, I'm not sure this theory adequately explains the observations.

By comparison, the expected observations from "Your car has engine damage" is a car that doesn't drive at all, not one that turns right but not left.

Comment author: wstrinz 23 August 2012 02:53:22PM *  0 points [-]

Once I understood the theory, my first question was: has this been explained to any delusional patient with a good grasp of probability theory? I know this sort of thing generally doesn't work, but the n=1 experiment you mention is intriguing. What is often more interesting to me is what sorts of things people come up with to dismiss conflicting evidence, since these sit in a strange place between completely random and clever lie. If you have a dragon in your garage, you tend to give the most plausible excuses, because deep down you know the truth about the phenomenon and can construct your explanation around that recognition of the way the world actually is. Delusional patients, by contrast, say things like "this is my daughter's arm" that just don't make any sense, and that indicate to us in an eerie way just how deeply they believe their delusions. I'm surprised that, given the contributions from the study of injured brains to neurobiology, there's not a bigger focus on the study of abnormal mental systems in cognitive science and decision theory, not that I'm the first person to wonder this or anything.

Comment author: OnTheOtherHandle 19 August 2012 03:56:02AM 0 points [-]

Is it possible that the specific delusions a patient develops after brain damage correlate with their experiences before the brain damage? Maybe paranoid schizophrenics in the US tend to think the CIA is after them, but those in Soviet Russia used to think the KGB was? How would these delusions have manifested in the past, before any such organizations existed? Perhaps some of them convinced themselves that God's wrath was being brought down upon them, or that Satan was haunting them.

Also, does Capgras delusion apply to everyone the patient has an emotional reaction to, or just their spouse/parents? If you were a very political person, and felt great pride/joy when looking at your favored political leader, and then you got Capgras delusion, would you assume they were replaced by aliens? What about your teachers, doctors, and friends?

Comment author: Nornagest 19 August 2012 09:40:23PM *  1 point [-]

Also, does Capgras delusion apply to everyone the patient has an emotional reaction to, or just their spouse/parents? If you were a very political person, and felt great pride/joy when looking at your favored political leader, and then you got Capgras delusion, would you assume they were replaced by aliens? What about your teachers, doctors, and friends?

If there's been any research into this I haven't been able to find it; but a few people outside academia seem to have associated the "Paul is dead" meme from the Beatles era with the Capgras delusion. There are also a number of conspiracy theories that seem to fit the general pattern, including David Icke's reptilian humanoid theory.

Comment author: OnTheOtherHandle 19 August 2012 11:01:36PM *  1 point [-]

Interesting; thanks.

Also, do you know if Capgras delusion only wipes away all previous emotions you associated with faces, or if it also makes it impossible to form new emotions related to other faces? What if, for some reason, the spouse decided to go along with the charade that they were a different person, and managed to convince the Capgras patient to stay married to them anyway? Would the patient eventually form an emotional connection the way normal people do when they meet, date, and marry someone? Or if a Capgras patient had a child after the brain damage, would they associate their child's face with emotions while still considering their spouse and parents to be imposters?

Comment author: adavies42 25 August 2012 07:18:46AM *  1 point [-]

There is alleged to have been a Capgras patient who wasn't very happy with her marriage beforehand, but decided she liked the "imposter" much better. No cite, I think it was in a TED talk.

Comment author: prase 19 August 2012 08:59:05PM *  0 points [-]

Maybe paranoid schizophrenics in the US tend to think the CIA is after them, but those in Soviet Russia used to think the KGB was?

It seems almost certain. At the very least, one should know about the CIA's existence to have that sort of delusion.

Also, does Capgras delusion apply to everyone the patient has an emotional reaction to, or just their spouse/parents? If you were a very political person, and felt great pride/joy when looking at your favored political leader, and then you got Capgras delusion, would you assume they were replaced by aliens? What about your teachers, doctors, and friends?

This is a great question to test the emotional reaction hypothesis. I would add: what about their enemies? A negative emotional response is still an emotional response (well, maybe, I wouldn't be so surprised if negative and positive emotions were each associated with a different part of the brain).

Comment author: complexmeme 14 August 2012 06:01:14PM *  0 points [-]

"You have brain damage" is also a theory with perfect explanatory adequacy.... Why not?

This led me to think of two alternate hypotheses:

One is that the same problem underlying the second factor ("abnormal belief evaluation") is at fault, that self-evaluation for abnormal beliefs involves the same sort of self-modelling needed for a theory like "I have brain damage" to seem explanatory (or even coherent). The other is that there are separate systems for self-evaluation and belief-probability-evaluation that are both damaged in the case of such delusions.

One might take the Capgras delusion and similar as evidence that those systems at least overlap, but there's some visibility bias involved, since people who hold beliefs that seem (to them) to be both probable and crazy are likely to conceal those beliefs (see someonewrongonthenet's comment).

Comment author: MaoShan 17 August 2012 03:26:04AM 4 points [-]

"Brain damage makes my brain stop working properly. If I have brain damage, I wouldn't be able to reason like this, therefore I cannot have brain damage. The CIA just told my doctor to say that I do."

Comment author: ialdabaoth 12 November 2012 01:28:34AM *  3 points [-]

There's a good check for this.

I have, every 2 years or so since 2002, taken a series of IQ tests and averaged the results together. (Side note: in 1997, an in-person IQ test rated me at 155. This isn't calibrated to the other tests, of course, but it's an interesting anecdote.)

In 2002, my IQ according to this process was 148. In 2004, it was 150. In 2006, it was 145. In 2009, it was 135. In 2011, it was 120. Today, it was 115.

I keep asking myself "now what", but I'm not even sure I'm qualified to answer that question anymore. (This will sound hilariously clichéd, but... I don't FEEL any dumber. It's just become more and more frustrating to think about deep problems. I feel like my domain expertise is just as good as it ever was - but how the hell could I TELL, if the very instrument which measures my expertise is the instrument which is failing?)
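(As a purely illustrative aside: the quoted scores imply a roughly linear decline, which a least-squares fit can quantify. This is just a sketch over the six data points given above, not a claim about what IQ tests actually measure.)

```python
# IQ scores quoted above (year -> averaged test score for that year)
scores = {2002: 148, 2004: 150, 2006: 145, 2009: 135, 2011: 120, 2012: 115}

def ols_slope(points):
    """Ordinary least-squares slope of y on x."""
    xs, ys = zip(*points)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

slope = ols_slope(sorted(scores.items()))
print(f"average change: {slope:.1f} IQ points per year")  # about -3.5
```

The fit comes out to roughly -3.5 points per year, which is consistent with gwern's observation below that a 35-point fall over a decade is far too large to be a practice effect.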

Comment author: gwern 12 November 2012 04:06:00AM 0 points [-]

I have, every 2 years or so since 2002, taken a series of IQ tests and averaged the results together.

All the same test? Those are troubling results indeed, since the 2pt change from 2002-2004 looks like a practice effect, but a 35pt fall is surely not a practice effect.

It's just become more and more frustrating to think about deep problems. I feel like my domain expertise is just as good as it ever was - but how the hell could I TELL, if the very instrument which measures my expertise is the instrument which is failing?

Presumably you'd measure your domain expertise by your domain results. That's how most experts get by: lots of domain knowledge, not so much need for fluid intelligence.

Comment author: ialdabaoth 12 November 2012 04:31:09AM 1 point [-]

The problem is that, in many situations, I was so poor at playing political games that I wound up accepting other people's political measurements of my domain expertise, instead of accurate, objective measurements. I've eventually developed a sort of neurotic "learned helplessness" that makes it nigh-impossible to accept accurate, objective measurements of any of my capacities, if they would have a positive connotation.

Comment author: MaoShan 12 November 2012 02:21:27AM *  0 points [-]

Well, there you just said that you don't have the patience for those types of problems, which (unless your area of expertise is identifying patterns of lines) doesn't necessarily mean that you are not extremely well-suited to the work that you do. If you are worried about specific cognitive deficits, test for those--an IQ test is not going to help identify them.

Comment author: Epiphany 07 November 2012 02:07:11AM *  0 points [-]

A Related Experiment:

I once read about an experimental mental hospital for people with schizophrenic symptoms in California called Soteria House.

At Soteria House, the philosophy was to let the mental patients do whatever they wanted, with the exception of hurting people. They got to run around naked if they wanted to, and there was a room for them to break things in (stocked with breakable objects).

The staff were trained in a method to help the schizophrenics sort out reality from delusion: patients were told which things others couldn't see and were asked to interpret those things as they would a dream. The result was that most of them were better in three months, were able to live independently in six months, and a very low proportion of them (I think 15%?) had another schizophrenic episode.

This experiment was repeated at another location in California, though I've forgotten the name of the sister house. You can also look up Soteria Bern in Switzerland.

I think it may have been important that the emphasis was on "try interpreting that a different way" instead of "that isn't real" - because there is likely to have been some emotional or belief content in what they were experiencing that they needed to process (for the same reason we have to process feelings and can't just repress them). It was probably also very important that the patients didn't feel trapped. If you feel trapped, you're less likely to trust the people helping you. It might be hard for a person who is already confused about reality to tell whether someone is gaslighting them. It probably takes a lot of trust to accept this type of guidance.

They weren't using Bayes specifically to convince patients that their delusions weren't real, but I think this is still relevant, because they were essentially getting the patients to interpret delusions in a more rational way.