Kaj Sotala has an outstanding review of Unlocking The Emotional Brain; I read the book, and Kaj’s review is better.
^_^ <3 ^_^
Richard might be able to say “I know people won’t hate me for speaking, but for some reason I can’t make myself speak”, whereas I’ve never heard someone say “I know climate change is real, but for some reason I can’t make myself vote to prevent it.” I’m not sure how seriously to take this discrepancy.
I haven't heard this either, but I have heard (and experienced) "I know that eating meat is wrong, but for some reason I can't make myself become a vegetarian". Jonathan Haidt uses this as an example of an emotional-rational valley in The Happiness Hypothesis:
During my first year of graduate school at the University of Pennsylvania, I discovered the weakness of moral reasoning in myself. I read a wonderful book—Practical Ethics—by the Princeton philosopher Peter Singer. Singer, a humane consequentialist, shows how we can apply a consistent concern for the welfare of others to resolve many ethical problems of daily life. Singer's approach to the ethics of killing animals changed forever my thinking about my food choices. Singer proposes and justifies a few guiding principles: First, it is wrong to cause pain and suffering to any sentient creature, therefore current factory farming methods are unethical. Second, it is wrong to take the life of a sentient being that has some sense of identity and attachments, therefore killing animals with large brains and highly developed social lives (such as other primates and most other mammals) is wrong, even if they could be raised in an environment they enjoyed and were then killed painlessly. Singer's clear and compelling arguments convinced me on the spot, and since that day I have been morally opposed to all forms of factory farming. Morally opposed, but not behaviorally opposed. I love the taste of meat, and the only thing that changed in the first six months after reading Singer is that I thought about my hypocrisy each time I ordered a hamburger.
But then, during my second year of graduate school, I began to study the emotion of disgust, and I worked with Paul Rozin, one of the foremost authorities on the psychology of eating. Rozin and I were trying to find video clips to elicit disgust in the experiments we were planning, and we met one morning with a research assistant who showed us some videos he had found. One of them was Faces of Death, a compilation of real and fake video footage of people being killed. (These scenes were so disturbing that we could not ethically use them.) Along with the videotaped suicides and executions, there was a long sequence shot inside a slaughterhouse. I watched in horror as cows, moving down a dripping disassembly line, were bludgeoned, hooked, and sliced up. Afterwards, Rozin and I went to lunch to talk about the project. We both ordered vegetarian meals. For days afterwards, the sight of red meat made me queasy. My visceral feelings now matched the beliefs Singer had given me. The elephant now agreed with the rider, and I became a vegetarian. For about three weeks. Gradually, as the disgust faded, fish and chicken reentered my diet. Then red meat did, too, although even now, eighteen years later, I still eat less red meat and choose nonfactory-farmed meats when they are available.
That experience taught me an important lesson. I think of myself as a fairly rational person. I found Singer's arguments persuasive. But, to paraphrase Medea's lament (from chapter 1): I saw the right way and approved it, but followed the wrong, until an emotion came along to provide some force.
I expect it's common for people to say (or at least be in a position to say truly, if they chose) "I know that climate change is real, but for some reason I can't persuade myself not to vote Republican". In some cases that will be because they like the Republicans' other policies, in which case there isn't necessarily an actual "valley" here. But party loyalty is a thing, and I guarantee there are people who could truly say "I know that Party X's actual policies are a better match for my values, but I can't bring myself to vote for them rather than for Party Y".
(As a relatively-unimportant side-note, I'd like to add that sometimes it's not so much a matter of party loyalty as it is party spite. For example, "Party X's actual policies are a better match for my values, but I despise the groupthink-enforcing anti-intellectual cultural forces associated with their local supremacy so much that I'm going to vote for Party Y, not because Party Y would actually be any better on the free-speech/pro-intellectualism front if they took power, but because I feel better supporting the currently-losing side of an Evil vs. Evil conflict.")
Exploring the connection to politics a bit more, Coherence Therapy: Practice Manual And Training Guide has this page where it claims that emotional learning forms our basic assumptions for a wide variety of domains, including ones that we would commonly think of as being the domains of rationality:
---
Unconscious constructs constituting people's pro-symptom positions tend to be constructs that define these areas of personal reality and felt meaning:
Examples (verbalizations of unconscious, nonverbal constructs/schemas held in the limbic system and body):
Ontology: "People are attackers. If they see me, they'll try to kill me."
Causality: "If too much is going well for me, that will make a big blow happen to me."
Purpose: "I've got to keep Dad from withdrawing his love from me by never, ever disagreeing with him."
Attachment: "I'll get attention and connection only if I'm visibly unwell, failing, hurting." "You'll reject and disconnect from me if I differ from you in any way."
Values: "It is selfish and bad to pay attention to my own feelings, needs and views; it is unselfish and good to be what others want me to be."
Power: "The one who has the power in a personal relationship is the one who withdraws love; the other is the powerless one."
---
It seems pretty easy to take some of those examples and see how they, or something like them, could form the basis of ideologies. E.g. "people are attackers" could drive support for authoritarian policing and hawkish military policy, with elaborate intellectual structures being developed to support those conclusions. On the other side, "people are intrinsically good and trustworthy" could contribute support to opposite kinds of policies. (Just to be clear, I'm not taking a position on which one of those policies is better nor saying that they are equally good, just noting that there are emotional justifications which could drive support for either one.)
That might be one of the reasons why you don't see "I know that X is correct, but can't bring myself to support it" in politics so much. For things like "will you be hated if you speak up", there's much more of a consensus position; most people accept on an intellectual level that speaking up doesn't make people hated, because there's no big narrative saying the opposite. But for political issues, people have developed narratives to support all kinds of positions. In that case, if you have a felt position which feels true, you can often find a well-developed intellectual argument which has been produced by other people with the same felt position, so it resonates strongly with your intuitions and tells you that they are right.
This could also be related to the well-known thing where people in cities tend to become more liberal: different living conditions give rise to different kinds of implicit learning, changing the kinds of ideologies that feel plausible.
I think this post is a useful companion to Kaj's posts; much of what feels settled now, but was fresh at the time, was this sort of conceptual work on what's going on with therapy and human psychology in general.
My own experience with my mental mountains has led me to what I call the "One, Two, Many" model of emotion formation and annihilation.
1: There is an initial event which causes a sensory memory of the experience to get stuck in my mind, usually a visual/tactile memory with an associated specific type of feeling bad, or more rarely, feeling good.
2: There is a reinforcing event, which has a specific similar characteristic that makes my mind go, "these are the same type of thing," like having a hard time remembering the names of both Al Pacino and Robert De Niro at the same time. (Seriously, I had to google a De Niro role just to be able to type his name right now!)
Many: Every subsequent event that shares that characteristic gets lumped into the sea of "it always happens" or "it never happens" barring further conscious examination, but I can only remember the current or most recent such occurrence no matter how often or rarely such events actually occurred in my past.
For me, the "TNT" that can usually blast through this mental mountain is to identify the similar characteristic by tracing the memory of that specific type of feeling bad. I trace it back to the pair of self-reinforcing memories, and they disintegrate, turning from sense memories into simple narrative of something that happened to me, usually with a sense of relieved tension mingled with the feeling of being miffed that I had been tripped up by my own mind's processing artifacts.
I perform my process using the "fourth step" tools developed for Twelve Step programs, which I now believe function through UtEB-style self-reflection. The "fourth step" tools work because they focus on the interaction between a resentment emotion which drives behaviors, the person and specific action which caused that resentment, and one's updated (sober) understanding of the world.
I wouldn't be surprised if UtEB-style reconsolidation underlies the success many have reported with Twelve Step programs, and I wouldn't be surprised if most of the people who drop out of Twelve Step programs do so before they experience a mental mountain's disappearance from their minds.
Promoted to curated: I particularly liked how this post responded to and integrated a lot of the ideas in Kaj's review (and broader sequence). I've also just gotten a good amount of mileage out of the "mental mountains" phrase, which has replaced a lot of my vague gesturing at neural annealing in the past.
I expect to use this post primarily as a good reference post. The broad concept of mental mountains has been around for a while in many different guises, but it seems likely to me that this post will become the best reference for that concept, which I think is quite valuable, since it seems to show up in a lot of different cognitive models.
I observe: There are techniques floating around the rationality community, with models attached, where the techniques seem anecdotally effective, but the descriptions seem like crazy woo. This post has a model that predicts the same techniques will work, but the model is much more reasonable (it isn't grounded out in axon-connections, but in principle it could be). I want to resolve this tension in this post's favor. In fact, I want that enough to distrust my own judgment on the post. But it does look probably true, in the way that models of mind can ever be true (i.e. if you squint hard enough).
The mental mountains model of change has been really helpful to my thinking on therapeutic change.
I don't know. In "Epistemic Learned Helplessness" you pointed out that both right and wrong positions have many convincing arguments, so becoming more open to arguments is just as likely to make someone wrong as right.
I definitely agree with you here - I didn't talk about it as much in this post, but in the psychedelics post I linked, I wrote:
People are not actually very good at reasoning. If you metaphorically heat up their brain to a temperature that dissolves all their preconceptions and forces them to basically reroll all of their beliefs, then a few of them that were previously correct are going to come out wrong. F&CH’s theory that they are merely letting evidence propagate more fluidly through the system runs up against the problem where, most of the time, if you have to use evidence unguided by any common sense, you probably get a lot of things wrong.
The best defense of therapy in this model is that you're concentrating on the beliefs that are currently most dysfunctional, so by regression to the mean you should expect them to get better!
Maybe, but I don't think that we developed our tendency to lock in emotional beliefs as a kind of self-protective adaptation. I think that all animals with brains lock in emotional learning by default because brains lock in practically all learning by default. The weird and new thing humans do is to also learn concepts that are complex, provisional, dynamic and fast-changing. But this new capability is built on the old hardware that was intended to make sure we stayed away from scary animals.
Most things we encounter are not as ambiguous, complex and resistant to empirical falsification as the examples in the Epistemic Learned Helplessness essay. The areas where both right and wrong positions have convincing arguments usually involve distant, abstract things.
Whatever the case, I am often exhausted when dealing with such issues.
Good post though.
For instance, certain high-pitched sounds are terrible for my ears. They make me lose focus and make my eyes close.
It's so bad that I literally feel as though there is pain in my mind.
Schema? Or auditory thing?
It never happens with other sounds, just with this pitch.
Same problem with focus.
I can clearly be aware of how the little tribes in my mind come together to defeat the invaders, but once the battle is over they part ways and go back, or if they have to do something, the infighting metaphorically starts.
For some odd reason, though, they have the oddest moments and reasons to come together.
It's not where my rational mind wants them to, though. This explanation could make sense.
It's also extremely exhausting.
The sheer amount of mental effort that goes into this just feels like I am overclocking my mind just to do something that might seem, to outsiders, like I am barely alive.
On further thought, I also have issues naming emotions or putting them in context.
What people say and feel is hard to match to my own "schema or whatever".
Like I can feel sad, but what makes me feel that way?
For example, I can be more productive when "depressed", but those two don't go together, do they?
So you can see how being productive and sad at the same time can be pretty unsettling.
It seems related to how we view additivity. One stone is a stone, twenty stones are a heap and usually "matter" as a heap - separate valley. One tactless relative is a tactless relative, five of them are family history of no tact. And we know that stones, even lonely, usually only appear so - there's got to be the heap somewhere. I can imagine a GW critic counting on a fortress of anti-GW evidence which he hasn't found in finite time.
Interesting what makes us go looking for it, in any case, be it a good heap of stones or a bad one.
I.
Kaj Sotala has an outstanding review of Unlocking The Emotional Brain; I read the book, and Kaj’s review is better.
He begins:
So in one of the book’s example cases, a man named Richard sought help for trouble speaking up at work. He would have good ideas during meetings, but felt inexplicably afraid to voice them. During therapy, he described his narcissistic father, who was always mouthing off about everything. Everyone hated his father for being a fool who wouldn’t shut up. The therapist conjectured that young Richard observed this and formed a predictive model, something like “talking makes people hate you”. This was overly general: talking only makes people hate you if you talk incessantly about really stupid things. But when you’re a kid you don’t have much data, so you end up generalizing a lot from the few examples you have.
When Richard started therapy, he didn’t consciously understand any of this. He just felt emotions (anxiety) at the thought of voicing his opinion. The predictive model output the anxiety, using reasoning like “if you talk, people will hate you, and the prospect of being hated should make you anxious – therefore, anxiety”, but not any of the intermediate steps. The therapist helped Richard tease out the underlying model, and at the end of the session Richard agreed that his symptoms were related to his experience of his father. But knowing this changed nothing; Richard felt as anxious as ever.
Predictions like “speaking up leads to being hated” are special kinds of emotional memory. You can rationally understand that the prediction is no longer useful, but that doesn’t really help; the emotional memory is still there, guiding your unconscious predictions. What should the therapist do?
Here UtEB dives into the science on memory reconsolidation.
Scientists have known for a while that giving rats the protein synthesis inhibitor anisomycin prevents them from forming emotional memories. You can usually give a rat noise-phobia by pairing a certain noise with electric shocks, but this doesn’t work if the rats are on anisomycin first. Probably this means that some kind of protein synthesis is involved in memory. So far, so plausible.
A 2000 study found that anisomycin could also erase existing phobias in a very specific situation. You had to “activate” the phobia – get the rats thinking about it really hard, maybe by playing the scary noise all the time – and then give them the anisomycin. This suggested that when the memory got activated, it somehow “came loose”, and the brain needed to do some protein synthesis to put it back together again.
Thus the idea of memory reconsolidation: you form a consolidated memory, but every time you activate it, you need to reconsolidate it. If the reconsolidation fails, you lose the memory, or you get a slightly different memory, or something like that. If you could disrupt emotional memories like “speaking out makes you hated” while they’re still reconsolidating, maybe you could do something about this.
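To make the logic of those rat experiments concrete, here is a minimal toy state model of my own (the class name, states, and method names are illustrative shorthand, not terms from the studies or from UtEB): a memory becomes labile when reactivated, and it only survives if protein synthesis is available during the reconsolidation window.

```python
# Toy sketch of the reconsolidation window; illustrative only.

class EmotionalMemory:
    def __init__(self, content):
        self.content = content
        self.state = "consolidated"

    def reactivate(self):
        # Recalling the memory (e.g. playing the feared noise) makes it labile.
        if self.content is not None:
            self.state = "labile"

    def reconsolidate(self, protein_synthesis_available):
        # A labile memory must be rebuilt; blocking protein synthesis
        # (e.g. with anisomycin) during this window erases it.
        if self.state == "labile":
            if not protein_synthesis_available:
                self.content = None  # memory lost
            self.state = "consolidated"

fear = EmotionalMemory("noise -> shock")
fear.reconsolidate(protein_synthesis_available=False)  # no effect: memory wasn't active
fear.reactivate()
fear.reconsolidate(protein_synthesis_available=False)  # active + blocked => erased
print(fear.content)  # None
```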
Anisomycin is pretty toxic, so that’s out. Other protein synthesis inhibitors are also toxic – it turns out proteins are kind of important for life – so they’re out too. Electroconvulsive therapy actually seems to work pretty well for this – the shock disrupts protein formation very effectively (and the more I think about this, the more implications it seems to have). But we can’t do ECT on everybody who wants to be able to speak up at work more, so that’s also out. And the simplest solution – activating a memory and then reminding the patient that they don’t rationally believe it’s true – doesn’t seem to help; the emotional brain doesn’t speak Rationalese.
The authors of UtEB claim to have found a therapy-based method that works, which goes like this:
First, they tease out the exact predictive model and emotional memory behind the symptom (in Richard’s case, the narrative where his father talked too much and ended up universally hated, and so if Richard talks at all, he too will be universally hated). Then they try to get this as far into conscious awareness as possible (or, if you prefer, have consciousness dig as deep into the emotional schema as possible). They call this “the pro-symptom position” – giving the symptom as much room as possible to state its case without rejecting it. So for example, Richard’s therapist tried to get Richard to explain his unconscious pro-symptom reasoning as convincingly as possible: “My father was really into talking, and everybody hated him. This proves that if I speak up at work, people will hate me too.” She even asked Richard to put this statement on an index card, review it every day, and bask in its compellingness. She asked Richard to imagine getting up to speak, and feeling exactly how anxious it made him, while reviewing to himself that the anxiety felt justified given what happened with his father. The goal was to establish a wide, well-trod road from consciousness to the emotional memory.
Next, they try to find a lived and felt experience that contradicts the model. Again, Rationalese doesn’t work; the emotional brain will just ignore it. But it will listen to experiences. For Richard, this was a time when he was at a meeting, had a great idea, but didn’t speak up. A coworker had the same idea, mentioned it, and everyone agreed it was great, and congratulated the other person for having such an amazing idea that would transform their business. Again, there’s this same process of trying to get as much in that moment as possible, bring the relevant feelings back again and again, create as wide and smooth a road from consciousness to the experience as possible.
Finally, the therapist activates the disruptive emotional schema, and before it can reconsolidate, smashes it into the new experience. So Richard’s therapist makes use of the big wide road Richard built that let him fully experience his fear of speaking up, and asks Richard to get into that frame of mind (activate the fear-of-speaking schema). Then she asks him, while keeping the fear-of-speaking schema in mind, to remember the contradictory experience (coworker speaks up and is praised). Then the therapist vividly describes the juxtaposition while Richard tries to hold both in his mind at once.
And then Richard was instantly cured, and never had any problems speaking up at work again. His coworkers all applauded, and became psychotherapists that very day. An eagle named “Psychodynamic Approach” flew into the clinic and perched atop the APA logo and shed a single tear. Coherence Therapy: Practice Manual And Training Guide was read several times, and God Himself showed up and enacted PsyD prescribing across the country. All the cognitive-behavioralists died of schizophrenia and were thrown in the lake of fire for all eternity.
This is, after all, a therapy book.
II.
I like UtEB because it reframes historical/purposeful accounts of symptoms as aspects of a predictive model. We already know the brain has an unconscious predictive model that it uses to figure out how to respond to various situations and which actions have which consequences. In retrospect, this framing perfectly fits the idea of traumatic experiences having outsized effects. Tack on a bit about how the model is more easily updated in childhood (because you’ve seen fewer other things, so your priors are weaker), and you’ve gone a lot of the way to traditional models of therapy.
But I also like it because it helps me think about the idea of separation/noncoherence in the brain. Richard had his schema about how speaking up makes people hate you. He also had lots of evidence that this wasn't true, both rationally (his understanding that his symptoms were counterproductive) and experientially (his story about a coworker proposing an idea and being accepted). But the evidence failed to naturally propagate; it didn't connect to the schema that it should have updated. Only after the therapist forced the connection did the information go through. Again, all of this should have been obvious – of course evidence doesn't propagate through the brain, I was writing posts ten years ago about how even a person who knows ghosts don't exist will be afraid to stay in an old supposedly-haunted mansion at night with the lights off. But UtEB's framework helps snap some of this into place.
UtEB’s brain is a mountainous landscape, with fertile valleys separated by towering peaks. Some memories (or pieces of your predictive model, or whatever) live in each valley. But they can’t talk to each other. The passes are narrow and treacherous. They go on believing their own thing, unconstrained by conclusions reached elsewhere.
Consciousness is a capital city on a wide plain. When it needs the information stored in a particular valley, it sends messengers over the passes. These messengers are good enough, but they carry letters, not weighty tomes. Their bandwidth is atrocious; often they can only convey what the valley-dwellers think, and not why. And if a valley gets something wrong, lapses into heresy, as often as not the messengers can’t bring the kind of information that might change their mind.
Links between the capital and the valleys may be tenuous, but valley-to-valley trade is almost non-existent. You can have two valleys full of people working on the same problem, for years, and they will basically never talk.
Sometimes, when it’s very important, the king can order a road built. The passes get cleared out, and high-bandwidth communication to a particular valley becomes possible. If he does this to two valleys at once, then they may even be able to share notes directly, each passing through the capital to get to each other. But it isn’t the norm. You have to really be trying.
This ended out a little more flowery than I expected, but I didn’t start thinking this way because it was poetic. I started thinking this way because of this:
Frequent SSC readers will recognize this as from Figure 1 of Friston and Carhart-Harris’ REBUS And The Anarchic Brain: Toward A Unified Model Of The Brain Action Of Psychedelics, which I review here. The paper describes it as “the curvature of the free-energy landscape that contains neuronal dynamics. Effectively, this can be thought of as a flattening of local minima, enabling neuronal dynamics to escape their basins of attraction and—when in flat minima—express long-range correlations and desynchronized activity.”
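One way to picture the "flattening of local minima" they describe is this toy numerical sketch of my own (not the paper's actual model): noisy descent on a double-well energy landscape. With deep wells, the state stays stuck in whichever basin it started in; scale the landscape down and the same noise lets it hop between basins.

```python
import random

def energy(x, depth=1.0):
    # Double-well landscape; `depth` scales how high the barrier is.
    # (Kept for reference; the dynamics below use its gradient directly.)
    return depth * (x**2 - 1.0)**2

def noisy_descent(depth, steps=20000, noise=0.15, lr=0.01, seed=0):
    # Crude Langevin-style dynamics: gradient step plus Gaussian noise.
    random.seed(seed)
    x = -1.0  # start in the left basin
    crossings = 0
    for _ in range(steps):
        grad = depth * 4 * x * (x**2 - 1.0)
        new_x = x - lr * grad + random.gauss(0, noise)
        if (x < 0) != (new_x < 0):
            crossings += 1
        x = new_x
    return crossings

print("deep wells:", noisy_descent(depth=4.0))   # few or no basin hops
print("flattened: ", noisy_descent(depth=0.25))  # many hops between basins
```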
Moving back a step: the paper is trying to explain what psychedelics do to the brain. It theorizes that they weaken high-level priors (in this case, you can think of these as the tendency to fit everything to an existing narrative), allowing things to be seen more as they are:
These ascending prediction errors (ie noticing that you’re wrong about something) can then correct the high-level priors (ie change the narratives you tell about your life):
This makes psychedelics a potent tool for psychotherapy:
Am I imagining this, or are Friston + Carhart-Harris and Unlocking The Emotional Brain getting at the same thing?
Both start with a piece of a predictive model (= high-level prior) telling you something that doesn’t fit the current situation. Both also assume you have enough evidence to convince a rational person that the high-level prior is wrong, or doesn’t apply. But you don’t automatically smash the prior and the evidence together and perform an update. In UtEB‘s model, the update doesn’t happen until you forge conscious links to both pieces of information and try to hold them in consciousness at the same time. In F+CH’s model, the update doesn’t happen until you take psychedelics which make the high-level prior lose some of its convincingness. UtEB is trying to laboriously build roads through mountains; F+CH are trying to cast a magic spell that makes the mountains temporarily vanish. Either way, you get communication between areas that couldn’t communicate before.
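The F+CH side of this can be caricatured with a precision-weighted Gaussian update; the sketch below is my own toy illustration, not their actual equations, and the numbers are arbitrary. When the prior is held with very high precision, even clear contradictory evidence barely moves the posterior; relax the prior's precision and the same evidence produces a large update.

```python
def gaussian_update(prior_mean, prior_precision, obs_mean, obs_precision):
    # Standard conjugate update for a Gaussian mean: the posterior mean is a
    # precision-weighted average of prior and observation.
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean + obs_precision * obs_mean) / post_precision
    return post_mean, post_precision

# Belief scale: 0 = "speaking up is safe", 1 = "speaking up gets you hated".
prior_mean, obs_mean, obs_precision = 1.0, 0.0, 1.0

# Rigid high-level prior: the contradictory experience barely registers.
print(gaussian_update(prior_mean, prior_precision=50.0,
                      obs_mean=obs_mean, obs_precision=obs_precision))  # ~0.98

# "Relaxed" prior (the REBUS story): the same evidence now moves the belief a lot.
print(gaussian_update(prior_mean, prior_precision=0.5,
                      obs_mean=obs_mean, obs_precision=obs_precision))  # ~0.33
```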
III.
Why would mental mountains exist? If we keep trying to get rid of them, through therapy or psychedelics, or whatever, then why not just avoid them in the first place?
Maybe generalization is just hard (thanks to MC for this idea). Suppose Goofus is mean to you. You learn Goofus is mean; if this is your first social experience, maybe you also learn that the world is mean and people have it out for you. Then one day you meet Gallant, who is nice to you. Hopefully the system generalizes to “Gallant is nice, Goofus is still mean, people in general can go either way”.
But suppose one time Gallant is just having a terrible day, and curses at you, and that time he happens to be wearing a red shirt. You don’t want to overfit and conclude “Gallant wearing a red shirt is mean, Gallant wearing a blue shirt is nice”. You want to conclude “Gallant is generally nice, but sometimes slips and is mean.”
But any algorithm that gets too good at resisting the temptation to separate out red-shirt-Gallant and blue-shirt-Gallant risks falling into the opposite failure mode where it doesn’t separate out Gallant and Goofus. It would just average them out, and conclude that people (including both Goofus and Gallant) are medium-niceness.
And suppose Gallant has brown eyes, and Goofus green eyes. You don’t want your algorithm to overgeneralize to “all brown-eyed people are nice, and all green-eyed people are mean”. But suppose the Huns attack you. You do want to generalize to “All Huns are dangerous, even though I can keep treating non-Huns as generally safe”. And you want to do this as quickly as possible, definitely before you meet any more Huns. And the quicker you are to generalize about Huns, the more likely you are to attribute false significance to Gallant’s eye color.
The end result is a predictive model which is a giant mess, made up of constant “This space here generalizes from this example, except this subregion, which generalizes from this other example, except over here, where it doesn’t, and definitely don’t ever try to apply any of those examples over here.” Somehow this all works shockingly well. For example, I spent a few years in Japan, and developed a good model for how to behave in Japanese culture. When I came back to the United States, I effortlessly dropped all of that and went back to having America-appropriate predictions and reflexive actions (except for an embarrassing habit of bowing whenever someone hands me an object, which I still haven’t totally eradicated).
In this model, mental mountains are just the context-dependence that tells me not to use my Japanese predictive model in America, and which prevents evidence that makes me update my Japanese model (like “I notice subways are always on time”) from contaminating my American model as well. Or which prevent things I learn about Gallant (like “always trust him”) from also contaminating my model of Goofus.
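A minimal data-structure sketch of that kind of context-dependence (my own illustration, not anything from UtEB or F+CH): keep a separate running estimate per context, so evidence routed to one context never touches the others unless you deliberately merge them.

```python
from collections import defaultdict

class ContextualModel:
    """Separate running estimates per context; updates never leak across contexts."""
    def __init__(self):
        self.estimates = defaultdict(lambda: {"mean": 0.5, "n": 0})

    def update(self, context, observation):
        e = self.estimates[context]
        e["n"] += 1
        e["mean"] += (observation - e["mean"]) / e["n"]

    def predict(self, context):
        return self.estimates[context]["mean"]

m = ContextualModel()
for _ in range(10):
    m.update("japan/subway_punctual", 1.0)   # evidence gathered in Japan
print(m.predict("japan/subway_punctual"))    # 1.0
print(m.predict("usa/subway_punctual"))      # still the 0.5 default: no contamination
```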
There’s actually a real-world equivalent of the “red-shirt-Gallant is bad, blue-shirt-Gallant is good” failure mode. It’s called “splitting”, and you can find it in any psychology textbook. Wikipedia defines it as “the failure in a person’s thinking to bring together the dichotomy of both positive and negative qualities of the self and others into a cohesive, realistic whole.”
In the classic example, a patient is in a mental hospital. He likes his doctor. He praises the doctor to all the other patients, says he’s going to nominate her for an award when he gets out.
Then the doctor offends the patient in some way – maybe refuses one of his requests. All of a sudden, the doctor is abusive, worse than Hitler, worse than Mengele. When he gets out he will report her to the authorities and sue her for everything she owns.
Then the doctor does something right, and it’s back to praise and love again.
The patient has failed to integrate his judgments about the doctor into a coherent whole, “doctor who sometimes does good things but other times does bad things”. It’s as if there’s two predictive models, one of Good Doctor and one of Bad Doctor, and even though both of them refer to the same real-world person, the patient can only use one at a time.
Splitting is most common in borderline personality disorder. The DSM criteria for borderline include splitting (there defined as “a pattern of unstable and intense interpersonal relationships characterized by alternating between extremes of idealization and devaluation”). They also include things like “markedly and persistently unstable self-image or sense of self”, and “affective instability due to a marked reactivity of mood”, which seem relevant here too.
Some therapists view borderline as a disorder of integration. Nobody is great at having all their different schemas talk to each other, but borderlines are atrocious at it. Their mountains are so high that even different thoughts about the same doctor can’t necessarily talk to each other and coordinate on a coherent position. The capital only has enough messengers to talk to one valley at a time. If tribesmen from the Anger Valley are advising the capital today, the patient becomes truly angry, a kind of anger that utterly refuses to listen to any counterevidence, an anger pure beyond your imagination. If they are happy, they are purely happy, and so on.
About 70% of people diagnosed with dissociative identity disorder (previously known as multiple personality disorder) have borderline personality disorder. The numbers are so high that some researchers are not even convinced that these are two different conditions; maybe DID is just one manifestation of borderline, or especially severe borderline. Considering borderline as a failure of integration, this makes sense; DID is total failure of integration. People in the furthest mountain valleys, frustrated by inability to communicate meaningfully with the capital, secede and set up their own alternative provincial government, pulling nearby valleys into their new coalition. I don’t want to overemphasize this; most popular perceptions of DID are overblown, and at least some cases seem to be at least partly iatrogenic. But if you are bad enough at integrating yourself, it seems to be the sort of thing that can happen.
In his review, Kaj relates this to Internal Family Systems, a weird form of therapy where you imagine your feelings as people/entities and have discussions with them. I’ve always been skeptical of this, because feelings are not, in fact, people/entities, and it’s unclear why you should expect them to answer you when you ask them questions. And in my attempts to self-test the therapy, indeed nobody responded to my questions and I was left feeling kind of silly. But Kaj says:
This is a model I can get behind. My guess is that in different people, the degree to which mental mountains form a barrier will cause the disconnectedness of valleys to manifest as anything from “multiple personalities”, to IFS-findable “subagents”, to UtEB-style psychiatric symptoms, to “ordinary” beliefs that don’t cause overt problems but might not be very consistent with each other.
IV.
This last category forms the crucial problem of rationality.
One can imagine an alien species whose ability to find truth was a simple function of their education and IQ. Everyone who knows the right facts about the economy and is smart enough to put them together will agree on economic policy.
But we don’t work that way. Smart, well-educated people believe all kinds of things, even when they should know better. We call these people biased, a catch-all term meaning something that prevents them from having true beliefs they ought to be able to figure out. I believe most people who don’t believe in anthropogenic climate change are probably biased. Many of them are very smart. Many of them have read a lot on the subject (empirically, reading more about climate change will usually just make everyone more convinced of their current position, whatever it is). Many of them have enough evidence that they should know better. But they don’t.
(again, this is my opinion, sorry to those of you I’m offending. I’m sure you think the same of me. Please bear with me for the space of this example.)
Compare this to Richard, the example patient mentioned above. Richard had enough evidence to realize that companies don’t hate everyone who speaks up at meetings. But he still felt, on a deep level, like speaking up at meetings would get him in trouble. The evidence failed to connect to the emotional schema, the part of him that made the real decisions. Is this the same problem as the global warming case? Where there’s evidence, but it doesn’t connect to people’s real feelings?
(maybe not: Richard might be able to say “I know people won’t hate me for speaking, but for some reason I can’t make myself speak”, whereas I’ve never heard someone say “I know climate change is real, but for some reason I can’t make myself vote to prevent it.” I’m not sure how seriously to take this discrepancy.)
In Crisis of Faith, Eliezer Yudkowsky writes:
He goes on to describe how hard this is, to discuss the “convulsive, wrenching effort to be rational” that he thinks this requires, the “all-out [war] against yourself”. Some of the techniques he mentions explicitly come from psychotherapy, others seem to share a convergent evolution with it.
The authors of UtEB stress that all forms of therapy involve their process of reconsolidating emotional memories one way or another, whether they know it or not. Eliezer’s work on crisis of faith feels like an ad hoc form of epistemic therapy, one with a similar goal.
Here, too, there is a suggestive psychedelic connection. I can’t count how many stories I’ve heard along the lines of “I was in a bad relationship, I kept telling myself that it was okay and making excuses, and then I took LSD and realized that it obviously wasn’t, and got out.” Certainly many people change religions and politics after a psychedelic experience, though it’s hard to tell exactly what part of the psychedelic experience does this, and enough people end up believing various forms of woo that I hesitate to say it’s all about getting more rational beliefs. But just going off anecdote, this sometimes works.
Rationalists wasted years worrying about various named biases, like the conjunction fallacy or the planning fallacy. But most of the problems we really care about aren’t any of those. They’re more like whatever makes the global warming skeptic fail to connect with all the evidence for global warming.
If the model in Unlocking The Emotional Brain is accurate, it offers a starting point for understanding this kind of bias, and maybe for figuring out ways to counteract it.