After the release of Ben Pace's extended interview with me about my views on religion, I felt inspired to publish more of my thinking about religion in a format that's more detailed, compact, and organized. This post is the first in an intended series of posts about religion.

Thanks to Ben Pace, Chris Lakin, Richard Ngo, Damon Pourtahmaseb-Sasi, Marcello Herreshoff, Renshin Lauren Lee, Mark Miller, and Imam Ammar Amonette for their feedback on this post, and thanks to Kaj Sotala, Tomáš Gavenčiak, Paul Colognese, and David Spivak for reviewing earlier versions of this post. Thanks especially to Renshin Lauren Lee and Imam Ammar Amonette for their input on my claims about religion and inner work, and Mark Miller for vetting my claims about predictive processing.


In Waking Up, Sam Harris wrote:[1] 

But I now knew that Jesus, the Buddha, Lao Tzu, and the other saints and sages of history had not all been epileptics, schizophrenics, or frauds. I still considered the world’s religions to be mere intellectual ruins, maintained at enormous economic and social cost, but I now understood that important psychological truths could be found in the rubble.

Like Sam, I’ve also come to believe that there are psychological truths that show up across religious traditions. I furthermore think these psychological truths are actually very related to both rationality and moral philosophy. This post will describe how I personally came to start entertaining this belief seriously.

“Trapped Priors As A Basic Problem Of Rationality”

“Trapped Priors As A Basic Problem Of Rationality” was the title of an Astral Codex Ten blog post. Scott Alexander opens the post with the following:

Last month I talked about van der Bergh et al’s work on the precision of sensory evidence, which introduced the idea of a trapped prior. I think this concept has far-reaching implications for the rationalist project as a whole. I want to re-derive it, explain it more intuitively, then talk about why it might be relevant for things like intellectual, political and religious biases.

The post describes Scott's take on a predictive processing account of a certain kind of cognitive flinch that prevents certain types of sensory input from being perceived accurately, leading to beliefs that are resistant to updating.[2] Some illustrative examples of trapped priors (a toy sketch of the updating dynamic follows the list):

  • Karl Friston has written about how a traumatized veteran might not hear a loud car as a car, but as a gunshot instead.
  • Scott mentions phobias and sticky political beliefs as central examples of trapped priors.
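To make the dynamic concrete, here's a minimal toy sketch. This is my own illustration, not Scott's actual model or anything from the van der Bergh paper; the function names and numbers are made up. The idea it captures: perception is a weighted blend of the prior and raw sensation, and the prior then updates toward the perception rather than toward the raw data.

```python
# Toy model of a trapped prior (illustrative only).
# 0.0 = "dogs are safe", 1.0 = "dogs are a threat".

def perceive(prior: float, sensation: float, prior_weight: float) -> float:
    """Perceived experience: a weighted blend of the prior and raw sensation."""
    return prior_weight * prior + (1 - prior_weight) * sensation

def update(prior: float, perception: float, learning_rate: float = 0.5) -> float:
    """The prior moves toward the perceived experience, not the raw data."""
    return prior + learning_rate * (perception - prior)

def simulate(prior: float, prior_weight: float, steps: int = 20) -> float:
    sensation = 0.0  # reality: every encounter with dogs is actually safe
    for _ in range(steps):
        prior = update(prior, perceive(prior, sensation, prior_weight))
    return prior

print(simulate(prior=0.9, prior_weight=0.3))   # ~0.00: fear relaxes toward reality
print(simulate(prior=0.9, prior_weight=0.95))  # ~0.54: updating is sluggish
print(simulate(prior=0.9, prior_weight=1.0))   # 0.90: perception just echoes the prior
```

As the prior's weight approaches 1, experience stops carrying any news: the person "sees" whatever the prior predicts, so no amount of benign evidence can dislodge the belief. That regime is the trap.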

I think trapped priors are very related to the concept that “trauma” tries to point at, but I think “trauma” tends to connote a subset of trapped priors that are the result of some much more intense kind of injury. “Wounding” is a more inclusive term than trauma, but tends to refer to trapped priors learned within an organism’s lifetime, whereas trapped priors in general also include genetically pre-specified priors, like a fear of snakes. 

My forays into religion and spirituality actually began via the investigation of my own trapped priors, which I had previously articulated to myself as “psychological blocks”, and explored in contexts that were adjacent to therapy (for example, getting my psychology dissected at Leverage Research, and experimenting with Circling). It was only after I went deep in my investigation of my trapped priors that I learned of the existence of traditions emphasizing the systematic and thorough exploration of trapped priors. These tended to be spiritual traditions, which is where my interest in spirituality actually began.[3] I will elaborate more on this later. 

Active blind spots as second-order trapped priors

One of the hardest things about working with trapped priors is recognizing that we’ve got them in the first place. When we have a trapped prior, either we’re consciously aware of it (for example, a patient seeking treatment for a phobia of dogs), or we have a second-order (meta-level) trapped prior that keeps us attached to the idea that the problem is entirely external. Consider the difference between “I feel bad around dogs, but that’s because I have a phobia of dogs” and “I feel bad around [people of X political party], and that’s because [people of X political party] are BAD”.

I think second-order trapped priors are related to the phenomenon where people sometimes seem to actively resist getting something that you try to point out to them. Think of a religious fundamentalist, or a family member who resists acknowledging their contributions to relational conflicts. I call this an active blind spot.

One thing that distinguishes active blind spots from blind spots in general is that there’s an element of fear and active resistance around “getting it”. In contrast, someone could have a “passive blind spot” in which they’re totally open to “getting it”, but simply haven’t yet been informed about what they’ve been missing.[4] 

I think active blind spots and second-order trapped priors actually correspond pretty directly. The element of fear around “getting it” is captured in the first-order trapped prior, and the second-order trapped prior functions as a mechanism to obfuscate that you’re trying to “not get it”.

There are many parallels between active blind spots and lies – they both spread and grow; their spreading and growing can both lead to outgrowths that have “lives of their own” disconnected from the larger whole from which they originated; and they’re both predicated on second-order falsehoods that “double down” on first-order falsehoods (a lie involves both a false assertion X and the second-order false assertion “the assertion X is true”, the latter of which distinguishes a lie from something false said by mistake). In some sense, an active blind spot is a lie, with the first-order falsehood being a perceptual misrepresentation (like the veteran “mishearing” the loud car as a gunshot) rather than a verbal misrepresentation.

I think it can get arbitrarily difficult to recognize when you’ve got active blind spots, especially when your meta-epistemology (i.e., how you discern where your epistemology is limited) might have active blind spots baked into it from before you developed episodic memory, which I’ll describe later in this post.

Inner work ≈ the systematic addressing of trapped priors

For me, the concept of “inner work” largely refers to the systematic addressing of trapped priors, with the help of tools like therapy, psychedelics, and meditation – all of which Scott Alexander explicitly mentioned as potential tools for addressing trapped priors (see the highlighted section here). I’ve found inner work particularly valuable for noticing and addressing my own active blind spots, which has led to vastly improved relationships with family, romantic partners, colleagues, and friends, largely because I’ve gotten much better at taking responsibility for my contributions to relational conflicts.

I think a lot of modern-day cults (e.g. Scientology, NXIVM) were so persuasive because their leaders were able to guide people through certain forms of inner work, producing large positive psychological effects that people hadn’t previously conceived of as even being possible.

There are major risks involved in going deep into inner work. If one goes deep enough, it can amount to a massive refactor of “the codebase of one’s mind”, all the while one tries to continue living their life. Just as massively refactoring a product’s codebase risks breaking the product (e.g. because spaghetti code that was previously sufficient to get you by can no longer function without getting totally rewritten), refactoring the codebase of your mind can “break” your ability to perform a bunch of functions that had previously come easily.

A commonly reported example is people switching away from coercion as a source of motivation, and then being less capable of producing output, at least for a while (like publishing on the internet, in my case 😅). In more extreme cases, people may lose the ability to hold down jobs, or may get stuck in depressive episodes.

Because of the risks involved, I think going deep into inner work is best done with the support of trustworthy peers and mentors. Cults often purport to offer both, but frequently end up guiding people’s inner work in ways that exploit and abuse them.

This naturally invites the question of how to find ethical and trustworthy traditions of inner work. I will now describe a formative experience I had that led me to seriously entertain the hypothesis that religious mystical traditions fit the bill.

Religious mystical traditions as time-tested traditions of inner work?

My entire worldview got turned upside-down the first time I experienced the healing of a trauma from infancy. It was late 2018, and I was in San Francisco, having my third or fourth session with a sexological bodyworker[5] recommended to me by someone in the rationalist community.[6] The experience started with me saying that I’d felt very small and lonely and that I’d wanted to curl up into a little ball. To my shock, my bodyworker suggested that I do exactly that. She proceeded to sit next to me, wrap her arms around me as if I were a baby, rock me, and tell me that everything would be okay. I suddenly had a distinct somatic memory of being a baby (when I recall memories of kindergarten, there’s a corresponding somatic sense of being short and having tiny limbs; with the activation of this memory, I had a body-sense of being extremely tiny and having very tiny limbs).[7] I found myself wailing into her arms as she rocked me back and forth, feeling the release of a weight I’d been carrying on my shoulders my whole life, one that I’d never had any conscious awareness or recollection of carrying.

When I sat up, my moment-to-moment experience of reality was radically different. I could suddenly feel my body more fully, and immediately thereafter understood what people meant when they told me that I was constantly “up in my head”. My very conception of what conscious experience could be expanded, since all my prior conceptions of conscious experience had involved this weight on my shoulders, for as long as I’d had episodic memory.

I was hungry for ways to account for this experience. I felt like I had just been graced with a profound and bizarre experience, with enormous philosophical implications, that very few people even recognize exist. It seemed obviously relevant for our attempts to understand personal identity and human values that our senses of who we are and what we value might be distorted by active blind spots rooted in experiences from before we’d developed episodic memory. I had also been pondering the difficulty of metaphilosophy in the context of AI alignment, and it seemed obviously relevant for metaphilosophy that people’s philosophical intuitions could get distorted by preverbal trapped priors, and therefore that humanity’s understanding of metaphilosophy might be bottlenecked by an awareness of preverbal trapped priors.

For the first time, it seemed plausible to me that the millennia-old questions about moral philosophy[8] might only have seemed intractable because most of the people thinking about them didn’t know about the existence of preverbal trapped priors. This led me to become very curious about the worldviews held by people who were familiar with preverbal trapped priors. Every person I’d trusted who’d recognized this experience when I described it to them (including the bodyworker who facilitated this experience, some Circling facilitators, and a Buddhist meditation coach[9]) had done lots of inner work themselves, had received significant guidance from religious and spiritual traditions, and had broad convergences among their worldviews that also seemed consistent with the commonalities between the major world religions.

I was pretty sure all these people I'd trusted were on to something, which was what led me to start seriously considering the hypothesis that the major world religions implicitly claim to have solutions to the big problems of moral philosophy because they actually once did.[10] (WTF, RIGHT???) To be more precise, I’d started to seriously consider the hypothesis that:

  • people who go deep enough exploring inner work “without going off the rails” tend to notice subtle psychological truths that hold the keys to solving the big problems of moral philosophy
  • humanity has implicitly stumbled upon the solutions to the big problems of moral philosophy many times over, and whenever this happens, the solutions typically get packaged in some sort of religious tradition
  • the reason this is not obvious is that religious memes tend to mutate in ways that select for persuasiveness to the masses rather than faithfulness to the original psychological truths, which is why they suck so much in all the ways LessWrongers know they suck

The more deeply I explored religions, and the deeper I went down my inner work journey, the more probable my hypothesis came to seem. I’ve come to believe that the mystical traditions of the major world religions are still relatively faithful to these core psychological truths, and that this is why there are broad convergences in their understandings of the human psyche, the nature of reality,[11] their prescriptions for living life well, and their approaches toward inner work.[12] I think these traditions, whose areas of convergence could together be referred to as the perennial philosophy, are trustworthy insofar as they constitute humanity’s most time-tested traditions of inner work.

The next post will go into further detail about my interpretations of some central claims of the perennial philosophy.

 

  1. ^

    I have a number of substantial disagreements with Sam Harris about how to think about religion, and in general think he interprets religious claims in overly uncharitable ways (that nevertheless seem understandable and defensible to me). I do appreciate the clarity and no-bullshit attitude he brings toward his interpretations of spirituality, though, and wish more people adopted an analogous stance when sifting through spiritual claims.

  2. ^

    Scott says the more official predictive processing term for this is “canalization”. I think this is mostly correct, with one caveat – canalization doesn’t necessarily imply maladaptiveness, whereas I think “trapped priors” imply a form of canalization that prevents the consideration of more appropriate alternative beliefs. In other words, I think someone’s belief can only be judged as trapped relative to an alternative belief that’s more truthful and more adaptive.

    By analogy, there’s a trope that trauma healing is a first-world concern, because “trauma responses” for those in the first world may just be effective adaptations for those in the third world. It might make perfect sense for someone growing up hungry in the third world to hoard food and money, because starvation is always a real risk. It’s only if they move to a first-world country where they will clearly never again be at risk of starvation, yet continue to hoard food and money as though starvation remains a constant risk, that it would make sense to consider this implicit anticipation of starvation a trapped prior.

    Often, it’s clear from the context what the superior alternative belief is – for example, a veteran hearing the sound of a loud car as a gunshot would obviously do better hearing it as a car than as a gunshot. But I think the concept of “trapped prior” can get slippery or confusing sometimes if this contextuality isn’t made explicit, so I’m making an explicit note of it here.

  3. ^

    Renshin Lauren Lee notes that Buddhism could be thought of as a religion based on letting go of all trapped priors – indeed, of all priors, period. Renshin also notes that this doesn't capture all of Buddhism, since it's also about compassion and ethics, but that Buddhism does make the radical claim that letting go of all priors is critical for ethics / compassion / happiness / living a good life.

  4. ^

    I will mention that it’s not obvious to me that the distinction between active and passive blind spots is always as clean-cut as I’m presenting it to be, and that I might be oversimplifying things a bit.

  5. ^

    Her name is Kai Wu.

  6. ^

    Thanks for changing my life, Tilia!

  7. ^

    People often express skepticism that I can actually access such a memory, and I think this is partly because the thing I mean by “memory” here is different from what most people imagine by “memory”. In particular, it’s more like an emotional memory than it is an episodic memory, and the experience is more somatic and phenomenological than it is visual or verbal. To further illustrate – if a dog bit me when I was a toddler, I might have no explicit recollection of the event, but my fight-or-flight response might still activate in the presence of dogs. If I were to do exposure therapy with dogs, I would consider the somatic experiences of fear I feel in the presence of these dogs to be a form of “memory access”. As I continue titrating into this fear, I might even feel activation around the flesh where I’d gotten bitten, without necessarily any episodic recollection of the event. These are the kinds of “memory access” that I’d experienced in the bodywork session.

  8. ^

    The linked excerpt does not explicitly mention moral philosophy per se, but I consider the subjects of the excerpt to be substantially about moral philosophy.

  9. ^

    When I described my experience to Michael Taft, he said something like “Infant traumas? That’s old news, Alex. Buddhists have known about this for thousands of years. They didn’t have a concept of trauma, so they called it ‘evil spirits leaving the body’, but this is really what they were referring to.”

  10. ^

    As a concrete illustration for how this might not be totally crazy, I think metaethics is largely bottlenecked on the question “where do we draw the boundaries around the selves that are alleged to be moral patients?” and Buddhism has a lot of insight into personal identity and the nature of self – including that our conceptions of ourselves are distorted by preverbal trapped priors.

  11. ^

    Truths about psychology can bleed into truths about the nature of reality. This might be counterintuitive, because truths about psychology ostensibly concern our maps of reality, whereas truths about reality concern reality itself. But some of these psychological truths take the form “most of our maps of reality are biased in some particular way, leading our conceptions of reality to also be biased in that particular way; if we correct these biases in our best guesses of what reality is actually like, we find that reality might actually be very different from what we’d initially thought”.

  12. ^

    I often employ an analogy with geometry, which a bunch of civilizations figured out (semi-)independently. The civilizations didn’t prove the exact same theorems, some civilizations figured out way more than others, and some civilizations got some important details wrong (e.g. the Babylonians thought π = 3.125), but there was nevertheless still a shared thing they were all trying to get at.


What I don't understand is why there should be a link between trapped priors and moral philosophy.

I mean, if moral realism was correct, i.e. if moral tenets such as "don't eat pork", "don't have sex with your sister", or "avoid killing sentient beings" had a universal truth value for all beings capable of moral behavior, then one might argue that the reason why people's ethics differ is that they have trapped priors which prevent them from recognizing these universal truths.

This might be my trapped priors talking, but I am a non-cognitivist. I simply believe that assigning truth values to moral sentences such as "killing is wrong" is pointless, and they are better parsed as prescriptive sentences such as "don't kill" or "boo on killing". 

In my view, moral codes are intrinsically subjective. There is no factual disagreement between Harry and Professor Quirrell which they could hope to overcome through empiricism, they simply have different utility functions.

--

My second point is that if moral realism was true, and one of the key roles of religion was to free people from trapped priors so they could recognize these universal moral truths, then at least during the founding of religions, we should see some evidence of higher moral standards before they invariably mutate into institutions devoid of moral truths. I would argue that either our commonly accepted humanitarian moral values are all wrong, or this mutation process happened almost instantly:

  • Whatever Jesus thought about gender equality when he achieved moral enlightenment, Paul had his own ideas a few decades later. 
  • Mohammed was clearly not opposed to offensive warfare.
  • Martin Luther evidently believed that serfs should not rebel against their lords. 

On the other hand, instances where religions did advocate tenets compatible with humanitarianism, such as Christian abolitionism, do not seem to correspond to strong spiritualism. Was Pope Benedict XIV condemning the slave trade because he was more spiritual (and thus in touch with the universal moral truth) than his predecessors who had endorsed it?

--

My last point is that especially with regard to relational conflicts, our map not corresponding to the territory might often not be a bug, but a feature. Per Hanson, we deceive ourselves so that we can better deceive others. Evolution has not shaped our brains to be objective cognitive engines. In some cases, objective cognition is advantageous -- if you are alone hunting a rabbit, no amount of self-deception will fill your stomach -- but in any social situation, expect evolution to put its thumb on the scales of your impartial judgment. Arguing that your son should become the new chieftain because he is the best hunter and strongest warrior is much more effective than arguing for that simply because he is your son -- and the best way to argue that is to believe it, no matter if it is objectively true.

The adulterer, the slave owner and the wartime rapist all have solid evolutionary reasons to engage in behaviors most of us might find immoral. I think their moral blind spots are likely not caused by trapped priors, like an exaggerated fear of dogs is. Also, I have no reason to believe that I don't have similar moral blind spots hard-wired into my brain by evolution.

I would bet that most of the serious roadblocks to a true moral theory (if such a thing existed) are of that kind, instead of being maladaptive trapped priors. Thus, even if religion and spirituality are effective at overcoming maladaptive trapped priors, I don't see how they would bring us closer to moral cognition.

The adulterer, the slave owner and the wartime rapist all have solid evolutionary reasons to engage in behaviors most of us might find immoral. I think their moral blind spots are likely not caused by trapped priors, like an exaggerated fear of dogs is.

I don't think the evopsych and trapped-prior views are incompatible. A selection pressure towards immoral behavior could select for genes/memes that tend to result in certain kinds of trapped prior.

Regarding your second point, I'm leaving this comment as a placeholder to indicate my intention to give a proper response at some point. My views here have some subtlety that I want to make sure I unpack correctly, and it's getting late here!

In response to your third point, I want to echo ABlue's comment about the compatibility of the trapped prior view and the evopsych view. I also want to emphasize that my usage of "trapped prior" includes genetically pre-specified priors, like a fear of snakes, which I think can be overridden.

In any case, I don't see why priors that predispose us to e.g. adultery couldn't be similarly overridden. I wonder if our main source of disagreement has to do with the feasibility of overriding "hard-wired" evolutionary priors?

In response to your first point, I think of moral codes as being contextual more than I think of them as being subjective, but I do think of them as fundamentally being about pragmatism ("let's all agree to coordinate in ABC way to solve PQR problem in XYZ environment, and socially punish people who aren't willing to do so"). I also think religions often make the mistake of generalizing moral codes beyond the contexts in which they arose as helpful adaptations. 

I think of decision theory as being the basis for morality -- see e.g. Critch's take here and Richard Ngo's take here. I evaluate how ethical people are based on how good they are at paying causal costs for larger acausal gains. 

I also suspect something along the lines of "Many (most?) great spiritual leaders were making a good-faith effort to understand the same ground truth with the same psychological equipment and got significantly farther than most normal people do." But in order for that to be plausible, you would need a reason why the almost-truths they found are so goddamn antimemetic that the most studied and followed people in history weren't able to make them stick. Some of the selection pressure surely comes down to social dynamics. I'd like to think that people who have grazed some great Truth are less likely to torture and kill infidels than someone who thinks they know a great truth. Cognitive blind spots could definitely explain things, though.

The problem is, the same thing that would make blind spots good at curbing the spread of enlightenment also makes them tricky to debate as a mechanism for it. They're so slippery that until you've gotten past one yourself it's hard to believe they exist (especially when the phenomenal experience of knowing-something-that-was-once-utterly-unknowable can also seemingly be explained by developing a delusion). They're also hard to falsify. What you call active blind spots are a bit easier to work with; I think most people can accept the idea of something like "a truth you're afraid to confront" even if they haven't experienced such a thing themselves (or are afraid to confront the fact that they have).

I look forward to reading your next post(s) as well as this site's reaction to them.

But in order for that to be plausible, you would need a reason why the almost-truths they found are so goddamn antimemetic that the most studied and followed people in history weren't able to make them stick.

A few thoughts: 

  1. I think many of the truths do stick (like "it's never too late to repent for your misdeeds"), but end up getting wrapped up in a bunch of garbage. 
  2. The geeks, mops, and sociopaths model feels very relevant, with the great spiritual leaders / people who were serious about doing inner work being the geeks. 
  3. In some sense, these truths are fundamentally about beating Moloch, and so long as Moloch is in power, Moloch will naturally find ways to subvert them. 

They're so slippery that until you've gotten past one yourself it's hard to believe they exist (especially when the phenomenal experience of knowing-something-that-was-once-utterly-unknowable can also seemingly be explained by developing a delusion).

YES. I think this is extraordinarily well-articulated. 

My own story is a little different, but maybe not too different.

I wrote some of it a while ago in this post. I don't know if I totally endorse the way I framed it there, so let me try again.

For basically as long as I can remember, my moment-to-moment experience of the world sucked. But of course when your every experience feels net negative, you adapt and learn to live with it. I also have the kind of mind that likes to understand things and won't rest if it doesn't understand the mechanism by which something works, so I regularly turned this on myself. I was constantly dissatisfied with everything, and just when I'd think I'd nailed down why, it would turn out I had missed something huge and had to start over again.

Eventually this led to some moments of insight when I realized just how trapped by my own ontology I had become, and then found a way through to a new way of seeing the world. These happened almost instantly, like a dam breaking and releasing all the realizations that had been held back.

This led me to positive psychology, because I noticed that sometimes I could make my life better, and eventually led me to realize that religions weren't totally full of bunk, despite my having been a life-long atheist. I'm not saying they're right about the supernatural—as best I can tell, those claims are just false if interpreted straightforwardly—but I am saying I discovered that one of the things religions try to do is tell you how to live a happy life, and some do a better job of teaching you to do this than others.

To skip ahead, that's what led me to Buddhism, Zen, and eventually practicing enough that my moment-to-moment experience flipped. Now everything is always deeply okay, even if in a relative sense it's not okay and needs to change, and it was thanks to taking all my skills as a rationalist and then using them with teachings from religion that I found my way through.

Thanks a lot for sharing your experience! I would be very curious for you to further elaborate on this part: 

Eventually this led to some moments of insight when I realized just how trapped by my own ontology I had become, and then found a way through to a new way of seeing the world. These happened almost instantly, like a dam breaking and releasing all the realizations that had been held back.

Sure. This happened several times to me, each of which I interpret as a transition from one developmental level to the next, e.g. Kegan 3 -> 4 -> 5 -> Cook-Greuter 5/6 -> 6. Might help to talk about just one of these transitions.

In the Summer of 2015 I was thinking a lot about philosophy and trying to make sense of the world and kept noticing that, no matter what I did, I'd always run into some kind of hidden assumption that acted as a free variable in my thinking that was not constrained by anything and thus couldn't be justified. I had been going in circles around this for a couple years at this point. I was also, coincidentally, trying to figure out how to manage the work of a growing engineering team and struggling because, to me, other people looked like black boxes that I only kind of understood.

In the midst of this I read The e-Myth on the recommendation of a coworker, and in the middle of it there was this line about how effective managers are neither always high nor low status, but change how they act based on the situation, and combined with a lot of other reading I was doing this caused a lot of things to click into place.

The phenomenology of it was the same as every time I've had one of these big insights. It felt like my mind stopped for several seconds while I hung out in an empty state, and then I came back online with a deeper understanding of the world. In this case, it was something like "I can believe anything I want", in the sense that there really were some unjustified assumptions being made in my thinking, this was unavoidable, and it was okay because there was no other choice. All I could do was pick the assumptions most likely to give me a good map of the world.

It then took a couple years to really integrate this insight, and it wasn't until 2017 that I really started to grapple with the problems of the next one I would have.

In the midst of this I read The e-Myth on the recommendation of a coworker, and in the middle of it there was this line about how effective managers are neither always high nor low status, but change how they act based on the situation, and combined with a lot of other reading I was doing this caused a lot of things to click into place.

I'm interested in the object level of "what are some nuts and bolts of how the high/low status manager thing worked, and how it applied", and maybe a bit more meta-but-still-object-ish level of how that insight integrated with the rest of your worldview. (or, if that second part seems wrongly phrased... idk substitute the better question you think I should have asked? lol)

Sure. I'll do my best to give some more details. This is all from memory, and it's been a while, so I may end up giving ahistorical answers that mix up the timeline. Apologies in advance for any confusion this causes. If you have more questions or I'm not really getting at what you want to know, please follow up and I'll try again.

First, let me give a little extra context on the status thing. I had also not long before read Impro, which has a big section on status games, and that definitely informed how The e-Myth hit me.

So, there's this way in which managers play high and low. When managers play high they project high confidence. Sometimes this is needed, like when you need to motivate an employee to work on something. Sometimes it's counterproductive, like when you need to learn from an employee. Playing too high status can make it hard for you to listen, and hard for the person you need to hear from to feel listened to and thus be encouraged to tell you what you need to know. Think of the know-it-all manager who can do your job better than you, or the aloof manager uninterested in the details.

Playing low status is often a problem for managers, and not being able to play high is one thing that keeps some people out of management. No one wants to follow a low status leader. A manager doesn't necessarily need to be high status in the wider world, but they at least need to be able to claim higher status than their employees if those employees are going to want to do what they say.

The trouble is, sometimes managers need to play high playing low, like when a manager listens to their employee to understand the problems they are facing in their work, and actually listens rather than immediately dismissing the concerns or rounding them off to something they've dealt with before. A key technique can be literally lowering oneself, like crouching down to be at eye level with someone sitting at a desk, as this non-verbally makes it clear that the employee is now in the driver's seat and the manager is along for the ride.

Effective managers know how to adjust their status when needed. The best are naturals who never had to be taught. Second best are those who figure out the mechanics and can deploy intentional status-play changes to get desired outcomes. I'm definitely not in the first camp. To any extent I'm successful as a manager, it's because I'm in the second.

Ineffective managers, by contrast, just don't understand any of this. They typically play high all the time, even at inappropriate times. That will keep a manager employed, but they'll likely be in the bottom quartile of manager quality, and will only succeed in organizations where little understanding and adaptation is needed. The worst is low playing high status (think Michael Scott in The Office). You only stay a manager if you are low playing high due to organizational dysfunction.

Okay, so all that out of the way, the way this worked for me was mostly in figuring out how to play high straight. I grew up with the idea that I was a smart person (because I was in fact more intelligent than lots of people around me, even if I had less experience and made mistakes due to lack of knowledge and wisdom). The archetypal smart person that most closely matched who I seemed to be was the awkward professor type who is a genius but also struggles to function. So I leaned into being that type of person and eschewed feedback that I should be different, because it wasn't in line with the type of person I was trying to be.

This meant my default status mode was high playing low playing high, by which I mean I saw myself as a high status person who played low, not because he wanted to, but because the world didn't recognize his genius, but who was going to press ahead and precociously aim for high status anyway. Getting into leadership, this kind of worked. Like I had good ideas, and I could convince people to follow them because they'd go "well, I don't like the vibe, but he's smart and been right before so let's try it", but it didn't always work and I found that frustrating.

At the time I didn't really understand what I was doing, though. What I realized, in part, after this particular insight, was that I could just play the status I wanted straightforwardly. Playing multilayer status games is a defense mechanism, because if any one layer of the status play is challenged, you can fall back one more layer and defend from there. If you play straight, you're immediately up against a challenge to prove you really are what you say you are. So integration looked like peeling back the layers and untangling my behaviors to be more straightforward.

I can't say I totally figured it out from just this one insight. There was more going on that later insights would help me untangle. And I still struggle with it despite having a thorough theory and lots of experience putting it into play. My model of myself is that my brain literally runs slow, in that messages seem to propagate across it less quickly than they do for other people, as suggested by my relatively poor reaction times (+2 sd), and this makes it difficult for me to do the high-bandwidth real-time processing of information that is required in social settings like work. All this is to say that I've had to dramatically over-solve almost every problem in my life to achieve normalcy, but I expect most people wouldn't need so much as I have. Make of this what you will when thinking about what this means for me to have integrated insights: I can't rely on S2 thinking to help me in the moment; I have to do things with S1 or not at all (or rather with a significant async time delay).

Thanks!

I don't have a very substantive response, but wanted to say:

A key technique can be literally lowering oneself, like crouching down to be at eye level of someone sitting at a desk, as this non-verbally makes it clear that the employee is now in the driver seat and the manager is along for the ride.

This is something I've intentionally done more of lately (not in a management capacity, but in other contexts), inspired by making yourself small. It's seemed to work reasonably well but it's hard to get a clear feedback signal on how it's coming across. 

I'm curious about your thoughts on this notion of perennial philosophy and convergence of beliefs. One interpretation that I have of perennial philosophy is purely empirical: imagine that we have two "belief systems". We could define a belief system as a set of statements about the way the world works and valuations of world states (i.e. statements like "if X then Y could happen" and "Z is good to have"). You can probably formalize it some other way, but I think this is a reasonable starter pack to keep it simple. (You can also imagine further formalizing it by using numbers or lattices for values and probabilities and some well-defined FSM to model parts of the world.)

We could say that two religions have converged if they share a lot of features, by which I mean that, for some definition of a feature, the feature is present in both belief systems. We can define a feature in many ways, but for our simple thought experiment it can effectively be a state or a relation between states in the two worldviews. For example, we could imagine that a feature is a function of states and their values/causal relations such that under a mapping between the two systems it remains unchanged (i.e. there is some notion of this mapping being like an isomorphism on the projection of the set via the function). For instance, in one belief system you might have some sort of "god" character that is somehow the cause of many things. The function here could be "(int(god is cause of x1) + int(god is cause of x2) + ...) / num_objects". If we map common objects (this spoon) to themselves in the other system (still the spoon) and god to god, we will see that in both systems the function representing how causal god is remains close to 1, and so we may say that both systems have a notion of a "god", and therefore that there has been some degree of convergence in the "having a god" stuff between the two systems. (A code sketch of this setup follows below.)
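Here is a minimal code sketch of the setup just described. All names, the tolerance, and the toy data are hypothetical, chosen purely to pin down one possible reading of the proposal: a belief system as a set of causal claims, and a "feature" as a function whose value should be roughly invariant under a mapping between the two systems.

```python
# Sketch of the formalization above (all names and data are hypothetical):
# a belief system is a set of (cause, effect) claims, and a "feature" is a
# function whose value should be roughly preserved under a mapping between
# two systems -- here, the "how causal is god" function from the comment.

from typing import Dict, Set, Tuple

BeliefSystem = Set[Tuple[str, str]]  # set of (cause, effect) claims

def causal_centrality(system: BeliefSystem, entity: str, objects: Set[str]) -> float:
    """The comment's example feature:
    (int(god is cause of x1) + int(god is cause of x2) + ...) / num_objects."""
    return sum((entity, obj) in system for obj in objects) / len(objects)

def feature_converges(sys_a: BeliefSystem, sys_b: BeliefSystem,
                      mapping: Dict[str, str], entity_a: str,
                      objects_a: Set[str], tolerance: float = 0.1) -> bool:
    """Convergence on this feature: its value is invariant (within tolerance)
    under the mapping from system A's entities to system B's."""
    value_a = causal_centrality(sys_a, entity_a, objects_a)
    value_b = causal_centrality(sys_b, mapping[entity_a],
                                {mapping[o] for o in objects_a})
    return abs(value_a - value_b) <= tolerance

# Two toy belief systems that both make a god-like entity causally central:
sys_a = {("god", "rain"), ("god", "harvest"), ("god", "birth")}
sys_b = {("deus", "pluvia"), ("deus", "messis"), ("deus", "partus")}
mapping = {"god": "deus", "rain": "pluvia", "harvest": "messis", "birth": "partus"}

print(feature_converges(sys_a, sys_b, mapping, "god", {"rain", "harvest", "birth"}))  # True
```

Even this toy version makes the worry visible: the verdict depends entirely on which mapping and which feature functions you pick, so "convergence" is only as meaningful as the choices behind it.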

So now, with all this formal BS out of the way (which I had to do, because it highlights what is missing), the question is clear: under some reasonable such definition of what convergence means, how do you decide whether two religions have converged? The vibe I get from the perennial philosophy believers I have spoken to so far is that "you have to go through the journey to understand", and generally it appears to be a sort of dispositional convergence, at least on face value. (Though I do not observe people of very different religions, who claim to have converged, living together for a long time, meaning that it is not verifiable whether the dispositions are truly something we could call converged.) Of course, it may be possible to find mappings that claim that two belief systems have converged, or have not, when the opposite is a more honest assessment.

Obviously no one is going to come out here and create a mathematical definition and just "be right" (I don't think that's even a fair thing to consider possible), but I do not particularly like making such assertions purely "on vibes". Often people will say that they are "spiritual" and that "spirituality" helped them overcome some psychological challenge or who knows what, but what does "spiritual" mean here? Often it's associated with some belief system that we would, as laymen, call religious or spiritual (i.e. something in the enumerable list of Christianity and its sub-branches, Buddhism and its sub-branches, etc.), but who is to say that the truer cause of the change of psyche was not just some part of the phenomenon that person experienced, which happened to be delivered by whichever spiritual system was present at that time and place? It seems compelling to me to want to decouple these "core truths" from the religions that hold them, so as to present them in a more neutral way, since in the alternative world where you must "go through the journey" of spirituality via some specific religion, you cannot know beforehand that you won't effectively be brainwashed -- and you cannot even know afterwards... you can only get faint hints at it during the process.

So this is not to say that anyone is getting brainwashed, or that anything is good or bad, or that anything should be embraced or not. I'm just saying that from an outside perspective, it is not verifiable whether religions actually converge without diving into this stuff. However, it is also not verifiable whether diving in is actually good, and it's not verifiable whether it will even be verifiable afterwards. Maybe I'm stumbling into some core metaphysical whirlwind of "you cannot know anything", but I do truly believe that a more systematic exposition of how we should interpret spirituality, trapped priors, convergence, and the like is possible, and that it would enable more productive discussion.

PS: I think you've touched on something tangential in the statement that you should do this with trusted people. That, however, is trying to bootstrap a resistance to manipulative misappropriation of spirituality, whereas I would also like more of a logical bootstrapping of the whole notion of spirituality and of ideas like "convergence", so that one can leave the conversation with solid conclusions, knowing their limitations, and with a higher level of actionability.

PPS: I feel like treating a belief system, like "rationality", as a machine/tool (something which has a certain reach and certain limitations, and that usually behaves as expected in most situations but might have some bugs) is a good way to go. This will make it easier to decouple rationality from, say, spiritual traditions. At each point in time and space you can basically decide by common sense which of these two machines/tools is best to apply. Each tool can hopefully be shown to be good for some cases, so most decision-making happens at the routing level: which tool to use. If you understand the tool from a third-person point of view, there is less of a tendency to rely on it in the wrong cases purely on dogma.

And not mutually exclusive with convergence due to exploiting the same flaws.

I'm not sure what you mean by that, but the claim "many interpretations of religious mystical traditions converge because they exploit the same human cognitive flaws" seems plausible to me. I mostly don't find such interpretations interesting, and don't think I'm interpreting religious mystical traditions in such a way. 

I'm saying it's difficult to distinguish causation.

This is curious. The usual move is atheism using psychology to discredit theism. Roles are being reversed here with trapped priors, the suggestion being that some veritas are being obscured by kicking religion out of our system. I half-agree, since I consider this demonstration non finito.

As for philosophia perennis, I'd say it's a correlation-implies-causation fallacy. It looks as though the evident convergence of religions on moral issues is due not to the mystical and unprovable elements therein but to common rational aspects present in most/all religions. To the extent this is true, religion may not claim moral territory.

That said, revelatory moral knowledge is a fascinating subject.