That reminds me a bit of PJ Eby's list of ways people sometimes do his RMI technique wrong. (PJ, if you're reading this, would you mind if I posted it? I'm referring to the list from page 55 of TTD.)
That's fine; I've posted a similar list here previously, too.
I know RMI isn't exactly the same as what Alicorn is talking about,
It's sort of the same, in that the same basic mental state applies. It's simply a question of utilization.
My model differs in that I assume there are really only two "parts" to speak of:
The "near" brain, composed of a network of possibly-conflicting interests, and a warehouse of mental/physical motor programs, classified by context and expected effects on important variables (such as SASS-derived variables).
The logical, confabulating, abstract, verbal "far" brain... whose main role sometimes seems to be to try to distract you from actually observing your motivations!
Anyway, the near brain doesn't have a personality - it embodies personalities, and can play whatever role you can remember or imagine. That's why I consider the exercise a waste of time in the general case, even though there are useful ways to do role-playing. If you just play roles, you run the risk of simply confabulating, because your brain can play any role, whether it's related to what you actually do or not.
And it's not so much that it's fanfiction, per se (as it would be if you used only the "far" brain to write the dialogs). What you roleplay is real, in the sense that you are using the same equipment (if you're doing it right) that also plays the role of your "normal" personality! The near brain can play any role you want it to, so you are already corrupting the state of what you're trying to inspect by bringing roles into it in the first place.
IOW, it's a (relative) waste of time to have elaborate dialogs about your internal conflicts, even though there's a very good chance you'll stumble onto insights that will lead to you fixing things, from time to time.
In effect, self-anthropomorphism is like spending time talking to chatbots, when what you need to do is directly inspect their source code and pull out their goal lists.
The things that seem to be "parts" or "personalities" are really just roles that you can play -- like mimicking a close friend or pretending to be Yoda or Darth Vader. You're essentially putting costumes on yourself and acting things out, rather than simply inspecting the raw material these roles are based on.
To put it another way, instead of pretending to be Darth Vader, what you want to be inspecting are the life events of Anakin Skywalker... unpleasant though that may be. ;-) (And even as unpleasant as it may be to watch little Ani's traumas, it's probably safer than asking to have a sit-down with Vader himself...)
So, the point of inner dialoging (IMO) is to identify those interests that are based on outdated attempts to seek SASS (Status, Affiliation, Safety, or Stimulation) in contexts where the desired behavior will not actually bring you those things, so you can surface that and drop the mental rules that link SASS threats to a desired behavior, or SASS rewards to an undesired one.
(That, I guess, would be the alchemy/chemistry distinction that Roko was alluding to previously.)
I agree. I worry that anthropomorphising these conflicting thoughts just strengthens the divide.
I like your comment: "All this has very little to do with actual agency or the workings of akrasia, though, and tends to interfere with the process of a person owning up to the goals that they want to dissociate from. By pretending it's another agency that wants to surf the net, you get to maintain moral superiority... and still hang onto your problem. The goal of virtually any therapy that involves multiple agencies is to integrate them, but the typical person, on getting hold of the metaphor, uses it to maintain the separation."
Sequence index: Living Luminously
Previously in sequence: Highlights and Shadows
Next in sequence: Lampshading
Pretending to be multiple agents is a useful way to represent your psychology and uncover hidden complexities.
You may find your understanding of this post significantly improved if you read the sixth story from Seven Shiny Stories.
When grappling with the complex web of traits and patterns that is you, you are reasonably likely to find yourself less than completely uniform. You might have several competing perspectives, possess the ability to code-switch between different styles of thought, or even believe outright contradictions. Finding this kind of convolution is bound to make it harder to think about yourself.
Unfortunately, we don't have the vocabulary or even the mental architecture to easily think of or describe ourselves (or other people) as containing such multitudes. The closest we come in typical conversation more resembles descriptions of superficial, vague ambivalence ("I'm sorta happy about it, but kind of sad at the same time! Weird!") than the sort of deep-level muddle and conflict that can occupy a brain. The models of the human psyche that have come closest to approximating this mess are what I call "multi-agent models". (Note: I have no idea how what I am about to describe interacts with actual psychiatric conditions involving multiple personalities, voices in one's head, or other potentially similar-sounding phenomena. I describe multi-agent models as employed by psychiatrically singular persons.)
Multi-agent models have been around for a long time: in the Republic, Plato talks about appetite (itself imperfectly self-consistent), spirit, and reason, forming a tripartite soul. He discusses their functions as though each has its own agency and could perceive, desire, plan, and act given the chance (plus the possibility of one forcing down the other two to rule the soul unopposed). Not too far off in structure is the Freudian id/superego/ego model. The notion of the multi-agent self even appears in fiction (warning: TV Tropes). It appears to be a surprisingly prevalent and natural method for conceptualizing the complicated mind of the average human being. Of course, talking about it as something to do rather than as a way to push your psychological theories or your notion of the ideal city structure, or as a dramatization of a moral conflict, makes you sound like an insane person. Bear with me - I have data on the usefulness of the practice from more than one outside source.
There is no reason to limit yourself to traditional multi-agent models endorsed by dead philosophers, psychologists, or cartoonists if you find you break down more naturally along some other arrangement. You can have two of you, or five, or twelve. (More than you can keep track of and differentiate is not a recommended strategy - if you're very tempted to go with this many, it may be a sign of something unhealthful going on. If a group of them forms a reliable coalition, it may be best to fold them back into each other and call them one sub-agent, not several.) Stick with a core ensemble or encourage brief cameos of peripheral aspects. Name them descriptively or after structures of the brain or for the colors of the rainbow, as long as you can tell them apart. Talk to yourselves aloud or in writing, or just think through the interaction if you think you'll get enough out of it that way. Some examples of things that could get their own sub-agents include:
By priors picked up from descriptions of various people trying this, you're reasonably likely to identify one of your sub-agents as "you". In fact, one sub-agent may be solely identified as "you" - it's very hard to shake the monolithic observer experience. This is fine, especially if the "you" sub-agent is the one that endorses or repudiates, but don't let the endorsement and repudiation get out of hand during multi-agent exercises. You have to deal with all of your sub-agents, not just the one(s) you like best, and sub-agents have been known to exhibit manipulative and even vengeful behaviors once given voice - e.g. if you represent your desire for cake as a sub-agent, and you have been thwarting your desire for cake for years, you might find that Desire For Cake is pissed off at Self-Restraint and says mean things thereunto. It will not placate Desire For Cake for you to throw your endorsement behind Self-Restraint while Desire For Cake is just trying to talk to you about your desperate yen for tiramisu. Until and unless you understand Desire For Cake well enough to surgically remove it, you need to work with it. Opposing it directly and with normative censure is likely to make it angry and more devious in causing you to eat cake.
A few miscellaneous notes on sub-agents:
Your sub-agents may surprise you far more than you expect to be surprised by... well... yourself, which is part of what makes this exercise so useful. If you consciously steer the entire dialogue you will not get as much out of it - then you're just writing self-insert fanfiction about the workings of your brain, not actually learning about it.
Not all of your sub-agents will be "interested" in every problem, and therefore won't have much of relevance to say at all times. (Desire For Cake probably couldn't care less how you act on your date next week until it's time to order dessert.)
Your sub-agents should not outright lie to each other ("should" in the predictive, not normative, sense - let me know if it turns out yours do), but they may threaten, negotiate, hide, and be genuinely ignorant about themselves.
Your sub-agents may not all communicate effectively. Having a translation sub-agent handy could be useful, if they are having trouble interpreting each other.
(Post your ensemble of subagencies in the comments, to inspire others! Write dialogues between them!)