Sequence index: Living Luminously
Previously in sequence: Highlights and Shadows
Next in sequence: Lampshading
Pretending to be multiple agents is a useful way to represent your psychology and uncover hidden complexities.
You may find your understanding of this post significantly improved if you read the sixth story from Seven Shiny Stories.
When grappling with the complex web of traits and patterns that is you, you are reasonably likely to find yourself less than completely uniform. You might have several competing perspectives, possess the ability to code-switch between different styles of thought, or even believe outright contradictions. It's bound to make it harder to think about yourself when you find this kind of convolution.
Unfortunately, we don't have the vocabulary or even the mental architecture to easily think of or describe ourselves (nor other people) as containing such multitudes. The closest we come in typical conversation more resembles descriptions of superficial, vague ambivalence ("I'm sorta happy about it, but kind of sad at the same time! Weird!") than the sort of deep-level muddle and conflict that can occupy a brain. The models of the human psyche that have come closest to approximating this mess are what I call "multi-agent models". (Note: I have no idea how what I am about to describe interacts with actual psychiatric conditions involving multiple personalities, voices in one's head, or other potentially similar-sounding phenomena. I describe multi-agent models as employed by psychiatrically singular persons.)
Multi-agent models have been around for a long time: in Plato's Republic, he talks about appetite (itself imperfectly self-consistent), spirit, and reason, forming a tripartite soul. He discusses their functions as though each has its own agency and could perceive, desire, plan, and act given the chance (plus the possibility of one forcing down the other two to rule the soul unopposed). Not too far off in structure is the Freudian id/superego/ego model. The notion of the multi-agent self even appears in fiction (warning: TV Tropes). It appears to be a surprisingly prevalent and natural method for conceptualizing the complicated mind of the average human being. Of course, talking about it as something to do rather than as a way to push your psychological theories or your notion of the ideal city structure or a dramatization of a moral conflict makes you sound like an insane person. Bear with me - I have data on the usefulness of the practice from more than one outside source.
There is no reason to limit yourself to traditional multi-agent models endorsed by dead philosophers, psychologists, or cartoonists if you find you break down more naturally along some other arrangement. You can have two of you, or five, or twelve. (More than you can keep track of and differentiate is not a recommended strategy - if you're very tempted to go with this many it may be a sign of something unhealthful going on. If a group of them form a reliable coalition it may be best to fold them back into each other and call them one sub-agent, not several.) Stick with a core ensemble or encourage brief cameos of peripheral aspects. Name them descriptively or after structures of the brain or for the colors of the rainbow, as long as you can tell them apart. Talk to yourselves aloud or in writing (one possible written format is sketched after the list below), or just think through the interaction if you think you'll get enough out of it that way. Some examples of things that could get their own sub-agents include:
- Desires or clusters of desires, be they complex and lofty ("desire for the well-being of all living things") or simple and reptilian ("desire for cake")
- "Inner child" or similar role-like groupings of traits ("professional me", "family-oriented me", "hobbies me")
- High-order dispositions and principles ("conscience", "neuroticism", "sense of justice")
- Opinions or viewpoints, either specific to a situation or general trends ("optimism", "outside view", "I should do X")
- Initially unspecified, gradually-personality-developing sub-agents, if no obvious ones present themselves (named for something less suggestive like cardinal directions or two possible nicknames derived from your name)
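If you do keep your dialogues in writing, it can help to settle on a consistent transcript format before you start. Below is a minimal toy sketch in Python of one such format - purely illustrative, not anything the post prescribes; the SubAgent and Dialogue classes and the example ensemble are invented here:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class SubAgent:
    """One voice in the ensemble: anything you can name and tell apart."""
    name: str         # e.g. "Desire For Cake", "North", "Conscience"
    description: str  # the desire, role, or disposition this voice stands for


@dataclass
class Dialogue:
    """A written multi-agent exercise: who said what, in order."""
    topic: str
    lines: list[tuple[str, str]] = field(default_factory=list)

    def say(self, speaker: SubAgent, utterance: str) -> None:
        self.lines.append((speaker.name, utterance))

    def transcript(self) -> str:
        header = f"{date.today()} - {self.topic}"
        body = "\n".join(f"{who}: {what}" for who, what in self.lines)
        return header + "\n" + body


# A hypothetical two-agent ensemble, echoing the post's running example.
cake = SubAgent("Desire For Cake", "simple, reptilian desire for cake")
restraint = SubAgent("Self-Restraint", "high-order disposition toward moderation")

d = Dialogue(topic="dessert on next week's date")
d.say(cake, "I have a desperate yen for tiramisu.")
d.say(restraint, "Noted. Let's negotiate instead of shouting each other down.")
print(d.transcript())
```

The point of the structure is only that every line is attributed to a named voice, which makes it harder to let one sub-agent silently narrate the whole exchange.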
By priors picked up from descriptions of various people trying this, you're reasonably likely to identify one of your sub-agents as "you". In fact, you may identify solely with one sub-agent - it's very hard to shake the monolithic observer experience. This is fine, especially if the "you" sub-agent is the one that endorses or repudiates, but don't let the endorsement and repudiation get out of hand during multi-agent exercises. You have to deal with all of your sub-agents, not just the one(s) you like best, and sub-agents have been known to exhibit manipulative and even vengeful behaviors once given voice - e.g. if you represent your desire for cake as a sub-agent, and you have been thwarting your desire for cake for years, you might find that Desire For Cake is pissed off at Self-Restraint and says mean things thereunto. It will not placate Desire For Cake for you to throw your endorsement behind Self-Restraint while Desire For Cake is just trying to talk to you about your desperate yen for tiramisu. Until and unless you understand Desire For Cake well enough to surgically remove it, you need to work with it. Opposing it directly and with normative censure is likely to make it angry and more devious in causing you to eat cake.
A few miscellaneous notes on sub-agents:
Your sub-agents may surprise you far more than you expect to be surprised by... well... yourself, which is part of what makes this exercise so useful. If you consciously steer the entire dialogue you will not get as much out of it - then you're just writing self-insert fanfiction about the workings of your brain, not actually learning about it.
Not all of your sub-agents will be "interested" in every problem, and therefore won't have much of relevance to say at all times. (Desire For Cake probably couldn't care less how you act on your date next week until it's time to order dessert.)
Your sub-agents should not outright lie to each other ("should" in the predictive, not normative, sense - let me know if it turns out yours do), but they may threaten, negotiate, hide, and be genuinely ignorant about themselves.
Your sub-agents may not all communicate effectively. Having a translation sub-agent handy could be useful, if they are having trouble interpreting each other.
(Post your ensemble of subagencies in the comments, to inspire others! Write dialogues between them!)
oops, we didn't notice this comment until now...
hi, we're some of Alicorn's references :)
Allow us to introduce ourselves:
SP: originally this stood for "Sane Peer", back when we thought that we only had 2 subagents, a crazy one and a sane one. Later retconned to stand for "Shell Peer", like in a computer: shell software running on top of a core operating system. Another possible retcon is "Serious Peer". This agent is mostly in charge of rational thought. This agent thought that it needed to make sure that it stayed in complete control, out of fear that if it let any of the other agents take control, we would immediately lose all self-control. But now it's learning how to cooperate with the other agents, and learning when to let them take charge entirely. This agent uses the avatar of a serious-looking, grey, male, anthropomorphic rabbit.
CP: originally this stood for "Crazy Peer", back when we thought that we only had 2 subagents, a crazy one and a sane one. Later retconned to stand for "Core Peer", like in a computer: the core operating system that everything else runs on top of. Another possible retcon is "Cuddly Peer". This agent is mostly in charge of emotions, or anything that isn't directly available to rational introspection. This agent used to be in constant conflict with SP, but now that SP is learning how to cooperate, we are constantly discovering new abilities that we didn't know we had. One example of a useful new ability is the ability to set up triggers that alert us the next time a specific thought or emotion comes up, allowing us to more easily introspect on what caused it, or, if necessary, to prevent it from being acted upon. This agent now spends more time in direct control of our actions than any other agent. This agent uses the avatar of a happy-looking, pink, female, anthropomorphic rabbit. This is the avatar that we now use to represent all of us as a whole agent. (Here is a picture of the pink bunny.)
PP: Pessimistic Peer, the agent that's always looking out for danger, and trying to prevent the other agents from doing anything that seems too dangerous. This agent used to be way too sensitive to danger, and was poorly calibrated, to the point of being blatantly counterproductive. We're working on correcting this. This agent is still very useful for detecting danger, and is also useful as "the voice of pessimism". This agent used to use the avatar of Rincewind, from Discworld. But then we realized that Rincewind's attitude towards danger, and towards life in general, was dangerously unhealthy, and so now this agent uses the avatar of a generic, nervous-looking, male human child.
HP: Happy Peer, the agent that's always seeking happiness, and trying to convince the other agents to allow us to do things that we think would make us happy. This agent previously spent most of its time getting shouted at by SP and PP, and told to shut up, because we considered happiness less important than... other more important goals. We thought that if we didn't allow ourselves to get distracted by seeking pleasure, we would be more effective at achieving our goals. It turns out we were totally wrong about this. Now we're finally letting HP have a chance to come out and play. Though we still need to work on figuring out when it's appropriate to listen to HP's desires, and when we would be better off not listening to them. One useful rule is "no impulse purchases". Wait at least a day before making any major purchases. Or more time, or less, as appropriate. This agent started out with the avatar of Harry Potter, just because that's the first image that the letters HP conjured up, but this didn't really make sense, so now this agent uses the avatar of a generic, happy-looking female human child.
OP: Obsolete Peer. Not really a subagent. This is just the name we use for parts of our brain that sometimes do things that none of the other agents endorse. An example is the "must click on stuff" desire that often caused us to stay up late for no good reason. Another example is the scary shouting voice that the agents used to use to try to scare each other into submission. Also the guilt-generator. This agent uses the avatar of... a pile of obsolete machine parts. The part for the big scary shouty voice is now sealed safely inside a box. We don't want to use that anymore. The guilt-generator is still loose somewhere. Well, there are probably lots of guilt-generators, in lots of places. We're still trying to track them down and seal them safely away.
UM: Utilitarian Module. This module used to be a part of SP, but we recently recognized it as extremely dangerous, and limited it to a module that we can turn off at will. This is currently the main source of our fanaticism. This is the part that is completely dedicated to the cause of utilitarianism - specifically, hedonic total utilitarianism (maximize total pleasure, minimize total pain, and don't bother to keep track of which entity experiences the pleasure or pain). This module strongly advocates the orgasmium shockwave scenario. This module is deeply aware of the extreme importance of maximizing the probability of a positive Singularity. As a result, this module will resist any other agent or module that attempts to do anything that this module suspects might distract us from the goal of trying to maximize the probability of a positive Singularity. We're still trying to figure out what to do about this module. We're starting with showing it how constantly resisting the rest of us is counterproductive to its own goals, and showing it how it would be better off cooperating with us, rather than fighting with us. Similar to the conversations we already had with SP about this same topic.
CM: both Child Module and Christian Module. There is lots of overlap between these, so currently they're both sharing the same module. The module that desperately wants to be a good child, and to be a good christian. This module is now well aware that its reasons for wanting to be a good child, and for wanting to be a good christian, are based entirely on circular logic, but the desires that came from this circular logic are still deeply embedded in our brain. This module used to be terrified of the idea of even suggesting that our parents or god or the church might be anything less than infallible. Recently, this module has mostly overcome this fear, and has gained the ability to accuse our parents, and to accuse god and the church, of being extremely unfair.
AM: both Atheist Module and Altruist Module. There is lots of overlap between these, so currently they're both sharing the same module. We're still undecided about whether this is an actual module; we haven't seen it do much recently.
GM: God Module. Represents the will of the christian god. Almost completely inactive now. Used to have control of the scary shouty voice. Closely allied with CM.
MM: Mommy Module. Represents the will of our mother. Mostly inactive now. Used to have control of a guilt generator. Closely allied with CM.
DM: Daddy Module. Represents the will of our father. Mostly inactive now. Used to have control of the scary shouty voice. Closely allied with CM.
EM: Eliezer Module. Represents the will of Eliezer Yudkowsky, as misinterpreted by our brain. Still kinda active. Closely allied with UM and AM.
So, yeah, that's a brief summary of all of the subagents we've identified so far. We chat amongst ourselves constantly in our journal, and even sometimes in online chats with friends who already know about us. We find it extremely useful for the individual subagents to be able to speak for themselves, with their own voice.
oh, and in case you were wondering, we currently reside in the mind of a male human being, 27 years old. After the Singularity, we are seriously considering the option of creating a separate digital mind for each of these subagents, so that we can each freely pursue our own goals. Though of course this is still wildly speculative.
To paraphrase a quote from a wise man: We Are Solipsist Nation!
I've been continuing to use this technique of giving voices to individual subagents, and I've still been finding it extremely useful.
And there are some new subagents to add to the list:
DP: Dark Peer, the agent that's in charge of anything that feels too dark for any of the other agents to say. This includes lots of self-criticism, and criticism of others. If there's something that we want to say, but are afraid that saying it would be too rude, and we're talking to someone who already knows about our subagents, then DP will go ahead and say what he wanted to say.