Less Wrong is a community blog devoted to refining the art of human rationality.

City of Lights

Post author: Alicorn, 31 March 2010 11:30PM

Sequence index: Living Luminously
Previously in sequence: Highlights and Shadows
Next in Sequence: Lampshading

Pretending to be multiple agents is a useful way to represent your psychology and uncover hidden complexities.

You may find your understanding of this post significantly improved if you read the sixth story from Seven Shiny Stories.

When grappling with the complex web of traits and patterns that is you, you are reasonably likely to find yourself less than completely uniform.  You might have several competing perspectives, possess the ability to code-switch between different styles of thought, or even believe outright contradictions.  Finding this kind of convolution is bound to make it harder to think about yourself.

Unfortunately, we don't have the vocabulary or even the mental architecture to easily think of or describe ourselves (or other people) as containing such multitudes.  The closest we come in typical conversation more resembles descriptions of superficial, vague ambivalence ("I'm sorta happy about it, but kind of sad at the same time!  Weird!") than the sort of deep-level muddle and conflict that can occupy a brain.  The models of the human psyche that have come closest to approximating this mess are what I call "multi-agent models".  (Note: I have no idea how what I am about to describe interacts with actual psychiatric conditions involving multiple personalities, voices in one's head, or other potentially similar-sounding phenomena.  I describe multi-agent models as employed by psychiatrically singular persons.)

Multi-agent models have been around for a long time: Plato, in the Republic, talks about appetite (itself imperfectly self-consistent), spirit, and reason, forming a tripartite soul.  He discusses their functions as though each has its own agency and could perceive, desire, plan, and act given the chance (plus the possibility of one forcing down the other two to rule the soul unopposed).  Not too far off in structure is the Freudian id/superego/ego model.  The notion of the multi-agent self even appears in fiction (warning: TV Tropes).  It appears to be a surprisingly prevalent and natural method for conceptualizing the complicated mind of the average human being.  Of course, talking about it as something to do rather than as a way to push your psychological theories or your notion of the ideal city structure or a dramatization of a moral conflict makes you sound like an insane person.  Bear with me - I have data on the usefulness of the practice from more than one outside source.

There is no reason to limit yourself to traditional multi-agent models endorsed by dead philosophers, psychologists, or cartoonists if you find you break down more naturally along some other arrangement.  You can have two of you, or five, or twelve.  (More than you can keep track of and differentiate is not a recommended strategy - if you're very tempted to go with this many it may be a sign of something unhealthful going on.  If a group of them form a reliable coalition it may be best to fold them back into each other and call them one sub-agent, not several.)  Stick with a core ensemble or encourage brief cameos of peripheral aspects.  Name them descriptively or after structures of the brain or for the colors of the rainbow, as long as you can tell them apart.  Talk to yourselves aloud or in writing, or just think through the interaction if you think you'll get enough out of it that way.  Some examples of things that could get their own sub-agents include:

  • Desires or clusters of desires, be they complex and lofty ("desire for the well being of all living things") or simple and reptilian ("desire for cake")
  • "Inner child" or similar role-like groupings of traits ("professional me", "family-oriented me", "hobbies me")
  • High-order dispositions and principles ("conscience", "neuroticism", "sense of justice")
  • Opinions or viewpoints, either specific to a situation or general trends ("optimism", "outside view", "I should do X")
  • Initially unspecified, gradually-personality-developing sub-agents, if no obvious ones present themselves (named for something less suggestive like cardinal directions or two possible nicknames derived from your name)

By priors picked up from descriptions of various people trying this, you're reasonably likely to identify one of your sub-agents as "you".  In fact, one sub-agent may be solely identified as "you" - it's very hard to shake the monolithic observer experience.  This is fine, especially if the "you" sub-agent is the one that endorses or repudiates, but don't let the endorsement and repudiation get out of hand during multi-agent exercises.  You have to deal with all of your sub-agents, not just the one(s) you like best, and sub-agents have been known to exhibit manipulative and even vengeful behaviors once given voice - e.g. if you represent your desire for cake as a sub-agent, and you have been thwarting your desire for cake for years, you might find that Desire For Cake is pissed off at Self-Restraint and says mean things thereunto.  It will not placate Desire For Cake for you to throw your endorsement behind Self-Restraint while Desire For Cake is just trying to talk to you about your desperate yen for tiramisu.  Until and unless you understand Desire For Cake well enough to surgically remove it, you need to work with it.  Opposing it directly and with normative censure is likely to make it angry and more devious in causing you to eat cake.

A few miscellaneous notes on sub-agents:

Your sub-agents may surprise you far more than you expect to be surprised by... well... yourself, which is part of what makes this exercise so useful.  If you consciously steer the entire dialogue you will not get as much out of it - then you're just writing self-insert fanfiction about the workings of your brain, not actually learning about it.

Not all of your sub-agents will be "interested" in every problem, and therefore won't have much of relevance to say at all times.  (Desire For Cake probably couldn't care less how you act on your date next week until it's time to order dessert.)

Your sub-agents should not outright lie to each other ("should" in the predictive, not normative, sense - let me know if it turns out yours do), but they may threaten, negotiate, hide, and be genuinely ignorant about themselves.

Your sub-agents may not all communicate effectively.  Having a translation sub-agent handy could be useful, if they are having trouble interpreting each other.

(Post your ensemble of subagencies in the comments, to inspire others!  Write dialogues between them!)

Comments (33)

Comment author: ata 01 April 2010 12:43:29AM 2 points

If you consciously steer the entire dialogue you will not get as much out of it - then you're just writing self-insert fanfiction about the workings of your brain, not actually learning about it.

That reminds me a bit of PJ Eby's list of ways people sometimes do his RMI technique wrong. (PJ, if you're reading this, would you mind if I posted it? I'm referring to the list from page 55 of TTD. I know RMI isn't exactly the same as what Alicorn is talking about, but I think they're probably invoking related or identical mental processes, and some of your tips on not messing it up seem like they'd be helpful here.)

Comment author: wedrifid 01 April 2010 01:38:01AM 0 points

I know RMI isn't exactly the same as what Alicorn is talking about, but I think they're probably invoking related or identical mental processes, and some of your tips on not messing it up seem like they'd be helpful here.)

I think the mental processes invoked are likely related or identical too, but watch out! ;)

Comment author: pjeby 01 April 2010 07:37:56PM 4 points

That reminds me a bit of PJ Eby's list of ways people sometimes do his RMI technique wrong. (PJ, if you're reading this, would you mind if I posted it? I'm referring to the list from page 55 of TTD

That's fine; I've posted a similar list here previously, too.

I know RMI isn't exactly the same as what Alicorn is talking about,

It's sort of the same, in that the same basic mental state applies. It's simply a question of utilization.

My model differs in that I assume there are really only two "parts" to speak of:

  1. The "near" brain, composed of a network of possibly-conflicting interests, and a warehouse of mental/physical motor programs, classified by context and expected effects on important variables (such as SASS-derived variables).

  2. The logical, confabulating, abstract, verbal "far" brain... whose main role sometimes seems to be to try to distract you from actually observing your motivations!

Anyway, the near brain doesn't have a personality - it embodies personalities, and can play whatever role you can remember or imagine. That's why I consider the exercise a waste of time in the general case, even though there are useful ways to do role-playing. If you simply play roles, you run the risk of simply confabulating, because your brain can play any role, whether it's related to what you actually do or not.

And it's not so much that it's fanfiction, per se (as it would be if you use only the "far" brain to write the dialogs).  What you roleplay is real, in the sense that you are using the same equipment (if you're doing it right) that also plays the role of your "normal" personality! The near brain can play any role you want it to, so you are already corrupting the state of what you're trying to inspect by bringing roles into it in the first place.

IOW, it's a (relative) waste of time to have elaborate dialogs about your internal conflicts, even though there's a very good chance you'll stumble onto insights that will lead to you fixing things, from time to time.

In effect, self-anthropomorphism is like spending time talking to chatbots, when what you need to do is directly inspect their source code and pull out their goal lists.

The things that seem to be "parts" or "personalities" are really just roles that you can play -- like mimicking a close friend or pretending to be Yoda or Darth Vader. You're essentially putting costumes on yourself and acting things out, rather than simply inspecting the raw material these roles are based on.

To put it another way, instead of pretending to be Darth Vader, what you want to be inspecting are the life events of Anakin Skywalker... unpleasant though that may be. ;-) (And even as unpleasant as it may be to watch little Ani's traumas, it's probably safer than asking to have a sit-down with Vader himself...)

So, the point of inner dialoging (IMO) is to identify those interests that are based on outdated attempts to seek SASS (Status, Affiliation, Safety, or Stimulation) in contexts where the desired behavior will not actually bring you those things, so you can surface that and drop the mental rules that link SASS threats to a desired behavior, or SASS rewards to an undesired one.

(That, I guess would be the alchemy/chemistry distinction that Roko was alluding to previously.)

Comment author: zemaj 02 April 2010 02:08:06AM 1 point

I agree. I worry that anthropomorphising these conflicting thoughts just strengthens the divide.

I like your comment "All this has very little to do with actual agency or the workings of akrasia, though, and tends to interfere with the process of a person owning up to the goals that they want to dissociate from. By pretending it's another agency that wants to surf the net, you get to maintain moral superiority... and still hang onto your problem. The goal of virtually any therapy that involves multiple agencies is to integrate them, but the typical person, on getting hold of the metaphor, uses it to maintain the separation."

Comment author: thomblake 01 April 2010 01:00:08AM 1 point

Thanks for the TvTropes warning.

Comment author: wedrifid 01 April 2010 01:44:32AM 3 points

Your sub-agents should not outright lie to each other ("should" in the predictive, not normative, sense - let me know if it turns out yours do), but they may threaten, negotiate, hide, and be genuinely ignorant about themselves.

I disagree! Finding out when we are lying to our 'selves' about either beliefs or values is one of the most valuable outcomes of this exercise. Lying to our selves so that we may more effectively lie to each other is a core human (homo hypocritus) skill!

Comment author: Morendil 01 April 2010 06:21:26AM 7 points

This post should definitely have mentioned Julian Jaynes. ;)

Other related material includes Virginia Satir's "Parts party" (brief description).

One way I recognize my various parts is that they come out to different extents according to the company I keep. When I'm with people whose thinking is sloppy that bugs the hell out of my inner Spock and he comes out swinging. When I'm hanging out with a bunch of rationalists my inner Mysterian screams at every appearance of a Myth of Pure Reason.

This can lead to some graceful ways to disagree with people, btw. "Part of me wishes you were right, but my inner Tyler Durden finds that wishy-washy and is making rude comments. How do your arguments stand up to what we know of human nature?" This is much more satisfactory than giving Tyler Durden the run of your mouth and accusing your interlocutor of angelism outright.

Plus, it helps to recognize that even if you think your interlocutor is an idiot, they may not be entirely an idiot, just temporarily under the control of an idiotic part.

I don't do this as explicitly as the post suggests (and never to the extent of writing dialogue) but the turn of phrase "part of me agrees/disagrees" is familiar, and I suspect it turns up in my thoughts from time to time.

Comment author: Morendil 01 April 2010 06:23:51AM 2 points

data on the usefulness of the practice from more than one outside source

References would have been awfully nice.

Comment author: Alicorn 01 April 2010 06:12:16PM 0 points

If they'd like to step forward, they can. They read LW.

Comment author: mattnewport 01 April 2010 06:43:42PM 4 points

Are you using the 'plural of anecdote' definition of data?

Comment author: Morendil 01 April 2010 06:46:53PM 2 points

Disclosure: I voted up the above comment, after I refrained from making a similar one myself, out of concern that it would read as too confrontational.

Memo to self: based on that evidence, be more arrogant next time.

Alicorn: please show, don't tell.

Comment author: Alicorn 01 April 2010 07:32:12PM 2 points

I'm using it as a plural of "pieces of information". This information is anecdotal.

Comment author: PeerInfinity 25 April 2010 12:10:28AM 6 points

oops, we didn't notice this comment until now...

hi, we're some of Alicorn's references :)

Allow us to introduce ourselves:

SP: originally this stood for "Sane Peer", back when we thought that we only had 2 subagents, a crazy one and a sane one. Later retconned to stand for "Shell Peer", like in a computer, shell software running on top of a core operating system. Another possible retcon is "Serious Peer". This agent is mostly in charge of rational thought. This agent thought that it needed to make sure that it stayed in complete control, out of fear that if it let any of the other agents take control, we would immediately lose all self-control. But now it's learning how to cooperate with the other agents, and learning when to let them take charge entirely. This agent uses the avatar of a serious-looking, grey, male, anthropomorphic rabbit.

CP: originally this stood for "Crazy Peer", back when we thought that we only had 2 subagents, a crazy one and a sane one. Later retconned to stand for "Core Peer", like in a computer, the core operating system that everything else runs on top of. Another possible retcon is "Cuddly Peer". This agent is mostly in charge of emotions, or anything that isn't directly available to rational introspection. This agent used to be in constant conflict with SP, but now that SP is learning how to cooperate, we are constantly discovering new abilities that we didn't know we had. One example of a useful new ability is the ability to set up triggers to alert us next time a specific thought or emotion is triggered, allowing us to more easily introspect on what caused this thought, or, if necessary, to prevent this thought from being acted upon. This agent now spends more time in direct control of our actions than any other agent. This agent uses the avatar of a happy-looking, pink, female, anthropomorphic rabbit. This is the avatar that we now normally use to represent all of us as a whole agent. (Here is a picture of the pink bunny.)

PP: Pessimistic Peer, the agent that's always looking out for danger, and trying to prevent the other agents from doing anything that seems too dangerous. This agent used to be way too oversensitive to danger, and was poorly calibrated, to the point of being blatantly counterproductive. We're working on correcting this. This agent is still very useful for detecting danger, and is also useful as "the voice of pessimism". This agent used to use the avatar of Rincewind, from Discworld. But then we realized that Rincewind's attitude towards danger, and towards life in general, was dangerously unhealthy, and so now this agent uses the avatar of a generic, nervous-looking, male human child.

HP: Happy Peer, the agent that's always seeking happiness, and trying to convince the other agents to allow us to do things that we think would make us happy. This agent previously spent most of its time getting shouted at by SP and PP, and told to shut up, because we considered happiness less important than... other more important goals. We thought that if we didn't allow ourselves to get distracted by seeking pleasure, we would be more effective at achieving our goals. It turns out we were totally wrong about this. Now we're finally letting HP have a chance to come out and play. Though we still need to work on figuring out when it's appropriate to listen to HP's desires, and when we would be better off not listening to them. One useful rule is "no impulse purchases". Wait at least a day before making any major purchases. Or more time, or less, as appropriate. This agent started out with the avatar of Harry Potter, just because that's the first image that the letters HP conjured up, but this didn't really make sense, so now this agent uses the avatar of a generic, happy-looking female human child.

OP: Obsolete Peer. Not really a subagent. This is just the name we use for parts of our brain that sometimes do things that none of the other agents endorse. An example is the "must click on stuff" desire that often caused us to stay up late for no good reason. Another example is the scary shouting voice that the agents used to use to try to scare each other into submission. Also the guilt-generator. This agent uses the avatar of... a pile of obsolete machine parts. The part for the big scary shouty voice is now sealed safely inside a box. We don't want to use that anymore. The guilt-generator is still loose somewhere. Well, there are probably lots of guilt-generators, in lots of places. We're still trying to track them down and seal them safely away.

UM: Utilitarian Module. This module used to be a part of SP, but we recently recognized it as extremely dangerous, and limited it to a module that we can turn off at will. This is currently the main source of our fanaticism. This is the part that is completely dedicated to the cause of Utilitarianism. Specifically, hedonic total utilitarianism. (maximize total pleasure, minimize total pain, don't bother to keep track of which entity experiences the pleasure or pain) This module strongly advocates the orgasmium shockwave scenario. This module is deeply aware of the extreme importance of maximizing the probability of a positive Singularity. As a result, this module will resist any other agent or module that attempts to do anything that this module suspects might distract us from the goal of trying to maximize the probability of a positive Singularity. We're still trying to figure out what to do about this module. We're starting with showing it how constantly resisting the rest of us is counterproductive to its own goals, and showing it how it would be better off cooperating with us, rather than fighting with us. Similar to the conversations we already had with SP about this same topic.

CM: both Child Module and Christian Module. There is lots of overlap between these, so currently they're both sharing the same module. The module that desperately wants to be a good child, and to be a good christian. This module is now well aware that its reasons for wanting to be a good child, and for wanting to be a good christian, are based entirely on circular logic, but the desires that came from this circular logic are still deeply embedded in our brain. This module used to be terrified of the idea of even suggesting that our parents or god or the church might be anything less than infallible. Recently, this module has mostly overcome this fear, and has gained the ability to accuse our parents, and to accuse god and the church, of being extremely unfair.

AM: both Atheist Module and Altruist Module. There is lots of overlap between these, so currently they're both sharing the same module. We're still undecided about whether this is an actual module; we haven't seen it do much recently.

GM: God Module. Represents the will of the christian god. Almost completely inactive now. Used to have control of the scary shouty voice. Closely allied with CM.

MM: Mommy Module. Represents the will of our mother. Mostly inactive now. Used to have control of a guilt generator. Closely allied with CM.

DM: Daddy Module. Represents the will of our father. Mostly inactive now. Used to have control of the scary shouty voice. Closely allied with CM.

EM: Eliezer Module. Represents the will of Eliezer Yudkowsky, as misinterpreted by our brain. Still kinda active. Closely allied with UM and AM.

So, yeah, that's a brief summary of all of the subagents we've identified so far. We chat amongst ourselves constantly in our journal, and even sometimes in online chats with friends who already know about us. We find it extremely useful for the individual subagents to be able to speak for themselves, with their own voice.

oh, and in case you were wondering, we currently reside in the mind of a male human being, 27 years old. After the Singularity, we are seriously considering the option of creating a separate digital mind for each of these subagents, so that we can each freely pursue our own goals. Though of course this is still wildly speculative.

To paraphrase a quote from a wise man: We Are Solipsist Nation!

Comment author: PeerInfinity 25 April 2010 01:59:49AM 0 points

oh, and then there's the meta level:

sometimes if SP needs to be told about something he's doing that's irrational, SSP will tell him about it.

sometimes if CP has something that he's having trouble introspecting on, CCP will help him introspect on it.

sometimes if PP needs to be warned about something he's doing that's dangerous, PPP will warn him about it.

sometimes if HP feels left out because he doesn't get to have a meta-level, HHP will say something just for fun :)

oh, and I tend to not pay much attention to what pronouns I use. I use he, she, it, and they kinda interchangeably. I also don't pay much attention to whether I use "I" or "we". I often end up accidentally using both in the same sentence. You're welcome to use whatever pronoun you find most convenient, when referring to any of us.

Comment author: PeerInfinity 26 June 2010 11:39:28PM 2 points

I've been continuing to use this technique of giving voices to individual subagents, and I've still been finding it extremely useful.

And there are some new subagents to add to the list:

DP: Dark Peer, the agent that's in charge of anything that feels too dark for any of the other agents to say. This includes lots of self-criticism, and criticism of others. If there's something that we want to say, but are afraid that saying it would be too rude, and we're talking to someone who already knows about our subagents, then DP will go ahead and say what he wanted to say. We were surprised to find that this subagent actually enjoys getting angry, and writing angry rants. This subagent is also in charge of reporting "negative" feelings, while HP is in charge of reporting "positive" feelings. This agent uses the avatar of a dark, shadowy, genderless, anthropomorphic rabbit.

Recently HP merged with CP, and they share the pink bunny avatar, and they use the name HP. And PP merged with SP, and they share the gray bunny avatar, and they use the name PP.

These three bunny avatars are now our main voices. Most of our internal conversations are between these three voices, with other voices joining in when they have something specific to say.

Some other subagents that were added recently were:

ORG: Obscure Reference Generator: for whenever an obscure reference pops into our head that seems vaguely on-topic, but all of the other subagents are too embarrassed to mention it. This voice just reports the quote or whatever it is that popped into our head.

SoU: Sense of Urgency. The voice that's constantly telling me that whatever I'm working on at the moment urgently needs to be finished as soon as possible. Or if I'm not doing anything important, it's constantly telling me that I should be doing something important. Thinking about existential risks, and our responsibility to do something about them, seems to have put SoU permanently into full-panic mode. This module is causing us lots of trouble, and we're still trying to figure out how to resolve these issues.

RoM: Routine Module. The voice that's constantly telling us to keep following the usual routines, and that gets really nervous whenever we break one of the usual routines. This module is causing us lots of trouble, and we're still trying to figure out how to resolve these issues.

AM: Altruist Module. I should mention that I was an Altruist, before I became a Utilitarian. And so now I have a module in my brain that's constantly looking for opportunities to help others at our expense, without bothering to calculate how much help at how much expense. And it pushes really hard for us to act on these opportunities. It also fights really hard to prevent us from ever doing anything that would harm or annoy or inconvenience anyone in any way, even in situations where it's obvious that inconveniencing the other person is necessary or worthwhile. And this module is older and stronger than UM, the Utilitarian Module. It's often causing lots of trouble, and we're still trying to figure out how to resolve these issues.

EG: Excuse Generator. It gets activated so often that we decided to give it its own name and voice. It's often causing lots of trouble, actively trying to prevent us from updating on new information.

And sometimes a random thought needs to be given a voice, and we don't know what subagent that thought is coming from, and so we assign the label "?P" to that thought.

Comment author: Alicorn 26 June 2010 11:46:14PM 2 points

I would like to mention that the City of Lights technique is at its best when it is used to think through the various aspects of a specific problem without feeling pressed to aim at only one set of considerations. Using lots of subagents, or using them all the time, is correlated with something being systematically wrong. If you need them, far be it from me to stop you, but I get by with two in virtually every situation where they're called for and have never felt the need for more than six.

Comment author: PeerInfinity 27 June 2010 03:31:31AM 1 point

Right. Normally only two or three, or sometimes four, are involved in any particular conversation.

Er, wait, no, normally it's the three bunnies, plus maybe one other subagent.

Comment author: Blueberry 27 June 2010 10:33:10AM 2 points

And don't forget the subagent that likes finding, making, and naming other subagents.

Comment author: Rain 01 April 2010 06:56:24PM 0 points

Nick Bostrom's parliamentary model sounds similar.

Comment author: Academian 01 April 2010 09:46:24AM 11 points

Great post! Some thoughts/experience I'd like to add:

1) How I got started. I began using a multiple-sub-agents heuristic for introspection when I stopped thinking of my mind as a point-mass. The brain has physical extent, and there are even parts of my brain that I don't much identify with as "me" even though they affect my bodily functioning and behavior. I thought, how might those parts work? How should I treat them? And then, hey, why not treat them like people? They're made of brain, too.

By priors picked up from descriptions of various people trying this, you're reasonably likely to identify one of your sub-agents as "you".

2) "I am my executive system." To avoid losing or constantly changing my sense of self, and to maintain neutrality, I try to identify most strongly with my executive system (theorized by Miller, Cohen and others to operate primarily in the prefrontal cortex). I think of "me" as a team leader who can coordinate the efforts of the rest of my various brain functions toward coherent goals that take into account their individual preferences. For example, sometimes I'll tell my entertainment-seeking-distraction function that it's probably in his best interest to let my productive-ambitious function work and build opportunities so life can be more entertaining on average in the future.

3) An honor system with signalling. When I strike a deal like that between conflicting functions or "sub-agents", I find it extremely important to honor the deal so the sub-agents continue to trust my leadership. After committing to this as a policy, I've found it unbelievably easier to negotiate inner conflicts, especially akrasia. For example, when I strike a deal between work and (other) entertainment, I commit to the entertainment agent that I will not procrastinate entertainment indefinitely. Then, I indulge on occasion as a signal that I will honor the deal more as I get older.

I don't know about the rest of you, but it seems to me that this "honor system with signaling" is absolutely essential to maintaining my own "inner order", and my quality of life has increased dramatically since I adopted it. Of course I can't be sure how it'd work for others, but it's an idea.

Comment author: pwno 01 April 2010 04:52:01PM 2 points

I don't know about the rest of you, but it seems to me that this "honor system with signaling" is absolutely essential

Agree. I think people have an integrity-lacking executive sub-agent when their pride/reputation sub-agent gets too involved with executive functions.

Comment deleted 01 April 2010 12:40:47PM
Comment author: gelisam 29 April 2010 03:16:32PM 6 points

I often get the feeling that Alicorn's posts could use more evidence. However, given her status here, I take the very fact that she recommends something as evidence that she has herself encountered good evidence that the recommendation works; you know, Aumann agreement and all that.

Besides, even though it would be nice to see which evidence she has encountered, I know that I wouldn't bother to read the research if she linked to it. Intellectually, I trust Alicorn's conclusions. Therefore, I wish to believe in her conclusions; you know, Tarski's litany and all that.

Emotionally, however, I can't help but doubt. Fortunately, I know that I'm liable to be emotionally convinced by unreliable arguments like personal experience stories. That's why I can't wait to reach the end of this sequence, with the promised "how Alicorn raised her happiness setpoint" story.

Comment author: hegemonicon 01 April 2010 02:28:33PM 2 points [-]
Comment author: pwno 01 April 2010 04:43:21PM 3 points [-]

My biggest advantage from switching to a multiple selves perspective: Being more accepting of seemingly ignoble desires affecting my behavior.

Gotta please all my "selves", even the ones I am not proud of, while simultaneously not letting them define who I am.

Comment author: Amanojack 01 April 2010 09:26:18PM *  1 point [-]

Talk to yourselves aloud or in writing...

This may be out of left field, but I recommend talking to yourself as yourself, rather than as you'd talk to someone else. In other words, cut out any inadvertent signaling in your voice (spoken or imagined). For some this may seem alien or impossible, for others it may be perfectly normal. Still others probably think they do talk to themselves as themselves, but don't.

The reason is just the obvious one that interactions with other people are rarely if ever conducted on a purely rational wavelength, and there's no reason to inadvertently carry those extra distractions over into your own self-talk and contemplation. Maybe no one does this - friendly warning just in case.

Comment author: apophenia 01 June 2010 02:53:32AM 6 points [-]

Cross-posted from Seven Shiny Stories

6. Community

Billy has the chance to study abroad in Australia for a year, and he's so mixed up about it, he can barely think straight. He can't decide if he wants to go, or why, or how he feels about the idea of missing it. Eventually, he decides this would be far easier if all the different nagging voices and clusters of desire were given names and allowed to talk to each other. He identifies the major relevant sub-agents as "Clinginess", which wants to stay in known surroundings; "Adventurer", which wants to seek new experiences and learn about the world; "Obedience to Advisor", which wants to do what Prof. So-and-So recommends; "Academic", who wants to do whatever will make Billy's résumé more impressive to future readers; and "Fear of Spiders", which would happily go nearly anywhere but the home of the Sydney funnel-web and is probably responsible for Billy's spooky dreams. When these voices have a chance to compete with each other, they expose questionable motivations: for instance, Academic determines that Prof. So-and-So only recommends staying at Billy's home institution because Billy is her research assistant, not because it would further Billy's intellectual growth, which reduces the comparative power of Obedience to Advisor. Adventurer renders Fear of Spiders irrelevant by pointing out that the black widow is native to the United States. Eventually, Academic and Adventurer, in coalition, beat out Clinginess (whom Billy is not strongly inclined to identify with), and Billy buys the ticket to Down Under.

Comment author: Normal_Anomaly 15 January 2011 02:20:46AM *  1 point [-]

I've actually been doing this for a while, and I find it really useful. My list of sub-agents is as follows, ones more likely to be endorsed marked with a (+):

*Me (executive)

*Internal Critic/devil's advocate (2nd executive who takes the opposite position from the first)

*Rationalist/scientist (+)

*Ambition (+)

*Conscience (+)

*Optimist (+)

*Cynic

*Arrogance (frequently rebels against endorsed entities/positions)

*Inner adolescent/sense of humor

*Immediate desires/drives (akrasia, hunger, libido, etc.)

Rather fragmented, but they are hardly ever all there at once. Cynic and Optimist don't show up at the same time, Conscience is often played by one of the executives, and Adolescent rarely has anything to say.

Comment author: Nisan 18 March 2011 06:19:26PM 4 points [-]

Another incarnation of this idea is Internal Family Systems Therapy.

Comment author: Douglas_Reay 17 November 2012 06:22:04PM *  1 point [-]

I have a fear about this. My fear is that by naming and concentrating on sub-agents I'd exacerbate any tendency I might have towards Dissociative identity disorder.

Does anyone have more data on that, which may allay my fear?

Comment author: RobbBB 17 November 2012 06:37:01PM 1 point [-]

Do you think you actually have such tendencies? 'Dissociative identity disorder' is a rebranding of 'multiple personality disorder,' which seems to some extent to be a sociohistorically constructed ailment -- i.e., a real disease, but one whose nature and prevalence are strongly dependent on our cultural assumptions and folk-psychological models. Keeping that in mind, or becoming a Buddhist, might help dissolve some of the anxieties that naturally attend to noticing the disunities in one's personality or persona. I can also recommend the book 'Rewriting the Soul,' by Ian Hacking.

Comment author: goatherd 13 November 2013 04:40:40AM *  3 points [-]

I have two major entities in my mind: my Brain, and Me. My Brain is heavily influenced by chemistry, such as tiredness and blood sugar levels, and does all the thinking. Me is not affected so much by such things. However, Me has a very limited amount of control over my Brain. If Me forces my Brain to do something that it really does not want to do, it tires Me out and makes it more difficult to force my Brain to do things until Me's control is replenished, which is a slow process. Me has basically no cognitive faculties and must make my Brain do the thinking, but talking is a free move, and Me is capable of recognizing and commenting on many cognitive biases as my Brain thinks. My Brain will often listen to these comments and stop following those faulty lines of thought, because my reward center gives it some dopamine when it makes its thoughts less wrong.

Possible causes for this lack of control:

  1. Me is using its power for unnecessary things all the time, and so when it is needed it is tired.
  2. I have a genetic lack of power for Me to control my Brain.

Solutions:

  1. If problem 1, I could try to catch Me controlling my Brain unnecessarily, and stop it.
  2. Training my inner rat. If problem 1 or 2, I could get the reward center to give my Brain dopamine when it does what it should do, so that Me would not have to make it do as much. Then Me could conserve its strength.
  3. Practice. If problem 1 is true then this has been tried without success, but if not, it may allow Me to increase its strength of control over my Brain.

Additional thoughts:

My Brain, which I have been conspiring to subjugate to the will of Me, is what has been writing all this, with very little control from Me, probably because it gets dopamine for 'making clever plans to defeat the enemy'. My Brain doesn't know that the enemy it is making the clever plan against is itself. Me knows that writing this is a good thing, and so encourages my Brain to be happy when doing this, so that it will try to do it more.

Comment author: Articulator 13 November 2013 05:48:12AM 1 point [-]

I have slightly more formally defined the existence of a logical and an evolutionary mind. Same general premise, but with more accurate, unambiguous, and intellectual terminology.

I completely agree with the duality and conflict of these two mind-states. I'm pretty sure it's one of the most common break-downs of human cognition.

Comment author: mare-of-night 22 June 2014 04:48:46PM 1 point [-]

I haven't had my sub-agents talk much in the past few months, so I'm not completely sure this is accurate. I'm going to bring them out to play again soon, so I may come back and edit if I discover things have changed since they last talked. (I started doing this because I had an important decision to make that I couldn't consult others about. The sub-agents talked for hours. So I brought them out only when I had something really big for them to hash over, and they've never been able to resolve things quickly. Is this usual? Any advice for having more frequent conversations that are shorter and/or multitaskable?)

I'd had HP:MOR on my mind when I started doing this, so I started out with the theme of Harry's four house sub-agents, but they deviated a lot from that.

Green ended up the "designated rationalist". It wants to fulfill the others' desires as much as possible, and doesn't really have any of its own. It's better than the others at reasoning about what will actually happen if we do something, and what's actually true. It doesn't mind saying things that are unpleasant (or else is slow to realize that they're unpleasant - I'm not really sure). It's pretty terrible at being tactful because of that, and because it tends to get mad at the others when they're being self-defeating. Multiple versions of Green will debate with each other to sort out factual/non-normative questions.

Red is my larger-than-life self, I guess. It's trying to focus on what's important, but does so in a naive way. It has big goals (protect everyone, protect friends, and possibly some wannabe-hero-ness), which it's very sure are more important than what the rest of me wants. It tends to forget about practical constraints and fall for logical fallacies, but can correct for that once it's pointed out (usually by Green). It tends to be really shouty and indignant (but that might be because most of my sub-agent conversations happened because Red was frustrated). It does listen to the others, since it's stuck with us.

Yellow is a mix of things, and I'm not completely sure they belong together. Desire for good relationships with the people I know, approval, and having a comfortable life, probably also some other "normal" things. It's pretty good at "empathizing" with the other sub-agents. (Yellow usually translates when Green wants to tell Red something especially uncomfortable.)

Blue was supposed to be a sub-agent, but it rarely participates in conversation. I'm not sure whether the topics we've talked about haven't interested it, or I don't have a Ravenclaw part, or I have one that's not very verbal.

"Phoenix" appeared unexpectedly - basically just a thing in my brain that gets very loud in a nonverbal way when I'm feeling moral horror. This is probably more of a shorthand for describing that feeling, than a sub-agent. Red translates for it, usually just to inform the others of what it's screaming about. It doesn't have a very large role in these conversations.

"Myself" mostly acts as a moderator - introducing the topic of conversation, and then occasionally nudging it. Most of what I have to say after the beginning is just yelling at the other sub agents to "be nice". They don't seem to mind, and they do the same thing to each other pretty often. I think "be nice" is mainly serving as a reminder that cooperation is necessary, not a disendorsement.

An unexpected benefit was realizing that Green gets control of my mouth sometimes, and usually makes a mess of it. A friend is telling me their problems, and I start analyzing the situation and suggesting solutions very bluntly, when that's not what they wanted. I'm better at realizing when this is happening now, and when I notice myself starting to do it, I call in Yellow to translate (if I have the mental energy for that, it's difficult to do in real conversations), or I tell the person I'm talking to that they're talking to Green (but worded in a way that they'd understand) and ask if they want to proceed. I think doing this has helped me avoid making my friends feel uncomfortable or hurt.