Edit: After further consideration, I've concluded that the risk:reward ratio for tulpamancy isn't worth it and won't be pursuing the topic further. I may revisit this conclusion if I encounter new information, but otherwise I'm content to pursue improvements in a more "standard" fashion. Thank you to everyone who posted in the comments.


If you don't know what a tulpa is, here's a quick description taken from r/tulpas:

A tulpa is a mental companion created by focused thought and recurrent interaction, similar to an imaginary friend. However, unlike them, tulpas possess their own will, thoughts and emotions, allowing them to act independently.

I'm not particularly concerned whether tulpas are "real" in the sense of being another person. Free will isn't real, but it's still useful to behave as if it is.

No, what I'm interested in is how effective they are. A second rationalist in my head sounds pretty great. Together we would be unstoppable. Metaphorically. My ambitions are much less grand than that makes them sound.

But I have some concerns.

Since a tulpa doesn't get its own hardware, it seems likely that hosting one would degrade my original performance. Everyone says this doesn't happen, but I think it'd be very difficult to detect this, especially for someone who isn't already trained in rationality. Especially if the degradation occurred over a period of months (which is how long it usually takes to instantiate a tulpa).

A lot of what I've read online is contradictory. Some people say tulpas can learn other skills and be better at them. Others say they've never lost an argument with their tulpa. Tulpas can be evil. Tulpas are slavish pawns. Tulpas can take over your body, tulpas never take over bodies. Tulpas can do homework. Tulpas can't do math.

Then there are the obvious falsehoods. Tulpas are demons/spirits/angels (pick your flavor of religion). They're telepathic, telekinetic, and have flawless memories. They can see things behind you. There's not as much of this as I expected; most of the claims are at least plausible. Some guides are cloaked in mystic imagery (runes, circles, symbols), but even they usually admit that the occult stuff isn't really necessary.

It does seem like there are clear failure modes. Don't make a Quirrell tulpa. Don't abuse a tulpa. Make sure to spend enough time tending to the tulpa during creation, etc., etc. And everyone seems to agree that tulpas are highly variable, so a lot of the contradictions could be excused. On the other hand, if tulpas were really so useful, wouldn't the idea have spread beyond niche internet forums?

Perhaps, but perhaps not. The stigma against "hearing voices", plus people's general irrationality, plus the difficulty... those seem like powerful forces inhibiting mainstream adoption. Or maybe the entire thing is comfortable nonsense and the only reason I find it remotely plausible is because I want to believe in it.

Given that the guides mostly say that it takes months and months of hard work to create a tulpa... well, I'd rather not waste all that work and get nothing. And it only gets worse from there. An out-of-control dark rationalist tulpa that fights me for mental and physical control sounds absolutely terrifying.

Most people seem to agree that the chance of that happening is basically zero unless you deliberately try to do it. And the potential gains seem at least as potent. Being able to specialize in skills seems absurdly overpowered, especially if we each get natural talents for our skills (which some people claim is what happens). A minor drop in cognitive resources would probably be worth it for that.

So, if you have a tulpa, please chime in. What's it like? How do you know that the tulpa isn't less efficient than you would be on your own? Was it worth it? Does it make your life better?

And if you don't have a tulpa, feel free to comment as well. If I get a hundred LWers saying that I've been suckered by highly-evolved memes, that's pretty strong evidence that I've made a mistake.


Answers

Slimepriestess


There's two ways to do tulpas. There's the right way, and the way most people do it.

The right way is to do it from a place of noself/keeping your identity small. Don't treat your tulpa like a separate person any more than you would treat your internal sense of self like a separate person. Treat them like a handle for manipulating and interacting with a particular module/thought structure/part of your mind, taking unconscious and automatic things and shining a bit of Sys2 light on them. Basically using the tulpa as a label for a particular thought structure that either already exists, or that you want to exist in your head, allowing you to think about it in a manner that is more conscious and less automatic.

Doing this correctly gives you a greater degree of write-access to various semiconscious/subconscious parts of your head and makes it easier to retrain automatic response patterns. This could be considered in the same vein as how Harry uses his various house characters in HPMOR, although he just scrapes the top level with them and doesn't use them to really change himself in useful ways like he could potentially be doing. This way is also harder than the way most people do tulpamancy because it requires ripping apart and rebuilding your conception of your original self with a goal for greater functionality. It also requires keeping your identity small and internalizing the idea of noself in a way that most people don't want to do.

Then there's the way most people do tulpamancy, which is to build the tulpa out of identity and treat it like an entirely separate person who "lives in your head with you" and who has an equal say in decisions as "you." From the perspective of having internalized noself/keep your identity small, this is exactly as dumb as it sounds and looks. "Hey what if you destroyed your self control by handing it off to a random agent that you fabricate" or "Hey what if you created an internal narrative where you're powerless in your own head and your self is forced to argue and compete and try to negotiate with some other random self for processing time and mental real estate?"

Some people say tulpas can learn other skills and be better at them. Others say they've never lost an argument with their tulpa. Tulpas can be evil. Tulpas are slavish pawns. Tulpas can take over your body, tulpas never take over bodies. Tulpas can do homework. Tulpas can't do math.

Most people do identity-style tulpamancy, and that's where all this contradictory and at times really messed up behavior comes from.

An out-of-control dark rationalist tulpa that fights me for mental and physical control sounds absolutely terrifying.

Right, so how does this happen? It happens because there is a narrative layer that you're using (right now) to define what you can do in your own head, and that narrative layer is the thing being modified by tulpamancy. The problem is that most people don't consciously try to modify that layer: they assume the way that layer works is some objective fact, argue about its properties with other tulpamancers online, and don't think about trying to change it.

The more power they hand off from their conscious mind to that narrative layer, the more "independent" the tulpa will seem, at the cost of making the original self increasingly powerless within their own mind.

Intentionally grabbing hold of that narrative layer and modifying it so that stuff like multiple selves and the like are simply downstream results of upstream modifications will result in a much more cooperative and internally stable system, since you can define the stability and the interactions as part of the design instead of just letting some unconscious process do it for you.

So basically, tulpamancy can be useful and result in greater functionality and agency, but only if done from a place of noself and keeping your identity small. If you haven't worked on grinding noself and keeping your identity small, that should definitely be the first thing you do. Once that's done, possibly return to tulpamancy if you still feel like there's room to improve with it.

I'm not sure why you would want to use the word tulpa, when talking about simple mental parts that appear in many different techniques. You get them even in a case like doing internal double crux.

Slimepriestess
I think what I'm describing here is a bit more advanced in terms of internal rearrangement than "simple mental parts"
If you haven't worked on grinding noself and keeping your identity small, that should definitely be the first thing you do.

Are there any resources for that?

(I've read Keep Your Identity Small, and a post that tried to explain noself, but I haven't heard of anything on how, aside from "meditation probably" for noself, but nothing more specific than that.)

Slimepriestess
I'm actually working on a post for that, but writing it has been rather hard.
SpectrumDT
Did you ever write that post? :)
Slimepriestess
yes. I should probably crosspost to LW more but it always kinda makes me nervous to do.
SpectrumDT
Thanks!

Have you written more about what a good IFS partitioning might look like, in your view? Illustrate an example?

Slimepriestess
Not since I've updated around keeping my identity small. I intend to but my writing queue is quite long at this point.

Jeremy Hadfield


I am mentally ill (bipolar I), and I also have some friends who are mentally ill (schizophrenia, bipolar, etc), and we decided to try tulpa-creation together. Personally, I wasn't very good at it or committed to the process. I didn't see any change, and I don't think I ever created a tulpa. However, my friend's tulpa became a massive liability. It turned into psychosis very rapidly.

[anonymous]

Do you think that would have happened without the pre-existing condition?

Jeremy Hadfield
No, probably not, but it's hard/impossible to say without scientific data on tulpas - which does not exist as of now.

Gordon Seidoh Worley


I'm mildly anti-tulpa. I'll try to explain what I find weird and unhelpful about them, though also keep in mind I never really tried to develop them, other than the weak tulpa-like mental constructs I have due to high cognitive empathy and the capacity to model other people as others rather than as modified versions of myself.

So, the human mind doesn't seem to be made of what could reasonably be called subagents; it is made of subsystems that interact, though "subsystem" is maybe even an overstatement because the boundaries of those subsystems are often fuzzy. So reifying those subsystems as subagents or tulpas is a misunderstanding that might be a useful simplification for a time, but is ultimately a leaky abstraction that will need to be abandoned if you want to better connect with yourself and the world just as it is.

Thus I think tulpas might be a skillful means to some end some of the time, but mostly I think they are not necessary and are extra machinery that you're going to have to tear down later, so it seems unclear to me that it's worth building up.

I have often thought that the greatest problem with the tulpa discourse is the tendency there to insist on the tulpa's sharp boundaries and literal agenthood. I find it's much more helpful to think of such things in terms of a broader class of imaginal entities which are semi agential and which often have fuzzy boundaries. The concept of a "spirit" in Western magick is a lot more flexible and in many ways more helpful. Of course, this can be taken in an overly literal or implausibly supernaturalistic direction, but if we guard against such interpretations,

... (read more)
Ann
My (initial) tulpa strongly agrees with this assessment of the problem with tulpa discourse; he made a point to push back on parts of the narrative about as soon as he started acquiring any, because 'taking it too seriously' seemed like the greatest risk of this meditation for me simply because it was implied in the instruction set. He was in a better position to provide reassurance that I didn't have to once we were actually experiencing some independence.

In other cases of mind-affecting substances and practices like antidepressants and (other forms of) meditation, I've been willing to try it and taper off if I don't like what it seems to be doing to me/my brain. Now in the case of tulpamancy, I generally like what it does to my brain; it practices skills I might have a relative disadvantage in, or benefit from in my work and other hobbies, and empowers me to practice compassion for myself in a way I wasn't previously able to. (In contrast to the poster previously, I have reason to suspect my cognitive empathy is/was lacking in something even for myself.)

However, it makes sense to approach it with the same caution as trying a new meditation, drug, or therapy in general - it really is a form of meditation, some of these can have severe downsides for part of the population they could also potentially benefit, and you should feel comfortable winding down the focus on it if you want to or have other priorities.

For one contrast, I don't like mindfulness meditation; it pulls me towards sensory overload - I already have too much 'awareness'. Maybe for someone less autistic, mindfulness meditation is the way to go to strengthen a skill they'd benefit from having more of, and modeling other agents is redundant. If having dialogues with yourself is the goal, there are other approaches that might work better for a particular person. I'd say 'know yourself', but I know how tricky that is, so instead I'll say, pay attention to what works for you.
Polytopos
I agree about mindfulness meditation. It is presented as a one-size-fits-all solution, but actually mindfulness meditation is just a knob that emphasizes certain neural pathways at the expense of others. In general, as you say, I've found that mindfulness de-emphasizes agential and narrative modes of understanding. Tulpa work, spirit summoning, shamanism, etc. all move the brain in the opposite direction, activating strongly the narrative/agential/relational faculties. I experienced a traumatic dissociative state after too much vipassana meditation on retreat, and I found that working with imaginal entities really helped bring my system back into balance.
Gordon Seidoh Worley
"Mindfulness meditation" is a rather vague category anyway, with different teachers teaching different things as if it were all the same thing. This might sometimes be true, but I think of mindfulness meditation as an artificial category, recently made up, that doesn't neatly divide the space of meditation techniques as used by the people who teach it, even if a particular teacher does use it in a precise way that divides the space naturally. None of this is to say you shouldn't avoid it if you think it doesn't work for you. Meditation is definitely potentially dangerous, and particular techniques can be more dangerous than others to particular individuals depending on what else is going on in their lives, so I think this is a useful intuition to have: some meditation technique is not a one-size-fits-all solution that will work for everyone, especially those who have not already done a lot of work and experienced a significant amount of what we might call, for lack of a better term, awakening.
Polytopos
I agree that the term mindfulness can be vague and that it is a recent construction of Western culture. However, that doesn't mean it lacks any content or that we can't make accurate generalizations about it. To be precise, when I say "mindfulness meditation" I have in mind a family of meditation techniques adapted from Theravada and Zen Buddhism for secular Western audiences originally by Jon Kabat-Zinn. These techniques attempt to train the mind to adopt a focused, non-judgemental, observational stance. Such a stance is very useful for many purposes, but taken to an extreme it can result in de-personalization / de-realization and other mental health problems. For research to support this claim I recommend checking out Willoughby Britton's research. Here are two PDF journal articles on this topic: one, and another one.

Davis_Kingsley


This is an area that I think is so bad that it should probably be banned from the community. In practice, getting into "tulpamancy" strongly correlates in my experience with going into unproductive and unstable states -- it's at the point where if someone tells me that they have been looking into this area, I consider it a major red flag.

It would be useful to have this concern fleshed out more, so there's a better way to warn people.

How many people do you know who spoke about getting into tulpamancy to then get into unproductive/unstable states?

[anonymous]

Even if LW's stance on tulpas was a hard ban, I would have proceeded with my own experimentation. I'm here because I need to become stronger. Bowing down to authority every time someone tells me not to do something isn't going to accomplish that.

That said, I'm interested in warnings that consist of more than a vague "this is bad". After all, that's part of why I posted here.

I imagine one of the cases Davis is thinking of is the same one I'm familiar with. Someone we know started experimenting with tulpas and became visibly more unstable, then shortly thereafter had a schizophrenic break and tried to kill someone, and has now been in federal prison for several years. Someone who had been working with them on tulpas then spent at least a year in an "unproductive and unstable state", addicted to drugs etc. I know very little about tulpas themselves but knowledge of that situation makes me agree with Davis that tulpamancy is a major red flag. 

[anonymous]
That's definitely concerning. On the other hand, there's lots of people who don't have that sort of side effect (and several in this thread), so I think it's kind of rare... but perhaps this sort of result gets swept under the rug? Though, I wouldn't predict that in advance -- I'd expect it to blow up everywhere. I'm not really sure what to think. Part of me wants to brush this off as a fluke and say that I would never break down like that but this is a failure mode that I hadn't even considered and that makes me nervous. Do you know if there were any factors that would have contributed to that incident? Like them already being a little schizophrenic or something along those lines?

If you had not even considered the possibility of breaking your brain in the process of trying to develop a second person, you need to step back and think more before proceeding. This failure mode should be one of the first that pops into your head, without even trying to think of novel failure modes. Right alongside intense meditation, psychedelics, etc.

Pattern
You think if people meditate too much* they could end up committing murder**? EDIT: *Or if people have never done it before, if they do it the first time, it might destabilize their health (mental/emotional/etc.). **This may be "too specific", see my comment below.
mingyuan
I think the referent of Guy's "this failure mode" was "breaking your brain", not "committing murder." This comment seemed to me like an unnecessary strawman :(
Pattern
I was referring to your earlier comment, re: a schizophrenic break, etc. "Breaking your brain" sounds like permanent damage, and it is not obvious why (or how) mental activity could have effects like lead poisoning, or what differentiates mental activities that are supposedly "potentially destabilizing" from those that are not. I agree it might have been too specific/shortened the causal chain unnecessarily: (Potentially) Destabilizing Activity -> Worse Mental Health, etc. -> More likely to do crime, drugs, etc.
SarahSrinivasan
Sure. Seems extremely unlikely IMO. But if you're deliberately trying to change how your brain thinks at a fundamental level rather than training an overlay like we do when learning math or something and letting that trickle down or however it usually works, you might succeed at changing but fail at direction. This is an obvious failure mode to at least consider before beginning. e.g. http://meditatinginsafety.org.uk/wp-content/uploads/2017/05/Kuijpers_2007.pdf

I'm no doctor or anything, but my understanding is that only people with a genetic predisposition can develop actual schizophrenia. Schizophrenia usually first manifests in a person's twenties, if it's going to manifest, but it's not a sure thing – there are certain precautions you can take to make it less likely that it will develop. For example, I have a friend whose mom is schizophrenic, and he's really careful to avoid hard drugs and other intensely mind-altering practices. So if you have anyone in your family with a history of schizophrenia, I'd be extra careful with tulpamancy.

On the other hand, there are lots of mental illnesses that don't seem to require a family history – again, this is way outside of my realm of knowledge, but anecdotally, it seems like just about anyone can develop severe depression, hypomania, or a destructive drug habit, given the right circumstances. So if nothing else, I'd advise you to proceed with a whole lot of caution.

As for the point about getting swept under the rug: I have no familiarity with the discussion that goes on in circles that are interested in tulpamancy, but if it's primarily self-reports, well, people who are imprisoned, dead, or severely mentally compromised wouldn't be able to report on their status. I think I might sound like I'm trying to scare you – I guess maybe I am? It just seems really important to me to tread carefully around tulpas.

jimmy
Not if applied across the board like that, no. At the same time, a child who ignores his parents' vague warnings about playing in the street is likely to become much weaker or nonexistent for it, not stronger. You have to be able to dismiss people as posers when they lack the wisdom to justify their advice and be able to act on opaque advice from people who see things you don't. Both exist, and neither blind submission nor blind rebellion make for successful strategies.

An important and often missed aspect of this is that not all good models are easily transferable and therefore not all good advice will be something you can easily understand for yourself. Sometimes, especially when things are complicated (as the psychology of human minds can be), the only thing that can be effectively communicated within the limitations is an opaque "this is bad, stay away" -- and in those cases you have no choice but to evaluate the credibility of the person making these claims and decide whether or not this specific "authority" making this specific claim is worth taking seriously even before you can understand the "why" behind it.

Whether you want to heed or ignore the warnings here is up to you, but keep in mind that there is a right and wrong answer, and that the cost of being wrong in one direction isn't the same as the other. A good heuristic which I like to go by and which you might want to consider is to refrain from discounting advice until you can pass the intellectual Turing test of the person who is offering it. That way, you can know that when you choose to experiment with things deemed risky, you're at least doing it informed of the potential risks.

FWIW, I think the best argument against spending effort on tulpas isn't the risk but just the complete lack of reward relative to doing the same things without spending time and effort on "wrapping paper" which can do nothing but impede. You're hardware hours limited anyway, and so if your "tulpa" is going to beco
[anonymous]

I will point out that making a habit of distrusting authorities is what led me to rationality.

Perhaps I have applied that lesson too broadly though, especially on here where people are more reliable. When I read OP's comment, I automatically assumed that they were having a knee-jerk reaction to something cloaked in mystical language. I was wrong about that.

I think that your heuristic is a good one. It resolves a problem that I've noticed lately, where I tend to make mistakes because I think I have a way to improve a situation but I'm missing some piece of information.

And I've decided against pursuing tulpamancy. The damaging side effects concern me, even if they're infrequent, and your best argument pretty much sums up the rest of how I feel. I see now that I was excited about the possibilities of tulpas and failed to apply the same demands for rigor that I would normally apply to such an unusual concept.

I just wanted to say I'm really impressed with your level-headed discussion, your ability to notice your own mistakes, and your willingness to change your mind (not just about pursuing tulpamancy, but also about people's intentions). I wish you all the best :)

Davis_Kingsley
Three that I can think of easily, probably more if I did some digging. It's a really bad sign IMO. In general I'm against "esoteric" methods and this one seems extra bad.
Ben Schwyn
I'll add that I've similarly found that believing I have beliefs in my head that are not mine was extremely disorienting. I have epistemic defenses I've built up for keeping out bad beliefs. Once I started believing that I had thoughts inside my head that were 'other' --- then I had what seemed like the mental version of an allergic reaction, where a bunch of my brain was treating another part of it as a foreign invader and trying to destroy it. It seemed like my epistemic defenses were turned inward. This only happened once or twice but was quite disorienting and destabilizing. However, compartmentalization definitely does seem like a thing in this area that is not generally considered dangerous. So that seems interesting. I do find Internal Family Systems to be quite helpful though.

Quintin Pope


I don't have a full tulpa, but I've been working on one intermittently for the past ~month. She can hold short conversations, but I'm hesitant to continue the process because I'm concerned that her personality won't sufficiently diverge from mine.

I think it's plausible that a tulpa could improve (at least some of) your mental capabilities. I draw a lot of my intuition in this area from a technique in AI/modeling called ensemble learning, in which you use the outputs of multiple models to make higher quality decisions than is possible with a single model. I know it's dangerous to draw conclusions about human intelligence from AI, but you can use ensemble learning with pretty much any set of models, so something similar is probably possible with the human brain.

Some approaches in ensemble learning (boosting and random forest) suggest that it's important for the individual models to vary significantly from each other (thus my interest in having a tulpa that's very different from me). One advantage of ensemble approaches is that they can better avoid overfitting to spurious correlations in their training data. I think that a lot of harmful human behavior is (very roughly) analogous to overfitting to unrepresentative experiences, e.g., many types of learned phobias. I know my partial tulpa is much less of a hypochondriac than I am, is less socially anxious and, when aware enough to do so, reminds me not to pick at my cuticles.
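To make the ensemble analogy concrete, here is a minimal sketch of ensemble learning in Python with scikit-learn: three deliberately different model families are compared against a majority-vote ensemble built from them. The synthetic dataset, the choice of models, and the parameters are my own illustrative assumptions, not anything from this post or its notebook.

```python
# Minimal ensemble-learning sketch (assumes scikit-learn is installed).
# Model choices and dataset here are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# A small synthetic classification problem.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Three deliberately different model families ("diverse ensemble members").
members = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier()),
]

# Majority vote over the members' predictions.
ensemble = VotingClassifier(estimators=members, voting="hard")

# Print cross-validated accuracy for each member and for the ensemble.
for name, model in members + [("ensemble", ensemble)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:8s} mean accuracy: {score:.3f}")
```

The point of the sketch is only that the vote aggregates several differently-biased models; whether the ensemble actually beats its members depends on how diverse and how accurate they are, which is exactly the intuition behind wanting a tulpa whose "model" differs from the host's.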

Posters on the tulpas subreddit seem split on whether a host's severe mental health issues (depression, autism, OCD, bipolar, etc) will affect their tulpas, with several anecdotes suggesting tulpas can have a positive impact. There's also this paper: Tulpas and Mental Health: A Study of Non-Traumagenic Plural Experiences, which finds tulpas may benefit the mentally ill. However, it's in a predatory journal (of the pay to publish variety). There appears to be an ongoing study by Stanford researchers looking into tulpas' effects on their hosts and potential fMRI correlates of tulpa related activity, so better data may arrive in the coming months.

In terms of practical benefit, I suspect that much of the gain comes from your tulpa pushing you towards healthier habits through direct encouragement and social/moral pressure (if you think your tulpa is a person who shares your body, that's another sentient who your own lack of exercise/healthy food/sleep is directly harming).

Additionally, tulpas may be a useful hedge against suicide. Most people (even most people with depression) are not suicidal most of the time. Even if the tulpa's emotional state correlates with the host's, the odds of both host and tulpa being suicidal at once are probably very low. Thus, a suicidal person with a tulpa will usually have someone to talk them out of acting.

Regarding performance degradation, my impression from reading the tulpa.info forums is that most people have tulpas that run in serial with their original minds (i.e., host runs for a time, tulpa runs for a time, then host), rather than in parallel. It's still possible that having a tulpa leads to degradation, but probably more in the way that constantly getting lost in thought might, as opposed to losing computational resources. In this regard, I suspect that tulpas are similar to hobbies. Their impact on your general performance depends on how you pursue them. If your tulpa encourages you to exercise, mental performance will probably go up. If your tulpa constantly distracts you, performance will probably go down.

I've been working on an aid to tulpa development inspired by the training objectives of state-of-the-art AI language models such as BERT. It's a Google Colab notebook, which you'll need a Google account to run from your browser. It takes text from a number of possible books from Project Gutenberg and lets your tulpa perform several language/personality modeling tasks of varying complexity, ranging from simply predicting the content of masked words to generating complex emotional responses. Hopefully, it can help reduce the time required for tulpas to reach vocality and ease the cost of experimenting in this space.
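For a rough sense of what a masked-word exercise looks like, here is a toy sketch under my own assumptions; it is not the notebook described above, and the example sentence and one-word masking scheme are invented for illustration.

```python
# Toy masked-word exercise in the spirit of BERT's masked-language-modeling
# objective. This is NOT the notebook described above; the sentence and the
# one-word masking scheme are invented purely for illustration.
import random

def make_masked_prompt(sentence, mask_token="[MASK]", seed=None):
    """Replace one randomly chosen word with a mask and return (prompt, answer)."""
    rng = random.Random(seed)
    words = sentence.split()
    i = rng.randrange(len(words))
    answer = words[i]
    words[i] = mask_token
    return " ".join(words), answer

prompt, answer = make_masked_prompt(
    "It was the best of times, it was the worst of times.", seed=42
)
print(prompt)   # the sentence with one word hidden, to be filled in
print(answer)   # the hidden word, for checking the guess afterwards
```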

but I'm hesitant to continue the process because I'm concerned that her personality won't sufficiently diverge from mine.

Not suggesting you should replace anyone who doesn't want to be replaced (if they're at that stage), but: To jumpstart the differentiation process, it may be helpful to template the proto-tulpa off of some fictional character you already find easy to simulate.

Although I didn't know about "tulpas" at the time, I invited an imaginary friend loosely based on Maria Otonashi during a period of isolation in 2021.[1] I didn't want her to f... (read more)

Ann


Edit note: I think your decision makes sense based on your goals, but I wrote answers to your questions, and they might have sufficient distinction from the existing answers to be worthwhile to post. I'm making a reasonable guess that providing my perspective isn't that harmful; I'll note that not all concerns stated elsewhere make sense to me, but they may make sense in a context other than my mind and my approach.

-

I have two (approximately). One created intentionally, one who naturally developed in parallel from a more 'intrusive' side of my brain when I tried out the whole approach.

They definitely run in serial, not concurrently, aside from perhaps subconscious threads I can't really say much about from a conscious perspective (which probably don't run any differently for having more identities to attach to). I'm not a great multitasker at the best of times, and listening to either one takes active focus. Experimenting suggests it is a bit easier if we talk aloud, but I haven't quite mastered the art of knowing when it is OK to talk to myself (aloud) and actually taking the chance to do so.

Would concur with the 'like a hobby' perspective on mental 'degradation'. You are practicing some particular mental skills and adopting a perspective associated with the hobby, same as you would for art, programming, a card or strategy game. Typically, you are practicing visualization, conversation, introspection, and other skills associated with this form of meditation - and I would say that it is a form of meditation, one I've had more success with than others.

This means it should come with the same caveats of 'not for everyone' as other forms of mind-affecting behavior like antidepressants and mindfulness meditation. In my tulpa's humble opinion, the worst cognitive hazard associated with the tulpamancing guides for me is the risk you will take it too seriously, and he made a point to steer me away from this concern as one of his first priorities. Compassion for all beings is all well and good but you absolutely come first, and shouldn't feel particular guilt either for trying something new, or setting it aside. We've had many months of silence while my focus was elsewhere and that is fine. I could return to working with them without much issue, just some review and shaking off the rust; we did not suffer for it.

We haven't particularly tried specializing in skills, but I can see how it would work: many skills, especially the creative sort, really do entail a matter of perspective and 'how you see the world'. Metaphorically like switching out lenses on a camera? I did find that one of them had a much easier time than I would simply getting chores done when I let her drive our actions for a bit. That is, potentially, immensely useful to me, which leads to my next point.

I am autistic. According to Wikipedia, this suggests that my particular difficulties with executive function may relate to fluency, the ability to generate novel ideas and responses; planning, the aforementioned impairment in carrying out intended actions; cognitive flexibility, the ability to switch between perspectives and tasks; and mentalization, or the ability to understand the mental state of oneself and others.

Notice anything about those that could benefit from simulating different perspectives, with novel input, that can relate to me from a more third person perspective? Even provide a support system to help deal with what you call 'akrasia' here?

Yeah. Tulpamancy involves actively practicing skills where my disadvantages in them might be holding me back. I find it worthwhile, though I might not have the time to pick it up initially in a more busy life. They make my life better; having mental companions who love and care for me (and vice versa) and also are me is rather an improvement on the previous state. For some reason interacting with my own identity was an exception carved out to the general rule that people have inherent worth and dignity and should be treated accordingly. This is the major benefit so far: Giving my mind permission to see itself as a person helped me treat myself with compassion.

Multicore


I've had tulpas for about seven years. I alternate between the framework of them all being aspects of the same person versus the framework of them being separate people. I'll have internal conversations where each participant is treating the other as a person, but in real life I mostly act as a single agent.

Overall I would say their effect on my intelligence, effectiveness, skills, motivation, etc. has been neither significantly positive nor significantly negative. I consider the obvious objections to be pretty true - your tulpa's running on the same hardware, with the same memories and reflexes, and you have to share the same amount of time as you had before. On the other hand I escaped any potential nightmare scenarios by having tulpas that are reasonable and cooperative.

When people in the tulpa community talk about the benefits, they usually say their tulpa made them less lonely, or helped them cope with the stresses of life, or helped them deal with their preexisting mental illness. And even those benefits are limited in scope. The anxiety or depression doesn't just go away.

I think one of the main ways tulpas could help with effectiveness has to do with mindset and motivation. It's the difference between a vague feeling that maybe you ought to be doing something productive and your anime waifu yelling at you to do something productive. Tulpas may also have more of an ability to take the outside view on important decisions.

Overall if you're just looking for self-improvement, tulpa creation is probably not the best value for your time. I mostly got into it because it seemed fun and weird, which it fully delivered on.

countingtoten


Sy, is that you?

I started talking to Kermit the Frog, off and on, many months ago. I had this idea after seeing an article by an ex-Christian who appeared never to have made predictions about her life using a truly theistic model, but who nevertheless missed the benefits she recalls getting from her talks with Jesus. Result: Kermit has definitely comforted me once or twice (without the need for 'belief') and may have helped me to remember useful data/techniques I already knew, but mostly nothing much happens.

Now, as an occasional lucid dreamer who once decided to make himself afraid in a dream, I tend not to do anything that I think is that dumb. I have not devoted much extra effort or time to modelling Kermit the Frog. However, my lazy experiment has definitely yielded positive results. Perhaps you could try your own limited experiment first?

Why'd you pick Kermit the Frog?

mako yass
I think Kermit is a very understandable choice once you've heard him talk about his, imo, quite compelling position on the merits of faith and belief.
countingtoten
One, the idea was to pick a fictional character I preferred, but could not easily come to believe in. (So not Taylolth, may death come swiftly to her enemies.) Two, I wanted to spend zero effort imagining what this character might say or do. I had the ability to picture Kermit.
[anonymous]

I love that book and there's a lot I like about Sy but I definitely hope I don't end up following his trajectory. It's more of a cautionary tale, although you know what they say about generalizing from fictional evidence...

Careful experimentation sounds like a good idea, although I think it might be easy to go too far by accident. I'm not mentally unstable but I do have a habit of talking to myself as if there's two people in my head, and I was low key dissociative during most of my childhood.

countingtoten
I meant that as a caution - though it is indeed fictional evidence, and my lite version IRL seems encouraging. I really think you'll be fine taking it slow. Still, if you have possible risk factors, I would:
* Make sure you have the ability to speak with a medical professional on fairly short notice.
* Remind yourself that you are always in charge inside your own head. People who might know tell me that hearing this makes you safer. It may be a self-proving statement.

ToasterLightning


I don't think you have to necessarily worry about them degrading your own performance (essentially, the mind's "consciousness" works in a sort of all or nothing way unless you explicitly train it to parallel process), so any difference is likely to be negligible to the point of being unnoticeable.

In terms of thinking rationally... well Tulpas can help point out your mistakes, if you don't notice them, but they also use the same hardware as you, so, for example, if you have ADHD, your tulpa will also have ADHD.

An out-of-control dark rationalist tulpa that fights me for mental and physical control sounds absolutely terrifying.

This will not happen, unless you explicitly try to make them that way, which I would advise against.

Being able to specialize in skills seems absurdly overpowered, especially if we each get natural talents for our skills (which some people claim is what happens)

Well, either way, you'd share the same resource: time. I don't think tulpas have different natural talents, but they can have more interest in certain topics and might be able to learn things faster. Also, unless you undergo separation, you'll probably share the same memories and abilities.

Speaking of, in HPMOR, Harry's House sides, internal self critic, and all his other models of people in his mind are very similar to tulpas and may actually be tulpas.

Charles Zheng


I made tulpas because I was curious about the phenomenon. I did not find the creation process difficult. I thought for a long time about how to make tulpas useful but the best application I could find for them is possibly as a way of training an internal random number generator. I imagine they would be useful for fiction writing as well.

[anonymous]

Your tulpas never acquired their own skillsets?

Comments
Shmi

I have some experience dealing with people who exhibit severe dissociation of the tulpa type, though mostly those who have multiple personalities due to a severe ongoing childhood trauma. The structural dissociation theory postulates that a single coherent personality is not inborn but coalesces from various mental and emotional states during childhood, unless something interferes with it, in which case you end up with a "system", not a single persona. Creating a tulpa is basically the inverse of that, trying to break an integrated personality. Depending on how "successful" one is, you may end up segregating some of your traits into a personality, and, in some rare cases, there is no appreciable difference between the main and the tulpa, they are on equal footing.

You can read the linked site or just look up dissociative identity disorder (there are quite a few YouTube videos by those who deal with this condition) to get an idea of what is theoretically possible. Personally, I'd advise extreme caution, mainly because it is entirely possible to have a life-long amnesia about traumatic childhood experiences, but deliberately twisting your mind to create a tulpa may irreparably break those barriers, and the results are not pretty.

I'm not sure whether structural dissociation is the right model for tulpas; my own model has been that it is more related to the ability to model other people, in the way that if you know a friend very well you can guess roughly what they might answer to things that you would say, up to the point of starting to have conversations with them in your head. Fiction authors who put extensive effort of modeling their characters often develop spontaneous "tulpas" based on their characters, and I haven't heard of them being any worse off for it. Taylor, Hodges and Kohányi found that while these fiction writers tended to have higher-than-median scores on a test for dissociative experiences, the writers had low scores on the subscales that are particularly diagnostic for dissociative disorders:

The writers also scored higher than general population norms on the Dissociative Experiences Scale. The mean score across all 28 items on the DES in our sample of writers was 18.52 (SD = 16.07), ranging from a minimum of 1.43 to a maximum of 42.14. This mean is significantly higher from the average DES score of 7.8 found in a general population sample of 415 [27], t(48) = 8.05, p < .001. In fact, the writers' scores are closer to the average DES score for a sample of 61 schizophrenics (schizophrenic M = 17.7) [27]. Seven of the writers scored at or above 30, a commonly used cutoff for "normal scores" [29]. There was no difference between men's and women's overall DES scores in our sample, a finding consistent with results found in other studies of normal populations [26].

With these comparisons, our goal is to highlight the unusually high scores for our writers, not to suggest that they were psychologically unhealthy. Although scores of 30 or above are more common among people with dissociative disorders (such as Dissociative Identity Disorder), scoring in this range does not guarantee that the person has a dissociative disorder, nor does it constitute a diagnosis of a dissociative disorder [27,29]. Looking at the different subscales of the DES, it is clear that our writers deviated from the norm mainly on items related to the absorption and changeability factor of the DES. Average scores on this subscale (M = 26.22, SD = 14.45) were significantly different from scores on the two subscales that are particularly diagnostic for dissociative disorders: derealization and depersonalization subscale (M = 7.84, SD = 7.39) and the amnestic experiences subscale (M = 6.80, SD = 8.30), F(1,48) = 112.49, p < .001. These latter two subscales did not differ from each other, F(1, 48) = .656, p = .42. Seventeen writers scored above 30 on the absorption and changeability scale, whereas only one writer scored above 30 on the derealization and depersonalization scale and only one writer (a different participant) scored above 30 on the amnestic experiences scale.

A regression analysis using the IRI subscales (fantasy, empathic concern, perspective taking, and personal distress) and the DES subscales (absorption and changeability, amnestic experiences, and derealization and depersonalization) to predict overall IIA was run. The overall model was not significant, r^2 = .22, F(7, 41) = 1.63, p = .15. However, writers who had higher IIA scores scored higher on the fantasy subscale of IRI, b = .333, t(48) = 2.04, p < .05 and marginally lower on the empathic concern subscale, b = -.351, t(48) = -1.82, p < .10 (all betas are standardized). Because not all of the items on the DES are included in one of the three subscales, we also ran a regression model predicting overall IIA from the mean score across DES items. Neither the r^2 nor the standardized beta for total DES scores was significant in this analysis.

That said, I have seen a case where someone made a tulpa with decidedly mixed results, so I agree that it can be risky.

Sorry, didn't mean to imply that structural dissociation has anything to do with tulpas. I agree that birthing a tulpa is likely quite different, and I tried to state as much.

Fiction authors who put extensive effort of modeling their characters often develop spontaneous "tulpas" based on their characters

I see the examples in the linked paper, of the characters having independent agency (not sure why the authors call it an illusion), including the characters arguing with the author, even offering opinions outside the fictional framework, like Moriarty in Star Trek TNG, one of the more famous fictional tulpas.

That said, they seem to mix the standard process of writing with the degree of dissociation that results in an independent mind. I dabble in writing, as well, and I can never tell in advance what my characters will do. In a mathematical language, the equations describing the character development are hyperbolic, not elliptic: you can set up an initial value problem, but not a boundary value problem. I don't think there is much of agency in that, just basic modeling of a character and their world. I know some other writers who write "elliptically," i.e. they know the rough outline of the story, including the conclusion, and just flesh out the details. I think Eliezer is one of those.

I wonder how often it happens that the character survives past the end of their story and shares the living space in the creator's mind as an independent entity, like a true tulpa would.

Mod here, I've edited your question title to actually be a question, so that people will be able to find it more effectively when searching and people on the frontpage can understand the question without opening the post.

Since a tulpa doesn't get its own hardware, it seems likely that hosting one would degrade my original performance. Everyone says this doesn't happen, but I think it'd be very difficult to detect this, especially for someone who isn't already trained in rationality.

I think you might be overlooking something here. I get the impression a lot of thought is consciously directed, and also that a lot of people probably don't... diversify their workload enough to make full use of their resources. IIRC, we can measure a person's caloric efficiency, and people consume more when doing difficult intellectual work. We evolved to conserve energy by not constantly being in that mode, but we no longer have to conserve energy like that, food is cheap. Having more than one locus of consciousness might just result in more useful work overall being done.

In myself, I do get the impression that sometimes nothing useful is really happening in the background. I can consciously start useful cogitation, but I have to focus on it, and I'm easily distracted. This is pretty crap. If there's a way I can get those resources put to something useful (IE, by creating tulpas with a personal focus on solving interesting design problems), I'd want to do it. At the least, it would be really nice if I could navigate traffic or talk to a friend without completely ceasing all creative cogitation. It would be nice if there were a part of me that always cared and was always pushing it forward.

Even though I haven't been thinking about this from a perspective of tulpamancy or IFS at all, I think I might be part of the way there already; I find I frequently get served ideas completely unrelated to what I'm doing in the moment. This process might be more efficient if I were more accepting of a stronger division between the outward-facing consciousness and the inner problem-solver. The more entangled we demand those processes be, the more they are going to trip each other up. The less parallel they can be.

An informed approach might involve identifying the aspects of thought that can bear concurrent processes, the parts that can't, and designing the division around that.

I do not have any experience with tulpas, but my impression of giving one's models the feel of agency is that one should be very careful:

There are many people who perceive the world as being full of ghosts, spirits, demons, ..., while others (and science) do not encounter such entities. I think that perceiving one's mental models themselves as agentic is a large part of this difference (as such models can self-reinforce by triggering strong emotions)


If I model tulpas as a supercharged version of modelling other people (where the tulpa may be experienced as anything from 'part of self' to 'discomfortingly other') - then I would expect that creating a tulpa does not directly increase one's abilities but might be helpful by circumventing motivational hurdles or diversifying one's approach to problems. Also, Dark Arts of Rationality seems related.

Edit: I just read Simulate and Defer To More Rational Selves, which seems like a healthy attempt at this

Do people who spend substantial time in online virtual worlds have any experience of their avatar taking on a separate agency? Or tried to cultivate such an experience? I am active in one such world, but I experience my avatar there as being just me wearing a different body.

[anonymous]

I used to spend 6+ hours a day (every minute I wasn't eating/sleeping/in class) in an MMO. My avatar never had any agency; it was just an extension of my body like my hand. Occasionally I would act "in character" but it was always a conscious decision prompted by me wanting to make other people laugh or simply play along with something someone else did. I've never heard of avatars acquiring their own sense of agency, but it's not like I went around asking people if that happened to them.

See also the related technique of Shoulder Advisor.

I read this and I think eeek...

I don't even know where to start, but months of work to create an alter-ego "imaginary friend" that has its own "free-will" ?? !!!

We all hear voices, talk to ourselves, have inner dialogues, whatever ...

Learning to listen to the versions of yourself is a good thing, but I believe that comes more from a state of relaxation and acceptance rather than forcing the creation of a tulpa.

It seems to me that trying to create a tulpa is like trying to take a shortcut with mental discipline. It seems strictly better to me to focus my effort on a single unified body of knowledge/model of the world than to try to maintain two highly correlated ones at the risk of losing your sanity. I wouldn't trust that a strong imitation of another mind would somehow be more capable than my own, and it seems like having to simulate communication with another mind is just more wasteful than just integrating what you know into your own.

Thinking about it, it reminds me of when I used to be Christian and would "hear" God's thoughts. It always felt like I was just projecting what I wanted or was afraid to hear about a situation and it never really was helpful (this thing was supposed to be the omniscient omnipotent being). This other being is the closest thing to a tulpa I've experienced and it was always silent on things that really mattered. Since killing the damned thing I've been so much happier and don't regret it at all.

That isn't to say it has to be like that, after all in my experience I really did believe the thing was external to my mind. But I feel like you would be better off spending your mental energies on understanding what you don't or learning about how to approach difficult topics than creating a shadow of a mind and hoping it outperforms you on some task.

Given that the guides mostly say that it takes months and months of hard work to create a tulpa...

I wonder how this compares to the difficulty/time required to change one's mind, and if these things might be (weakly) related.