
Joshua_Blaine comments on Open Thread, November 1 - 7, 2013 - Less Wrong Discussion

5 Post author: witzvo 02 November 2013 04:37PM




Comment author: Joshua_Blaine 06 November 2013 03:00:53AM 6 points [-]

Does anyone here have any serious information regarding Tulpas? When I first heard of them they immediately seemed to be the kind of thing that is obviously and clearly a very bad idea, and may not even exist in the sense that people describe them. A very obvious sign of a person who is legitimately crazy, even.

Naturally, my first reaction is the desire to create one myself (one might say I'm a bit contrarian by nature). I don't know any obvious reason not to (ignoring social stigma and the time-consuming initial investment), and there may be some advantage to having one, such as parallel focus, more "outside" self-analysis, etc. I don't really know much of anything right now, which is why I'm asking if there's been any decent research done already.

Comment author: Armok_GoB 06 November 2013 04:57:57PM 3 points [-]

I've been doing some research (mainly hanging on their subreddit) and I think I have a fairly good idea of how tulpas work and the answers to your questions.

There are myriad very different things tulpas are described as, and thus "tulpas exist in the way people describe them" is not well defined.

There indisputably exists SOME specific interesting phenomenon that's the referent of the word "tulpa".

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.

I estimate its ontological status to be similar to a video game NPC, recurring dream character, or schizophrenic hallucination.

I estimate its power over reality to be similar to a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

It does not seem that deciding to make a tulpa is a sign of being crazy. Tulpas themselves seem to not be automatically unhealthy and can often help their host overcome depression or anxiety. However, there are many signs that the act of making a tulpa is dangerous and can trigger latent tendencies or be easily done in a catastrophically wrong way. I estimate the risk is similar to doing extensive meditation or taking a single largeish dose of LSD. For this reason I have not and will not attempt making one.

I am too lazy to find citations or examples right now, but I probably could. I've tried to be a good rationalist and am fairly certain of most of these claims.

Comment author: NancyLebovitz 07 November 2013 01:28:48PM 2 points [-]

Has anyone worked on making a tulpa which is smarter than they are? This seems at least possible if you assume that many people don't let themselves make full use of their intelligence and/or judgement.

Comment author: Armok_GoB 07 November 2013 05:04:34PM 4 points [-]

Unless everything I think I understand about tulpas is wrong, this is at the very least significantly harder than just thinking yourself smarter without one. All the idea generating is done before credit is assigned to either the "self" or the "tulpa".

What there ARE several examples of, however, are tulpas that are more emotionally mature, better at luminosity, and don't share all their host's preconceptions. This is not exactly smarts though, or even general-purpose formal rationality.

One CAN imagine scenarios where you end up with a tulpa smarter than the host. For example, the host might have learned helplessness, or the tulpa might be imagined as "smarter than me" and thus have all the brain's good ideas credited to it.

Disclaimer: this is based only on lots of anecdotes I've read, gut feeling, and basic stuff that should be common knowledge to any LWer.

Comment author: TheOtherDave 07 November 2013 05:40:46PM 9 points [-]

I'm reminded of a time many years ago when a coworker came into my office and asked me a question about the design of a feature that interacts with our tax calculation.

So she and I created this whole whiteboard flowchart working out the design, at the end of which I said "Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I... um... completely failed to notice?"

I could certainly describe that as having a "Mark" in my head who is smarter about tax-code-related designs than I am, and there's nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.

But "Mark" in this case would just be pointing to a subset of "Dave", just as "Dave's fantasies about aliens" does.

Comment author: gwern 07 November 2013 08:04:37PM 6 points [-]

See also 'rubberducking' and previous discussions of this on LW. My basic theory is that reasoning was developed for adversarial purposes, and by rubberducking you are essentially roleplaying as an 'adversary' which triggers deeper processing (if we ever get brain imaging of system I vs system II thinking, I'd expect that adversarial thinking triggers system II more compared to 'normal' self-centered thinking).

Comment author: TheOtherDave 07 November 2013 08:21:14PM 3 points [-]

Yes. Indeed, I suspect I've told this story before on LW in just such a discussion.

I don't necessarily buy your account -- it might just be that our brains are simply not well-integrated systems, and enabling different channels whereby parts of our brains can be activated and/or interact with one another (e.g., talking to myself, singing, roleplaying different characters, getting up and walking around, drawing, etc.) gets different (and sometimes better) results.

This is also related to the circumlocution strategy for dealing with aphasia.

Comment author: Kaj_Sotala 02 January 2014 02:29:17PM *  1 point [-]

My basic theory is that reasoning was developed for adversarial purposes

Obligatory link.

Comment author: Armok_GoB 07 November 2013 06:49:06PM *  1 point [-]

Yeah, in that case presumably the tulpa would help - but not necessarily significantly more than a non-tulpa model that requires considerably less work and risk.

Basically, a tulpa can technically do almost anything you can... but the same things can be done without a tulpa too, and for almost all of them there's some much easier and at least as effective way to do the same thing.

Comment author: ChristianKl 08 November 2013 04:04:09PM 0 points [-]

Basically, a tulpa can technically do almost anything you can...

Mental processes like waking up without an alarm clock at a specific time aren't easy. I know a bunch of people who have that skill, but it's not like there's a step-by-step manual that you can easily follow that gives you that ability.

A tulpa can do things like that. There are many mental processes that you can't access directly but that a tulpa might be able to access.

Comment author: Armok_GoB 08 November 2013 05:22:40PM 1 point [-]

I am surprised to know there isn't such a step-by-step manual, suspect that you're wrong about there not being one, and in either case know of a few people who could probably easily write one if motivated to do so.

But I guess you could make this argument: that a tulpa is more flexible and has a simpler user interface, even if it's less powerful and has a bunch of logistical and moral problems. I dont like it but I can't think of any counter arguments other than it being lazy and unaesthetic, and the kind of meditative people that make tulpas should not be the kind to take this easy way out.

Comment author: ChristianKl 09 November 2013 05:39:00AM 2 points [-]

I am surprised to know there isn't such a step-by-step manual, suspect that you're wrong about there not being one, and in either case know of a few people who could probably easily write one if motivated to do so.

My point isn't so much that it's impossible but that it isn't easy.

Creating a mental device that only wakes me up would be easier than creating a whole Tulpa, but once you do have a Tulpa you can reuse it a lot.

Let's say I want to practice Salsa dance moves at home. Visualising a full dance partner completely just for the purpose of having a dance partner at home wouldn't be worth the effort.

I'm not sure about how much you gain by pair programming with a Tulpa, but the Tulpa might be useful for that task.

It takes a lot of energy to create it the first time but afterwards you reap the benefits.

I dont like it but I can't think of any counter arguments other than it being lazy and unaesthetic, and the kind of meditative people that make tulpas should not be the kind to take this easy way out.

Tulpa creation involves quite a lot of effort, so it doesn't seem like the lazy road.

Comment author: Armok_GoB 09 November 2013 04:29:04PM 0 points [-]

Hmm, you have a point; I hadn't thought about it that way. If it weren't so dangerous I would have asked you to experiment.

Comment author: hesperidia 02 December 2013 04:00:36AM *  0 points [-]

Mental processes like waking up without an alarm clock at a specific time aren't easy. I know a bunch of people who have that skill, but it's not like there's a step-by-step manual that you can easily follow that gives you that ability.

I do not have "wake up at a specific time" ability, but I have trained myself to have "wake up within ~1.5 hours of the specific time" ability. I did this over a summer break in elementary school because I learned about how sleep worked and thought it would be cool. Note that you will need to have basically no sleep debt (you consistently wake up without an alarm) for this to work correctly.

The central point of this method is this: a sleep cycle (the time it takes to go from a light stage of sleep to the deeper stages of sleep and back again) is about 1.5 hours long. If I am not under stress or sleep debt, I can estimate my sleeping time to the nearest sleep cycle. Using the sleep cycle as a unit of measurement lets me partition out sleep without being especially reliant on my (in)ability to perceive time.

The way I did it is this (each step was done until I could do it reliably, which took up to a week each for me [but I was a preteen then, so it may be different for adults]):

  1. Block off approximately 2 hours (depending on how long it takes you to fall asleep), right after lunch so it has the least danger of merging with your consolidated/night sleep, and take a nap. Note how this makes you feel.

  2. Do that again, but instead of blocking off the 2 hours with an alarm clock, try doing it naturally, and awakening when it feels natural, around the 1.5h mark (repeating this because it is very important: you will need to have very little to no accumulated sleep debt for this to work). Note how this makes you feel.

  3. Do that again, but with a ~3.5-hour block. Take two 1.5 hour sleep cycle naps one after another (wake up in between).

  4. During a night's sleep, try waking up between every sleep cycle. Check this against [your sleep time in hours / 1.5h per sleep cycle] to make sure that you caught all of them.

  5. Block off a ~3.5 hour nap and try taking it as two sleep cycles without waking up in between them. (Not sure about the order with this point and the previous one. Did I do them in the opposite order? I'm reconstructing from memory here. It's probably possible to make this work in either order.)

  6. You probably know from step 4 how many sleep cycles you have in a night. Now you should be able to do things like consciously split up your sleep biphasically, or waking up a sleep cycle earlier than you usually do.
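The check in step 4 is simple arithmetic; here's a minimal sketch (the helper name is mine, and the ~1.5-hour cycle length is the approximation used above, so treat the numbers as estimates, not physiology):

```python
# Estimate how many ~1.5-hour sleep cycles a block of sleep contains,
# to check against the number of between-cycle wakings you counted.
CYCLE_HOURS = 1.5  # approximate length of one full sleep cycle

def expected_cycles(sleep_hours: float) -> int:
    """Round total sleep time to the nearest whole number of cycles."""
    return round(sleep_hours / CYCLE_HOURS)

print(expected_cycles(7.5))  # a 7.5-hour night -> 5 cycles
print(expected_cycles(3.0))  # the ~3.5-hour nap block above holds 2 cycles
```

If the wakings you counted in step 4 don't roughly match this number, you probably slept through one of the between-cycle transitions.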

I then spent the rest of summer break with a biphasic "first/second sleep" rhythm, which disappeared once I was in school and had to wake up at specific times again.

To this day, I sleep especially lightly, must take my naps in 1.5 hour intervals, and will frequently wake up between sleep cycles (I've had to keep a clock on my nightstand since then so I can orient myself if I get woken unexpectedly by noises, because a 3:30AM waking is different from a 5AM waking, but they're at the same point on the cycle so they feel similar). I also almost always wake up 10-45 minutes before any set alarms, which would be more useful if the spread was smaller (45 minutes before I actually need to wake up seems like a waste). It's a cool skill to have, but it has its downsides.

Comment author: TheOtherDave 07 November 2013 07:57:55PM 0 points [-]

a tulpa can technically do almost anything you can...

Yes, I would expect this.
Indeed, I'm surprised by the "almost" -- what are the exceptions?

Comment author: Armok_GoB 07 November 2013 09:26:58PM 0 points [-]

Anything that requires you using your body and interacting physically with the world.

Comment author: TheOtherDave 07 November 2013 09:33:58PM 0 points [-]

I'm startled. Why can't a tulpa control my body and interact physically with the world, if it's (mutually?) convenient for it to do so?

Comment author: Armok_GoB 07 November 2013 09:44:54PM 0 points [-]

Well, if you count that as the tulpa doing it on its own, then no, I can't think of any specific exceptions. Most tulpas can't do that trick, though.

Comment author: TheOtherDave 06 November 2013 05:11:42PM 2 points [-]

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.

Would you classify a novel in the same "moral-status" tier as these four examples?

Comment author: Armok_GoB 06 November 2013 09:49:39PM 0 points [-]

No, that's much, much lower. As in, "torturing a novel for decades in order to give a tulpa a quick amusement would be a moral thing to do" lower.

Assuming you mean either a physical book, or the simulation of the average minor character in the author's mind, here. Main characters or RPing PCs can vary a lot in complexity of simulation from author to author, and there's a theory that some become effectively tulpas.

Comment author: TheOtherDave 06 November 2013 10:11:45PM 1 point [-]

Your answer clarifies what I was trying to get at with my question but wasn't quite sure how to ask, thanks; my question was deeply muddled.

For my own part, treating a tulpa as having the moral status of an independent individual distinct from its creator seems unjustified. I would be reluctant to destroy one because it is the unique and likely-unreconstructable creative output of a human being, much like I would be reluctant to destroy a novel someone had written (as in, erase all copies of such that the novel itself no longer exists), but that's about as far as I go.

I didn't mean a physical copy of a novel, sorry that wasn't clear.

Yes, destroying all memory of a character someone played in an RPG and valued remembering I would class similarly.

But all of these are essentially property crimes, whose victim is the creator of the artwork (or more properly speaking the owner, though in most cases I can think of the roles are not really separable), not the work of art itself.

I have no idea what "torture a novel" even means, it strikes me as a category error on a par with "paint German blue" or "burn last Tuesday".

Comment author: ChristianKl 08 November 2013 04:05:18PM *  0 points [-]

What do you think about the moral status of torturing an uploaded human mind that's in silicon?

Does that mind have a different moral status than one in a brain?

Comment author: TheOtherDave 08 November 2013 04:10:45PM 2 points [-]

Certainly not by virtue of being implemented in silicon, no. Why do you ask?

Comment author: Armok_GoB 06 November 2013 11:02:45PM 0 points [-]

Ah. No, I think you'd change your mind if you spent a few hours talking to accounts that claim to be tulpas.

A newborn infant or Alzheimer's patient is not an independent individual distinct from its caretaker either. Do you count their destruction as property crime as well? "Person"-ness is not binary; it's not even a continuum. It's a cluster of properties that usually correlate but in the case of tulpas do not. I recommend re-reading Diseased Thinking.

As for your category error: /me argues for how german is a depressing language and spends all that was gained in that day on something that will not last. Then a pale-green tulpa snores in an angry manner.

Comment author: [deleted] 12 November 2013 03:17:55PM 1 point [-]

As for your category error: /me argues for how german is a depressing language and spends all that was gained in that day on something that will not last. Then a pale-green tulpa snores in an angry manner.

I picture a sheet of paper with a paragraph in each of several languages, a paintbrush, and watercolours. Then boring-sounding environmental considerations make me feel outraged without me consciously realizing what's happening.

Comment author: TheOtherDave 06 November 2013 11:50:53PM 0 points [-]

I agree that person-ness is cluster of properties and not a binary.

I don't believe that tulpas possess a significant subset of those properties independent of the person whose tulpa they are.

I don't think I'm failing to understand any of what's discussed in Diseased Thinking. If there's something in particular you think I'm failing to understand, I'd appreciate you pointing it out.

It's possible that talking to accounts that claim to be tulpas would change my mind, as you suggest. It's also possible that talking to bodies that claim to channel spirit-beings or past lives would change my mind about the existence of spirit-beings or reincarnation. Many other people have been convinced by such experiences, and I have no especially justified reason to believe that I'm relevantly different from them.

Of course, that doesn't mean that reincarnation happens, nor that spirit-beings exist who can be channeled, or that tulpas possess a significant subset of the properties which constitute person-ness independent of the person whose tulpa they are.

A newborn infant or Alzheimer's patient is not an independent individual distinct from its caretaker either.

Eh?

I can take a newborn infant away from its caretaker and hand it to a different caretaker... or to no caretaker at all... or to several caretakers. I would say it remains the same newborn infant. The caretaker can die, and the newborn infant continues to live; and vice-versa.

That seems to me sufficient justification (not necessary, but sufficient) to call it an independent individual.

Why do you say it isn't?

Do you count their destruction as property crime as well?

I count it as less like a property crime than destroying a tulpa, a novel, or an RPG character. There are things I count it as more like a property crime than.

Comment author: Armok_GoB 07 November 2013 04:52:57PM 0 points [-]

Seems I was wrong about you not understanding the word thing. Apologies.

You keep saying that word "independent". I'm starting to think we might not disagree about any objective properties of tulpas, just about whether things need to be "independent", or whether only the most important patterns count towards your utility, while I just add up the identifiable patterns without caring whether they overlap. Metaphor: tulpas are "10101101"; you're saying "101" occurs 2 times, I'm saying "101" occurs 3 times.
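To make the metaphor concrete: counting non-overlapping vs. overlapping occurrences of a pattern really does give different answers on the same string (a quick illustrative sketch):

```python
s = "10101101"

# Non-overlapping count: each match consumes its characters (what str.count does).
non_overlapping = s.count("101")

# Overlapping count: test every starting position independently.
overlapping = sum(1 for i in range(len(s)) if s.startswith("101", i))

print(non_overlapping)  # 2 -- counting only separate, non-overlapping patterns
print(overlapping)      # 3 -- counting every pattern, even where they overlap
```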

I'm fairly certain talking to bodies that claim those things would not change my probability estimates on those claims unless powerful brainwashing techniques were used, and I certainly hope the same is the case for you. If I believed that doing that would predictably shift my beliefs I'd already have those beliefs. Conservation of Expected Evidence.

((You can move a tulpa between minds too, probably; it just requires a lot of high-tech, unethical surgery, and work. And probably gives the old host permanent severe brain damage. Same as with any other kind of incommunicable memory.))

Comment author: TheOtherDave 07 November 2013 05:20:26PM 1 point [-]

You keep saying that word "independent".

(shrug) Well, I certainly agree that when I interact with a tulpa, I am interacting with a person... specifically, I'm interacting with the person whose tulpa it is, just as I am when I interact with a PC in an RPG.

What I disagree with is the claim that the tulpa has the moral status of a person (even a newborn person) independent of the moral status of the person whose tulpa it is.

I'm fairly certain talking to bodies that claim those things would not change my probability estimates on those claims unless powerful brainwashing techniques were used, and I certainly hope the same is the case for you.

On what grounds do you believe that? As I say, I observe that such experiences frequently convince other people; without some grounds for believing that I'm relevantly different from other people, my prior (your hopes notwithstanding) is that they stand a good chance of convincing me too. Ditto for talking to a tulpa.

((You can move a tulpa between minds to, probably, it just requires a lot of high tech, unethical surgery, and work. And probably gives the old host permanent severe brain damage. Same as with any other kind of incommunicable memory.))

(shrug) I don't deny this (though I'm not convinced of it either) but I don't see the relevance of it.

Comment author: Armok_GoB 07 November 2013 06:53:41PM 0 points [-]

Yeah, this seems to definitely be just a fundamental values conflict. Let's just end the conversation here.

Comment author: hylleddin 08 November 2013 01:40:30AM *  1 point [-]

As someone with personal experience with a tulpa, I agree with most of this.

I estimate its ontological status to be similar to a video game NPC, recurring dream character, or schizophrenic hallucination.

I agree with the last two, but I think a video game NPC has a different ontological status than any of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how "well-realized" they are.

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.

I have no idea what a tulpa's moral status is, besides not less than a fictional character and not more than a typical human.

I estimate its power over reality to be similar to a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

I would expect most of them to have about the same intelligence, rather than lower intelligence.

Comment author: Armok_GoB 08 November 2013 05:05:10PM 0 points [-]

You are probably counting more of the properties things can vary under as "ontological". I'm mostly going by software vs. hardware, needs to be puppeteered vs. automatic, and able to interact with the environment vs. stuck in a simulation, here.

I'm basing the moral status largely on "well realized", "complex", and "technically sentient" here. You'll notice all my examples ALSO have the actual utility-function multiplier at "unknown".

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and thus counts towards its power over reality.

Comment author: hylleddin 08 November 2013 10:43:03PM 1 point [-]

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and thus counts towards its power over reality.

Ah. I see what you mean. That makes sense.

Comment author: TheOtherDave 06 November 2013 03:03:26AM *  3 points [-]
Comment author: Joshua_Blaine 06 November 2013 01:54:38PM *  1 point [-]

I had not, actually. The link you've given just links me to Google's homepage, but I did just search LW for "Tulpa" and found it fine, so thanks regardless.

edit: The link's original purpose now works for me. I'm not sure what the problem was before, but it's gone now.

Comment author: IlyaShpitser 08 November 2013 03:48:56PM 2 points [-]

Well, if you think that the human illusion of unified agency is a good ideal to strive for, it then seems that messing around w/ tulpas is a bad thing. If you have really seriously abandoned that ideal (very few people I know have), then knock yourself out!

Comment author: Vulture 10 November 2013 05:30:50AM 0 points [-]

Why would it be considered important to maintain a feeling of unified agency?

Comment author: IlyaShpitser 10 November 2013 02:04:59PM *  2 points [-]

Is this a serious question? Everything in our society, from laws to social conventions, is based on unified agency.

The consequentialist view of rationality as expressed here seems to be based on the notion of unified agency of people (the notion of a single utility function is only coherent for unified agents).


It's fine if you don't want to maintain unified agency, but it's obviously an important concept for a lot of people. I have not met a single person who has truly abandoned this concept in their life, interactions with others, etc. The conventional view is that someone without unified agency has demons to be cast out ("my name is Legion, for we are many").

Comment author: Vulture 10 November 2013 10:35:34PM 0 points [-]

By "agency", are you referring to physical control of the body? As far as I can tell, the process of "switching" (allowing the tulpa to control the host's body temporarily) is a very rare process which is a good deal more difficult than just creating a tulpa, and which many people who have tulpas cannot do at all even if they try.

Comment author: Tenoke 06 November 2013 05:56:44PM 2 points [-]

I don't know any obvious reason not to

What is stopping me is the possibility that I will be permanently relinquishing cognitive resources for the sake of the Tulpa.

Comment author: Ishaan 09 November 2013 11:14:53PM *  1 point [-]

There's tons of easily discovered information on the web about it.

I'm not sure the Tulpa crowd would agree with this, but I think a non-esoteric example of Tulpas in everyday life is how some religious people say that God really speaks and appears to them. The "learning process" and such seem pretty similar - the only difference I can see is that in the case of Tulpas it is commonly acknowledged that the phenomenon is imaginary.

Come to think of it, that's probably a really good method for creating Tulpas quickly - building off a real or fictional character for whom you already have a relatively sophisticated mental model. It's probably also important that you are predisposed to take seriously the notion that this thing might actually be an agent which interacts with you... which might be why God works so well, and why the Tulpa crowd keeps insisting that Tulpas are "real" in the sense that they carry moral weight. It's an imagination-belief-driven phenomenon.

It might also illustrate some of the "dangers" - for example, some people who grew up with notions of the angry sort of God might always feel guilty about certain "sinful" things which they might not intellectually feel are bad.

I've also heard claims of people who gain extra abilities / parallel processing / "reminders" with Tulpas... basically, stuff that they couldn't do on their own. I don't really believe that this is possible, and if this were demonstrated to me I would need to update my model of the phenomenon. To the Tulpa community's credit, they seem willing to test the belief.

Comment author: Lumifer 06 November 2013 09:44:11PM *  0 points [-]

I don't know any obvious reason not to

A fairly obvious reason is that to generate a tulpa you need to screw up your mind in a sufficiently radical fashion. And once you do that, you may not be able to unfuck it back to normal.

I vaguely recall (sorry, no link) reading a post by a psychiatrist who said that creating tulpas is basically self-induced schizophrenia. I don't think schizophrenia is fun.

Comment author: Adele_L 07 November 2013 08:40:26PM 1 point [-]

A fairly obvious reason is that to generate a tulpa you need to screw up your mind in a sufficiently radical fashion. And once you do that, you may not be able to unfuck it back to normal.

This is a concern I share. However...

I vaguely recall (sorry, no link) reading a post by a psychiatrist who said that creating tulpas is basically self-induced schizophrenia. I don't think schizophrenia is fun.

This is the worst argument in the world.

Comment author: Lumifer 07 November 2013 09:01:22PM -1 points [-]

This is the worst argument in the world.

I don't think so; it can be rephrased tabooing emotional words. I am not trying to attach some stigma of mental illness, I'm pointing out that tulpas are basically a self-inflicted case of what the medical profession calls dissociative identity disorder and that it has significant mental costs.

Comment author: Kaj_Sotala 02 January 2014 02:47:20PM 0 points [-]

I'm pointing out that tulpas are basically a self-inflicted case of what the medical profession calls dissociative identity disorder and that it has significant mental costs.

Taylor et al. claim that although people who exhibit the illusion of independent agency do score higher than the population norm on a screening test of dissociative symptoms, the profile on the most diagnostic items is different from DID patients, and scores on the test do not predict IIA:

The writers also scored higher than general population norms on the Dissociative Experiences Scale. The mean score across all 28 items on the DES in our sample of writers was 18.52 (SD = 16.07), ranging from a minimum of 1.43 to a maximum of 42.14. This mean is significantly higher than the average DES score of 7.8 found in a general population sample of 415 [27], t(48) = 8.05, p < .001.

In fact, the writers' scores are closer to the average DES score for a sample of 61 schizophrenics (schizophrenic M = 17.7) [27]. Seven of the writers scored at or above 30, a commonly used cutoff for "normal scores" [29]. There was no difference between men's and women's overall DES scores in our sample, a finding consistent with results found in other studies of normal populations [26].

With these comparisons, our goal is to highlight the unusually high scores for our writers, not to suggest that they were psychologically unhealthy. Although scores of 30 or above are more common among people with dissociative disorders (such as Dissociative Identity Disorder), scoring in this range does not guarantee that the person has a dissociative disorder, nor does it constitute a diagnosis of a dissociative disorder [27,29]. Looking at the different subscales of the DES, it is clear that our writers deviated from the norm mainly on items related to the absorption and changeability factor of the DES. Average scores on this subscale (M = 26.22, SD = 14.45) were significantly different from scores on the two subscales that are particularly diagnostic for dissociative disorders: the derealization and depersonalization subscale (M = 7.84, SD = 7.39) and the amnestic experiences subscale (M = 6.80, SD = 8.30), F(1,48) = 112.49, p < .001. These latter two subscales did not differ from each other, F(1,48) = .656, p = .42. Seventeen writers scored above 30 on the absorption and changeability scale, whereas only one writer scored above 30 on the derealization and depersonalization scale and only one writer (a different participant) scored above 30 on the amnestic experiences scale.

A regression analysis using the IRI subscales (fantasy, empathic concern, perspective taking, and personal distress) and the DES subscales (absorption and changeability, amnestic experiences, and derealization and depersonalization) to predict overall IIA was run. The overall model was not significant, r^2 = .22, F(7, 41) = 1.63, p = .15. However, writers who had higher IIA scores scored higher on the fantasy subscale of the IRI, b = .333, t(48) = 2.04, p < .05, and marginally lower on the empathic concern subscale, b = -.351, t(48) = -1.82, p < .10 (all betas are standardized). Because not all of the items on the DES are included in one of the three subscales, we also ran a regression model predicting overall IIA from the mean score across DES items. Neither the r^2 nor the standardized beta for total DES scores was significant in this analysis.

Comment author: ChristianKl 08 November 2013 03:50:59PM 0 points [-]

Could you describe the relevant mental costs that you would expect as a side-effect of creating a tulpa?

Comment author: Lumifer 08 November 2013 04:14:46PM 0 points [-]

Loss of control over your mind.

Comment author: ChristianKl 08 November 2013 04:34:07PM 1 point [-]

What does that mean?

Comment author: Lumifer 08 November 2013 04:48:48PM 0 points [-]

An entirely literal reading of that phrase.

Comment author: ChristianKl 08 November 2013 04:50:26PM -1 points [-]

So you mean that you are something that's separate from your mind? If so, what's you and how does it control the mind?

Comment author: Lumifer 08 November 2013 05:08:23PM *  2 points [-]

Your mind is a very complicated entity. It has been suggested that looking at it as a network (or an ecology) of multiple agents is a more useful view than thinking about it as something monolithic.

In particular, your reasoning consciousness is very much not the only agent in your mind and is not the only controller. An early example of such analysis is Freud's distinction between the id, the ego, and the superego.

Usually, though, your conscious self has sufficient control in day-to-day activities. This control breaks down, for example, under severe emotional stress. Or it can be subverted (cf. problems with maintaining diets). The point is that it's not absolute and you can have more of it or less of it. People with less are often described as having "poor impulse control" but that's not the only mode. Addiction would be another example.

So what I mean here is that the part of your mind that you think of as "I", the one that does conscious reasoning, will have less control over yourself.

Comment author: Vulture 08 November 2013 04:56:25AM *  0 points [-]

Welp, look at that, I just found this thread after finishing up a long comment on the subject in an older thread. Go figure. (By the way, I do recommend reading that entire discussion, which included some actual tulpas chiming in).

Comment author: ChristianKl 08 November 2013 03:51:57PM 0 points [-]

Tulpa creation is effectively the creation of a form of sentient AI that runs on the hardware of your brain instead of silicon.

That brings up a moral question. To what extent is it immoral to create a tulpa and have it be in pain?

Tulpas are supposed to suffer from not getting enough attention, so if you can't commit to giving one a lot of attention for the rest of your life, you might commit an immoral act by creating it.

Comment author: Lumifer 08 November 2013 04:18:05PM 1 point [-]

Tulpa creation is effectively the creation of a form of sentient AI that runs on the hardware of your brain instead of silicon.

No, I don't think so. It's notably missing the "artificial" part of AI.

I think of tulpa creation as splitting off a shard of your own mind. It's still your own mind, only split now.

Comment author: Vulture 10 November 2013 02:52:10AM *  0 points [-]

I think the really relevant ethical question is whether a tulpa has a separate consciousness from its host. From my own research in the area (which has been very casual, mind you), I consider it highly unlikely that they have separate consciousness, but not so unlikely that I would be willing to create a tulpa and then let it die, for example.

In fact, my uncertainty on this issue is the main reason I am ambivalent about creating a tulpa. It seems like it would be very useful: I solve problems much better when working with other people, even if they don't contribute much; a tulpa more virtuous than myself could be a potent tool for self-improvement; it could help ameliorate the "fear of social isolation" obstacle to potential ambitious projects; I would gain a better understanding of how tulpas work; I could practice dancing and shaking hands more often; etc. etc. But I worry about being responsible for what may be (even with only ~15% subjective probability) a conscious mind, which will then literally die if I don't spend time with it regularly (ref).

Comment author: TheOtherDave 10 November 2013 04:10:40AM 0 points [-]

Just to clarify this a little... how many separate consciousnesses do you estimate your brain currently hosts?

Comment author: Vulture 10 November 2013 05:11:21AM 0 points [-]

By my current (layman's) understanding of consciousness, my brain currently hosts exactly one.

Comment author: TheOtherDave 10 November 2013 02:00:24PM 0 points [-]

OK, thanks.

Comment author: ChristianKl 08 November 2013 04:32:43PM 0 points [-]

No, I don't think so. It's notably missing the "artificial" part of AI.

It's not your normal mind, so it's artificial for ethical considerations.

I think of tulpa creation as splitting off a shard of your own mind. It's still your own mind, only split now.

As far as I can tell from reading stuff written by people with tulpas, they treat them as entities whose desires matter.

Comment author: Vulture 10 November 2013 02:53:18AM 1 point [-]

It's not your normal mind, so it's artificial for ethical considerations.

This might be a stupid question, but what ethical considerations are different for an "artificial" mind?

Comment author: ChristianKl 10 November 2013 03:36:35PM 0 points [-]

This might be a stupid question, but what ethical considerations are different for an "artificial" mind?

When talking about AGI few people label it as murder to shut down the AI that's in the box. At least it's worth a discussion whether it is.

Comment author: [deleted] 11 November 2013 08:16:51PM 2 points [-]
Comment author: Vulture 12 November 2013 04:35:23AM *  1 point [-]

Wow, I had forgotten about that non-person predicates post. I definitely never thought it would have any bearing on a decision I personally would have to make. I was wrong.

Comment author: Vulture 10 November 2013 08:27:59PM 0 points [-]

Really? I was under the impression that there was a strong consensus, at least here on LW, that a sufficiently accurate simulation of consciousness is the moral equivalent of consciousness.

Comment author: ChristianKl 11 November 2013 04:12:31PM *  0 points [-]

"Sufficiently accurate simulation of consciousness" is a subset of the set of things that are artificial minds. You might have a consensus for that class. I don't think you have a consensus that all minds have the same moral value, or even all minds with a certain level of intelligence.

Comment author: Vulture 11 November 2013 07:03:12PM 0 points [-]

At least for me, personally, the relevant property for moral status is whether it has consciousness.

Comment author: TheOtherDave 11 November 2013 02:32:42AM *  0 points [-]

That's my understanding as well.... though I would say, rather, that being artificial is not a particularly important attribute towards evaluating the moral status of a consciousness. IOW, an artificial consciousness is a consciousness, and the same moral considerations apply to it as other consciousnesses with the same properties. That said, I also think this whole "a tulpa {is,isn't} an artificial intelligence" discussion is an excellent example of losing track of referents in favor of manipulating symbols, so I don't think it matters much in context.

Comment author: Lumifer 08 November 2013 04:47:20PM 1 point [-]

It's not your normal mind, so it's artificial for ethical considerations.

I don't find this argument convincing.

As far as I can tell from reading stuff written by people with tulpas, they treat them as entities whose desires matter.

Yes, and..?

Let me quote William Gibson here:

Addictions ... started out like magical pets, pocket monsters. They did extraordinary tricks, showed you things you hadn't seen, were fun. But came, through some gradual dire alchemy, to make decisions for you. Eventually, they were making your most crucial life-decisions. And they were ... less intelligent than goldfish.

Comment author: ChristianKl 08 November 2013 04:52:55PM 0 points [-]

Yes, and..?

There's a good chance that you will also hold that belief when you interact with the tulpa on a daily basis. As such, it makes sense to think about the implications of the whole affair before creating one.

Comment author: Lumifer 08 November 2013 05:12:17PM 2 points [-]

I still don't see what you are getting at. If I treat a tulpa as a shard of my own mind, of course its desires matter, it's the desires of my own mind.

Think of having an internal dialogue with yourself. I think of tulpas as a boosted/uplifted version of a party in that internal dialogue.

Comment author: Armok_GoB 08 November 2013 05:11:38PM 0 points [-]

Just some facts, without getting entangled in the argument: in anecdotes, tulpas seem to report more abstract and less intense types of suffering than humans. The by far dominant source of suffering in tulpas seems to be via empathy with the host. The suffering from not getting enough attention is probably fully explainable by loneliness, and by sadness over fading away and losing the ability to think and do things.

Comment author: Vulture 10 November 2013 02:54:36AM 0 points [-]

This is very useful information if true. Could you link to some of the anecdotes which you draw this from?

Comment author: Armok_GoB 10 November 2013 09:49:14PM 0 points [-]

Look around on http://www.reddit.com/r/Tulpas/ or ask some questions yourself in the various IRC rooms that can be reached from there. I only have vague memories built from threads buried months back on that subreddit.