Joshua_Blaine comments on Open Thread, November 1 - 7, 2013 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Does anyone here have any serious information regarding Tulpas? When I first heard of them they immediately seemed to be the kind of thing that is obviously and clearly a very bad idea, and may not even exist in the sense that people describe them. A very obvious sign of a person who is legitimately crazy, even.
Naturally, my first reaction is the desire to create one myself (one might say I'm a bit contrarian by nature). I don't know any obvious reason not to (ignoring social stigma and the time-consuming initial investment), and there may be some advantage to having one, such as parallel focus, more "outside" self-analysis, etc. I don't really know much of anything right now, which is why I'm asking if there's been any decent research done already.
I've been doing some research (mainly hanging on their subreddit) and I think I have a fairly good idea of how tulpas work and the answers to your questions.
There are myriad very different things that tulpas are described as, and thus "tulpas exist in the way people describe them" is not well defined.
There indisputably exist SOME specific interesting phenomena that are the referent of the word "tulpa".
I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's patient, dolphin, or beloved family pet dog.
I estimate its ontological status to be similar to that of a video game NPC, recurring dream character, or schizophrenic hallucination.
I estimate its power over reality to be similar to that of a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.
It does not seem that deciding to make a tulpa is a sign of being crazy. Tulpas themselves seem not to be automatically unhealthy, and can often help their host overcome depression or anxiety. However, there are many signs that the act of making a tulpa is dangerous and can trigger latent tendencies, or be easily done in a catastrophically wrong way. I estimate the risk is similar to that of doing extensive meditation or taking a single large-ish dose of LSD. For this reason I have not and will not attempt making one.
I am too lazy to find citations or examples right now, but I probably could. I've tried to be a good rationalist and am fairly certain of most of these claims.
Has anyone worked on making a tulpa which is smarter than they are? This seems at least possible if you assume that many people don't let themselves make full use of their intelligence and/or judgement.
Unless everything I think I understand about tulpas is wrong, this is at the very least significantly harder than just thinking yourself smarter without one. All the idea generating is done before credit is assigned to either the "self" or the "tulpa".
What there ARE several examples of, however, are tulpas that are more emotionally mature, better at luminosity, and don't share all their host's preconceptions. This is not exactly smarts, though, or even general-purpose formal rationality.
One CAN imagine scenarios where you end up with a tulpa smarter than the host. For example, the host might have learned helplessness, or the tulpa might be imagined as "smarter than me", so that all the brain's good ideas get credited to it.
Disclaimer: this is based only on lots of anecdotes I've read, gut feeling, and basic stuff that should be common knowledge to any LWer.
I'm reminded of many years ago, a coworker coming into my office and asking me a question about the design of a feature that interacts with our tax calculation.
So she and I created this whole whiteboard flowchart working out the design, at the end of which I said "Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I... um... completely failed to notice?"
I could certainly describe that as having a "Mark" in my head who is smarter about tax-code-related designs than I am, and there's nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.
But "Mark" in this case would just be pointing to a subset of "Dave", just as "Dave's fantasies about aliens" does.
See also 'rubberducking' and previous discussions of this on LW. My basic theory is that reasoning was developed for adversarial purposes, and by rubberducking you are essentially roleplaying as an 'adversary' which triggers deeper processing (if we ever get brain imaging of system I vs system II thinking, I'd expect that adversarial thinking triggers system II more compared to 'normal' self-centered thinking).
Yes. Indeed, I suspect I've told this story before on LW in just such a discussion.
I don't necessarily buy your account -- it might just be that our brains are simply not well-integrated systems, and enabling different channels whereby parts of our brains can be activated and/or interact with one another (e.g., talking to myself, singing, roleplaying different characters, getting up and walking around, drawing, etc.) gets different (and sometimes better) results.
This is also related to the circumlocution strategy for dealing with aphasia.
Obligatory link.
Yeah, in that case presumably the tulpa would help - but not necessarily significantly more than such a non-tulpa model, which requires considerably less work and risk.
Basically, a tulpa can technically do almost anything you can... but someone without a tulpa can do those things too, and for almost all of them there's some much easier and at least as effective way to do the same thing.
Mental processes like waking up without an alarm clock at a specific time aren't easy. I know a bunch of people who have that skill, but it's not as if there's a step-by-step manual you can easily follow that gives you that ability.
A tulpa can do things like that. There are many mental processes that you can't access directly but that a tulpa might be able to access.
I am surprised to hear there isn't such a step-by-step manual, suspect that you're wrong about there not being one, and in either case know a few people who could probably easily write one if motivated to do so.
But I guess you could make this argument: that a tulpa is more flexible and has a simpler user interface, even if it's less powerful and has a bunch of logistical and moral problems. I don't like it, but I can't think of any counterarguments other than it being lazy and unaesthetic, and that the kind of meditative people who make tulpas should not be the kind to take this easy way out.
My point isn't so much that it's impossible as that it isn't easy.
Creating a mental device that only wakes me up would be easier than creating a whole Tulpa, but once you do have a Tulpa you can reuse it a lot.
Let's say I want to practice Salsa dance moves at home. Visualising a full dance partner completely just for the purpose of having a dance partner at home wouldn't be worth the effort.
I'm not sure about how much you gain by pair programming with a Tulpa, but the Tulpa might be useful for that task.
It takes a lot of energy to create it the first time but afterwards you reap the benefits.
Tulpa creation involves quite a lot of effort so it doesn't seem the lazy road.
Hmm, you have a point, I hadn't thought about it that way. If it wasn't so dangerous I would have asked you to experiment.
I do not have "wake up at a specific time" ability, but I have trained myself to have "wake up within ~1.5 hours of the specific time" ability. I did this over a summer break in elementary school because I learned about how sleep worked and thought it would be cool. Note that you will need to have basically no sleep debt (you consistently wake up without an alarm) for this to work correctly.
The central point of this method is this: a sleep cycle (the time it takes to go from a light stage of sleep to the deeper stages of sleep and back again) is about 1.5 hours long. If I am not under stress or sleep debt, I can estimate my sleeping time to the nearest sleep cycle. Using the sleep cycle as a unit of measurement lets me partition out sleep without being especially reliant on my (in)ability to perceive time.
The way I did it is this (each step was done until I could do it reliably, which took up to a week each for me [but I was a preteen then, so it may be different for adults]):
1. Block off approximately 2 hours (depending on how long it takes you to fall asleep), right after lunch so it has the least danger of merging with your consolidated/night sleep, and take a nap. Note how this makes you feel.
2. Do that again, but instead of blocking off the 2 hours with an alarm clock, try doing it naturally, awakening when it feels natural, around the 1.5h mark (repeating this because it is very important: you will need to have very little to no accumulated sleep debt for this to work). Note how this makes you feel.
3. Do that again, but with a ~3.5-hour block. Take two 1.5 hour sleep cycle naps one after another (wake up in between).
4. During a night's sleep, try waking up between every sleep cycle. Check this against [your sleep time in hours / 1.5h per sleep cycle] to make sure that you caught all of them.
5. Block off a ~3.5 hour nap and try taking it as two sleep cycles without waking up in between them. (Not sure about the order of this point and the previous one. Did I do them in the opposite order? I'm reconstructing from memory here. It's probably possible to make this work in either order.)
You probably know from step 4 how many sleep cycles you have in a night. Now you should be able to do things like consciously split up your sleep biphasically, or waking up a sleep cycle earlier than you usually do.
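The bookkeeping in step 4 is simple enough to sketch in code. A minimal example, assuming the ~90-minute cycle length used above (real cycle lengths vary between people, so treat the numbers as illustrative):

```python
from datetime import datetime, timedelta

CYCLE = timedelta(minutes=90)  # ~1.5 h per sleep cycle, as assumed above

def cycle_count(sleep_start, sleep_end):
    """Estimate how many full ~1.5h sleep cycles fit in a sleep period."""
    return int((sleep_end - sleep_start) / CYCLE)

def expected_wakings(sleep_start, n_cycles):
    """Times at which between-cycle wakings would be expected."""
    return [sleep_start + CYCLE * i for i in range(1, n_cycles + 1)]

# Fall asleep at 11:00 PM, wake at 6:30 AM: 7.5 h of sleep.
start = datetime(2013, 11, 1, 23, 0)
end = datetime(2013, 11, 2, 6, 30)
n = cycle_count(start, end)  # 7.5 h / 1.5 h = 5 cycles
```

So on this schedule you would expect to catch five between-cycle wakings, the last of which coincides with getting up for the day.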
I then spent the rest of summer break with a biphasic "first/second sleep" rhythm, which disappeared once I was in school and had to wake up at specific times again.
To this day, I sleep especially lightly, must take my naps in 1.5 hour intervals, and will frequently wake up between sleep cycles (I've had to keep a clock on my nightstand since then so I can orient myself if I get woken unexpectedly by noises, because a 3:30AM waking is different from a 5AM waking, but they're at the same point on the cycle so they feel similar). I also almost always wake up 10-45 minutes before any set alarms, which would be more useful if the spread was smaller (45 minutes before I actually need to wake up seems like a waste). It's a cool skill to have, but it has its downsides.
Yes, I would expect this.
Indeed, I'm surprised by the "almost" -- what are the exceptions?
Anything that requires you using your body and interacting physically with the world.
I'm startled. Why can't a tulpa control my body and interact physically with the world, if it's (mutually?) convenient for it to do so?
Well, if you count that as the tulpa doing it on its own, then no, I can't think of any specific exceptions. Most tulpas can't do that trick, though.
Would you classify a novel in the same "moral-status" tier as these four examples?
No, that's much, much lower. As in, "torturing a novel for decades in order to give a tulpa a moment's amusement would be a moral thing to do" lower.
Assuming you mean either a physical book, or the simulation of the average minor character in the author's mind, here. Main characters or RPed PCs can vary a lot in complexity of simulation from author to author, and there's a theory that some effectively become tulpas.
Your answer clarifies what I was trying to get at with my question but wasn't quite sure how to ask, thanks; my question was deeply muddled.
For my own part, treating a tulpa as having the moral status of an independent individual distinct from its creator seems unjustified. I would be reluctant to destroy one because it is the unique and likely-unreconstructable creative output of a human being, much like I would be reluctant to destroy a novel someone had written (as in, erase all copies of such that the novel itself no longer exists), but that's about as far as I go.
I didn't mean a physical copy of a novel, sorry that wasn't clear.
Yes, I would class similarly the destruction of all memory of a character that someone played in an RPG and valued remembering.
But all of these are essentially property crimes, whose victim is the creator of the artwork (or more properly speaking the owner, though in most cases I can think of the roles are not really separable), not the work of art itself.
I have no idea what "torture a novel" even means, it strikes me as a category error on a par with "paint German blue" or "burn last Tuesday".
What do you think about the moral status of torturing an uploaded human mind that's in silicon?
Does that mind have a different moral status than one in a brain?
Certainly not by virtue of being implemented in silicon, no. Why do you ask?
Ah. No, I think you'd change your mind if you spent a few hours talking to accounts that claim to be tulpas.
A newborn infant or Alzheimer's patient is not an independent individual distinct from its caretaker either. Do you count their destruction as property crime as well? "Person"-ness is not binary; it's not even a continuum. It's a cluster of properties that usually correlate but in the case of tulpas do not. I recommend re-reading Diseased Thinking.
As for your category error: /me argues for how German is a depressing language and spends all that was gained in that day on something that will not last. Then a pale-green tulpa snores in an angry manner.
I picture a sheet of paper with a paragraph in each of several languages, a paintbrush, and watercolours. Then boring-sounding environmental considerations make me feel outraged without me consciously realizing what's happening.
I agree that person-ness is a cluster of properties and not a binary.
I don't believe that tulpas possess a significant subset of those properties independent of the person whose tulpa they are.
I don't think I'm failing to understand any of what's discussed in Diseased Thinking. If there's something in particular you think I'm failing to understand, I'd appreciate you pointing it out.
It's possible that talking to accounts that claim to be tulpas would change my mind, as you suggest. It's also possible that talking to bodies that claim to channel spirit-beings or past lives would change my mind about the existence of spirit-beings or reincarnation. Many other people have been convinced by such experiences, and I have no especially justified reason to believe that I'm relevantly different from them.
Of course, that doesn't mean that reincarnation happens, nor that spirit-beings exist who can be channeled, or that tulpas possess a significant subset of the properties which constitute person-ness independent of the person whose tulpa they are.
Eh?
I can take a newborn infant away from its caretaker and hand it to a different caretaker... or to no caretaker at all... or to several caretakers. I would say it remains the same newborn infant. The caretaker can die, and the newborn infant continues to live; and vice-versa.
That seems to me sufficient justification (not necessary, but sufficient) to call it an independent individual.
Why do you say it isn't?
I count it as less like a property crime than destroying a tulpa, a novel, or an RPG character. There are things I count it as more like a property crime than.
Seems I was wrong about you not understanding the word thing. Apologies.
You keep using that word "independent". I'm starting to think we might not disagree about any objective properties of tulpas, just about whether things need to be "independent", or whether only the most important instance counts towards your utility, whereas I just add up the identifiable patterns without caring whether they overlap. Metaphor: tulpas are "10101101"; you're saying "101" occurs 2 times, I'm saying "101" occurs 3 times.
I'm fairly certain talking to bodies that claim those things would not change my probability estimates on those claims unless powerful brainwashing techniques were used, and I certainly hope the same is the case for you. If I believed that doing that would predictably shift my beliefs I'd already have those beliefs. Conservation of Expected Evidence.
((You can move a tulpa between minds too, probably; it just requires a lot of high-tech, unethical surgery and work. And probably gives the old host permanent severe brain damage. Same as with any other kind of incommunicable memory.))
(shrug) Well, I certainly agree that when I interact with a tulpa, I am interacting with a person... specifically, I'm interacting with the person whose tulpa it is, just as I am when I interact with a PC in an RPG.
What I disagree with is the claim that the tulpa has the moral status of a person (even a newborn person) independent of the moral status of the person whose tulpa it is.
On what grounds do you believe that? As I say, I observe that such experiences frequently convince other people; without some grounds for believing that I'm relevantly different from other people, my prior (your hopes notwithstanding) is that they stand a good chance of convincing me too. Ditto for talking to a tulpa.
(shrug) I don't deny this (though I'm not convinced of it either) but I don't see the relevance of it.
Yeah, this seems to definitely be just a fundamental values conflict. Let's just end the conversation here.
As someone with personal experience with a tulpa, I agree with most of this.
I agree with the last two, but I think a video game NPC has a different ontological status than any of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how "well-realized" they are.
I have no idea what a tulpa's moral status is, besides not less than a fictional character and not more than a typical human.
I would expect most of them to have about the same intelligence, rather than lower intelligence.
You are probably counting more of the properties things can vary under as "ontological". I'm mostly doing software vs. hardware, needs to be puppeteered vs. automatic, and able to interact with the environment vs. stuck in a simulation, here.
I'm basing the moral status largely on "well realized", "complex" and "technically sentient" here. You'll notice all my examples ALSO have the actual utility function multiplier at "unknown".
Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and thus not all of it counts towards its power over reality.
Ah. I see what you mean. That makes sense.
Have you read the earlier discussions on this topic?
I had not, actually. The link you've given just links me to Google's homepage, but I did just search LW for "Tulpa" and found it fine, so thanks regardless.
edit: The link's original purpose now works for me. I'm not sure what the problem was before, but it's gone now.
Well, if you think that the human illusion of unified agency is a good ideal to strive for, it then seems that messing around w/ tulpas is a bad thing. If you have really seriously abandoned that ideal (very few people I know have), then knock yourself out!
Why would it be considered important to maintain a feeling of unified agency?
Is this a serious question? Everything in our society, from laws to social conventions, is based on unified agency.
The consequentialist view of rationality as expressed here seems to be based on the notion of unified agency of people (the notion of a single utility function is only coherent for unified agents).
It's fine if you don't want to maintain unified agency, but it's obviously an important concept for a lot of people. I have not met a single person who truly has abandoned this concept in their life, interactions with others, etc. The conventional view is someone without unified agency has demons to be cast out ("my name is Legion, for we are many.")
By "agency", are you referring to physical control of the body? As far as I can tell, the process of "switching" (allowing the tulpa to control the host's body temporarily) is a very rare process which is a good deal more difficult than just creating a tulpa, and which many people who have tulpas cannot do at all even if they try.
What is stopping me is the possibility that I will be potentially permanently relinquishing cognitive resources for the sake of the Tulpa.
There's tons of easily discovered information on the web about it.
I'm not sure the Tulpa crowd would agree with this, but I think a non-esoteric example of Tulpas in everyday life is how some religious people say that God really speaks and appears to them. The "learning process" and such seem pretty similar - the only difference I can see is that in the case of Tulpas it is commonly acknowledged that the phenomenon is imaginary.
Come to think of it, that's probably a really good method for creating Tulpas quickly - building off a real or fictional character for whom you already have a relatively sophisticated mental model. It's probably also important that you are predisposed to take seriously the notion that this thing might actually be an agent which interacts with you... which might be why God works so well, and why the Tulpa crowd keeps insisting that Tulpas are "real" in the sense that they carry moral weight. It's an imagination-belief-driven phenomenon.
It might also illustrate some of the "dangers" - for example, some people who grew up with notions of the angry sort of God might always feel guilty about certain "sinful" things which they might not intellectually feel are bad.
I've also heard claims of people who gain extra abilities / parallel processing / "reminders" with Tulpas... basically, stuff that they couldn't do on their own. I don't really believe that this is possible, and if this were demonstrated to me I would need to update my model of the phenomenon. To the Tulpa community's credit, they seem willing to test the belief.
A fairly obvious reason is that to generate a tulpa you need to screw up your mind in a sufficiently radical fashion. And once you do that, you may not be able to unfuck it back to normal.
I vaguely recall (sorry, no link) reading a post by a psychiatrist who said that creating tulpas is basically self-induced schizophrenia. I don't think schizophrenia is fun.
This is a concern I share. However...
This is the worst argument in the world.
I don't think so, it can be rephrased tabooing emotional words. I am not trying to attach some stigma of mental illness, I'm pointing out that tulpas are basically a self-inflicted case of what the medical profession calls dissociative identity disorder and that it has significant mental costs.
Taylor et al. claim that although people who exhibit the illusion of independent agency do score higher than the population norm on a screening test of dissociative symptoms, the profile on the most diagnostic items is different from DID patients, and scores on the test do not predict IIA:
Could you describe the relevant mental costs that you would expect as a side effect of creating a tulpa?
Loss of control over your mind.
What does that mean?
An entirely literal reading of that phrase.
So you mean that you are something that's separate from your mind? If so, what's you and how does it control the mind?
Your mind is a very complicated entity. It has been suggested that looking at it as a network (or an ecology) of multiple agents is a more useful view than thinking about it as something monolithic.
In particular, your reasoning consciousness is very much not the only agent in your mind and is not the only controller. An early example of such analysis is Freud's distinction between the id, the ego, and the superego.
Usually, though, your conscious self has sufficient control in day-to-day activities. This control breaks down, for example, under severe emotional stress. Or it can be subverted (cf. problems with maintaining diets). The point is that it's not absolute and you can have more of it or less of it. People with less are often described as having "poor impulse control" but that's not the only mode. Addiction would be another example.
So what I mean here is that the part of your mind that you think of as "I", the one that does conscious reasoning, will have less control over yourself.
Welp, look at that, I just found this thread after finishing up a long comment on the subject in an older thread. Go figure. (By the way, I do recommend reading that entire discussion, which included some actual tulpas chiming in).
Tulpa creation is effectively the creation of a form of sentient AI that runs on the hardware of your brain instead of silicon.
That brings up a moral question. To what extent is it immoral to create a Tulpa and have it be in pain?
Tulpas are supposed to suffer from not getting enough attention, so if you can't commit to giving one a lot of attention for the rest of your life, you might commit an immoral act by creating it.
No, I don't think so. It's notably missing the "artificial" part of AI.
I think of tulpa creation as splitting off a shard of your own mind. It's still your own mind, only split now.
I think the really relevant ethical question is whether a tulpa has a separate consciousness from its host. From my own researches in the area (which have been very casual, mind you), I consider it highly unlikely that they have separate consciousness, but not so unlikely that I would be willing to create a tulpa and then let it die, for example.
In fact, my uncertainty on this issue is the main reason I am ambivalent about creating a tulpa. It seems like it would be very useful: I solve problems much better when working with other people, even if they don't contribute much; a tulpa more virtuous than myself could be a potent tool for self-improvement; it could help ameliorate the "fear of social isolation" obstacle to potential ambitious projects; I would gain a better understanding of how tulpas work; I could practice dancing and shaking hands more often; etc. etc. But I worry about being responsible for what may be (even with only ~15% subjective probability) a conscious mind, which will then literally die if I don't spend time with it regularly (ref).
Just to clarify this a little... how many separate consciousnesses do you estimate your brain currently hosts?
By my current (layman's) understanding of consciousness, my brain currently hosts exactly one.
OK, thanks.
It's not your normal mind, so it's artificial for ethical considerations.
As far as I've read of the things written by people with Tulpas, they treat them as entities whose desires matter.
This might be a stupid question, but what ethical considerations are different for an "artificial" mind?
When talking about AGI few people label it as murder to shut down the AI that's in the box. At least it's worth a discussion whether it is.
Only if it's not sapient, which is a non-trivial question.
Wow, I had forgotten about that non-person predicates post. I definitely never thought it would have any bearing on a decision I personally would have to make. I was wrong.
Really? I was under the impression that there was a strong consensus, at least here on LW, that a sufficiently accurate simulation of consciousness is the moral equivalent of consciousness.
"Sufficiently accurate simulation of consciousness" is a subset of the set of things that are artificial minds. You might have a consensus for that class. I don't think you have an understanding that all minds have the same moral value - not even all minds with a certain level of intelligence.
At least for me, personally, the relevant property for moral status is whether it has consciousness.
That's my understanding as well.... though I would say, rather, that being artificial is not a particularly important attribute towards evaluating the moral status of a consciousness. IOW, an artificial consciousness is a consciousness, and the same moral considerations apply to it as other consciousnesses with the same properties. That said, I also think this whole "a tulpa {is,isn't} an artificial intelligence" discussion is an excellent example of losing track of referents in favor of manipulating symbols, so I don't think it matters much in context.
I don't find this argument convincing.
Yes, and..?
Let me quote William Gibson here:
Addictions ... started out like magical pets, pocket monsters. They did extraordinary tricks, showed you things you hadn't seen, were fun. But came, through some gradual dire alchemy, to make decisions for you. Eventually, they were making your most crucial life-decisions. And they were ... less intelligent than goldfish.
There's a good chance that you will also hold that belief when you interact with the Tulpa on a daily basis. As such, it makes sense to think about the implications of the whole affair before creating one.
I still don't see what you are getting at. If I treat a tulpa as a shard of my own mind, of course its desires matter, it's the desires of my own mind.
Think of having an internal dialogue with yourself. I think of tulpas as a boosted/uplifted version of a party in that internal dialogue.
Just some facts, without getting entangled in the argument: in anecdotes, tulpas seem to report more abstract and less intense types of suffering than humans. The by far dominant source of suffering in tulpas seems to be via empathy with the host. The suffering from not getting enough attention is probably fully explainable by loneliness, and by sadness over fading away and losing the ability to think and do things.
This is very useful information if true. Could you link to some of the anecdotes which you draw this from?
Look around on http://www.reddit.com/r/Tulpas/ or ask some questions yourself on the various IRC rooms that can be reached from there. I only have vague memories built from threads buried months back on that subreddit.