Armok_GoB comments on Open Thread, November 1 - 7, 2013 - Less Wrong Discussion

5 Post author: witzvo 02 November 2013 04:37PM


Comment author: Armok_GoB 06 November 2013 04:57:57PM 3 points [-]

I've been doing some research (mainly hanging out on their subreddit) and I think I have a fairly good idea of how tulpas work, and of the answers to your questions.

There are a myriad of very different things tulpas are described as, and thus "tulpas exist in the way people describe them" is not well defined.

There indisputably exists SOME specific interesting phenomenon that is the referent of the word "tulpa".

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's patient, dolphin, or beloved family pet dog.

I estimate its ontological status to be similar to that of a video game NPC, recurring dream character, or schizophrenic hallucination.

I estimate its power over reality to be similar to that of a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

Deciding to make a tulpa does not seem to be a sign of being crazy. Tulpas themselves seem not to be automatically unhealthy, and can often help their host overcome depression or anxiety. However, there are many signs that the act of making a tulpa is dangerous: it can trigger latent tendencies, or be done in a catastrophically wrong way all too easily. I estimate the risk is similar to doing extensive meditation or taking a single largish dose of LSD. For this reason I have not attempted, and will not attempt, making one.

I am too lazy to find citations or examples right now, but I probably could. I've tried to be a good rationalist, and am fairly certain of most of these claims.

Comment author: NancyLebovitz 07 November 2013 01:28:48PM 2 points [-]

Has anyone worked on making a tulpa which is smarter than they are? This seems at least possible if you assume that many people don't let themselves make full use of their intelligence and/or judgement.

Comment author: Armok_GoB 07 November 2013 05:04:34PM 4 points [-]

Unless everything I think I understand about tulpas is wrong, this is at the very least significantly harder than just thinking yourself smarter without one. All the idea generation is done before credit is assigned to either the "self" or the "tulpa".

What there ARE several examples of, however, are tulpas that are more emotionally mature, better at luminosity, and don't share all of their host's preconceptions. This is not exactly smarts, though, or even general-purpose formal rationality.

One CAN imagine scenarios where you end up with a tulpa smarter than the host. For example, the host might have learned helplessness, or the tulpa might be imagined as "smarter than me", so that all the brain's good ideas get credited to it.

Disclaimer: this is based only on lots of anecdotes I've read, gut feeling, and basic stuff that should be common knowledge to any LWer.

Comment author: TheOtherDave 07 November 2013 05:40:46PM 9 points [-]

I'm reminded of many years ago, a coworker coming into my office and asking me a question about the design of a feature that interacts with our tax calculation.

So she and I created this whole whiteboard flowchart working out the design, at the end of which I said "Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I... um... completely failed to notice?"

I could certainly describe that as having a "Mark" in my head who is smarter about tax-code-related designs than I am, and there's nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.

But "Mark" in this case would just be pointing to a subset of "Dave", just as "Dave's fantasies about aliens" does.

Comment author: gwern 07 November 2013 08:04:37PM 6 points [-]

See also 'rubberducking' and previous discussions of this on LW. My basic theory is that reasoning was developed for adversarial purposes, and by rubberducking you are essentially roleplaying as an 'adversary', which triggers deeper processing (if we ever get brain imaging of System I vs. System II thinking, I'd expect adversarial thinking to trigger System II more than 'normal' self-centered thinking does).

Comment author: TheOtherDave 07 November 2013 08:21:14PM 3 points [-]

Yes. Indeed, I suspect I've told this story before on LW in just such a discussion.

I don't necessarily buy your account -- it might just be that our brains are simply not well-integrated systems, and enabling different channels whereby parts of our brains can be activated and/or interact with one another (e.g., talking to myself, singing, roleplaying different characters, getting up and walking around, drawing, etc.) gets different (and sometimes better) results.

This is also related to the circumlocution strategy for dealing with aphasia.

Comment author: Kaj_Sotala 02 January 2014 02:29:17PM *  1 point [-]

My basic theory is that reasoning was developed for adversarial purposes

Obligatory link.

Comment author: Armok_GoB 07 November 2013 06:49:06PM *  1 point [-]

Yeah, in that case the tulpa would presumably help, but not necessarily significantly more than a non-tulpa model of the person, which requires considerably less work and risk.

Basically, a tulpa can technically do almost anything you can... but you can do those things without a tulpa too, and for almost all of them there's some much easier and at least as effective way to do the same thing.

Comment author: ChristianKl 08 November 2013 04:04:09PM 0 points [-]

Basically, a tulpa can technically do almost anything you can...

Mental processes like waking up at a specific time without an alarm clock aren't easy. I know a bunch of people who have that skill, but it's not like there's a step-by-step manual you can easily follow that gives you that ability.

A tulpa can do things like that. There are many mental processes that you can't access directly but that a tulpa might be able to access.

Comment author: Armok_GoB 08 November 2013 05:22:40PM 1 point [-]

I am surprised to hear there isn't such a step-by-step manual, suspect that you're wrong about there not being one, and in either case know a few people who could probably easily write one if motivated to do so.

But I guess you could make this argument: that a tulpa is more flexible and has a simpler user interface, even if it's less powerful and has a bunch of logistical and moral problems. I don't like it, but I can't think of any counterarguments other than it being lazy and unaesthetic, and that the kind of meditative people who make tulpas shouldn't be the kind to take the easy way out.

Comment author: ChristianKl 09 November 2013 05:39:00AM 2 points [-]

I am surprised to hear there isn't such a step-by-step manual, suspect that you're wrong about there not being one, and in either case know a few people who could probably easily write one if motivated to do so.

My point isn't so much that it's impossible, but that it isn't easy.

Creating a mental device that only wakes me up would be easier than creating a whole Tulpa, but once you do have a Tulpa you can reuse it a lot.

Let's say I want to practice Salsa dance moves at home. Visualising a full dance partner just for the purpose of having a dance partner at home wouldn't be worth the effort.

I'm not sure about how much you gain by pair programming with a Tulpa, but the Tulpa might be useful for that task.

It takes a lot of energy to create one the first time, but afterwards you reap the benefits.

I don't like it, but I can't think of any counterarguments other than it being lazy and unaesthetic, and that the kind of meditative people who make tulpas shouldn't be the kind to take the easy way out.

Tulpa creation involves quite a lot of effort, so it doesn't seem like the lazy road.

Comment author: Armok_GoB 09 November 2013 04:29:04PM 0 points [-]

Hmm, you have a point; I hadn't thought about it that way. If it weren't so dangerous I would have asked you to experiment.

Comment author: hesperidia 02 December 2013 04:00:36AM *  0 points [-]

Mental processes like waking up at a specific time without an alarm clock aren't easy. I know a bunch of people who have that skill, but it's not like there's a step-by-step manual you can easily follow that gives you that ability.

I do not have the "wake up at a specific time" ability, but I have trained myself to have a "wake up within ~1.5 hours of a specific time" ability. I did this over a summer break in elementary school, because I had learned how sleep worked and thought it would be cool. Note that you will need to have basically no sleep debt (i.e., you consistently wake up without an alarm) for this to work correctly.

The central point of this method is this: a sleep cycle (the time it takes to go from a light stage of sleep to the deeper stages of sleep and back again) is about 1.5 hours long. If I am not under stress or sleep debt, I can estimate my sleeping time to the nearest sleep cycle. Using the sleep cycle as a unit of measurement lets me partition out sleep without being especially reliant on my (in)ability to perceive time.

The way I did it is this (each step was done until I could do it reliably, which took up to a week each for me [but I was a preteen then, so it may be different for adults]):

  1. Block off approximately 2 hours (depending on how long it takes you to fall asleep), right after lunch so it has the least danger of merging with your consolidated/night sleep, and take a nap. Note how this makes you feel.

  2. Do that again, but instead of blocking off the 2 hours with an alarm clock, try doing it naturally, and awakening when it feels natural, around the 1.5h mark (repeating this because it is very important: you will need to have very little to no accumulated sleep debt for this to work). Note how this makes you feel.

  3. Do that again, but with a ~3.5-hour block. Take two 1.5 hour sleep cycle naps one after another (wake up in between).

  4. During a night's sleep, try waking up between every sleep cycle. Check this against [your sleep time in hours / 1.5h per sleep cycle] to make sure that you caught all of them.

  5. Block off a ~3.5 hour nap and try taking it as two sleep cycles without waking up in between them. (Not sure about the order with this point and the previous one. Did I do them in the opposite order? I'm reconstructing from memory here. It's probably possible to make this work in either order.)

  6. You probably know from step 4 how many sleep cycles you have in a night. Now you should be able to do things like consciously splitting up your sleep biphasically, or waking up a sleep cycle earlier than you usually do.
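The cycle arithmetic behind steps 4 and 6 can be sketched in a few lines of Python. This is purely an illustration of the ~1.5-hour figure used above (actual cycle length varies from person to person), and the bedtime and wake-up target here are made-up example values:

```python
# Given a bedtime and a target wake-up time, list the ~90-minute
# cycle boundaries that fit in between. The number of boundaries is
# the "sleep time / 1.5h per cycle" check from step 4, and the last
# boundary is the natural "wake up one cycle early" point from step 6.
from datetime import datetime, timedelta

CYCLE = timedelta(minutes=90)  # one sleep cycle, approximately 1.5 hours

def cycle_boundaries(bedtime: datetime, wake_target: datetime):
    """Return every cycle-boundary time between bedtime and wake_target."""
    boundaries = []
    t = bedtime + CYCLE
    while t <= wake_target:
        boundaries.append(t)
        t += CYCLE
    return boundaries

bed = datetime(2013, 11, 8, 23, 0)   # example: in bed at 11:00 PM
wake = datetime(2013, 11, 9, 7, 0)   # example: want to be up by 7:00 AM

b = cycle_boundaries(bed, wake)
print(len(b))                    # 5 full cycles fit (7.5 h of the 8 h window)
print(b[-1].strftime("%H:%M"))   # 06:30, the last boundary before the target
```

So an 8-hour window holds five full cycles, and waking at the 06:30 boundary rather than at a 07:00 alarm is the "wake between cycles" trick the steps are training.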

I then spent the rest of summer break with a biphasic "first/second sleep" rhythm, which disappeared once I was in school and had to wake up at specific times again.

To this day, I sleep especially lightly, must take my naps in 1.5 hour intervals, and will frequently wake up between sleep cycles (I've had to keep a clock on my nightstand since then so I can orient myself if I get woken unexpectedly by noises, because a 3:30AM waking is different from a 5AM waking, but they're at the same point on the cycle so they feel similar). I also almost always wake up 10-45 minutes before any set alarms, which would be more useful if the spread was smaller (45 minutes before I actually need to wake up seems like a waste). It's a cool skill to have, but it has its downsides.

Comment author: TheOtherDave 07 November 2013 07:57:55PM 0 points [-]

a tulpa can technically do almost anything you can...

Yes, I would expect this.
Indeed, I'm surprised by the "almost" -- what are the exceptions?

Comment author: Armok_GoB 07 November 2013 09:26:58PM 0 points [-]

Anything that requires using your body and interacting physically with the world.

Comment author: TheOtherDave 07 November 2013 09:33:58PM 0 points [-]

I'm startled. Why can't a tulpa control my body and interact physically with the world, if it's (mutually?) convenient for it to do so?

Comment author: Armok_GoB 07 November 2013 09:44:54PM 0 points [-]

Well, if you consider that to be the tulpa doing it on its own, then no, I can't think of any specific exceptions. Most tulpas can't do that trick, though.

Comment author: TheOtherDave 07 November 2013 10:03:39PM *  3 points [-]

Well, if you consider that to be the tulpa doing it on its own

Well, let me put it this way: suppose my tulpa composes a sonnet (call that event E1), recites that sonnet using my vocal cords (E2), and writes the sonnet down using my fingers (E3).

I would not consider any of those to be the tulpa doing something "on its own", personally. (I don't mean to raise the whole "independence" question again, as I understand you don't consider that very important, but, well, you brought it up.)

But if I were willing to consider E1 an example of the tulpa doing something on its own (despite its using my brain), I can't imagine a justification for not considering E2 and E3 equally good examples of the tulpa doing something on its own (despite their using my muscles).

But I infer that you would consider E1 (though not E2 or E3) the tulpa doing something on its own. Yes?

So, that's interesting. Can you expand on your reasons for drawing that distinction?

Comment author: TheOtherDave 06 November 2013 05:11:42PM 2 points [-]

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's patient, dolphin, or beloved family pet dog.

Would you classify a novel in the same "moral-status" tier as these four examples?

Comment author: Armok_GoB 06 November 2013 09:49:39PM 0 points [-]

No, that's much, much lower. As in, "torturing a novel for decades in order to give a tulpa a quick amusement would be a moral thing to do" lower.

Assuming you mean either a physical book, or the simulation of the average minor character in the author's mind, here. Main characters or RPing PCs can vary a lot in complexity of simulation from author to author, and there's a theory that some become effectively tulpas.

Comment author: TheOtherDave 06 November 2013 10:11:45PM 1 point [-]

Your answer clarifies what I was trying to get at with my question but wasn't quite sure how to ask, thanks; my question was deeply muddled.

For my own part, treating a tulpa as having the moral status of an independent individual distinct from its creator seems unjustified. I would be reluctant to destroy one because it is the unique and likely-unreconstructable creative output of a human being, much like I would be reluctant to destroy a novel someone had written (as in, erase all copies of such that the novel itself no longer exists), but that's about as far as I go.

I didn't mean a physical copy of a novel, sorry that wasn't clear.

Yes, destroying all memory of a character someone played in an RPG and valued remembering I would class similarly.

But all of these are essentially property crimes, whose victim is the creator of the artwork (or more properly speaking the owner, though in most cases I can think of the roles are not really separable), not the work of art itself.

I have no idea what "torture a novel" even means, it strikes me as a category error on a par with "paint German blue" or "burn last Tuesday".

Comment author: ChristianKl 08 November 2013 04:05:18PM *  0 points [-]

What do you think about the moral status of torturing an uploaded human mind that's in silicon?

Does that mind have a different moral status than one in a brain?

Comment author: TheOtherDave 08 November 2013 04:10:45PM 2 points [-]

Certainly not by virtue of being implemented in silicon, no. Why do you ask?

Comment author: Armok_GoB 06 November 2013 11:02:45PM 0 points [-]

Ah. No, I think you'd change your mind if you spent a few hours talking to accounts that claim to be tulpas.

A newborn infant or Alzheimer's patient is not an independent individual distinct from its caretaker either. Do you count their destruction as property crime as well? "Person"-ness is not binary; it's not even a continuum. It's a cluster of properties that usually correlate, but in the case of tulpas do not. I recommend re-reading Diseased Thinking.

As for your category error: /me argues for how German is a depressing language and spends all that was gained in that day on something that will not last. Then a pale-green tulpa snores in an angry manner.

Comment author: [deleted] 12 November 2013 03:17:55PM 1 point [-]

As for your category error: /me argues for how German is a depressing language and spends all that was gained in that day on something that will not last. Then a pale-green tulpa snores in an angry manner.

I picture a sheet of paper with a paragraph in each of several languages, a paintbrush, and watercolours. Then boring-sounding environmental considerations make me feel outraged without me consciously realizing what's happening.

Comment author: TheOtherDave 06 November 2013 11:50:53PM 0 points [-]

I agree that person-ness is a cluster of properties and not a binary.

I don't believe that tulpas possess a significant subset of those properties independent of the person whose tulpa they are.

I don't think I'm failing to understand any of what's discussed in Diseased Thinking. If there's something in particular you think I'm failing to understand, I'd appreciate you pointing it out.

It's possible that talking to accounts that claim to be tulpas would change my mind, as you suggest. It's also possible that talking to bodies that claim to channel spirit-beings or past lives would change my mind about the existence of spirit-beings or reincarnation. Many other people have been convinced by such experiences, and I have no especially justified reason to believe that I'm relevantly different from them.

Of course, that doesn't mean that reincarnation happens, nor that spirit-beings exist who can be channeled, nor that tulpas possess a significant subset of the properties which constitute person-ness independent of the person whose tulpa they are.

A newborn infant or Alzheimer's patient is not an independent individual distinct from its caretaker either.

Eh?

I can take a newborn infant away from its caretaker and hand it to a different caretaker... or to no caretaker at all... or to several caretakers. I would say it remains the same newborn infant. The caretaker can die, and the newborn infant continues to live; and vice-versa.

That seems to me sufficient justification (not necessary, but sufficient) to call it an independent individual.

Why do you say it isn't?

Do you count their destruction as property crime as well?

I count it as less like a property crime than destroying a tulpa, a novel, or an RPG character. There are things I count it as more like a property crime than.

Comment author: Armok_GoB 07 November 2013 04:52:57PM 0 points [-]

Seems I was wrong about you not understanding the word thing. Apologies.

You keep saying that word "independent". I'm starting to think we might not disagree about any objective properties of tulpas, just about whether patterns need to be "independent" to count towards your utility, whereas I just add up the identifiable patterns without caring whether they overlap. Metaphor: tulpas are "10101101"; you're saying "101" occurs 2 times, I'm saying "101" occurs 3 times.
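As a side note, the counting metaphor checks out mechanically: a non-overlapping count finds two copies of "101" in "10101101", while a sliding-window count that allows overlaps finds three. A quick Python sketch (illustrative only, not from the original thread):

```python
# Overlapping vs. non-overlapping occurrences of "101" in "10101101".
s = "10101101"
pat = "101"

# str.count scans left to right and never counts overlapping matches.
non_overlapping = s.count(pat)

# Slide a window over every starting position, so overlaps are counted.
overlapping = sum(
    1 for i in range(len(s) - len(pat) + 1) if s[i:i + len(pat)] == pat
)

print(non_overlapping, overlapping)  # 2 3
```

The matches sit at indices 0, 2, and 5; the ones at 0 and 2 share a character, which is exactly the overlap the two counting rules disagree about.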

I'm fairly certain that talking to bodies that claim those things would not change my probability estimates of those claims unless powerful brainwashing techniques were used, and I certainly hope the same is the case for you. If I believed that doing that would predictably shift my beliefs, I'd already have those beliefs. Conservation of Expected Evidence.

((You can probably move a tulpa between minds too; it just requires a lot of high-tech, unethical surgery and work, and probably gives the old host permanent, severe brain damage. Same as with any other kind of incommunicable memory.))

Comment author: TheOtherDave 07 November 2013 05:20:26PM 1 point [-]

You keep saying that word "independent".

(shrug) Well, I certainly agree that when I interact with a tulpa, I am interacting with a person... specifically, I'm interacting with the person whose tulpa it is, just as I am when I interact with a PC in an RPG.

What I disagree with is the claim that the tulpa has the moral status of a person (even a newborn person) independent of the moral status of the person whose tulpa it is.

I'm fairly certain talking to bodies that claim those things would not change my probability estimates on those claims unless powerful brainwashing techniques were used, and I certainly hope the same is the case for you.

On what grounds do you believe that? As I say, I observe that such experiences frequently convince other people; without some grounds for believing that I'm relevantly different from other people, my prior (your hopes notwithstanding) is that they stand a good chance of convincing me too. Ditto for talking to a tulpa.

((You can probably move a tulpa between minds too; it just requires a lot of high-tech, unethical surgery and work, and probably gives the old host permanent, severe brain damage. Same as with any other kind of incommunicable memory.))

(shrug) I don't deny this (though I'm not convinced of it either) but I don't see the relevance of it.

Comment author: Armok_GoB 07 November 2013 06:53:41PM 0 points [-]

Yeah, this seems to definitely be just a fundamental values conflict. Let's just end the conversation here.

Comment author: hylleddin 08 November 2013 01:40:30AM *  1 point [-]

As someone with personal experience with a tulpa, I agree with most of this.

I estimate its ontological status to be similar to that of a video game NPC, recurring dream character, or schizophrenic hallucination.

I agree with the last two, but I think a video game NPC has a different ontological status from either of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how "well-realized" they are.

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's patient, dolphin, or beloved family pet dog.

I have no idea what a tulpa's moral status is, besides not less than a fictional character and not more than a typical human.

I estimate its power over reality to be similar to that of a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

I would expect most of them to have about the same intelligence, rather than lower intelligence.

Comment author: Armok_GoB 08 November 2013 05:05:10PM 0 points [-]

You are probably counting more of the properties things can vary under as "ontological". I'm mostly going by software vs. hardware, needs-to-be-puppeteered vs. automatic, and able-to-interact-with-the-environment vs. stuck-in-a-simulation, here.

I'm basing the moral status largely on "well realized", "complex", and "technically sentient" here. You'll notice all my examples ALSO have the actual utility-function multiplier at "unknown".

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and so it counts towards the tulpa's own power over reality.

Comment author: hylleddin 08 November 2013 10:43:03PM 1 point [-]

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and so it counts towards the tulpa's own power over reality.

Ah. I see what you mean. That makes sense.