
hylleddin comments on Open Thread, November 1 - 7, 2013 - Less Wrong Discussion

5 Post author: witzvo 02 November 2013 04:37PM



Comment author: Armok_GoB 06 November 2013 04:57:57PM 3 points

I've been doing some research (mainly hanging out on their subreddit) and I think I have a fairly good idea of how tulpas work and of the answers to your questions.

There are a myriad of very different things that tulpas are described as, so "tulpas exist in the way people describe them" is not well defined.

There indisputably exists SOME specific interesting phenomenon that is the referent of the word "tulpa".

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.

I estimate its ontological status to be similar to that of a video game NPC, recurring dream character, or schizophrenic hallucination.

I estimate its power over reality to be similar to that of a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

It does not seem that deciding to make a tulpa is a sign of being crazy. Tulpas themselves seem not to be automatically unhealthy, and can often help their host overcome depression or anxiety. However, there are many signs that the act of making a tulpa is dangerous and can trigger latent tendencies or be easily done in a catastrophically wrong way. I estimate the risk is similar to that of doing extensive meditation or taking a single largish dose of LSD. For this reason I have not and will not attempt making one.

I am too lazy to find citations or examples right now, but I probably could. I've tried to be a good rationalist and am fairly certain of most of these claims.

Comment author: hylleddin 08 November 2013 01:40:30AM * 1 point

As someone with personal experience with a tulpa, I agree with most of this.

> I estimate its ontological status to be similar to that of a video game NPC, recurring dream character, or schizophrenic hallucination.

I agree with the last two, but I think a video game NPC has a different ontological status from either of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how "well realized" they are.

> I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.

I have no idea what a tulpa's moral status is, besides not less than a fictional character and not more than a typical human.

> I estimate its power over reality to be similar to that of a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

I would expect most of them to have about the same intelligence, rather than lower intelligence.

Comment author: Armok_GoB 08 November 2013 05:05:10PM 0 points

You are probably counting more of the properties things can vary along as "ontological". I'm mostly going by software vs. hardware, needs to be puppeteered vs. automatic, and able to interact with the environment vs. stuck in a simulation, here.

I'm basing the moral status largely on "well realized", "complex", and "technically sentient" here. You'll notice all my examples ALSO have the actual utility function multiplier at "unknown".

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and it thus counts toward the tulpa's power over reality.

Comment author: hylleddin 08 November 2013 10:43:03PM 1 point

> Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and it thus counts toward the tulpa's power over reality.

Ah. I see what you mean. That makes sense.