
hylleddin comments on Open Thread, November 1 - 7, 2013 - Less Wrong Discussion

5 Post author: witzvo 02 November 2013 04:37PM




Comment author: hylleddin 08 November 2013 01:40:30AM 1 point

As someone with personal experience with a tulpa, I agree with most of this.

> I estimate its ontological status to be similar to a video game NPC, recurring dream character, or schizophrenic hallucination.

I agree with the last two, but I think a video game NPC has a different ontological status than any of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how "well-realized" they are.

> I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's patient, dolphin, or beloved family pet dog.

I have no idea what a tulpa's moral status is, besides not less than a fictional character and not more than a typical human.

> I estimate its power over reality to be similar to that of a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

I would expect most of them to have about the same intelligence, rather than lower intelligence.

Comment author: Armok_GoB 08 November 2013 05:05:10PM 0 points

You are probably counting more of the properties things can vary along as "ontological". I'm mostly considering software vs. hardware, needs to be puppeteered vs. automatic, and able to interact with the environment vs. stuck in a simulation, here.

I'm basing the moral status largely on "well realized", "complex", and "technically sentient" here. You'll notice all my examples ALSO have the actual utility function multiplier at "unknown".

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and it thus counts towards the tulpa's power over reality.

Comment author: hylleddin 08 November 2013 10:43:03PM 1 point

> Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and it thus counts towards the tulpa's power over reality.

Ah. I see what you mean. That makes sense.