ChristianKl comments on Open Thread, November 1 - 7, 2013 - Less Wrong Discussion

5 Post author: witzvo 02 November 2013 04:37PM

Comment author: ChristianKl 08 November 2013 03:51:57PM 0 points [-]

Tulpa creation is effectively the creation of a form of sentient AI that runs on the hardware of your brain instead of silicon.

That brings up a moral question. To what extent is it immoral to create a tulpa and have it be in pain?

Tulpas are supposed to suffer from not getting enough attention, so if you can't commit to giving one a lot of attention for the rest of your life, you might be committing an immoral act by creating it.

Comment author: Lumifer 08 November 2013 04:18:05PM 1 point [-]

Tulpa creation is effectively the creation of a form of sentient AI that runs on the hardware of your brain instead of silicon.

No, I don't think so. It's notably missing the "artificial" part of AI.

I think of tulpa creation as splitting off a shard of your own mind. It's still your own mind, only split now.

Comment author: Vulture 10 November 2013 02:52:10AM *  0 points [-]

I think the really relevant ethical question is whether a tulpa has a consciousness separate from its host's. From my own research in the area (which has been very casual, mind you), I consider it highly unlikely that they have separate consciousness, but not so unlikely that I would be willing to create a tulpa and then let it die, for example.

In fact, my uncertainty on this issue is the main reason I am ambivalent about creating a tulpa. It seems like it would be very useful: I solve problems much better when working with other people, even if they don't contribute much; a tulpa more virtuous than myself could be a potent tool for self-improvement; it could help ameliorate the "fear of social isolation" obstacle to potential ambitious projects; I would gain a better understanding of how tulpas work; I could practice dancing and shaking hands more often; etc. etc. But I worry about being responsible for what may be (even with only ~15% subjective probability) a conscious mind, which will then literally die if I don't spend time with it regularly (ref).

Comment author: TheOtherDave 10 November 2013 04:10:40AM 0 points [-]

Just to clarify this a little... how many separate consciousnesses do you estimate your brain currently hosts?

Comment author: Vulture 10 November 2013 05:11:21AM 0 points [-]

By my current (layman's) understanding of consciousness, my brain currently hosts exactly one.

Comment author: TheOtherDave 10 November 2013 02:00:24PM 0 points [-]

OK, thanks.

Comment author: ChristianKl 08 November 2013 04:32:43PM 0 points [-]

No, I don't think so. It's notably missing the "artificial" part of AI.

It's not your normal mind, so it counts as artificial for the purposes of ethical considerations.

I think of tulpa creation as splitting off a shard of your own mind. It's still your own mind, only split now.

As far as I can tell from stuff written by people with tulpas, they treat them as entities whose desires matter.

Comment author: Vulture 10 November 2013 02:53:18AM 1 point [-]

It's not your normal mind, so it counts as artificial for the purposes of ethical considerations.

This might be a stupid question, but what ethical considerations are different for an "artificial" mind?

Comment author: ChristianKl 10 November 2013 03:36:35PM 0 points [-]

This might be a stupid question, but what ethical considerations are different for an "artificial" mind?

When talking about AGI, few people label it murder to shut down the AI that's in the box. At the very least it's worth a discussion whether it is.

Comment author: [deleted] 11 November 2013 08:16:51PM 2 points [-]
Comment author: Vulture 12 November 2013 04:35:23AM *  1 point [-]

Wow, I had forgotten about that non-person predicates post. I definitely never thought it would have any bearing on a decision I personally would have to make. I was wrong.

Comment author: Vulture 10 November 2013 08:27:59PM 0 points [-]

Really? I was under the impression that there was a strong consensus, at least here on LW, that a sufficiently accurate simulation of consciousness is the moral equivalent of consciousness.

Comment author: ChristianKl 11 November 2013 04:12:31PM *  0 points [-]

A "sufficiently accurate simulation of consciousness" is a subset of the set of things that are artificial minds. You might have a consensus for that class. I don't think there's a consensus that all minds have the same moral value, or even all minds with a certain level of intelligence.

Comment author: Vulture 11 November 2013 07:03:12PM 0 points [-]

At least for me, personally, the relevant property for moral status is whether it has consciousness.

Comment author: TheOtherDave 11 November 2013 02:32:42AM *  0 points [-]

That's my understanding as well.... though I would say, rather, that being artificial is not a particularly important attribute towards evaluating the moral status of a consciousness. IOW, an artificial consciousness is a consciousness, and the same moral considerations apply to it as other consciousnesses with the same properties. That said, I also think this whole "a tulpa {is,isn't} an artificial intelligence" discussion is an excellent example of losing track of referents in favor of manipulating symbols, so I don't think it matters much in context.

Comment author: Lumifer 08 November 2013 04:47:20PM 1 point [-]

It's not your normal mind, so it counts as artificial for the purposes of ethical considerations.

I don't find this argument convincing.

As far as I can tell from stuff written by people with tulpas, they treat them as entities whose desires matter.

Yes, and..?

Let me quote William Gibson here:

Addictions ... started out like magical pets, pocket monsters. They did extraordinary tricks, showed you things you hadn't seen, were fun. But came, through some gradual dire alchemy, to make decisions for you. Eventually, they were making your most crucial life-decisions. And they were ... less intelligent than goldfish.

Comment author: ChristianKl 08 November 2013 04:52:55PM 0 points [-]

Yes, and..?

There's a good chance that you will also hold that belief once you interact with the tulpa on a daily basis. As such, it makes sense to think through the implications of the whole affair before creating one.

Comment author: Lumifer 08 November 2013 05:12:17PM 2 points [-]

I still don't see what you are getting at. If I treat a tulpa as a shard of my own mind, of course its desires matter; they are the desires of my own mind.

Think of having an internal dialogue with yourself. I think of tulpas as a boosted/uplifted version of a party in that internal dialogue.

Comment author: Armok_GoB 08 November 2013 05:11:38PM 0 points [-]

Just some facts, without getting entangled in the argument: in anecdotes, tulpas seem to report more abstract and less intense types of suffering than humans do. The by far dominant source of suffering in tulpas seems to be empathy with the host. The suffering from not getting enough attention is probably fully explainable by loneliness, and by sadness over fading away and losing the ability to think and do things.

Comment author: Vulture 10 November 2013 02:54:36AM 0 points [-]

This is very useful information if true. Could you link to some of the anecdotes which you draw this from?

Comment author: Armok_GoB 10 November 2013 09:49:14PM 0 points [-]

Look around on http://www.reddit.com/r/Tulpas/ or ask some questions yourself in the various IRC rooms that can be reached from there. I only have vague memories built from threads buried months back on that subreddit.