Vulture comments on Open Thread, November 1 - 7, 2013 - Less Wrong Discussion

5 Post author: witzvo 02 November 2013 04:37PM

Comments (299)

You are viewing a single comment's thread.

Comment author: Vulture 10 November 2013 02:53:18AM 1 point [-]

It's not your normal mind, so it's artificial for ethical considerations.

This might be a stupid question, but what ethical considerations are different for an "artificial" mind?

Comment author: ChristianKl 10 November 2013 03:36:35PM 0 points [-]

This might be a stupid question, but what ethical considerations are different for an "artificial" mind?

When talking about AGI, few people label it as murder to shut down the AI that's in the box. At the very least, it's worth discussing whether it is.

Comment author: [deleted] 11 November 2013 08:16:51PM 2 points [-]
Comment author: Vulture 12 November 2013 04:35:23AM *  1 point [-]

Wow, I had forgotten about that non-person predicates post. I definitely never thought it would have any bearing on a decision I personally would have to make. I was wrong.

Comment author: Vulture 10 November 2013 08:27:59PM 0 points [-]

Really? I was under the impression that there was a strong consensus, at least here on LW, that a sufficiently accurate simulation of consciousness is the moral equivalent of consciousness.

Comment author: ChristianKl 11 November 2013 04:12:31PM *  0 points [-]

"Sufficiently accurate simulation of consciousness" is a subset of the set of things that are artificial minds. You might have a consensus for that class. I don't think you have a consensus that all minds have the same moral value, or even that all minds with a certain level of intelligence do.

Comment author: Vulture 11 November 2013 07:03:12PM 0 points [-]

At least for me, personally, the relevant property for moral status is whether the mind is conscious.

Comment author: TheOtherDave 11 November 2013 02:32:42AM *  0 points [-]

That's my understanding as well... though I would say, rather, that being artificial is not a particularly important attribute when evaluating the moral status of a consciousness. IOW, an artificial consciousness is a consciousness, and the same moral considerations apply to it as to other consciousnesses with the same properties. That said, I also think this whole "a tulpa {is,isn't} an artificial intelligence" discussion is an excellent example of losing track of referents in favor of manipulating symbols, so I don't think it matters much in context.