nyan_sandwich comments on The Level Above Mine - Less Wrong

Post author: Eliezer_Yudkowsky 26 September 2008 09:18AM



Comment author: [deleted] 24 January 2013 05:35:50AM, 6 points

Why do you always have to ask subtly hard questions? I can just see your smug face, smiling that smug smile of yours with that slight tilt of the head as we squirm trying to rationalize something up quickly.

Here's my crack at it: They don't have what we currently think is the requisite code structure to "feel" in a meaningful way, but of course we are too confused to articulate the reasons much further.

Comment author: shminux 24 January 2013 06:52:25AM, 1 point

Thank you, I'm flattered. I have asked Eliezer the same question; not sure if anyone will reply. I hoped there would be a simple answer to this, related to the complexity of information processing in the substrate, like a brain or a computer, but I can't seem to find any discussions of it online. I'm probably using the wrong keywords.

Comment author: Kaj_Sotala 24 January 2013 09:49:14AM, 0 points

Comment author: [deleted] 24 January 2013 03:04:07PM, 1 point

related to the complexity of information processing in the substrate

Not directly related. I think it has a lot to do with being roughly isomorphic to how a human thinks, which requires not just great complexity, but a particular kind of complexity.

When I evaluate such questions IRL, like in the case of helping out an injured bird, or feeding my cat, I notice that my decisions seem to depend on whether I feel empathy for the thing. That is, do my algorithms recognize it as a being, or as a thing.

But then empathy can be hacked or faulty (see for example pictures of African children, cats and small animals, ugly disfigured people, far-away people, etc.), so I think of a sort of "abstract empathy" that does the job of recognizing morally valuable beings without all the bugs of my particular implementation of it.

In other words, I think it's a matter of moral philosophy, not metaphysics.