AndreInfante comments on The virtual AI within its virtual world - Less Wrong

6 Post author: Stuart_Armstrong 24 August 2015 04:42PM


Comment author: AndreInfante 26 August 2015 04:41:31AM 0 points

That's... an odd way of thinking about morality.

I value other human beings because I value the processes that go on inside my own head, and I can recognize the same processes at work in others, thanks to my built-in empathy and theory of mind. As such, I prefer that good things happen to them rather than bad. There isn't any universal 'shouldness' to it; it's just the way I'd rather things be. And since most other humans have similar values, we can work together, arm in arm. Our values converge rather than diverge. That's morality.

I extend that value to people of different races and cultures because I can see that they embody the same conscious processes I value. I do not extend that same value to brain-dead people, fetuses, or chickens, because I don't see that value present within them. The same goes for a machine that has a very alien cognitive architecture and doesn't implement the cognitive algorithms I value.

Comment author: PhilGoetz 26 August 2015 02:20:28PM 0 points

If you're describing how you expect you'd act based on your feelings, then why do their algorithms matter? I would think your feelings would respond to their appearance and behavior, not to their internals.

There's a very large space of possible algorithms, but the space of reasonable behaviors given the same circumstances is quite small. Humans, being irrational, often deviate bizarrely from the behavior I expect in a given circumstance--more so than an AI probably would.