NancyLebovitz comments on Ethical Treatment of AI - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What are the boundaries of not being a person?
I'm inclined to think that any computer complex enough to be useful will need at least a model of itself, plus a model of which changes to the self (or possibly to the model of itself, which gets to be an interesting distinction) are acceptable. That is at least something like being a person, though presumably it wouldn't need to be able to experience pain.
I'm not going to exclude the possibility of something like pain, either -- it might be the most efficient way of encoding "don't do that".
Huh -- this makes p-zombies interesting. Could an AI need qualia?
Eliezer has anticipated your argument:
I think it's relevant that an AI's self-model would change as the AI itself changes.