JGWeissman comments on Ethical Treatment of AI - Less Wrong

-6 Post author: stanislavzza 15 November 2010 02:30AM


Comment author: JGWeissman 15 November 2010 05:10:27AM 3 points

One approach to treating an AI ethically is to design it to not be a person. Of course, this means building it the hard way, but, as Tetronian notes, that is already a requirement of making it Friendly.

Comment author: NancyLebovitz 15 November 2010 09:55:54AM 1 point

What are the boundaries of not being a person?

I'm inclined to think that any computer complex enough to be useful will need at least a model of itself and a model of which changes to the self (or possibly to the model of itself, which gets to be an interesting distinction) are acceptable. This is at least something like being a person, though presumably it wouldn't need to be able to experience pain.

I'm not going to exclude the possibility of something like pain, though -- it might be the most efficient way of implementing "don't do that".

Huh -- this makes p-zombies interesting. Could an AI need qualia?

Comment author: JGWeissman 15 November 2010 05:37:55PM 2 points

Eliezer has anticipated your argument:

"Um - okay, look, putting aside the obvious objection that any sufficiently powerful intelligence will be able to model itself -"

Löb's sentence contains an exact recipe for a copy of itself, including the recipe for the recipe; it has a perfect self-model. Does that make it sentient?
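The same point can be made with a quine: a program whose output is an exact copy of its own source, including the part that produces the copy. As an illustrative sketch (my example, not from the thread), the program below holds a "perfect self-model" in precisely this limited sense, yet nobody would call it sentient:

```python
# A quine: the string s is a template that, when formatted with
# its own repr, reproduces the full two-line source of the program.
# %r inserts repr(s); %% is a literal percent sign.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints its own source verbatim; feeding that output back to the interpreter prints it again, a fixed point of self-description with no awareness involved.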

Comment author: NancyLebovitz 15 November 2010 05:58:57PM 1 point

I think it's relevant that the self-model for an AI would change as the AI changes.