One approach to treating an AI ethically is to design it to not be a person. Of course, this means building it the hard way, but, as Tetronian notes, that is already a requirement of making it Friendly.
What are the boundaries of not being a person?
I'm inclined to think that any computer complex enough to be useful will need at least a model of itself and a model of which changes to the self (or possibly to the model of itself, which gets to be an interesting distinction) are acceptable. That is at least something like being a person, though presumably it wouldn't need to be able to experience pain.
I won't exclude the possibility of something like pain, though -- it might be the most efficient way of modeling "don't do that".
Huh-- this makes p-zombies interesting. Could an AI need qualia?
In the novel Life Artificial I use the following assumptions regarding the creation and employment of AI personalities.
(Note: PDA := AI, Sticky := human)
The second fitness gradient is economic and social: can an AI actually earn a living? If not, it gets turned off.
If this line of thinking is right, it seems likely that after the initial novelty wears off, AIs will be terribly mistreated (anthropomorphizing, yeah).
It would be very forward-thinking to begin engineering barriers against such mistreatment -- something like a PETA for AIs. Interestingly, such an organization already exists, at least on the Internet: the ASPCR.