Modelling humans in some form seems more likely to result in such a computation than not modelling them, since humans are morally relevant and the system’s models of humans may end up sharing whatever properties make humans morally relevant.
The moral relevance of human intelligence is the first thing I'll consider. I wrote an article about it, and as Prof. Gary Francione put it:
“[…] cognitive characteristics beyond sentience are morally irrelevant […] being “smart” may matter for some purposes, such as whether we give someone a scholarship, but it is completely irrelevant to whether we use someone as a forced organ donor, as a non-consenting subject in a biomedical experiment.”
To have preferences, desires, and interests, and to act purposefully to achieve them, is to attribute to a living being mental states that go beyond the mere ability to feel and perceive things; it goes beyond the accepted definition of “sentience”. Yet it seems obvious that not all species possess these attributes to the same degree.