I mean something morally meaningful. I don't think a chess computer suffers when it loses a game, no matter how sophisticated it is. I expect that self-driving cars are programmed to try to avoid accidents even when other drivers drive badly, but I don't think they suffer if you crash into them.
Yeah, if by “suffering” you mean “nociception I care about”, it sure is human-specific.
Thagard (2012) contains a nicely compact passage on thought experiments: