You didn't answer my question. My point was that pain is simpler than suffering, and even scientists who study it can't objectively define it.
Because this "definition" does not help us figure out whether low-complexity WBEs suffer the same way humans do.
Are you suggesting we shouldn't even talk about their potential suffering, then? By the same reasoning, we shouldn't talk about animal suffering either. That human beings suffer is evidence that low-complexity WBEs and animals may be capable of it too.
By the time we can make low-complexity WBEs, we'll probably have some understanding of what suffering is computationally, but it might be too late to start philosophizing about it then.
You didn't answer my question. My point was that pain is simpler than suffering, and even scientists who study it can't objectively define it.
First, the "objective" part of pain is known as nociception and can likely be studied in real or simulated organisms. The subjective part of pain need not be figured out separately from other qualia, like the perception of the color red.
Second, not all pain is suffering, and not all suffering is pain, so figuring out the quale of suffering is a separate problem from studying pain.
...Are you suggesting we shouldn't even talk
This draft paper by Anders Sandberg struck me as a well-thought-out essay on the morality of experiments on brain emulations. Is there anything you disagree with here, or anything you think he should handle differently?
http://www.aleph.se/papers/Ethics%20of%20brain%20emulations%20draft.pdf