By origin, I'm referring to the source of the need for morality, and it's clear that it's mostly about suffering. We don't like suffering and would rather not experience it, although we are prepared to put up with some (or even a lot) of it if that suffering leads to greater pleasure that outweighs it. We realised long ago that if we do a deal with the people around us to avoid causing each other suffering, we can all suffer less and have better lives - far better than spending our time hitting each other over the head with clubs and stealing the fruits of each other's labour. By doing this deal, we ended up with greater fruits from our work and removed most of the brutality from our lives. Morality is clearly primarily about the management of suffering.
You can't torture a rock, so there's no need for rules to protect it against people who might seek to harm it. The same applies to a computer, even one running AGI - if it lacks sentience and cannot suffer, it doesn't need rules to protect it from harm (other than to protect the owner from loss if it were damaged, or to protect other people who might be harmed by the loss of the work the computer was carrying out). If we were able to make a sentient machine, though, and if that sentient machine could suffer, it would have to be brought into the range of things that morality protects. We could make an unintelligent sentient machine like a calculator and give it the ability to suffer, or we could make a machine with human-level intelligence and the same ability to suffer, suffering to the same degree as the less intelligent calculator. Torturing both of these to generate the same amount of suffering in each would be equally wrong in both cases. It is not intelligence that creates the need for morality, but sentience and the degree of suffering that may be generated in it.
With people, our suffering can perhaps be amplified beyond the suffering that occurs in other animals because there are many ways to suffer, and they can combine. When an animal is chased, brought down and killed by a predator, it most likely experiences fear, then pain. The pain may last a long time in some cases, such as when wolves eat a musk ox from the rear end while it's still alive, but the victim lacks any real understanding of what's happening to it. When people are attacked and killed, though, the suffering is amplified by the victim understanding the situation and knowing just how much they are losing. The many people who care deeply about the victim will also suffer because of the loss, and many will suffer deeply for decades. This means that people need greater protection from morality, although when scoring the suffering caused by pain and fear to an animal victim and a human victim, both should be measured on the same scale - in that regard, these sentiences are treated as equals.
"Then why are we talking about it [sentience], instead of the gallium market on Jupiter?"
Because most of us believe there is such a thing as sentience, that there is something in us that can suffer, and there would be no role for morality without the existence of a sufferer.
"You really ought to read the Sequences. There's a post, Angry Atoms, that specifically addresses an equivalent misconception."
All it does is assert that things can be more than the sum of their parts, but that isn't true for any other case and it's unlikely that the universe will make an exception to the rules just for sentience.
"Do you think that we have a Feeling Nodule somewhere in our brains that produces Feelings?"
I expect there to be a sufferer for suffering to be possible. Something physical has to exist to do that suffering rather than something magical.
"That's not an effective Taboo of "suffering" - "suffering" and "unpleasant" both draw on the same black-box-node. And anyway, even assuming that you explained suffering in enough detail for an Alien Mind to identify its presence and absence, that's not enough to uniquely determine how to compare two forms of suffering."
Our inability to pin down the ratio between two kinds of suffering doesn't mean there isn't a ratio that describes their relationship.
"...do you mean that you're not claiming that there is a single correct comparison between any two forms of suffering?"
There's always a single correct comparison; we just don't know what it is. All we can do at the moment is build a database collecting knowledge of how different kinds of suffering compare in humans, try to do the same for other species by looking at how distressed they appear, and then apply that knowledge as best we can across them all. That's worth doing because it's more likely to be right than just guessing. Later on, science may be able to determine what is suffering and exactly how much it is suffering by understanding the entire mechanism, at which point we can improve the database and make it close to perfect.
"But what does it even mean to compare two forms of suffering? I don't think you understand the inferential gap here. I don't agree that amount-of-suffering is an objective quantitative thing."
Would you rather be beaten up or have to listen to an hour of the Spice Girls? These are very different forms of suffering and we can put a ratio to them by asking lots of people for their judgement on which they'd choose to go through.
"I don't disagree that if x=y then f(x)=f(y). I do disagree that "same amount" is a meaningful concept, within the framework you've presented here (except that you point at a black box called Same, but that's not actually how knowledge works)."
If you get to the point where half the people choose to be beaten up and the other half choose to listen to the Spice Girls for time T (so you have to find the value of T at which you get this result), you have found out how those two kinds of suffering are related.
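The search for that crossover time T can be sketched in code. This is a minimal, purely illustrative sketch: the respondents, their thresholds, and the function names are all hypothetical inventions, modelling each person by the listening time (in minutes) beyond which they would rather take the beating, and then bisecting on T until the population splits roughly 50/50.

```python
def fraction_choosing_beating(T, thresholds):
    """Fraction of respondents who would pick the beating over
    listening for T minutes (hypothetical preference model)."""
    return sum(1 for t in thresholds if T > t) / len(thresholds)

def find_indifference_time(thresholds, lo=0.0, hi=1000.0, iters=50):
    """Bisect on T until half the population chooses each option."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if fraction_choosing_beating(mid, thresholds) < 0.5:
            lo = mid  # T too short: most still prefer listening
        else:
            hi = mid  # T long enough: half or more prefer the beating
    return (lo + hi) / 2

# Hypothetical survey data: each value is one respondent's threshold.
thresholds = [30, 45, 60, 60, 75, 90, 90, 120, 150, 180]
T = find_indifference_time(thresholds)
```

With this invented data the search converges on T = 75 minutes, the point at which the split first reaches 50/50; with real survey responses the same procedure would yield the ratio the example describes.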
"I haven't banned anything. I'm claiming that your statements are incoherent. Just saying "no that's wrong, you're making a mistake, you say that X isn't real but it's actually real, stop banning discussion" isn't a valid counterargument because you can say it about anything, including arguments against things that really don't exist."
You were effectively denying that there is a way of comparing different kinds of suffering and determining when they are equal. My Spice Girls vs. violence example illustrates the principle.
"I see your argument, but I think it's invalid. I would still dislike it if an alien killed me, even in a world without objective levels of suffering. (See Bayes.)"
I'm sure the ant isn't delighted at being killed either. The issue is which of them you should choose over the other in a situation where one of them has to go.
"The inability to measure suffering quantitatively is the crux of this disagreement! If there is no objective equality-operator over any two forms of suffering, even in principle, then your argument is incoherent. You cannot just sweep it under the rug as "a different issue." It is the exact issue here."
See the Spice Girls example. Clearly that only provides numbers for humans, but when we're dealing with other species, we should assume their overall levels of suffering and pleasure are similar to ours for similar kinds of experience. Even if one species has its feelings set ten times higher, we wouldn't know which way round it was - it could be that their pain feels ten times worse than ours or that ours feels ten times worse than theirs. Because we don't know which way round it is (if there is a difference), we should act as if there is no difference, until such time as science can tell us that there is one.