Lukas_Gloor comments on Arguments Against Speciesism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
By this I meant literally the same amount (and intensity!) of suffering. So I agree with the point you and Carl Shulman make: if it is the case that some animals can only experience so much suffering, then it makes sense to value them accordingly.
I'm arguing for 1), but I would only do it by species in order to save time on calculations. If I had infinite computing power, I would do the calculation for each individual separately, according to indicators of what constitutes capacity for suffering and its intensity. Incidentally, I would also assign at least a 20% chance that brain size doesn't matter; some people in fact hold this view.
By "utilitarianism" I meant hedonistic utilitarianism in general, not your personal utility function that (in this scenario) differentiates between sapience and mere sentience. I added this qualifier because "you'd have to be okay with torturing babies" is not a reductio, since utilitarians would have to bite this bullet anyway if they could thereby prevent an even greater amount of suffering in the future.
I only have my first-person evidence to go on. This bothers me a lot, but I'm assuming that some day we will have solved all the problems in philosophy of mind and can then map out precisely what we mean by "sentience", having it correspond to specific implemented algorithms or brain states.
I agree; those are simply the two premises on which the conclusion that we should value all suffering equally is based. You end up with coherent positions by rejecting one or both of them.
What evidence do you have for thinking that your first-person intuitions about sentience "cut reality at its joints"? Maybe if you analyze what goes through your head when you think "sentience", and then try to apply that to other animals (never mind AIs or aliens), you'll just end up measuring how different those animals are from humans along some completely arbitrary and morally unimportant implementation feature.
If after solving all the problems of philosophy you found out something like this, would you accept it, or would you say that "sentience" was no longer the basis of your morals? In other words, why might you prefer this particular intuition to other intuitions that judge how similar something is to a human?
If I understand it correctly, this is the position endorsed here. I don't think realizing that this view is right would change much for me; I would still try to generalize criteria for why I care about a particular experience and then care about all instances of the same thing. However, I realize that this would make it much more difficult to convince others to draw the same lines. If the question of whether a given being is sentient translates into whether I have reasons to care about that being, then one part of my argument would fall away. This issue doesn't seem to be endemic to the treatment of non-human animals, though; you'd have it with any kind of utility function that values well-being.