The great moral philosopher Jeremy Bentham, founder of utilitarianism, famously said, 'The question is not, "Can they reason?" nor, "Can they talk?" but rather, "Can they suffer?"' Most people get the point, but they treat human pain as especially worrying because they vaguely think it sort of obvious that a species' ability to suffer must be positively correlated with its intellectual capacity.
[...]
Nevertheless, most of us seem to assume, without question, that the capacity to feel pain is positively correlated with mental dexterity - with the ability to reason, think, reflect and so on. My purpose here is to question that assumption. I see no reason at all why there should be a positive correlation. Pain feels primal, like the ability to see colour or hear sounds. It feels like the sort of sensation you don't need intellect to experience. Feelings carry no weight in science but, at the very least, shouldn't we give the animals the benefit of the doubt?
[...]
I can see a Darwinian reason why there might even be a negative correlation between intellect and susceptibility to pain. I approach this by asking what, in the Darwinian sense, pain is for. It is a warning not to repeat actions that tend to cause bodily harm. Don't stub your toe again, don't tease a snake or sit on a hornet, don't pick up embers however prettily they glow, be careful not to bite your tongue. Plants have no nervous system capable of learning not to repeat damaging actions, which is why we cut live lettuces without compunction.
It is an interesting question, incidentally, why pain has to be so damned painful. Why not equip the brain with the equivalent of a little red flag, painlessly raised to warn, "Don't do that again"?
[...] my primary question for today: would you expect a positive or a negative correlation between mental ability and ability to feel pain? Most people unthinkingly assume a positive correlation, but why?
Isn't it plausible that a clever species such as our own might need less pain, precisely because we are capable of intelligently working out what is good for us, and what damaging events we should avoid? Isn't it plausible that an unintelligent species might need a massive wallop of pain, to drive home a lesson that we can learn with less powerful inducement?
At very least, I conclude that we have no general reason to think that non-human animals feel pain less acutely than we do, and we should in any case give them the benefit of the doubt. Practices such as branding cattle, castration without anaesthetic, and bullfighting should be treated as morally equivalent to doing the same thing to human beings.
Link: boingboing.net/2011/06/30/richard-dawkins-on-v.html
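Dawkins' conjecture about pain and intelligence is easy to turn into a toy model. The sketch below is my own illustration, not anything from the article: it treats pain as a simple negative reinforcement signal and uses the learning rate as a crude stand-in for intelligence, i.e. how much of the lesson a single bad experience teaches. The function name and every number are made up for the example.

```python
# Toy model (illustration only): pain as a negative reinforcement signal.
# An agent repeatedly chooses a tempting but harmful action (picking up a
# glowing ember) until its value estimate for that action drops below the
# safe alternative, which is worth 0.

def injuries_before_avoidance(learning_rate, pain, gain=1.0, max_trials=1000):
    """Count how many times the agent hurts itself before it learns to stop."""
    value = gain          # initial optimism: the ember looks pretty
    injuries = 0
    for _ in range(max_trials):
        if value <= 0.0:  # the safe action now looks at least as good
            break
        injuries += 1                                # touches the ember again
        outcome = gain - pain                        # small gain, big hurt
        value += learning_rate * (outcome - value)   # simple TD-style update
    return injuries

for alpha in (0.9, 0.3, 0.05):      # "clever" to "slow" learners
    for pain in (2.0, 5.0, 20.0):   # little red flag to massive wallop
        n = injuries_before_avoidance(alpha, pain)
        print(f"learning_rate={alpha:4.2f}  pain={pain:5.1f}  injuries={n}")
```

Running it shows the qualitative pattern the excerpt gestures at: the fast learner stops after one injury even with a mild penalty, while the slow learner either needs a massive wallop or keeps burning itself for many trials. It is only a sketch of the argument's internal logic, not evidence about real nervous systems.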
Imagine a being so vast and powerful that its theory of mind of other entities would itself be a sentient entity. If this entity came across human beings, it might model those people at such a level of resolution that every imagination it has of them would itself be conscious.
Just as we do not grant rights to our thoughts, or to the bacteria that make up a large part of our bodies, such an entity might be unable to grant existential rights to its thought processes, even if those processes are so detailed that merely perceiving a human being incorporates a human-level simulation of that person.
But even for us humans it might not be possible to account for every being in our ethical conduct. It might not be feasible to grant everything the rights it deserves. Nevertheless, the answer cannot be to abandon morality altogether, if only because human nature won't permit it. It is part of our preferences to be compassionate.
Our task must be to free ourselves . . . by widening our circle of compassion to embrace all living creatures and the whole of nature and its beauty.
— Albert Einstein
How do we solve this dilemma? Right now it's relatively easy to handle: there are humans, and then there is everything else. But even today — without uplifted animals, artificial intelligence, human-level simulations, cyborgs, chimeras and posthuman beings — it is increasingly hard to draw the line. Science is advancing rapidly, allowing us to keep people with severe brain injuries alive or to save a premature fetus whose mother has already died. Then there are the mentally disabled and other humans who are not neurotypical. We are also increasingly becoming aware that many non-human beings on this planet are far more intelligent and cognizant than we expected.
And remember: what will be the case in the future has already been the case in our not-too-distant past. There was a time when three different human species lived at the same time on the same planet. Three intelligent species of the genus Homo, yet very different. Only 22,000 years ago we, H. sapiens, were sharing this oasis of life with Homo floresiensis and Homo neanderthalensis.
How would we handle such a situation at the present day? At a time when we still haven't learnt to live together in peace. At a time when we are still killing even our own genus. Most of us are not even ready to become vegetarian in the face of global warming, although livestock farming accounts for 18% of the planet's greenhouse gas emissions.
So where do we draw the line?
Dawkins is normally a much sharper thinker than this; his arguments could have been made much more compelling. Anyway, I am going to sidestep the moral issue and look at the epistemic question.
Evolutionarily speaking, the fundamental non-obvious insight is that there's little advantage to be had in signalling weakness and vulnerability if you don't happen to be a social and therefore intelligent animal with a helpful tribe close by. There's no reason to wire pain signals halfway 'round the brain and back just to suffer in more optimal ways if there's no one around to take advantage of it. We can strengthen this argument with a complementary but disjunctive mechanistic analysis: it is important to look at humans' cingulate cortex (esp. the ACC), the insula, pain asymbolia and related insular oddities, reward signal propagation, et cetera. This would be a decent paper to read but I'm too lazy to read it, or this one for that matter. Do note that much brain research is exaggeration and lies, especially about the ACC, as I had the unfortunate pleasure of discovering recently.
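To make the signalling point slightly more concrete, here is a crude expected-payoff sketch in my own framing, not the commenter's; the probabilities, the costs, and the expected_value_of_signalling helper are all invented for illustration. The idea is simply that advertising an injury only pays if a helpful conspecific is likely to be around to respond.

```python
# Back-of-the-envelope sketch (made-up numbers): loudly expressing pain has a
# cost (energy, predator attention) and a benefit that only materialises if a
# helpful conspecific is nearby to respond.

def expected_value_of_signalling(p_helper_nearby,
                                 benefit_of_help=5.0,
                                 cost_of_signalling=1.0):
    """Crude expected fitness payoff of advertising an injury vs. staying quiet."""
    return p_helper_nearby * benefit_of_help - cost_of_signalling

for species, p in [("solitary forager", 0.02),
                   ("loose herd", 0.30),
                   ("tight-knit social tribe", 0.90)]:
    ev = expected_value_of_signalling(p)
    verdict = "worth signalling" if ev > 0 else "better to keep quiet"
    print(f"{species:24s} EV={ev:+.2f}  -> {verdict}")
```

Under these toy numbers, only the social species come out ahead by signalling, which is the shape of the argument; nothing here depends on the particular values chosen.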
Philosophy is perhaps better suited to this question. Metaphysically speaking it must be acknowledged that animals are obviously not as perfect as humans, and are therefore less Godlike, and therefore less sentient, as can all be proven in the same vein as Leibniz's famous Recursive Universal Dovetailing Measure-Utility Inequality Theorem. His arguments are popularly referred to as the "No Free Haha-God-Is-Evil" theorems, though most monads are skeptical of the results' practical applicability to monads in most monads. Theologians admit that they are puzzled by the probably impossible logical possibility of an acausal algorithm employing some variation on Thompson's "Reality-Warping Elysium" process, but unfortunately any progress towards getting any bits about a relevant Chaitin's omega results in its immediate diagonalization out of space, time, and all mathematically interesting axiom sets. This qua "this" can also be proven by "Goedel's ontological proof" if you happen to be Goedel (naturally).
My default position is that suffering as we know it is fundamentally tied in with extremely important and extremely complex social decision theoretic game theoretic calculus modeling stuff, and also all that metaphysics stuff. I will non-negligibly update if someone can show me a good experiment demonstrating something like "learned helplessness" in non-hominids or non-things-that-hunted-in-packs-for-a-long-time-then-were-artificially-molded-into-hominid-companions. That high-citation rat study looked like positive bias upon brief inspection, but maybe that was positive bias.
On the meta level though, the nicest thing about going sufficiently meta is that you don't have to worry about enlightened aqua versus turquoise policy debates. Which by the way continues to reliably invoke the primal forces of insanity. It's like using a tall metal rod as a totem pole for spiritual practice, in a lightning storm, while your house burns down, with the entire universe inside it, and also the love of your life, who is incredibly attractive. Maybe a cool post would be "Policy is the Mind Killer", about how all policy discussion should be at least 16 meta levels up, because basically everything anyone ever does is a lost purpose. (It has not yet been convincingly shown that humanity is not a lost purpose, but I think this is a timeful/timeless confusion and can be dissolved in short order with right view.) Talking about how to talk about thinking about morality is a decent place to start from and work our way up or down, and in the meantime posts like multifoliaterose's one on Lab Pascals are decent mind-teasers maybe. But object level policy debates just entrench bad cognitive habits. Dramatic cognitive habits. Gauche weapons from a less civilized age... of literal weapons. Your strength as a rationalist is your ability to be understood by Douglas Hofstadter and no one else. Ideally that would include yourself. And don't forget to cut through in the same motion, of course. Anyway this is just unsolicited advice aimed without purpose, and I acknowledge that debating lilac versus mauve can be fun sometimes. ...I'm not gay, it's just an extended metaphor extension.
Off-the-cuff hypothesis that I arrogantly deem more interesting than the discussion topic: The prefrontal cortex is exploiting executive oversight to rent-seek in the neural Darwinian economy, which results in egodystonic wireheading behaviors and self-defeating use of genetic, memetic, and behavioral selection pressure (a scarce resource), especially at higher levels of abstraction/organization where there is more room for bureaucratic shuffling and vague promises of "meta-optimization", where the selection pressure actually goes towards the cortical substructural equivalent of hookers and blow. Analysis across all levels of organization could be given but is omitted due to space, time, and thermodynamic constraints. The pre-frontal cortex is basically a caricature of big government, but it spreads propagandistic memes claiming the contrary in the name of "science" which just happens to be largely funded by pre-frontal cortices. The bicameral system is actually very cooperative despite misleading research in the form of split-brain studies attempting to promote the contrary. In reality they are the lizards. This hypothesis is a possible explanation for hyperbolic discounting, akrasia, depression, Buddhism, free will, or come to think of it basically anything that at some point involved a human brain. This hypothesis can easily be falsified by a reasonable economic analysis.
I just returned to the parent comment by way of comment-stalking muflax, and got even more out of it this time. You live in an interesting place, Will; and I do enjoy visiting.
Still not sure where the "dovetailing" of Leibniz comes in; or what the indefinite untrustworthy basement layers of Ken Thompson have to do with Elysium; but perhaps I'll get it on my next reading.
Nerfhammer's excellent Wikipedia contributions reminded me of your disdain for the heuristics and biases literature. The disdain seems justified (for example, the rhyme-as-reas...