I'm wondering if rats and dogs also experience despair, the agony of knowing that things will never get better, that they will keep suffering until they die. This seems to me to be at least as bad as the physical agony.
Can a rat be hurt, offended, or repulsed by pictures of rats suffering? (I expect that a rat does react with discomfort to the cries of injured rats.) Can a rat be hurt, offended, or repulsed by pictures of non-rat animals suffering?
If a rat had a choice between two otherwise-equivalent food sources, one of which involved other rats being hurt and one of which did not, would it favor the cruelty-free food source? Is a rat capable of such thoughts? Does a rat have enough causal or abstract reasoning ability to make the connection between "this food" and "those suffering rats"? Does a human? Or do we do it by association, availability heuristic, and magical thinking — rather than by causal reasoning at all?
(Or are association, availability heuristic, and magical thinking a way of approximating causal reasoning?)
It seems to me that humans are capable of caring about rats to a much greater degree than rats are capable of caring about either humans or (say) lizards, birds, or insects. Humans are capable of asking the question, "What would an ideal life for a rat be like?" (See, e.g., the Rat Park experiments.) This empathetic-moral ability seems to be founded on common animal tendencies such as reacting with discomfort to the cries of the injured.
I don't think a rat can be hurt by the pictures because they are too unrealistic: lacking motion, scent, and sound. (I recall a similar problem with octopuses: you can't use TV to show them anything because TV is too slow - they see the individual static frames with no 'motion blur'. If a lion could talk...)
Would adding sufficient realism induce, say, helping or empathetic behavior? I don't know. Some people thought monkeys wouldn't show such behavior, but when the experiments were done, the subjects proved willing to help others at some cost to themselves. Maybe the rat experiments have already been done.
I probably should have quoted the specific part I was thinking of in that comment:
The best the universe has done so far is to produce us - creatures who both can and do care about injustice and suffering. If you believe in a Grand Design, or some other teleological explanation that results in universal justice, then, go to the mirror right now and take a long hard look, because buddy, you are it - you are as good as it has gotten, so far.
This got me thinking about ① just how empathetic various animals can be towards one another, towards other species, and so on; ② how empathy relates to human intelligence; and ③ how much it relates to simpler forms of distress caused by another creature's suffering.
This empathetic-moral ability seems to be founded on common animal tendencies such as reacting with discomfort to the cries of the injured.
The best book I read on the biological basis of morality is Peter Singer's The Expanding Circle: Ethics, Evolution and Moral Progress (free copy). I strongly recommend it.
Thanks!
Having only begun to read this, I wonder — Singer seems to be conflating altruism with cooperation. The way I use these words, they are distinct; notably, altruism does not involve the concept of reciprocation or synergy, whereas cooperation generally does.
(This seems to be the sense in which "altruism" is commonly used by both many who praise altruism, and many who reject it. Wikipedia: "Pure altruism consists of sacrificing something for someone other than the self [...] with no expectation of any compensation or benefits, either direct, or indirect.")
Singer describes his examples of bird warning calls, gazelles stotting, and wolves sharing food as "altruism", where I would tend to see them as cooperative acts; specifically, acts done with at least some expectation of reciprocation when reciprocation becomes possible: as the song says, "today for you, tomorrow for me".
One reconciliation of these ideas may be altruism as a form of acausal cooperation ....
Hi fubarobfusco. I think Singer is using the term 'altruism' to mean what evolutionary psychologists and sociobiologists mean by it, i.e., "a type of helping behavior in which an individual increases the survival chance or reproductive capacity of another individual while decreasing its own survival chance or reproductive capacity" (Peter Gray, Psychology).
Sure, that makes sense — but it is distinct enough from its usual use in discussions of ethics and morality as to be confusing. Almost any social behavior beyond the most short-sighted and sociopathic could be considered altruistic by that notion, including a lot that we might usually regard as self-interested ... but then again, we could expect that if we are a species evolved to take advantage of acausal cooperation.
Then, unless you are a cretin or a fool, or both, realize that suffering and injustice are both inescapable contemporary and future realities which you have to deal with rationally (or not) as you choose. You do not get to choose Door Number 3, which is "no suffering and injustice." In fact, even if you kill yourself straightaway to avoid inconveniencing a mouse with a plow, the suffering and injustice will continue to march on, even for billions and billions of years.
Not making more life means the suffering and the injustice most under my control stops, even for billions and billions of years. Does my vasectomy make me a cretin or a fool, or both?
It means that one of your decisions places more weight on one sub-aspect of future outcomes (the suffering and injustice your progeny might have experienced or created) than on other aspects (the suffering and injustice your progeny might have directly or indirectly prevented) which third parties might see as equally important. If these decisions reflect your actual values then they aren't foolish or cretinous... but I'm not sure I understand your actual values. Is there a consequentialist argument by which potential future worlds which include your descendants would be inferior to worlds where that space is filled up by the marginal additional offspring of others instead? Is there a deontological ethic which you should follow but others shouldn't, or one which everyone should follow in which "we should undergo voluntary self-extinction" is the correct ethical result?
Well, that was certainly a very colorful and convincing essay, but it doesn't really address the original question. PR is very important to the cryonics project, so what if the "Animal Rights people" do attack cryonics on this basis? Could they do any real damage?
He writes this essay in response to someone who writes about their "gut level emotional response when [they] thought about dogs being likely killed by an as yet unproven and dangerous medical procedure."
I recommend the whole thing. If you are going to read it all, note that some text is duplicated near the end, though there is one paragraph at the very end which is not.
First, he describes how animals share empathy and emotions with humans:
Next, he explains ethics in a way that seems to correspond with a lot of Eliezer's writing:
Next, he tackles questions about whether animal research is, on net, beneficial:
Next, he goes into details of what animal lifespan research entails:
The ending is poignant, and I think an excusable violation of Godwin's law:
Darwin does not mention it in this essay, but he is a vegetarian, and his dog is cryopreserved at Alcor.