peter_hurford comments on Why Eat Less Meat? - Less Wrong
Why? I actually think this is an important consideration. Is "suffering" by definition something only humans can do? If so, isn't this arbitrarily restricting the definition? If not, do you doubt something empirical about nonhuman animal minds?
~
You've characterized my argument correctly. It seems to me that most people already care about the suffering of nonhuman animals without quite realizing it, which is why they intuitively resist kicking kittens and puppies. But I acknowledge that some people aren't like this.
I don't think there's a good track record for the success of moral arguments. As a moral anti-realist, I must admit that there's nothing irrational per se about restricting your moral sphere to humans. I guess my only counterargument would be that it seems weird and arbitrary.
What would you say to someone who thinks we should only care about the suffering of white humans of European descent? Would you be fine with that?
Suppose morality is a 'mutual sympathy pact,' and it seems neither weird nor arbitrary to decide how sympathetic to be to others by their ability to be sympathetic towards you. Suppose instead that morality is a 'demonstration of compassion,' and the reverse effect holds--sympathizing with the suffering of those unable to defend themselves (and thus unable to defend you) demonstrates more compassion than the previous approach which requires direct returns. (There are, of course, indirect returns to this approach.)
I'm confused as to what those considerations are supposed to demonstrate.
Basically, I don't think much of your counterargument because it's unimaginative. If you ask the question of what morality is good for, you find a significant number of plausible answers, and different moralities satisfy those values to different degrees. If you can't identify what practical values are encouraged by holding a particular moral principle, what argument do you have for that moral principle besides that you currently hold it?
I don't think moral principles are validated with reference to practical self-interested considerations.
What do you think moral principles are validated by?
Or, to ask a more general question, what they could possibly be validated by?
Broadly, I think moral principles exist as logical standards by which actions can be measured. It's a fact whether a particular action is endorsed by utilitarianism or deontology, etc. Therefore moral facts exist in the same realm as any other sort of fact.
More specifically, I think the actual set of moral principles someone lives by are a personal choice that is subject to a lot of factors. Some of it might be self-interest, but even if it is, it's usually indirect, not overt.
OK. But standards are not facts. They are metrics in the same way that a unit of length, say, meter, is not a fact but a metric.
How do you validate the choice of meters (and not, say, yards) to measure?
The usual answer is "fitness for a purpose", but how does this work for morality?
True. But whether something meets a standard is a fact. While a meter is a standard, it's an objective fact that my height is approximately 1.85 meters.
~
Social consensus. Also, a meter is much easier to use than a yard.
~
Standards could be evaluated on further desiderata, like internal consistency and robustness in the face of thought experiments.
Social consensus and ease of use could also be factors.
I agree. You can state as a fact whether some action meets some standard of morality. That does nothing to validate a standard of morality, however.
Oh, boy. Social consensus, ease of use, really?
I try not to argue by definition, so it's the latter: I have empirical concerns. See this post, point 4 (but also 3 and 5), for a near-perfect summary of my concerns.
That said, my overall objection to your view does not hinge on this point.
Well, firstly, I have to point out that I am not restricting my moral sphere to humans, per se. (Of known existing creatures, dolphins may qualify for membership; of imaginable creatures, aliens and AIs might.) In any case, the circle I draw seems quite non-arbitrary, even obvious, to me; but I suppose this only speaks to the non-universality of moral intuitions.
That would indeed seem weird and arbitrary. One objection I might raise to such a person is that it's non-trivial, in many cases, to discern someone's "whiteness", not to mention one's exact ancestry. "European" is not a sharp boundary where humans are concerned, and a great many factors confound such categorization. Most of my other objections would be aimed at drawing out the moral intuitions behind this person's judgments about what sorts of beings are objects of morality (do they think "superficial" characteristics matter as much as functional ones? what is their response to various thought experiments such as brain transplant scenarios? etc.). It seems to me that there are both empirical facts and analytic arguments that would shift this person's position closer to my own; a logically contradictory, empirically incoherent, or reflectively inconsistent moral position is generally bound to be less convincing.
(Of course, I might answer entirely differently. I might say: no, I would not be fine with that, because my own ancestry may or may not be classified as "European" or "white", depending on who's doing the classifying. So I would, quite naturally, argue against a moral circle drawn thus. Moral anti-realism notwithstanding, I might convince some people (and in fact that seems to be, in part, how the American civil rights movement, and similar social movements across the world, have succeeded: by means of people who were previously outside the moral circle arguing for their own inclusion). Cows, of course, cannot attempt to persuade us that we should include them in our moral considerations. I do not take this to be an irrelevant fact.)
I think that fights the hypothetical a bit much. Imagine something a bit sharper, like citizenship. Why not restrict our moral sphere to US citizens? Or take Derek Parfit's within-a-mile altruism, where you only have concern for people within a mile of you. Weird, I agree. But irrational? Hard to demonstrate.
~
So do you think nonhuman animals may not suffer? I agree that's a possibility, but it's not likely. What do you think of the body of evidence provided in this post?
I don't think there is a tidy resolution to this problem. We'll have to take our best guess, and that involves thinking nonhuman animals suffer. We'd probably even want to err on the safe side, which would increase our consideration toward nonhuman animals. It would also be consistent with an Occam's razor approach.
~
What would you suggest?
I basically agree with pragmatist's response, with the caveat only that I think many (most?) people's moral spheres have too steep a gradient between "family, for whom I would happily murder any ten strangers" and "strangers, who can go take a flying leap for all I care". My own gradient is not nearly that steep, but the idea of a gradient rather than a sharp border is sound. (Of course, since it's still the case that I would kill N chickens to save my grandmother, where N can be any number, it seems that chickens fall nowhere at all on this gradient.)
Well, you can phrase this as "nonhuman animals don't suffer", or as "nonhuman animal suffering is morally uninteresting", as you see fit; I'm not here to dispute definitions, I assure you. As for the evidence, to be honest, I don't see that you've provided any. What specifically do you think offers up evidence against points 3 through 5 of RobbBB's post?
I don't think so; or at least this is not obviously the case.
Well, just the stuff about boundaries and hypotheticals and such that you referred to as "fighting the hypothetical". Is there something specific you're looking for, here?
The essay cited the Cambridge Declaration of Consciousness, as well as a couple of other pieces of evidence.
Here is another (more informal) piece that I find compelling.
That's not evidence, that's a declaration of opinion.
In particular, reading things like "Evidence of near human-like levels of consciousness has been most dramatically observed in African grey parrots" (emphasis mine) makes me highly sceptical of that opinion.
It's not scientific evidence, but it is rational evidence. In Bayesian terms, a consensus statement of experts in the field is probably much stronger evidence than, say, a single peer-reviewed study. Expert consensus statements are less likely to be wrong than almost any other form of evidence where I don't have the necessary expertise to independently evaluate claims.
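To make the "strength of evidence" point concrete, here is a toy Bayesian calculation (all numbers are hypothetical illustrations, not anyone's actual estimates): in the odds form of Bayes' theorem, evidence with a larger likelihood ratio shifts the posterior further, which is the sense in which an expert consensus can count for more than a single study.

```python
# Illustrative Bayesian update. The likelihood ratios below are
# made-up numbers chosen only to show the mechanism.
def posterior(prior, likelihood_ratio):
    """Posterior probability of a hypothesis H after evidence with
    likelihood ratio P(evidence | H) / P(evidence | not-H)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Starting from a 50% prior:
weak = posterior(0.5, 2)     # e.g. a single study -> ~0.67
strong = posterior(0.5, 10)  # e.g. expert consensus -> ~0.91
print(weak, strong)
```

The same prior ends up in quite different places depending on how diagnostic the evidence is, which is all the "rational evidence" claim requires.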
Not if I believe that this particular panel of experts is highly biased and is using this declaration instrumentally to further their undeclared goal.
That may or may not be true, but doesn't seem to be particularly relevant here. The question is what constitutes "near human-like levels of consciousness". If you point to an African grey as your example, I'll laugh and walk away. Maybe, if I were particularly polite, I'd ask in what sense you're using the word "near" here.
If I were in your place, I'd be skeptical of my own intuitions regarding the level of consciousness of African grey parrots. Reality sometimes is unintuitive, and I'd be more inclined to trust the expert consensus than my own intuition. Five hundred years ago, I probably would have laughed at someone who said we would travel to the moon one day.
I trust reality a great deal more than I trust the expert consensus. As has been pointed out, science advances one funeral at a time.
If you want to convince me, show me evidence from reality, not hearsay from a bunch of people I have no reason to trust.
African grays are pretty smart. I'm not sure I'd go so far as to call them near-human, but from what I've read there's a case for putting them on par with cetaceans or nonhuman primates.
The real trouble is that the research into this sort of thing is fiendishly subjective and surprisingly sparse. Even a detailed ordering of relative animal intelligence involves a lot of decisions about which researchers to trust, and comparison with humans is worse.
The moral sphere needn't work like a threshold, where one should extend equal concern to everyone within the sphere and no concern at all to anyone outside it. My moral beliefs are not cosmopolitan -- I think it is morally right to care more for my family than for absolute strangers. In fact, I think it is a huge failing of standard utilitarianism that it doesn't deliver this verdict (without having to rely on post-hoc contortions about long-term utility benefits). I also think it is morally acceptable to care more for people cognitively similar to me than for people cognitively distant (people with radically different interests/beliefs/cultural backgrounds).
This doesn't mean that I don't have any moral concern at all for the cognitively distant. I still think they're owed the usual suite of liberal rights, and that I have obligations of assistance to them, etc. It's just that I would save the life of one of my friends over the lives of, say, three random Japanese people, and I consider this the right thing to do.
I follow a similar heuristic when I move across species. I think we owe the great apes more moral consideration than we owe, say, dolphins. I don't eat any mammals but I eat chicken.
The idea of a completely cosmopolitan ethic just seems bizarre to me. I can see why one would be motivated to adopt it if the only alternative were caring about some subset of people/sentient beings and not caring at all about anyone else. Then there would be something arbitrary about where one draws the line. But this is not the most plausible alternative. One could have a sphere of moral concern that doesn't just stop suddenly but instead attenuates with distance.
The morality you suggest is what Derek Parfit calls collectively self-defeating. This means that if everyone were to follow it perfectly, there could be empirical situations where your actual goals, namely the well-being of those closest to you, are achieved less well than they would be if everyone followed a different moral view. So there could be situations in which people have more influence on the well-being of the families of strangers, and if they all favored their own relatives, everyone would end up worse off, despite everyone acting perfectly morally. Personally I want a world where everyone acts perfectly morally to be as close to Paradise as is empirically possible, but whether this is something you are concerned about is a different question (that depends on what question you're seeking to answer by coming up with a coherent moral view).
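The scenario can be sketched as a tiny two-agent game (the payoff numbers are hypothetical, chosen only to encode the assumption that each agent has more influence on the stranger's family than on their own):

```python
# Toy model of a collectively self-defeating morality: two agents,
# each caring only about their own family. By assumption, each agent
# can add 1 unit of welfare to their own family or 3 units to the
# other agent's family (they have more influence on strangers).

def family_welfare(action_a, action_b):
    """Return (welfare of A's family, welfare of B's family).
    Each action is 'own' (help own family) or 'other'."""
    a_fam = (1 if action_a == 'own' else 0) + (3 if action_b == 'other' else 0)
    b_fam = (1 if action_b == 'own' else 0) + (3 if action_a == 'other' else 0)
    return a_fam, b_fam

# Everyone perfectly follows "favor your own family":
print(family_welfare('own', 'own'))      # -> (1, 1)
# Everyone follows the impartial rule instead:
print(family_welfare('other', 'other'))  # -> (3, 3)
```

Under these assumptions, each agent's own goal (their family's welfare) is served worse when everyone follows the family-first rule than when everyone follows the impartial one, which is exactly Parfit's point.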
This seems nonsensical; a utility function does not prescribe actions. If I care about my family most, but acting in a certain way will cause them to be worse off, then I won't act that way. In other words, if everyone acting perfectly moral causes everyone to end up worse off, then by definition, at least some people were not acting perfectly moral.
The problem is not with your actions, but with the actions of all the others (who are following the same general kind of utility function, but because your utility function is agent-relative, they use different variables, i.e. they care primarily about their own family and friends as opposed to yours). However, I was in fact wondering whether this problem disappears if we make the agents timeless (or whatever does the job), so they would cooperate with each other to avoid the suboptimal outcome. This seems fair enough, since acting "perfectly moral" seems to imply the best decision theory.
Does this solve the problem? I think not; we could tweak the thought experiment further to account for it: we could imagine that due to empirical circumstances, such cooperation is prohibited. Let's assume that the agents lack the knowledge that the other agents are timeless. Is this an unfair addendum to the scenario? I don't see why, because given the empirical situation (which seems perfectly logically possible) the agents find themselves in, the moral algorithm they collectively follow may still lead to results that are suboptimal for everyone concerned.
You don't follow a utility function. Utility functions don't prescribe actions.
... are you suggesting that we solve prisoner's dilemmas and similar problems by modifying our utility function?
OK, bad choice of words.
No, but you need some decision theory to go with your utility function, and I was considering the possibility that Parfit merely pointed out a flaw of CDT and not a flaw of common sense morality. However, given that we can still think of situations where common sense morality (no matter the decision theory) executed by everyone does predictably worse for everyone concerned than some other theory, Parfit's objection still stands.
(Incidentally, I suspect that there could be situations where modifying your utility function is a way to solve a prisoner's dilemma, but that wasn't what I meant here.)
By this reasoning everyone should give all their money and resources to charity (except to the extent that they need some of their resources to keep their job and make money).
That's not much of a reductio ad absurdum. It would be much better if people did that, or at least moved a lot in that direction.
People are motivated to do things that make money because the money benefits themselves and their loved ones. Many such things are also beneficial to everyone, either directly (inventors, for instance, or people who manufacture useful goods), or indirectly (someone who is just willing to work hard because working hard benefits themselves, thus producing more and improving the economy). In a world where everyone gave their money to random strangers and kept them at an equal level of wealth, nobody would be able to make any money (since 1) any money they made would be accompanied by a reduction in the money other people gave them, and 2) they would feel (by hypothesis) obligated to give away the proceeds anyway). This would mean that money as a motivation would no longer exist, and we would lose everything that we gain when money is a motivation. That would be bad.
Even if you modified the rule to "I should give money to people so as to arrange an equal level of wealth except where necessary to provide motivation", in deciding exactly who gets your money you'd essentially have a planned economy done piecemeal by billions of individual decisions. Unlike a normal planned economy, it wouldn't be imposed from the top, but it would have the same problem as a normal planned economy in that there's really nobody competent to plan such a thing. The result would be disaster. So overall it would be a better world if people kept the money they made even if someone else could use it more than they could.
Furthermore, the state where everyone acts this way is unstable. Even if your family would be better off if everyone acted that altruistically, your family would be worse off if half the world acted that way and you and they were part of that half.
Yes. At least as long as there are problems in the world. What's wrong with that?
Everyone, including nonhumans, would have their interests/welfare-function fulfilled as well as possible. If I had to determine the utility function of moral agents before being placed into the world in any position at random, I would choose some form of utilitarianism from a selfish point of view, because it maximizes my expected well-being. If doing the "morally right" thing doesn't make the world a better place for the sentient beings in the world, I don't see a reason to call it "right". Also note that this is not an all-or-nothing issue; it seems unfruitful to single out only those actions that produce the perfect outcome, or the perfect outcome in expectation. Every improvement in the right direction counts, because every improvement leads to someone being better off.
That's a game theory/decision theory problem, not a problem with the utility function.
If all the agents in the situation acted according to utilitarianism, everyone would be better off. To the extent that everyone acting according to common sense morality predictably fails to bring about the best of all possible worlds in this situation, and to the extent that one cares about this fact, this constitutes an argument against common sense morality.
Of course, if decision theory or game theory could make those agents cooperate successfully (so they don't do predictably worse than other moralities anymore) in all logically possible situations, then the objection disappears. I see no reason to assume this, though.
It seems implausible to me that there is any ethical decision procedure that human beings (rather than idealized perfectly informed and perfectly rational super-beings) could follow that wouldn't be collectively self-defeating in this sense. Do you (or Parfit) have an example of one that isn't?
Anyway, I don't see this as a huge problem. First, I'm pretty sure I'm never going to live in a world (or even a close approximation to one) where everyone adheres to my moral beliefs perfectly. So I don't see why the state of such a world should be relevant to my moral beliefs. Second, my moral beliefs are ultimately beliefs about which consequences -- which states of the world -- are best, not beliefs about which actions are best. If there was good evidence that acting in a certain manner (in the aggregate) wasn't effective at producing morally better states of affairs, then I wouldn't advocate acting in that manner.
But I am not convinced that following a cosmopolitan decision procedure (or advocating that others follow one) would empirically be an effective means to achieving my decidedly non-cosmopolitan moral ends. Perhaps if everyone in the world mimicked my moral behavior (or did what I told them) it would be, but alas, that is not the case.
Utilitarianism is not collectively self-defeating, but then there'd be no room for non-cosmopolitan moral ends.
This part shouldn't make a difference. If humans are too irrational to directly follow utilitarianism (U), then U implies they should come up with easier/less dangerous rules of thumb that will, on average, produce the most utility. A theory is "indirectly individually self-defeating" if it implies it would be best to follow some other theory. Parfit concludes, and I agree with him here, that this is not a reason to reject U. U doesn't imply that one ought to actively implement utilitarianism; it only wants you to bring about the best consequences, regardless of how this happens.
This is a pretty dubious move. Why think there will be easy to follow rules that will maximize aggregate utility? And even if such rules exist, how would we go about discovering them, given that the reason we need them in the first place is due to our inability to fully predict the consequences of our actions and their attached utilities?
Do you just mean that we should pick easy to follow rules that tend to produce more utility than other sets of easy to follow rules (as far as we can figure out), but not necessarily ones that maximize utility relative to all possible patterns of behavior? In that case, I don't see why your utilitarianism isn't collectively self-defeating according to the definition you gave. A world in which everyone acts according to such rules will not be a world that is as close to the utilitarian Paradise as empirically possible. After all, it seems entirely empirically possible for people to accurately recognize particular situations where actions contrary to the rules produce higher utility.
Also note that the view you outlined is often concerned with the question of helping others. When it comes to not harming others, many people would agree with the declaration of human rights that inflicting suffering is equally bad regardless of one's geographical or emotional proximity to the victims. Personal vegetarianism is an instance of not harming.
I disagree with cosmopolitanism when it comes to "not harming" as well. I think needlessly inflicting suffering on human beings is always really bad, but it is worse if, say, you do it to your own children rather than to a random stranger's.