In March 2009, Tyler Cowen interviewed Peter Singer about morality, giving, and how we can most improve the world. They are both thinkers I respect a lot, and I was excited to read their debate. Unfortunately the interview was available only as a video. I wanted a transcript, so I made one:

Cowen: This is Tyler Cowen of George Mason University. I'm doing a BloggingHeads with Peter Singer, the world-famous philosopher from Princeton. This is a forum on Peter's latest book, which he'll start off by telling you a bit about.
Singer: Hi. The book's called "The Life You Can Save: Acting Now To End World Poverty". It begins with an argument that I've used many times in articles about a child drowning in a pond, and suggests that if you saw a child drowning in a pond you would jump in and save that child, and you think that is what you ought to do, even if it meant that you ruined an expensive pair of shoes that you were wearing.

From there I pull back to saying "what does this mean about the problem of world poverty, given that there are, according to Unicef, ten million children dying of avoidable poverty-related causes every year?" We could save some of them, and probably it wouldn't cost us much more than the cost of an expensive pair of shoes if we find an effective aid agency that is doing something to combat the causes of world poverty, or perhaps to combat the deaths of children from simple conditions like diarrhea or measles, conditions that are not that hard to prevent or to cure. We could probably save a life for the cost of a pair of shoes. So why don't we? What's the problem here? Why do we think it's ok to live a comfortable, even luxurious, life while children are dying? In the book I explore various objections to that view, I don't find any of them really convincing. I look at some of the psychological barriers to giving, and I acknowledge that they are problems. And I consider also some of the objections to aid and questions raised by economists as to whether aid really works. In the end I come to a proposal by which I want to change the culture of giving.

The aim of the book in a sense is to get us to internalize the view that not to do anything for those living in poverty, when we are living in luxury and abundance, is ethically wrong, that it's not just not a nice thing to do but that a part of living an ethically decent life is at least to do something significant for the poor. The book ends with a chapter in which I propose a realistic standard, which I think most people in the affluent world could meet without great hardship. It involves giving 1% of your income if you're in the bottom 90% of US taxpayers, scaling up through 5% and 10% and even more as you get into the top 10%, the top 5%, the top 1% of US taxpayers. But at no point is the scale I'm proposing what I believe is an excessively burdensome one. I've set up a website, thelifeyoucansave.com that people can go to in order to publicly pledge that they will meet this scale, because I think if people will do it publicly, that in itself will encourage other people to do it and, hopefully, the idea will spread.

Cowen: Thank you, Peter. Let me first stress: I agree with most of what's in your book; I think we all could give more and should give more. It would be good for other people and it would be good for ourselves. But let me start off the dialogue by mentioning a few points where I don't completely agree with you. One thing that struck me about the book was some of the omissions.
 

Immigration as an Anti-Poverty Program

Cowen: For instance, in my view, the best anti-poverty program by far, the only one that's really been shown to work, is what's called "immigration". I don't even see the word "immigration" in your book's index. So why don't we spend a lot more resources allowing immigration, supporting immigration, lobbying for immigration? This raises people's incomes very dramatically, it's sustainable, and for the most part it's also good for us. Why not make that the centerpiece of an anti-poverty platform?
Singer: That's an interesting point, Tyler. I suppose, one question I'd like to ask is: is it sustainable? Isn't it the case that if we take, as immigrants, the people who are the most enterprising, perhaps, of the poor countries that we're still going to leave those countries in poverty, and their populations may continue to rise, and eventually, even if we keep taking immigrants, we will reach a capacity where we're starting to strain our own country?
Cowen: There are two separate issues: one is "brain drain" from the third world. I think there's a lot of research by [Michael Clemens] showing that it's not a problem, that third world countries that have even somewhat functional institutions tend to benefit by sending people to other countries. India's a good example: a lot of Indians return to India and start businesses, or they send money back home. Mexico is another example. Maybe North Korea is somewhat different, but for the most part immigration seems to benefit both countries.

I don't think we could have open borders; I don't think we could have unlimited immigration, but we're both sitting here in the United States and it hardly seems to me that we're at the breaking point. Immigrants would benefit much more: their wages would rise by a factor of twenty or more, and there would be perhaps some costs to us, but in a cost-benefit sense it seems far, far more effective than sending them money. Do you agree?

Singer: I must admit that I haven't thought a lot about immigration as a way of dealing with world poverty. Obviously, from what you're saying, I should be thinking more about it, but I can't really say whether I agree until I have thought more about it.
 

Changing Institutions: Greater Tax Break for True Charity

Cowen: Let me try another question along related lines. One general way in which I think about your book differently than you do is that you think more about giving. I'm a big advocate of giving, I've written a whole book myself on philanthropy, but I think somewhat more in terms of changing institutions. So another thing we might consider doing, along the lines of what you advocate, is to increase the tax benefits of giving. Right now, if you're itemizing deductions and you give $1, you deduct $1 from your taxes. But it wouldn't be very difficult to make it the case that for certain kinds of giving you could deduct $1.10 from your taxes or $1.20. Would you favor this kind of reform?
Singer: I might favor that, if giving were defined more narrowly than we do, in the US anyway, because I know I can deduct $1 from my taxes whether I give to Oxfam America, which I think is an effective organization fighting world poverty, or if I give to the Met so they can buy yet another painting to add to the already super-abundant collection of paintings they have. I don't see why the taxpayer should subsidize me if I decide I want to give to the Met but sure, if I'm giving to Oxfam I think that would be good.
Cowen: So, in other words, you favor a kind of tax cut as a way to help the world's poor. That, in this country, if targeted properly, tax policy, in essence cutting the taxes of rich people, is one of the very best ways to help the world's poor. Would you sign on to that?
Singer: I'm not quite sure why it is ... you seem to have leapt a little from what I was saying and I haven't followed the leap as to why cutting taxes for the rich would be one of the most effective ways of helping the poor. Can you explain that a little more?
Cowen: If we give a greater tax break to charitable donations, and here I mean only true charity, not say a fancy art museum, disproportionately this will benefit wealthy people. Wealthy people have a lot of money. In essence you're cutting their taxes. They're giving more, they may not have a higher level of consumption, but would you be willing to raise your hand and say "I, Peter Singer, think that cutting taxes on the US wealthy is in fact one of the very best things we could do for the world's poor, if we do it the right way"? Yes or no?
Singer: Yes, if the tax break only goes to those of the wealthy who are giving to organizations that are effectively helping the poor, I'll raise my hand to that.
Cowen: OK; I'm glad to hear that.
 

Millennium Villages Skepticism

Cowen: Let me focus on another point of difference between us in the book. When it comes to the effectiveness of aid, I'm not a total skeptic, but I think I'm more skeptical than you are. In a number of places you cite the work of Jeffrey Sachs. Now my view of Sachs is that his projects are actually doing individual human beings a lot of good, but I don't think the return on investment is that high. I think he's improving the health of a lot of people but I don't think he's going to raise any villages, much less countries or continents, out of poverty. Given that my view is that the rate of return on this investment is much lower, and I think that the economics profession as a whole agrees with me, not with Sachs, this to me suggests that to really make a dent in world poverty we would have to give much more than 5% of our incomes, even more than 10%, that we're simply at a point where we can do some good, but that to abolish poverty we would have to engage in a very dramatic redistribution. What's your view on this?
Singer: Firstly, I think, as for whether Sachs is really going to succeed in raising villages out of poverty, I think the data isn't in yet. The Millennium Villages project which he's working on has only been going a few years, I think we need to give it maybe another five years to see whether it's working. That's more or less what he's said. He hopes that the aid will be short term, that the villages will become self-sustaining, the improvements will last, they'll be out of the poverty trap. If that hasn't happened at the end of another five years I'm going to agree with you that we're going to need more, but I think it's really too early to call the result on that one.
Cowen: Take the overall opinion of economists, which is again that Sachs's projects can do good, people in those villages might be better off, but if you're in the middle of, say, a totally corrupt African country which is not democratic, which maybe has been fighting wars, which has an absolutely horrible infrastructure, which has a bureaucracy, a kleptocracy, massive problems, lack of literacy, that maybe you could eliminate infectious diseases or malaria within that village. People will be better off, it's worth doing, but at the end of the day is there really any reason to think, given the last 300 years of thinking and writing on development and economic history that this will at all cure poverty? Doesn't it just mean you'll have poor people without malaria, which is better than poor people with malaria, but they're still essentially poor people?
Singer: If the governments and the situation is as bad as you describe, you're probably right, but of course not all countries ... you describe pretty much a worst case scenario. I think there are a lot of countries where there are poor people which do not have governments which are as bad as you painted. I think in those countries we can hope that people actually will lift themselves out of poverty and I think that's what we need to try to do. Now, you may be right that that's still going to leave poor people in countries that are as bad as you describe, and there is a real question then as to how much we can do to help them, whether giving more will really be enough to help them or because of those governments in those situations there's really nothing much we can do. That will be the dilemma. But I don't think we've got to that point yet because we've not really worked out what we can do for people in the countries where the governments aren't so bad.
 

Chinese Reforms

Cowen: I think you and I are both looking for what are the most highly leveraged ways we can reduce poverty in this world.
Singer: Uh huh
Cowen: If I ask myself, historically, what has been the most successful anti-poverty program in the last century, I look at Communist China, and I would say that the reforms, starting in the late 1970s, have taken at least 300M-400M people, and probably more, from extreme poverty, perhaps starvation, to a situation where a lot of them live quite well or at least have some kind of tolerable lower middle class existence. I think that property rights and institutional reforms are the key to fighting poverty. During that period, the aid China received didn't matter much. That doesn't mean we shouldn't give aid, I'm all for aid, but isn't the big leveraged investment here changing and improving institutions and not giving money?
Singer: I do think that's a really important thing when we can do it. The question is, can we do it? Obviously the Chinese reforms that you refer to really were internally driven, I don't think they were a result of things the West did, unless you talk about the entire global economic system, which China clearly wanted to participate in. So the question is how can we be effective in producing those sorts of changes? In some countries we can come in and help, say countries recovering from civil war, and give some help in establishing good institutions, but I'm not sure what ideas you have about what's a good way to bring about that kind of reform in these countries that will lead everywhere to the sorts of benefits that you refer to in China.
Cowen: In countries like China in a way it's internally driven. It's not that anyone successfully pressured them, but in another way I think it's highly externally driven, that the Chinese, Taiwanese, Koreans, other countries followed the example of Japan. They saw that Japan worked. They saw that an Asian country could rise to moderate wealth or even riches and at some point they decided to copy this in their own way. If we look at Japan, Japan copied the west, so maybe one of the very best, most important things we can do is just ourselves be a beacon of progress: be humane, be tolerant, respect others, be wealthy and just show that it's possible. We shouldn't think of that as a substitute for aid, but maybe that's actually our number one priority. Does that make sense to you?
Singer: That makes sense. I don't know that we have to strive to be more wealthy than we are--well, maybe just right at this moment we need to strive to get back to being as wealthy as we were a year ago perhaps. But I think we are setting that example, undoubtedly. We are showing countries what can be done with reasonably good government, open economies, and I do hope that other countries will follow that. But maybe not all countries can do it. I think that Paul Collier argues in his book that it's going to be difficult for some African countries to get into this game now. There are reasons why it's going to be hard for them to compete with countries that have established positions, have developed markets, have low labor costs. It's not clear to me that this is going to be a path that every poor country can follow.
 

Military Intervention

Cowen: You mentioned Paul Collier. I found his book very interesting. One argument he makes--I would say I'm not, myself, convinced but I'm curious to hear what you think--is that we could do the world a great deal of good by selective military interventions. So take the case of Darfur. A large number of people are suffering, dying. Collier says, or implies, or at least opens the possibility, that we, the United States, the UN, whoever, should just move in and in military terms do something about this. It is again a topic that is not prominent in your book, but it seems that if it can work it's highly leveraged, more leveraged than giving away money. I'm curious as to your views on that.
Singer: I did discuss humanitarian intervention in my earlier book One World and I do support it under the right circumstances. I think, though, we do have to be pretty clear about defining it properly and trying to get support for it. Maybe it would work in Darfur. I think Darfur is quite a large area, relatively thinly populated, and it might take a lot of resources to really protect the people in Darfur. There are underlying issues, too, perhaps about climate change, even, that are causing scarcity in Darfur. But it is possible. I think that Zimbabwe would be another possibility, though maybe, with the changes in the political system, you wouldn't want to do it just now; you'd want to see how that played out for a while. But certainly a year ago you might well have thought that if the South Africans could be persuaded to move in and remove Mugabe that would be a good thing to do. That would have been better, I think, than having a white former colonial power come in, which obviously would have evoked a lot of echoes of returning to a past that Zimbabweans don't want. But I'm not, in principle, opposed to military intervention, I just think we have to be very very careful about the circumstances in which we do it, because obviously it can trigger a lot of violence and bloodshed and produce results that are the opposite of what you and I would both want.
 

Colonialism

Cowen: Do you think the end of colonialism was a good thing or a bad thing for Africa?
Singer: That's a really difficult question. I think, clearly, there were lots of bad things about colonialism, but you would have to say that some countries were definitely better administered and that some people's lives, although they may have had some sort of humiliation, perhaps through not being independent, being ruled by people of a different race, in some ways they were better. It's hard, really, to draw that balance sheet. Independence has certainly not been the unmitigated blessing that people thought it would be at the time.
Cowen: Let's take the premise that with colonialism there would not have been wars between African nations. It's not the case that a British-ruled colony would have attacked a French colony, for instance; it's highly unlikely. So given just that millions have perished from wars alone, wouldn't the Utilitarian view, if you're going to take one, suggest that colonialism was essentially a good idea for Africa, that it was a shame we got rid of it, and that the continent would have been better off under foreign rule, European foreign rule?
Singer: I don't think we can be so sure that it would have continued to be peaceful. After all we did have militant resistance movements, we had the Mau Mau in Kenya, for example. We had other militant resistance movements. It may simply have been that the fact of white rule would have provoked not one colony going to war against another but civil war within some of those countries. If what you're asking is would colonialism, had it been accepted by the people there, without military conflict, would that have been better than some of the consequences we've had in some of these countries, you would have to say undoubtedly yes. But we can't go back and wind back the clock and say "how would it have been if" because we don't really know whether that relative stability and peace would have lasted.
Cowen: If we compare the Mau Mau, say, to the wars in Kenya and Rwanda, it seems unlikely that rebellions against colonial governments would have reached that scope, especially if England, France, other countries, would have been willing to spend more money to create some tolerable form of order. My guess is you would have had a fair number of rebellions but it's highly highly unlikely it would compare to the kind of virtual holocausts we've had in Africa as it stands.
 

Aid without stable government

Singer: I certainly agree that if you look at what's been happening in the Congo, just as one example, or countries like Sierra Leone or Liberia, yes, you could certainly think that it might have been better for those countries.
Cowen: Would you say that Zimbabwe is one example of a country where just giving it money through aid is unlikely to work?
Singer: At present, unless the government changes quite dramatically. Again, as you were saying before, there might be specific things we can do: we may be able to help particular people who have disease or are hungry, but I agree, in the present conditions it's unlikely to lift people out of poverty on any kind of large scale.
Cowen: Let's take a country like Madagascar, which as recently as two or three years ago was touted by the Bush administration, and I don't just mean Republicans, it was touted by many people, as being a kind of model for Africa. Here's a country where we could give a lot of aid, the aid would go to some good purpose, we're making progress, and now Madagascar seems to be in the midst of a civil war and the polity is collapsing, the economy is doing very poorly. How many countries in Africa do you think are there where aid works? Where do you draw the line? What in your opinion is the marginal country that is hopeless?
Singer: Look, I haven't got a list of African countries like that, I must admit. I think there are some countries where things seem to work, which is to say I could name a country, say Mozambique, where aid programs have made a positive difference, or Sierra Leone. Maybe in a month there'll be a coup and you'll be able to tell me that I was wrong. I can't see the future. But there are countries where I think aid has worked, and ones where it hasn't worked. I haven't got a rank ordering and I don't have a cutoff line; I'm sorry, I'm just not sufficiently expert on African politics and conditions to do that.
 

Genetically modifying ourselves to be more moral

Cowen: Let's try some philosophical questions. You're a philosopher, and I've been very influenced by your writings on personal obligation. Apart from the practical issue that we can give some money and have it do good, there's a deeper philosophical question of how far those obligations to give money to other people extend. Is it a nice thing we could do, or are we actually morally required to do so? What I see in your book is a tendency to say something like "people, whether we like it or not, will be more committed to their own life projects than to giving money to others and we need to work within that constraint". I think we would both agree with that, but then we get to the deeper question about human nature: do we in fact like that fact? Is that a fact about human nature you're comfortable with, or do you feel it represents a human imperfection? If we could imagine an alternative world, where people were, say, only 30% as committed to their personal projects as are the people we know, say the world is more like, in some ways, an ant colony, people are committed to the greater good of the species. Would that be a positive change in human nature or a negative change?
Singer: Of course, if you have the image of an ant colony everyone's going to say "that's horrible, that's negative", but I think that's a pejorative image for what you're really asking ...
Cowen: No, no, I don't mean a colony in a negative sense. People would cooperate more, ants aren't very bright, we would do an ant colony much better than the ants do. ...
Singer: But we'd also be thinking differently, right? What people don't like about ant colonies is ants don't think for themselves. What I would like is a society in which people thought for themselves and voluntarily decided that one of the most satisfying and fulfilling things they could do would be to put more of their effort and more of their energy into helping people elsewhere in need. If that's the question you're asking, then yes, I think it would be a better world if people were readier to make those concerns their own projects.
Cowen: Let's say genetic engineering is possible, which is now not so far off on the major scale, and your daughter were having a daughter, and she asked you "daddy, should I program my daughter so that she's willing to sell her baby and take the money and send it to Haitians to save ten babies in Haiti". Would you recommend to her "yes, you should program the genes of your baby so she's that way"?
Singer: So she's going to sell her baby? What's going to happen to the baby?
Cowen: She's going to sell it to some wealthy white couple that's infertile, they live in the Pacific Northwest, they'll take fine care of it, she'll receive $1M and save, say, 30 lives in Haiti. You've recommended that your granddaughter be programmed to act this way. Would you recommend that?
Singer: And so she's going to be happy with that? She's not going to suffer as current people would the pangs of separation from their daughter or the agonies of not knowing what's happened to my daughter? She's going to feel perfectly comfortable with that, and she's going to feel good about the fact that she's helped 30 babies in Haiti to have a decent life? Is that the assumption?
Cowen: We can do it that way, but keep in mind that even if she's unhappy that's outweighed by the 30 Haitian lives which are saved. Either way you want.
Singer: Right, but you're asking me and I'm like normal human beings, I haven't been reprogrammed, so I care about my daughter or my granddaughter, or whoever this is.
Cowen: Ok, she'll be happy.
Singer: Ok, good. Then I think I'm on board with your program.
Cowen: So you would want people to be much more cooperative in this way, if we could manage it in some way that won't wreck their psyches.
Singer: That's right.
Cowen: Do you think people would have a moral obligation to genetically reprogram themselves, or it would just be a nice thing they could do if they felt so inclined?
Singer: I think if we really had a system that was as good as you're saying, would lead to as good consequences, and would leave people happy, that's something they ought to do. Because that would really be a way of making a huge difference to the world. They would be wrong not to take advantage of this, given the benefits it involves and the absence, it seems, as described, of any major drawbacks.
 

Problem areas in Utilitarianism

Cowen: What do you think is the biggest problem area in Utilitarian moral theory?
Singer: The biggest problem area? One problem people are talking about that's relevant to what I'm talking about is that Utilitarian moral theory leads to highly demanding consequences that people reject. So that's one problem. The second problem, of course, is that it requires very complex calculations because we don't have a set of simple moral rules that say "don't do this, do that". We have to work out what the consequences of our actions are. As in this area we're talking about, what kind of aid is effective, what will overcome world poverty, it's very difficult to work out what the consequences are, and it's sometimes very difficult to know what's the right thing to do.
Cowen: But you think we nevertheless should do what we think is best, no matter how imperfect that guess may be?
Singer: Yeah, I don't really see what else we're supposed to do. It would seem to me to be wrong to say "because I can't calculate the consequences I'm just going to follow this simple set of rules". Granted, I can't calculate the consequences, but why follow this simple set of rules? Where do they come from? I don't believe that we have any god-given rules. I don't think that our moral intuitions are a good source of rules, because they're the product of our evolutionary history, which may not be appropriate for the moment that we're in. So, despite the difficulty, I don't really see what the alternative is to trying our best to figure out what the expected utility is.
 

Is Utilitarianism independent?

Cowen: Let me toss up a classic criticism of Utilitarianism. I'm curious to see what you say. The criticism is this, that neither pain nor pleasure is a homogeneous thing. There are many different kinds of pains and pleasures and they're not strictly commensurable in terms of any natural unit. So when we're comparing pain and pleasure that's a fine thing to do, but in fact we're calling upon other values. So Utilitarianism is in this sense parasitic upon some deeper sense of philosophic pluralism, and we're not pure utilitarians at all. But that being the case, why don't we sometimes just allow an intuitive sense of right or wrong to override what would otherwise be the Utilitarian conclusion, since Utilitarianism itself cannot avoid value judgments?
Singer: I think the form of Utilitarianism that you're describing is Hedonistic Utilitarianism, because you were talking about pleasure and pain and you were suggesting that pleasure is a whole range of different things. The form that I hold is Preference Utilitarianism, which looks at people's preferences and tries to assess the importance of the preference for them. Now this is still not an easy thing to know, in fact in some ways you might say it's harder than getting measures of pleasure and pain, but I think it already embraces the pluralism that you're talking about in terms of people's preferences, people's understanding of what it is they're choosing and why. And so I don't think it's up to us to go back and try to pull in other kinds of values that we intuitively hold over the top of people's preferences. We can do it for ourselves, each of us can say "what are my preferences", "I value this", "I value the autonomous life over the happy life, and so that's what I'm going to choose". Of course, when I weigh out your preferences I should say "well here we give weight for the preference for an autonomous life and here we give weight to the preference for the pleasant life" but in making the final judgment, in which we take everyone's preferences into account, it would be wrong for us to just pull out some intuitive values and somehow give them weight in the overall calculation because then we're giving more weight to our preferences than we're giving to those of others.
Cowen: But doesn't preference utilitarianism itself require some means of aggregation? The means we use for weighing different clashing preferences can require some kind of value judgments above and beyond Utilitarianism?
Singer: I don't quite see why that should be so. While acknowledging the practical difficulties of actually weighing up and calculating all the preferences, I fail to see why it involves other values apart from the preferences themselves.
 

Peter Singer: Jewish Moralist

Cowen: Let me try giving you my reading of Peter Singer, which is highly speculative, and I'm not even saying it's true, it's just what I think when I read you, especially the later Peter Singer, and I'm just curious to hear your reaction to it. My reading is this: that Peter Singer stands in a long and great tradition of what I would call "Jewish moralists" who draw upon Jewish moral teachings in somehow asking for or demanding a better world. Someone who stands in the Jewish moralist tradition can nonetheless be quite a secular thinker, but your later works tend more and more to me to reflect this initial upbringing. You're a kind of secular Talmudic scholar of Utilitarianism, trying to do Mishna on the classic notion of human well being and bring to the world this kind of idea that we all have obligations to do things that make other people better off, that you're very much out of the classic European, Austrian, Viennese, ultimately Biblical tradition about our obligations to the world. What do you say?
Singer: I'm amused, I have to say. I think it's interesting. You're right that I come from a Jewish family. It was a pretty secular Jewish family, so I never got as a child, actually, a lot of Jewish teaching, never went to Jewish Sunday school, I never learned Hebrew, I never had a Bar Mitzvah, I never read the Torah. So if I had got some of that it must have come kind of at a distance through, sort of, osmosis, as you say this vaguely Jewish Viennese culture that certainly was part of my family background but was very much secularized. The interesting thing to speculate is whether I'm doing something that, say, someone out of the British Utilitarian tradition, the tradition of Bentham and Mill and Sidgwick could not have done. What are the distinctive features of my version of Utilitarianism that they would have rejected? And if there is something, it probably is attributable to that background you mention. But I'd be interested in your answer, what do you think that there is in my view that Bentham or Mill or Sidgwick could not have whole-heartedly endorsed?
Cowen: I'm not sure if there's anything, but I think the mere fact that it is you who is doing it nonetheless reflects something about this. I think of you as one of the world's greatest theologians, in a way, having this understanding of the quality of mercy, which is put into a secular framework, but what the intuitions really consist of, I think none of us ever really knows where our moral intuitions come from.
Singer: Ok. Well, look. It's a possible view, as I think you said introducing it, you don't know whether it's true but it's an interesting view of me and where I come from. You've put it out there. I find it hard to look internally, so I'll leave it to others to judge which of the elements of my background they see having formed me most strongly.
 

What charities does Peter Singer give to?

Cowen: Let me try a personal question but feel free to pass on this one. Let's say someone has read your book and they say "I'm on board, Peter, please tell me what charities you give to." You mentioned Oxfam, but would you have anything specific you'd like to say? And why?
Singer: I do support Oxfam substantially, I've got a long relationship with different Oxfams. They're actually autonomous national groups that work together, so when I first became interested in this issue as a graduate student, way back in Oxford in the '70s, I was living in Oxford and that's the headquarters of the original Oxfam, Oxfam UK, so I got in touch with them and remain connected with their office. Then I went to Australia and was involved with Oxfam Australia, now I'm involved with Oxfam America. I like what they do: good grassroots work, I've seen some of that, helping the most underprivileged people, plus they're not afraid to be a real advocate for the poor, to tackle big mining companies that are pushing the poor off their land, tackle the US government and its agricultural subsidies. That's one reason that I like them. But there are many good organizations around. I've recently started supporting GiveWell, you can find them on givewell.net, because they're doing something that I'm sure you would support: they're trying to get aid organizations to demonstrate their efficacy, to be more transparent about why they support some projects rather than others, and to show how much it costs for them to achieve their goals, whether those goals are saving lives or lifting people out of poverty. And so it's kind of at a meta level, saying I want to improve aid by helping organizations that are trying to do that. I think that's a really highly leveraged way of making an impact on what's going to happen in aid over the next couple of decades.
 

Zero-Overhead Giving

Cowen: I'm a big fan of what I call zero overhead giving, that is I send monetary transfers to poor people, maybe I've met them on my travels, by Western Union. I don't follow up, I don't monitor, there's no tax deduction, there's no overhead, it's just money from me to them. What do you think of that as a way of giving?
Singer: Interesting. I suppose I would like to have some followup. I would worry that I was getting conned. Now, you may have a good sense of who's genuine and who's not, but we all know there are con artists working here in New York City, in other cities in America, who could tell you wonderful stories about how they just need the bus fare home and then they'll be fine, and you give them the bus fare home, and you believe them, and then next month they come up to you in the same spot and tell you the same story. So I would like some kind of auditing. But let me just say, for people who do want to give directly, not with zero overhead but with maybe 10% overhead: if you go to Kiva, kiva.org, you can give a microloan to someone who is online, tells you what they want. You'll eventually, mostly, get your money back and you can lend it to someone else. I think that's quite an effective way of helping people too.
Cowen: How do you know a good charity when you see one? Is low overhead really a good measure? Those numbers are very easily manipulated.
Singer: I agree. No, "low overhead" is not the right measure. Firstly, as you say, the numbers are manipulated. Second, look, you could cut your overhead by cutting your evaluation, exactly what we were talking about. You could say "look, I'm not going to do any followup or evaluation, I'm just going to do basically what you said: hand out money to poor people." That way you can get your overhead down, but are you actually doing the most good? I think you don't know that until you do have some people in the field who are in touch with what's been happening and do follow up. So I'm looking for the kind of demonstrated effectiveness that you can find in the reports from GiveWell at givewell.net rather than just checking how much of it goes to overheads and administration and how much of it doesn't.
Cowen: Keep in mind, Utilitarian calculations are very difficult, as we discussed a few minutes ago, but you don't have to listen to con stories from con men. Just fly to an Indian village, ask for people's names, get the village phone number, pick names of people who appear to be poor, they're not expecting you to show up, and send them some money. It seems to me if there's anything where you would think the chance of this doing good is really quite high it would be just sending money, and even well run charities have pretty high overhead, and you can give the money directly. Western Union has a bit of overhead, but it's relatively low. Why not have this method replace a lot of charitable giving? Because we know there's massive poverty, we know there's people who need to eat, and if someone needs to eat and you give them money, they're going to spend it on food, no?
Singer: But you can't say there's no overhead if you, say, fly to an Indian village. There's a lot of overhead, unless you're a very wealthy person. The cost of your trip, not to mention your time, is a very substantial overhead on the amount that you're giving.
Cowen: But say you're traveling anyway. You take trips, as it is, right? You go to poor countries, for other reasons. You could do a side trip to a poorer part of a city, in Calcutta, it would take you an hour, maybe, it wouldn't take much time. I would think at the margin there's a way for it to be quite cheap.
Singer: It may be, and the other thing you have to consider is whether putting money directly in the hands of people, say, is better than bringing in a drill to provide water for an entire village where presently they have to walk two hours to carry water from a river and that water's polluted. Maybe some sort of structural changes like that are going to help them more than just putting money in the hands of individuals.
Cowen: Keep in mind, you're a Preference Utilitarian. That doesn't mean public goods can't be more valuable, but the tendency of a Preference Utilitarian should be to just give people resources and let them do what they want, no?
Singer: I think that's an empirical question. As you say it will depend whether they will actually satisfy their preferences more by individual action or whether there's a kind of cooperative dilemma situation here, that actually they could achieve more good by cooperating, but maybe their culture is such that they don't cooperate unless there's some outside stimulus to get them to do so.
 

Moral Intuitions

Cowen: Here's a philosophical question again: do you trust your own moral intuitions?
Singer: No, not really. Over a long time period, I guess, I've thought about them and reflected on them, and I've dropped some or they've faded, so maybe now I'm somewhat more comfortable with them, but no, I couldn't really say that I trust them as a whole.
Cowen: What's the moral intuition that you have which you trust least?
Singer: That's a good question. I suppose, the intuitions that you have are that you ... I have intuitions about equality and fairness that make me want to go for more egalitarian solutions and, yet, I'm not sure whether they are really the right thing to do, so I'm somewhat critical of them but I'm still drawn by them to some extent. Obviously things about equality can have Utilitarian benefits if we accept laws of diminishing marginal utility and so on, and I would like to say that's the only sense in which I support equality, but I'm not sure that my intuitions are not actually more egalitarian than they should be, for a utilitarian.
 

Improving the world through commerce

Cowen: Let's say I'm an 18 year old and I'm in college, and I've read your book and I'm more or less convinced by it, and I say to you "well what I've decided to do is I'm going to have a career in the cell phone industry because I see that cell phones are revolutionizing Africa and making many people much better off. I'm not going to give a dime to poverty but I'm going to work my hardest to become a millionaire by making cheaper and better cell phones." What do you say to me?
Singer: Well, making cheaper and better cell phones may be great for Africa, and while you're building up your business, of course, you want to reinvest your capital and make the business bigger, but are you going to get to a point, at some stage in your life, where you'll have a lot of money, where you've done your work of providing the cheap cell phones, what are you going to do with that money? I think that's still, for a Utilitarian, a relevant question. It's the kind of question that Warren Buffett asked himself. He accumulated a lot of money and said "look, I can make this money earn money faster than anyone else, so I'm going to wait until I'm old before giving it away." And that was a good thing, I guess, although now we might wish he'd given it away last year rather than this year.
Cowen: That's right, but let's say I never give a dime, I've accumulated a fortune of $200M, I've done a lot for the cell phone industry. Am I a better person than someone who's earned $40K/year and every year given 15% of it away to the poor in India?
Singer: Well, I'm not sure that you're a ... "better person" asks for a judgment about the character of the agent. I think it's quite possible you've done more good for the world, and you should be congratulated on the good that you've done for the world. We do tend to judge people by their intentions, and your intentions are a little suspect because, although you've done a lot of good for the cellphone industry and maybe for Africans, you've still got this $200M. Would you really be a lot happier with $200M than with $100M, or $10M, say? And if not, then why not, in addition to the benefits you've conferred on people, also use that $190M for something that will help people?
Cowen: If you're a Utilitarian, isn't it a little irrational to judge people by their intentions? You're retreating to this "we". "We" judge people by their intentions. You're not willing to say "I do" because that would make you inconsistent. Why not just say "Utility is what matters, I'm a Utilitarian, this person did more for the poor, this person is a better person than the one who gave a lot to charity". It's not my personal view, I'm less of a strict Utilitarian, but why not indeed embrace that conclusion rather than distance yourself from it?
Singer: Because, as a Utilitarian, praise and blame have a function, to encourage people to do good and not to do things that are bad---
Cowen: --This isn't social, this is your true view, all things considered, it's not what you say publicly to incentivise people. It's the "what you really think" question. Like, all the viewers need to turn off their BloggingHeads TV, and then you can tell me what you really think and then turn it back on again.
Singer: --but we are on BloggingHeads TV, they haven't turned it off--
Cowen: --what would you say?
Singer: Look, if I'm talking to the 18 year old, and the 18 year old is saying "look, I have these two career options. One is I do this, I confer all these benefits by developing cell phones but at the end I end up pretty well off with my $200M and I don't give it away, and the other is I earn the $40K/year and give away whatever the percentage was", and we assume, as you said, the benefits are much less. So I'm going to tell the 18 year old to do the thing that will produce the greatest benefits. That's true. Even when he gets to 60 and he has the $200M I'm still going to think, privately, that I gave him the right advice, that was the right thing to do, I'm glad he did it. So if that's what you're asking me, that will be my judgment, and in that sense he's a better person than he would have been if he had just earned the $40K and given a lot of it away.
Cowen: You think a Utilitarian has to be a kind of Straussian and embrace certain kinds of public lies to incentivise people?
Singer: I think that's a really interesting issue. Yeah, I would say he has to be a Sidgwickian. I prefer being a Sidgwickian to a Straussian, just because Straussianism has a rather bad flavor to it after the Straussians were involved in the Bush administration. You could say that the Iraq War conspiracy was kind of Straussian. But, of course, Henry Sidgwick talked about that, he said that for a Utilitarian it is sometimes going to be the case that you should do good, but you need to do it secretly, because if you talk publicly about what you're doing this would set an example that would be misleading to others and would lead to bad consequences. I think that's true, and I think for a Utilitarian it's inevitable that there will sometimes be circumstances in which that's the case.
 

What makes Peter Singer happy?

Cowen: Let me try another personal question, again feel free to pass. If you just ask yourself, "what are the things in life that just make me, Peter Singer, happy", what would you say they are? What's your own self account of what makes you happy?
Singer: It's mixed. For example, I've been touring, talking about this book. I think the book has the potential to do good in the world. I'm happy when I see that people are responding to the book. Somebody told me last night at a dinner that they'd read the book and they'd told an aid organization that they support to find a village where they could support the drilling of a well to provide water, and they were going to give whatever it took to drill that well. That makes me happy, that I had this impact. Obviously I've had an impact on people changing their diet too, I have people coming up to me all the time saying "I read Animal Liberation and I became a vegetarian or a vegan and I've been working for animal groups". That makes me happy too, that my work has had that effect, which I think is a beneficial effect. But I don't want to pretend to you or to the BloggingHeads viewers that I'm a saint. I can be happy when I'm on vacation, I can be hiking in the mountains. I love mountain scenery, I had a vacation in Glacier National Park a year or so ago, which was gorgeous. That sort of thing makes me happy, and I admit that it's probably not doing as much good for the world as I could have done if instead of spending the money on that vacation I had given it to Oxfam.
 

Human and animal pleasures

Cowen: I sometimes ask myself, I struggle with this question, I ask "are my own deepest pleasures actually quite primeval ones", basically food, sleep, and sex. In your own writings you've emphasized, correctly, the ties between human beings and non-human animals, and it seems that for other animals these are almost always, maybe always, the deepest pleasures. So I tend to think that for human beings, including ourselves, they're the deepest pleasures as well, and the higher pleasures are worth something, but actually they're somewhat of an epiphenomenon, and what makes us happy is similar to what makes non-humans happy. Do you have a view on this?
Singer: I'm not sure I think that the things you mention ... food and sex are obviously important, actually sleep doesn't make me particularly happy. It's something I need to do or I feel bad, but it doesn't make me happy. But yeah, food and sex are important pleasures in life. Are they more important in my life than the things that I do in work? I wouldn't really say that. I think that food and sex are the kind of desires that get satisfied: I eat a good meal, I enjoy it but I don't want to eat again for a few hours, and even sex has its limits in how much you can do at any particular time and still want more of it. Whereas with the kind of things we're talking about, you can call them "higher pleasures" or "more purposive, fulfillment sort of activities", you can just go on and find that it's better and better. So I think there's a difference in that.
Cowen: So what makes you happy is pretty different from what makes a non-human animal happy, you would say?
Singer: Yes, that's true, I think higher cognitive capacities do make a difference there.
 

Pescatarianism

Cowen: Let me ask you a question about animal welfare. I have been very influenced by a lot of what you've written, but I'm also not a pure vegetarian by any means, and when it comes to morality, for instance, my view is that it's perfectly fine to eat fish. There may be practical reasons, like depleting the oceans, that are an issue, but the mere act of killing and eating a fish I don't find anything wrong with. Do you have a view on this?
Singer: There's certainly, as you say, the environmental aspect, which is getting pretty serious with a lot of fish stocks, but the other thing is there's no humane killing of fish, right? If we buy commercially killed fish they have died pretty horrible deaths. They've suffocated in nets or on the decks of ships, or if they're deep sea fish pulled up by nets they've died of decompression, basically their internal organs exploding as they're pulled up. I would really ... I don't need to eat fish so badly that I need to do that to fish. If I was hungry and had nothing else to eat I would, perhaps, do it, but not given the choices I have.
Cowen: But now you're being much more the Jewish Moralist and less the Utilitarian. Because the Utilitarian would look at the marginal impact and say "most fish die horrible deaths anyway, of malnutrition or they're eaten or something else terrible happens to them". The marginal impact of us killing them to me seems to be basically zero. I'm not even sure a fish's life is happy, and why not just say "it's fine to eat fish"? Should it matter that we make them suffer? It's a very non-Utilitarian way of thinking about it, a very moralizing approach.
Singer: You would need to convince me that in fact they're going to die just as horrible deaths in nature, and I'm not sure that that's true. Probably many of them would get gobbled up by some other fish, and that's probably a lot quicker than what we are doing to them.
Cowen: You have some good arguments against Malthusianism for human beings in your book. My tendency is to think that fish are ruled by a Malthusian model, and being eaten by another fish has to be painful. Maybe it's over quickly, but having your organs burst as you're pulled up out of the water is probably also pretty quick. I would again think that in marginal terms it doesn't matter, but I'm more struck by the fact that it's not your first instinct to view the question in marginal terms. You view us as active agents and ask "are we behaving in some manner which is moral?", and you're imposing a non-Utilitarian theory on our behavior. Is that something you're willing to embrace, or something that was just a mistake?
Singer: Look, I think economists tend to think more in terms of marginal impact than I do, and you may be right that that's something I need to think about more. Look, Tyler, I have to finish unfortunately, I've got another interview I've got to go to, so it's been great talking to you, but I think we're going to have to leave it at that point.
Cowen: Ok, thank you very much.
Singer: Thanks a lot.
Cowen: I've enjoyed it a great deal. Bye!
Singer: Ok, bye!

(I also posted this on my blog.)

34 comments

Classic Tyler, trying to point out to people that the logical conclusions of their beliefs may include ideas that are associated with their ideological enemies. The first half of this diavlog is basically very incisive and interesting Singer-baiting.

satt:

I don't see anything wrong with that debating style in itself; it can be informative to interview someone by highlighting possible ideological blind spots.

However, Cowen pressing Singer on tax cuts for charitable donations did make me roll my eyes. Singer acknowledged a specific point about encouraging donations with a targeted tax cut, and Cowen reshaped Singer's acknowledgement into a sound bite easily misinterpretable as Singer taking a position in the broader, mainstream political argument about cutting taxes on the rich. Then Cowen nudged Singer into approving that rephrasing! If I were Singer I'd have been less polite and explicitly ADBOCed.

What brilliant operationalizations Cowen offers with the baby example and the 18 year old example.

I also love the way Cowen doesn't 'let it go': when discussing whether colonialism might have been better for Africa, Singer offers that the problem is complex because there might have been militant uprisings even under colonial rule. But Cowen doesn't let it go and forces Singer to consider the fact that most probably the scale of uprisings and damage done due to that would not have compared to the damage being done in current civil wars.

Singer is also very quick to update and move on when he realizes the truth of something. Awesome rationality skills from both sides.

Thank you for doing the transcript.

In re making people more cooperative: If people in general were more cooperative, I doubt things would be so bad in Haiti.

What happens when a significant proportion of people are more cooperative, and a significant proportion aren't?

http://lesswrong.com/lw/7e1/rationality_quotes_september_2011/4r01 Note that one of the papers is by Cowen; it's good reading.

Great, thanks! One minor nitpick: The name is Tyler Cow e n, not Cowan.

Fixed!

satt:

The main text is fixed now but the title still says "Peter Singer and Tyler Cowan transcript". Another editorial nitpick: near the end you have a link to Wikipedia's entry on Malthusianism; the link's correct but it's labelled "Mathusianism".

Fixed; thanks!

Cowen: But doesn't preference utilitarianism itself require some means of aggregation? The means we use for weighing different clashing preferences can require some kind of value judgments above and beyond Utilitarianism?

Singer: I don't quite see why that should be so. While acknowledging the practical difficulties of actually weighing up and calculating all the preferences, I fail to see why it involves other values apart from the preferences themselves.

This is very similar to a question I asked in response to this article by Julia Galef. You can find my comment here as well as several (unsuccessful, IMO) attempts to answer it. This worries me somewhat, because many Less Wrongers affirm utilitarianism without even so much as addressing a huge gaping hole at the very core of its logic. It seems to me that utilitarianism hasn't been paying rent for quite some time, but there are no signs that it is about to be evicted.

You're saying that Utilitarianism is fatally flawed because there's no "method (or even a good reason to believe there is such a method) for interpersonal utility comparison", right?

Utilitarians try to maximize some quantity across all people, generally either the sum or average of either happiness or satisfied preferences. These can't be directly measured, so we estimate them as well as we can. For example, to figure out how unhappy having back pain is making someone, you could ask them what probability of success an operation that would cure their back pain (or kill them if it failed) would need to have before they would take it. Questions like this tell us that nearly everyone has the same basic preferences or enjoys the same basic things: really strong preferences for or happiness from getting minimal food, shelter, medical care, etc. Unless we have some reason to believe otherwise we should just add these up across people equally, assuming that having way too little food is as bad for me as it is for you.
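The back-pain elicitation described above is a version of the "standard gamble" from decision theory; it can be sketched in a few lines of Python. All the numbers and the death=0, healthy=1 scale are hypothetical illustrations, not a claim about how any real survey works:

```python
# Fix two reference points: u(healthy) = 1.0 and u(death) = 0.0.
# If someone is indifferent between (a) living with back pain and
# (b) an operation that cures them with probability p but kills them
# with probability 1 - p, then by expected utility:
#   u(back_pain) = p * u(healthy) + (1 - p) * u(death) = p

def utility_from_indifference(p_success):
    """Utility of the current condition on a death=0, healthy=1 scale."""
    u_healthy, u_death = 1.0, 0.0
    return p_success * u_healthy + (1 - p_success) * u_death

# Someone who would only accept the operation at a 90% success rate
# reveals that back pain costs them about 0.1 on this scale:
print(utility_from_indifference(0.9))  # 0.9
```

Note that this pins down each person's utilities only on their own death-to-healthy scale; comparing the results across people is exactly the step the rest of this thread argues about.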

Utilitarianism is a value system. It doesn't pay rent in the same way beliefs do, in anticipated experiences. Instead it pays rent in telling us what to do. This is a much weaker standard, but Utilitarianism clearly meets it.

We can't "just add these [preferences] up across people equally" because utility functions are only defined up to an affine transformation.

You might be able to "just add up" pleasure, on the other hand, though you are then vulnerable to utility monsters, etc.

For a Total Utilitarian it's not a problem to be missing a zero point (unless you're talking about adding/removing people).

For an Average Utilitarian, or a Total Utilitarian considering birth or death, you try to identify the point at which a life is not worth living. You estimate as well as you can.

Multiplication by a constant is an affine transformation. This clearly is a very big problem.

Dre:

But all we want is an ordering of choices, and affine transformations (with a positive multiplicative constant) are order preserving.

jefftk:

Doesn't "multiplication by a constant" mean births and deaths? Which puts you in my second paragraph: you try to figure out at what point it would be better to never have lived at all. The point at which a life is a net negative is not very clear, and many Utilitarians disagree on where it is. I agree that this is a "big problem", though I think I would prefer the phrasing "open question".

Asking people to trade off various goods against risk of death allows you to elicit a utility function with a zero point, where death has zero utility. But such a utility function is only determined up to multiplication by a positive constant. With just this information, we can't even decide how to distribute goods among a population consisting of two people. Depending on how we scale their utility functions, one of them could be a utility monster. If you choose two calibration points for utility functions (say, death and some other outcome O), then you can make interpersonal comparisons of utility — although this comes at the cost of deciding a priori that one person's death is as good as another's, and one person's outcome O is as good as another's, ceteris paribus, independently of their preferences.
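The scaling problem can be shown with a tiny numerical sketch (all utility numbers are hypothetical): rescaling one person's utility function by a positive constant leaves that person's own preference ordering untouched, yet it can flip which allocation maximizes the sum.

```python
# Two people, two ways to split a good. Utility numbers are hypothetical.
# Each person's utility function is only defined up to a positive affine
# transformation: u -> a*u + b with a > 0 preserves their own rankings.

alloc_A = {"alice": 10, "bob": 1}   # Alice gets most of the good
alloc_B = {"alice": 4,  "bob": 5}   # a more even split

def total(alloc, scale_alice=1.0):
    # Rescale Alice's utility by a positive constant; her own
    # preference ordering over allocations is unchanged.
    return scale_alice * alloc["alice"] + alloc["bob"]

# With the original scaling, allocation A has the larger sum:
print(total(alloc_A) > total(alloc_B))            # True  (11 > 9)

# Rescale Alice's utilities by 0.1, an equally valid representation
# of her preferences, and the ranking of the sums flips:
print(total(alloc_A, 0.1) > total(alloc_B, 0.1))  # False (2 < 5.4)
```

So a sum-maximizer's verdict depends on an arbitrary choice of scale, which is why fixing calibration points (as suggested above) or some other interpersonal comparison assumption is needed before "just adding up" means anything.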

Yes, thank you for taking the time to explain.

Utilitarianism is a value system. It doesn't pay rent in the same way beliefs do, in anticipated experiences. Instead it pays rent in telling us what to do. This is a much weaker standard, but Utilitarianism clearly meets it.

I will grant this assumption for the sake of argument. Utilitarianism doesn't have a truth-value or it does have a truth-value, but is only true for those people who prefer it. Why should I prefer utilitarianism? It seems to have several properties that make it look not very appealing compared to other ethical theories (or "value systems").

For example, utilitarianism requires knowing lots of social science and being able to perform very computationally expensive calculations. Alternatively, the Decalogue only requires that you memorise a small list of rules and have the ability to judge when a violation of the rules has occurred (and our minds are already much better optimised for this kind of judgement relative to utility calculations because of our evolutionary history). Also, from my perspective, the Decalogue is preferable for the reason that it is much easier to meet its standard (it actually isn't that hard not to murder people or steal from them, and to take a break once a week), which is much more psychologically appealing than beating yourself up for going to see a movie instead of donating your kidney to a starving child in Africa.

So, why should I adopt utilitarianism rather than God's Commandments, egoism, the Categorical Imperative, or any other ethical theory that I happen to fancy?

Wait, are you really claiming we should choose a moral system based on simplicity alone? And that a system of judging how to treat other people that "requires knowing lots of social science" is too complicated? I'd distrust any way of judging how to treat people that didn't require social science. As for calculations, I agree that we don't have very good ways to quantify other people's happiness and suffering (or even our own), but our best guess is better than throwing all the data out and going with arbitrary rules like commandments.

The categorical imperative is nice if you get to make the rules for everyone, but none of us do. Utilitarianism appeals to me because I believe I have worth and other people have worth, and I should do things that take that into account.

Wait, are you really claiming we should choose a moral system based on simplicity alone?

Jayson's point is that a moral system so complicated that you can't figure out whether a given action is moral isn't very useful.

Nutrition is also impossible to perfectly understand, but I take my best guess and know not to eat rocks. Choosing arbitrary rules is not a good alternative to doing your best at rules you don't fully understand.

Nutrition is also impossible to perfectly understand, but I take my best guess and know not to eat rocks. Choosing arbitrary rules is not a good alternative to doing your best at rules you don't fully understand.

How would you know whether utilitarianism is telling you to do the right thing or not? What experiment would you run? On Less Wrong these are supposed to be basic questions you may ask of any belief. Why is it okay to place utilitarianism in a non-overlapping magisteria (NOMA), but not, say, religion?

I am simply pointing out that utilitarianism doesn't meet Less Wrong's epistemic standards and that if utilitarianism is mere personal preference your arguments are no more persuasive to me than a chocolate-eater's would be to a vanilla-eater (except, in this case, chocolate (utilitarianism) is more expensive than vanilla (10 Commandments)).

Also, the Decalogue is not an arbitrary set of rules. We have quite good evidence that it is adaptive in many different environments.

Sorry, I was going in the wrong direction. You're right that utilitarianism isn't a tool, but a descriptor of what I value.

I care about both my wellbeing and my husband's wellbeing. No moral system spells out how to balance these things - the Decalogue merely forbids killing him or cheating on him, but doesn't address whether it's permissible to turn on the light while he's trying to sleep or if I should dress in the dark instead. Should I say, "balancing multiple people's needs is too computationally costly" and give up on the whole project?

When a computation gets too maddening, maybe so. Said husband (jkaufman) and I value our own wellbeing, and we also value the lives of strangers. We give some of our money to buy mosquito nets for strangers, but we don't have a perfect way to calculate how much, and at points it has been maddening to choose. So we pick an amount, somewhat arbitrarily, and go with it.

Picking a simpler system might minimize thought required on my part, but it wouldn't maximize what I want to maximize.

Sorry, I was going in the wrong direction. You're right that utilitarianism isn't a tool, but a descriptor of what I value.

So, utilitarianism isn't true, it is a matter of taste (preferences, values, etc...)? I'm fine with that. The problem I see here is this: neither I nor anyone I have ever met actually has preferences that are isomorphic to utilitarianism (I am not including you, because I do not believe you when you say that utilitarianism describes your value system; I will explain why below).

I care about both my wellbeing and my husband's wellbeing. No moral system spells out how to balance these things - the Decalogue merely forbids killing him or cheating on him, but doesn't address whether it's permissible to turn on the light while he's trying to sleep or if I should dress in the dark instead. Should I say, "balancing multiple people's needs is too computationally costly" and give up on the whole project?

This is not a reason to adopt utilitarianism relative to alternative moral theories. Why? Because utilitarianism is not required in order to balance some people's interests against others'. Altruism does not require weighing everyone in your preference function equally, but utilitarianism does. Even egoists (typically) have friends that they care about. The motto of utilitarianism is "the greatest good for the greatest number", not "the greatest good for me and the people I care most about". If you have ever purchased a birthday present for, say, your husband instead of feeding the hungry (who would have gotten more utility from those particular resources), then to that extent your values are not utilitarian (as demonstrated by WARP, the weak axiom of revealed preference).

When a computation gets too maddening, maybe so. Said husband (jkaufman) and I value our own wellbeing, and we also value the lives of strangers. We give some of our money to buy mosquito nets for strangers, but we don't have a perfect way to calculate how much, and at points it has been maddening to choose. So we pick an amount, somewhat arbitrarily, and go with it.

Even if you could measure utility perfectly and perform rock-solid interpersonal utility calculations, I suspect that you would still not weigh your own well-being (nor your husband, friends, etc...) equally with that of random strangers. If I am right about this, then your defence of utilitarianism as your own personal system of value fails on the ground that it is a false claim about a particular person's preferences (namely, you).

In summary, I find utilitarianism as a proposition and utilitarianism as a value system very unpersuasive. As for the former, I have requested of sophisticated and knowledgeable utilitarians that they tell me what experiences I should anticipate in the world if utilitarianism is true (and that I should not anticipate if other, contradictory, moral theories were true) and, so far, they have been unable to do so. Propositions of this kind (meaningless or metaphysical propositions) don't ordinarily warrant wasting much time thinking about them. As for the latter, according to my revealed preferences, utilitarianism does not describe my preferences at all accurately, so is not much use for determining how to act. Simply, it is not, in fact, my value system.

So, utilitarianism isn't true, it is a matter of taste

I don't understand how "true" applies to a matter of taste any more than a taste for chocolate is "truer" than any other.

utilitarianism is not required in order to balance some people's interests against others'.

There are others, but this is the one that seems best to me.

If you have ever purchased a birthday present for, say, your husband instead of feeding the hungry

This is the type of decision we found maddening, which is why we currently have firm charity and non-charity budgets. Before that system I did spend money on non-necessities, and I felt terrible about it. So you're correct that I have other preferences besides utilitarianism.

I don't think it's fair or accurate to say "If you ever spent any resources on anything other than what you say you prefer, it's not really your preference." I believe people can prefer multiple things at once. I value the greatest good for the greatest number, and if I could redesign myself as a perfect person, I would always act on that preference. But as a mammal, yes, I also have a drive to care for me and mine more than strangers. When I've tried to suppress that entirely, I was very unhappy.

I think a pragmatic utilitarian takes into account the fact that we are mammals, and that at some point we'll probably break down if we don't satisfy our other preferences a little. I try to balance it at a point where I can sustain what I'm doing for the rest of my life.

I came late to this whole philosophy thing, so it took me a while to find out "utilitarianism" is what people called what I was trying to do. The name isn't really important to me, so it may be that I've been using it wrong or we have different definitions of what counts as real utilitarianism.

So, utilitarianism isn't true, it is a matter of taste (preferences, values, etc...)?

Saying utilitarianism isn't true because some people aren't automatically motivated to follow it is like saying that grass isn't green because some people wish it was purple. If you don't want to follow utilitarian ethics that doesn't mean they aren't true. It just means that you're not nearly as good a person as someone who does. If you genuinely want to be a bad person then nothing can change your mind, but most human beings place at least some value on morality.

You're confusing moral truth with motivational internalism. Motivational internalism states that moral knowledge is intrinsically motivating: simply knowing something is good and right motivates a rational entity to do it. That's obviously false.

Its opposite is motivational externalism, which states that we are motivated to act morally by our moral emotions (e.g. sympathy, compassion) and willpower. Motivational externalism seems obviously correct to me. That in turn indicates that people will often act immorally if their willpower, compassion, and other moral emotions are depleted, even if they know intellectually that their behavior is less moral than it could be.

If you have ever purchased a birthday present for, say, your husband instead of feeding the hungry (who would have gotten more utility from those particular resources), then to that extent your values are not utilitarian (as demonstrated by WARP).

There is a vast, vast amount of writing at Less Wrong on the fact that people's behavior and their values often fail to coincide. Have you never read anything on the topic of "akrasia?" Revealed preference is moderately informative in regards to people's values, but it is nowhere near 100% reliable. If someone talks about how utilitarianism is correct, but often fails to act in utilitarian ways, it is highly likely they are suffering from akrasia and lack the willpower to act on their values.

Even if you could measure utility perfectly and perform rock-solid interpersonal utility calculations, I suspect that you would still not weigh your own well-being (nor your husband, friends, etc...) equally with that of random strangers. If I am right about this, then your defence of utilitarianism as your own personal system of value fails on the ground that it is a false claim about a particular person's preferences (namely, you).

You don't seem to understand the difference between categorical and incremental preferences. If juliawise spends 50% of her time doing selfish stuff and 50% of her time doing utilitarian stuff that doesn't mean she has no preference for utilitarianism. That would be like saying that I don't have a preference for pizza because I sometimes eat pizza and sometimes eat tacos.

Furthermore, I expect that if juliawise was given a magic drug that completely removed her akrasia she would behave in a much more utilitarian fashion.

As for the former, I have requested of sophisticated and knowledgeable utilitarians that they tell me what experiences I should anticipate in the world if utilitarianism is true (and that I should not anticipate if other, contradictory, moral theories were true) and, so far, they have been unable to do so.

If utilitarianism was true we could expect to see a correlation between willpower and morally positive behavior. This appears to be true, in fact such behaviors are lumped together into the trait "conscientiousness" because they are correlated.

If utilitarianism was true then deontological rule systems would be vulnerable to Dutch-booking, while utilitarianism would not be. This appears to be true.

If utilitarianism was true then it would be unfair for multiple people to have different utility levels, all else being equal. This is practically tautological.

If utilitarianism was true then goodness would consist primarily of doing things that benefit yourself and others. Again, this is practically tautological.

Now, these pieces of evidence don't necessarily point to utilitarianism, other types of consequentialist theories might also explain them. But they are informative.

As for the latter, according to my revealed preferences, utilitarianism does not describe my preferences at all accurately, so is not much use for determining how to act. Simply, it is not, in fact, my value system.

Again, ethical systems are not intrinsically motivating. If you don't want to follow utilitarianism then that doesn't mean it's not true, it just means that you're a person who sometimes treats other people unfairly and badly. Again, if that doesn't bother you then there are no universally compelling arguments. But if you're a reasonably normal human it might bother you a little and make you want to find a consistent system to guide you in your attempts to behave better. Like utilitarianism.

What alternative to utilitarianism are you proposing? Avoiding taking into account multiple people's welfare? Even a perfect egoist still needs to weigh the welfare of different possible future selves. If you zoom in enough, arbitrariness is everywhere, but "arbitrariness is everywhere, arbitrariness, arbitrariness!" is not a policy. To the extent that our "true" preferences about how to compare welfare have structure, we can try to capture that structure in principles; to the extent that they don't have structure, picking arbitrary principles isn't worse than picking arbitrary actions.

Your preferences tell you how to aggregate the preferences of everyone else.

Edit: This post was downvoted to -1 when I came to it, so I thought I'd clarify. It's since been voted back up to 0, but I just finished writing the clarification, so...

Your preferences are all that you care about (by definition). So you only care about the preferences of others to the extent that their preferences are a component of your own preferences. Now if you claim preference utilitarianism is true, you could be making one of two distinct claims:

  • "My preferences state that I should maximize the suitably aggregated preferences of all people/relevant agents," or
  • "The preferences of each human state that they should maximize the suitably aggregated preferences of all people/relevant agents."

In both cases, some "suitable aggregation" has to be chosen and which agents are relevant has to be chosen. The latter is actually a sub-problem of the former: set weights of zero for non-relevant agents in the aggregation. So how does the utilitarian aggregate? Well, that depends on what the utilitarian cares about, quite literally. What does the utilitarian's preferences say? Maximize average utility? Total utility? Ultimately what the utilitarian should be maximizing comes back to her own preferences (or the collective preferences of humanity if the utilitarian is making the claim that our preferences are all the same). Going back to the utilitarian's own utility function also (potentially) deals with things like utility monsters, how to deal with the preferences of the dead and the potentially-alive and so forth.
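To make the dependence on the choice of aggregation concrete, here is a minimal sketch with hypothetical utility numbers (nothing from the discussion above); it shows how total, average, and weighted aggregation can rank the same two worlds differently, and how zero weights exclude "non-relevant" agents:

```python
# Illustrative only: hypothetical utility numbers, hypothetical worlds.
# Which world is "better" depends entirely on the aggregation rule,
# which is exactly the choice the utilitarian's own preferences must make.

def total(utils):
    """Total utilitarianism: sum everyone's utility."""
    return sum(utils)

def average(utils):
    """Average utilitarianism: mean utility per agent."""
    return sum(utils) / len(utils)

def weighted(utils, weights):
    """Weighted sum; a weight of zero excludes that agent entirely."""
    return sum(u * w for u, w in zip(utils, weights))

world_a = [10, 10]       # two well-off people
world_b = [10, 10, 1]    # the same two, plus one barely-happy person

# Total prefers world B (21 > 20); average prefers world A (10.0 > 7.0).
print(total(world_a), total(world_b))
print(average(world_a), average(world_b))

# Excluding the third agent (weight 0) makes the worlds tie.
print(weighted(world_b, [1, 1, 0]))
```

The point of the sketch is just that "maximize aggregated preferences" is underspecified until the aggregation rule and the weights are chosen, and that choice comes from the chooser's own preferences.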

If my preferences are such that only what happens to me matters, I don't think you can call me a "preference Utilitarian".

Right, your preferences tell you whether you're a utilitarian or not in the first place.

I've recently started supporting GiveWell, you can find them on givewell.net, because they're doing something that I'm sure you would support: they're trying to get aid organizations to demonstrate their efficacy, to be more transparent about why they support some projects rather than others, and to show how much it costs for them to achieve their goals, whether those goals are saving lives or lifting people out of poverty. And so it's kind of at a meta level, saying I want to improve aid by helping organizations that are trying to do that. I think that's a really highly leveraged way of making an impact on what's going to happen in aid over the next couple of decades.

This made me wonder if he knows about existential risks. This page (2007) suggests that he does:

I would also include the issue of what Nick Bostrom calls "existential risks" – how should we act in regard to risks, even very small ones, to the future existence of the entire human species? Arguably, all other issues pale into insignificance when we consider the risk of extinction of our species,

I think it's safe to say that he knows about existential risks.

Thanks for doing this. I found it very memorable when it first aired years ago.