Also, whoever saves a person to live another fifty years, it is as if they had saved fifty people to live one more year. Whoever saves someone who very much enjoys life, it is as if they saved many people who are not sure they really want to live. And whoever creates a life that would not have otherwise existed, it is as if they saved someone who had an entire lifetime yet to live.
Are we morally obligated to have children, and as many as we can?
Cost of a first-world child is... (checks random Google result) $180,000 to get them to age 18. Cost of saving a kid in Africa from dying of malaria is ~$1,000.
Right now having children is massively selfish, because there are options that are more than TWO orders of magnitude more effective. It'd be like blowing up the train in order to save the deaf kids from the original post :)
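For concreteness, here's a back-of-the-envelope sketch of that comparison in Python, using the rough figures quoted above (both numbers are only ballpark estimates):

```python
import math

# Rough cost-effectiveness comparison, using the ballpark figures quoted above.
cost_to_raise_first_world_child = 180_000  # USD, to age 18 (rough Google figure)
cost_to_save_life_from_malaria = 1_000     # USD per life saved (rough figure)

ratio = cost_to_raise_first_world_child / cost_to_save_life_from_malaria
print(f"Raising one child costs as much as saving ~{ratio:.0f} lives")  # ~180
print(f"That is ~{math.log10(ratio):.1f} orders of magnitude")          # ~2.3
```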
Interesting to choose a rich philanthropist for your analogy. Very wealthy people give less, as a percentage of their wealth and income, than people of much more limited means. For wealthy philanthropists, the value from giving may be in status from the publicity of large gifts.
Robin's comment raises the interesting question of whether creating a new life is as good as saving one. It definitely seems to be easier to create a new one, at least at first (the long term effort is probably greater). Most people manage to create a new life or two, but probably never save any. We don't tend to celebrate new-life creators as much as we do life-savers, perhaps because it is seen as too easy.
No. It's way, way easier to save one. According to the Disease Control Priorities Project (http://tinyurl.com/y9wpk5e) you can save lives for about $3 per year. That's, what, $225 for a whole life? Creating a life requires nine months of pregnancy, during which you can't work as well, and you have to pay for food while you're eating for two, and that's just assuming you give the child up for adoption. You also can only do it once every nine months, and you have to be a girl, whereas you can save a life every time you earn $225.
I'm 99% sure you're missing the point.
Falling down a hill is quite painful. However, once you start falling down a hill, you're going to keep falling until you reach the bottom, unless you make a conscious and coordinated decision to stop yourself, if you are even able to. In that sense, once you start falling down a hill, it is very easy to keep falling, because it's what happens if you don't try to change things.
In the same sense (and I apologize for the unpleasant metaphor) pregnancy is easy; once a woman is pregnant, barring miscarriage, she's gonna have a kid. It's going to be painful and at times miserable, but it's going to happen. I agree with you that there's less disutility experienced if she has an abortion, but she has to make a conscious choice to do that, go to a clinic, and pay a bill. It may be the more pleasant way out, but, in this context, it's not easier; it doesn't really happen by accident (excepting spontaneous miscarriage, which is admittedly fairly common, but besides the point).
Everything you describe is something she endures; there's no willpower to it. This is in contrast with saving a life as discussed earlier, which requires a deliberate, conscious de...
In a Big World, which this one appears to be on at least three counts (spatially infinite open universe, inflationary scenario in Standard Model, and Everett branches), everyone who could exist already exists with probability 1. Thus, the issue is not so much creating new people, but ensuring that good things happen to people given that they exist. Creating a new person helps when you can provide them with good outcomes, because what you're really doing is increasing the frequency of good outcomes from that starting point.
Or at least that's one anthropic interpretation of ethics. But it is one reason why I don't endorse running out and creating lots of people if that lowers the average standard of living. In a Big World, it's the average standard of living that you care about.
Eliezer, I hope we can agree that your conclusion is intriguing, but far from clearly true. After all, if every possible person exists, then so does every possible history for every possible person. How then could you affect any relative frequencies?
I have a paper on this problem of infinities in ethics: http://www.nickbostrom.com/ethics/infinite.pdf
It is a difficult topic.
Where does this end? If a philanthropist saves one life instead of two, he is as damned as any murderer. Surely we in the more prosperous countries could easily save many lives by cutting back on luxuries, but we choose not to (this would no doubt apply to nearly everyone in these countries). Does that make us all murderers?
Eliezer, whatever it is you were getting at in your comment, it was waaay over my head. When I searched on Wikipedia for Big World, I got an album by Joe Jackson. When I looked for Everett branches, I found an intriguing article about the Piscataquog River. Could you point me to some further reading? I hate to feel left out of the loop here.
Robin: And whoever creates a life that would not have otherwise existed, it is as if they saved someone who had an entire lifetime yet to live.
I have to question that comparison. When you save a life that already exists, you are delivering them from a particular existential danger, even if not from the generic existential danger they face constantly by virtue of being alive. But when you create a life, you are delivering a new "hostage to fortune" and creating an existentially endangered being where none previously existed.
I think on re-reading this that Robin's initial comment was meant to be ironic, or at least a provocative extension of Eliezer's ideas.
As far as Eliezer's point, I would imagine that rabbis and other moral philosophers would agree that saving two lives is better than saving one. Beyond that the calculus of human lives is a difficult problem. Many people would say we should not sacrifice one to save two. There is this distinction between active and passive actions, which are judged very differently. It's all something of a mess.
Charles, you might want to read some of Peter Singer's writings on this point.
Robin, it's clear that relative frequencies exist and matter somehow, even though it might seem like they shouldn't (e.g. because of the ordering problem described in Dr. Bostrom's paper). We observe random events with nonuniform distributions to occur according to the distribution, as opposed to uniformly. We don't live in an extremely bizarre, acausal world even though there are an infinite number of them throughout spacetime, because the laws of physics are such as to make bizarre worlds rarer than normal ones (even though there are many more possible bizarre worlds than normal ones). "Difficult topic" is probably an understatement.
Robin, we can definitely agree that my notion about relative conditional frequencies is not at all clearly true. This is one of those rare, rare issues that still confuses even me. As such - this is an important general principle, that I'd like to emphasize - when you try to model things that are deeply confusing and mysterious to you, you should not be very confident in your judgments about them.
If infinite people exist, how do our subjective probabilities come out right - why don't we always see every possible die roll with probability 1/6, even when the dice are loaded? How is computation possible, when every if statement always branches both ways? I seriously don't know. Maybe the numbers are finite but just very large. But, if for whatever reason it is possible to flip a biased coin and indeed see mostly heads, then we can try to shape the outcomes of people's lives so that their futures are mostly happy. I don't claim to be sure of this. It is just my attempt to make things add up to normality.
Jeremy, see Nick Bostrom's paper.
and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child.
But what if, while sipping this diet coke, you're fiddling on your laptop computer to organise the shipment of HIV drugs to Africa? If, at the moment the second child's skull gets crushed by the onrushing train, you manage to secure governmental support for a deal which will save thousands of people? If you wipe the splattered blood from your suit, as you rush off to open a new orphanage in India?
Something still feels wrong. I think our intellectual urge to save lives is...
There's also the issue of immediacy.
Your organisation of HIV drugs is likely something that could wait a minute, especially with an excuse as universally acceptable as saving a child from being run over by the train.
Thus, it is nigh certain that you could achieve both goals.
As for the philanthropist, I think the relevant heuristic is that we approve of anyone who saves lives, to socially reinforce the urge for others to do so. If our instincts developed in a tribal environment, then saving a life, or a small group of lives, is the best that anyone can realistically do, so we had no need to scale our admiration to larger numbers.
But if we are to become less biased, and disdain the philanthropist who spends his life-saving money inefficiently, we should be totally consistent about it, and disdain far, far more those who could spend money to save lives and don't (unfortunately, that probably includes most of us).
In a Big World, which this one appears to be on at least three counts (spatially infinite open universe, inflationary scenario in Standard Model, and Everett branches), everyone who could exist already exists with probability 1.
Infinite reasoning has many issues. Since we can only have finite amounts of evidence, there will always be an "X" such that the probability of "the universe is of size X or bigger" becomes tiny - swamped by uncertainties in our reasoning, even the possible uncertainties in our logic (by the way, if this is ...
I am rather surprised that no one is questioning the unspoken presupposition that all human lives are of equal value.
They certainly aren't in my estimation.
Acksiom, yes, I find it strange as well. Certainly, people in our immediate community are more valuable than people we have never met in other continents. However, I don't think "community" should extend beyond those we actually interact with. It shouldn't include abstract groupings such as "state" or "nation". Supporting your high-school sports team is fine.
You can be more certain about what actions are likely to save a life than about what actions are likely to save many lives.
What if, as we approach the Singularity, it is provably or near-provably necessary to do unethical things like killing a few people or letting them die to avoid the worst of Singularity outcomes?
(I am not referring here to whether we may create non-Friendly AGI. I am referring to scenarios even before the AGI "takes over.")
Such scenarios seem not impossible, and they create ethical dilemmas along the lines of what Yudkowsky mentions here.
Certainly, people in our immediate community are more valuable than people we have never met in other continents.
On a personal level, of course. But morally and ethically, and especially if you are looking for universal ethical values, this is most definitely not the case.
I am rather surprised that no one is questioning the unspoken presupposition that all human lives are of equal value.
That presupposition is an unjustified bias, but I feel it is a practical one. We've seen in the past what happens when human lives were openly valued at different levels, and the...
Also, human lives can have different instrumental value but the same inherent value, such that (for instance) a researcher in area X that has the potential to save many, many lives is worth more instrumentally than a random man-on-the-street.
Joshua, what kind of scenarios could those be? (But I would do a straightforward expected-lives-saved calculation, keeping in mind the uncertainty of whether it would actually move the Singularity forward, and whether bad PR and having the police on my tail could delay the Singularity. The actual action would depend qu...
"Certainly, people in our immediate community are more valuable than people we have never met in other continents. However, I don't think "community" should include beyond those who we actually interact with. It shouldn't include abstract groupings such as "state" or "nation". Supporting your high-school sports team is fine."
Yeah, wouldn't the world be a great place if everyone thought like this... screw helping the world... let's just help ourselves and those whom we interact with. Oh yeah, and while we are at it, ...
The choice between an "averagist" and "totalist" model of optimal human welfare is a tough one. The averagist wants to maximize average happiness (or some such measure of welfare); the totalist wants to maximize total happiness. Both lead to unfortunate reductio arguments. Average human welfare can be improved by eliminating everyone who is below average. This process can be repeated successively until we have only one person left, the happiest man in the world. The totalist would proceed by increasing the population to the very edge of...
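A toy numeric illustration of the two objectives and of the averagist reductio (the happiness scores below are made up purely for illustration):

```python
# Made-up happiness scores for a small population.
population = [2, 5, 8, 9]

total = sum(population)             # totalist objective: 24
average = total / len(population)   # averagist objective: 6.0

# The averagist reductio: culling everyone below average raises the average...
culled = [h for h in population if h >= average]
print(sum(culled) / len(culled))    # 8.5, up from 6.0
print(sum(culled))                  # ...while total welfare falls from 24 to 17.
# Repeating the cull eventually leaves only the single happiest person.
```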
I believe some models of physics require the universe to be infinite
These are dependent on certain assumptions, the most general of which is that the laws of physics are the same everywhere (the universe is seen as "isotropic and homogeneous"). But those sorts of principles arise from observation.
And we can never be entirely sure that they are true. Now, normally this doesn't matter - the probability of them being false is so tiny that we can consider them true. But infinity is nasty. Let's put a probability estimate on "There are mo...
I'd be curious to know if there is a principled model for optimal human happiness which does not conflict so violently with our moral instincts.
Seems we need to take "creating" and "destroying" humans out of the equation - total or average happiness can work fine in a fixed population (and indeed they are the same). We can tweak the conditions maybe, and count the dead and the unborn as having a certain level of happiness - but it will still lead to assumptions that violate our instincts; there will always be moments where creating a new lif...
Stuart, as far as the infinities go, I can imagine arguments that suggest that an infinite universe is more likely than a finite one, especially a finite one that is extremely large. For example, if the laws of physics were to turn out to be much simpler for an infinite universe, given our observations, that would be evidence in that direction. Conceptually, infinity is a simpler concept than particular very large numbers, so Occam's razor might lead us to choose infinity.
In fact I would argue that if your prior has a non-zero probability for infinite size...
Joe, that wasn't my point. I believe ethical theories can and should try to capture the spirit of morality. A truer way to appreciate the value of a stranger's life is to understand that to many people close to her, she is not merely a stranger. I was mainly giving a possible reason for not DRASTICALLY sacrificing your money for donation.
And I fully agree with Stuart that it should be an exception and not a rule.
But (I call back) I already saved one child from the train tracks, and thus I am "unimaginably" far ahead on points. Whether I save the second child, or not, I will still be credited with an "unimaginably" good deed. Thus, I have no further motive to act. Doesn't sound right, does it?
This isn't a problem with the claim that a human life is of infinite value as such. It's a problem with the claim that it's morally appropriate to attach the concept of comparable value to human lives at all. It's what happens when you start taking most u...
Paul, since my background is in AI, it is natural for me to ask how a "duty" gets cashed out computationally, if not as a contribution to expected utility. If I'm not using some kind of moral points, how do I calculate what my "duty" is?
How should I weigh a 10% chance of saving 20 lives against a 90% chance of saving one life?
If saving life takes lexical priority, should I weigh a 1/googolplex (or 1/Graham's number) chance of saving one life equally with a certainty of making a billion people very unhappy for fifty years?
Such questions form the base of some pretty strong theorems showing that consistent preferences must cash out as some kind of expected utility maximization.
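For the first of those questions, a straight expected-lives-saved calculation looks like this (a sketch that treats lives saved as the utility and ignores risk aversion):

```python
# Expected lives saved for the two options above.
p_a, lives_a = 0.10, 20   # option A: 10% chance of saving 20 lives
p_b, lives_b = 0.90, 1    # option B: 90% chance of saving one life

print(p_a * lives_a)      # 2.0 expected lives
print(p_b * lives_b)      # 0.9 expected lives
# An expected-utility maximizer (with utility linear in lives) takes option A,
# even though it will probably save no one on this particular occasion.
```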
Eliezer: that's a good point as far as it goes. But the answer many contemporary deontologists would give is that you can't expect to be able to computationally cash out all decision problems, and particularly moral decision problems. (Who said morality was easy?) In hard cases, it seems to me that the most plausible principles of morality don't provide cookie-cutter determinate answers. What fills in the void? Several things kick in under various versions of various theories. For example, some duties are understood as optional rather than necessary,...
how a "duty" gets cashed out computationally, if not as a contribution to expected utility. If I'm not using some kind of moral points, how do I calculate what my "duty" is?
We humans don't seem to act as if we're cashing out an expected utility. Instead we act as if we had a patchwork of lexically distinct moral codes for different situations, and problems come when they overlap.
Since current AI is far from being intelligent, we probably shouldn't see it as a compelling argument for how humans do or should behave.
Such questions form the b...
I don't see the relevancy of Mr. Burrows' statement (correct, of course) that "Very wealthy people give less, as a percentage of their wealth and income, than people of much more limited means. For wealthy philanthropists, the value from giving may be in status from the publicity of large gifts."
This is certainly of concern if our goal is to maximize the virtue of rich people. If it is to maximize general welfare, it is of no concern at all. The recipients of charity don't need a percentage's worth of food, but a certain absolute amount.
Is there anyone else who reads this and thinks, "but my altruism is ultimately grounded in the emotional effect that altruism has on myself; it cannot be otherwise. I'm only deluding myself to think that more lives are better, since from my perspective, they feel the same (and I'm trapped in my perspective. My perspective is the only one that can matter in my decision making)." That is, I don't actually try to maximize utility generally, just my own utility. It just so happens that the primary way to maximize my utility in most situations is to he...
The idea is that valuing a life as that important is what guides the HOW to save the nation. The how is with utmost regard for all people's existence - especially their exposure to suffering.
By valuing people, in this case human life, to that great a degree, it establishes respectful acknowledgement of the great forces which were set in motion to create such a marvel as a human being.
Plus, there IS always the accountability for having devalued a human life, when that is the beginning of the end of good policy, behavior, ethics, decency. To value one human so much? Makes you valuable to humans. Etc.
I know I'm way behind for this comment, but still: this point of view makes sense on a level, that saving additional people is always(?) virtuous and you don't hit a ceiling of utility. But, and this is a big one, this is mostly a very simplistic model of virtue calculus, and the things it neglects turn out to have a huge and dangerous impact.
Whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer.
First case in point: can a ...
It's a beautiful thought, isn't it? Feel that warm glow.
I can testify that helping one person feels just as good as helping the whole world. Once upon a time, when I was burned out for the day and wasting time on the Internet - it's a bit complicated, but essentially, I managed to turn someone's whole life around by leaving an anonymous blog comment. I wasn't expecting it to have an effect that large, but it did. When I discovered what I had accomplished, it gave me a tremendous high. The euphoria lasted through that day and into the night, only wearing off somewhat the next morning. It felt just as good (this is the scary part) as the euphoria of a major scientific insight, which had previously been my best referent for what it might feel like to do drugs.
Saving one life probably does feel just as good as being the first person to realize what makes the stars shine. It probably does feel just as good as saving the entire world.
But if you ever have a choice, dear reader, between saving a single life and saving the whole world - then save the world. Please. Because beyond that warm glow is one heck of a gigantic difference.
For some people, the notion that saving the world is significantly better than saving one human life will be obvious, like saying that six billion dollars is worth more than one dollar, or that six cubic kilometers of gold weighs more than one cubic meter of gold. (And never mind the expected value of posterity.) Why might it not be obvious? Well, suppose there's a qualitative duty to save what lives you can - then someone who saves the world, and someone who saves one human life, are just fulfilling the same duty. Or suppose that we follow the Greek conception of personal virtue, rather than consequentialism; someone who saves the world is virtuous, but not six billion times as virtuous as someone who saves one human life. Or perhaps the value of one human life is already too great to comprehend - so that the passing grief we experience at funerals is an infinitesimal underestimate of what is lost - and thus passing to the entire world changes little.
I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world - not to be confused with pretend rhetorical saving the world - it is as if they had saved an intergalactic civilization.
Two deaf children are sleeping on the railroad tracks, the train speeding down; you see this, but you are too far away to save the child. I'm nearby, within reach, so I leap forward and drag one child off the railroad tracks - and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child. "Quick!" you scream to me. "Do something!" But (I call back) I already saved one child from the train tracks, and thus I am "unimaginably" far ahead on points. Whether I save the second child, or not, I will still be credited with an "unimaginably" good deed. Thus, I have no further motive to act. Doesn't sound right, does it?
Why should it be any different if a philanthropist spends $10 million on curing a rare but spectacularly fatal disease which afflicts only a hundred people planetwide, when the same money has an equal probability of producing a cure for a less spectacular disease that kills 10% of 100,000 people? I don't think it is different. When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives. Whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer.
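A quick sketch of the arithmetic behind that comparison, assuming (as stated) that the money buys an equal probability of a cure in either case:

```python
# Lives at stake in each option, given the figures above.
rare_disease_lives = 100                 # everyone afflicted worldwide
common_disease_lives = 0.10 * 100_000    # 10% of 100,000 people

print(common_disease_lives / rare_disease_lives)  # 100.0 -- a hundredfold difference
```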
Addendum: It's not cognitively easy to spend money to save lives, since cliche methods that instantly leap to mind don't work or are counterproductive. (I will post later on why this tends to be so.) Stuart Armstrong also points out that if we are to disdain the philanthropist who spends life-saving money inefficiently, we should be consistent and disdain more those who could spend money to save lives but don't.