Related: Taking ideas seriously

Let us say, hypothetically, that you care about stopping people from smoking.

You were going to donate $1000 to GiveWell to save a life, but then you learn about an anti-tobacco campaign that is more effective. So you choose to donate the $1000 to a campaign to stop people smoking instead of to a GiveWell charity that would save an African's life. You justify this by expecting more people to live because they stopped smoking (this probably isn't true, but assume it for the sake of argument).

The consequence of donating to the anti-smoking campaign is that 1 person dies in Africa and 20 people all over the world live who would otherwise have died.

Now you also have the option of setting fire to many tobacco plantations. You estimate that the resulting increase in the price of cigarettes would save 20 lives, but the fires would likely kill 1 guard. You are very intelligent, so you think you can get away with it; there are no consequences to you for this action. You don't care much about the scorched earth or the loss of profits.

If there are causes with payoff matrices like this, then this seems like a real-world instance of the trolley problem. We are willing to cause loss of life through inaction to achieve our goals, but not through action.
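
A minimal back-of-the-envelope tally of the three options, using only the post's hypothetical numbers (the option names and figures below are the post's assumptions, not real estimates):

    # Rough tally of the post's hypothetical options; every number comes from
    # the post's assumptions, not from real-world estimates.
    options = {
        # The post later argues the 20 smoking deaths could be charged to this choice too.
        "donate $1000 to a GiveWell charity":    {"saved": 1,  "deaths_caused": 0},
        "donate $1000 to anti-smoking campaign": {"saved": 20, "deaths_caused": 1},  # the person GiveWell would have saved
        "set fire to the tobacco plantations":   {"saved": 20, "deaths_caused": 1},  # the guard
    }

    for name, o in options.items():
        net = o["saved"] - o["deaths_caused"]
        print(f"{name}: {o['saved']} saved, {o['deaths_caused']} killed, net {net:+d}")

By this count the ad campaign and the arson come out identical; the only difference is whether the one death comes about through inaction or through action, which is exactly the trolley-problem feature described above.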

What should you do?

Killing someone is generally wrong, but you are causing the death of someone in both cases. You either need to justify that leaving someone to die is ethically not the same as killing someone, or inure yourself to the idea that when you choose to spend $1000 in a way that doesn't save a life, you are killing. Or ignore the whole thing.

This just puts me off being a utilitarian, to be honest.

Edit: To clarify, I am an easygoing person; I don't like making life-and-death decisions. I would rather live and laugh, without worrying about things too much.

This confluence of ideas made me realise that we are making life-and-death decisions every time we spend $1000. I'm not sure where I will go from here.

56 comments:

All this AI stuff is an unnecessary distraction. Why not bomb cigarette factories? If you're willing to tell people to stop smoking, you should be willing to kill a tobacco company executive if it will reduce lung cancer by the same amount, right?

This decision algorithm ("kill anyone whom I think needs killing") leads to general anarchy. There are a lot of people around who believe for one reason or another that killing various people would make things better, and most of them are wrong, for example religious fundamentalists who think killing gay people will improve society.

There are three possible equilibria - the one in which everyone kills everyone else, the one in which no one kills anyone else, and the one where everyone comes together to come up with a decision procedure to decide whom to kill - i.e., establishes an institution with a monopoly on using force. This third one is generally better than the other two, which is why we have government and why most of us are usually willing to follow its laws.

I can conceive of extreme cases where it might be worth defecting from the equilibrium because the alternative is even worse - but bombing Intel? Come on. "A guy bombed a chip factory, guess we'll never pursue advanced computer technology again until we have the wisdom to use it."

All this AI stuff is an unnecessary distraction.

In a way, yes. It was just the context in which I thought of the problem.

Why not bomb cigarette factories? If you're willing to tell people to stop smoking, you should be willing to kill a tobacco company executive if it will reduce lung cancer by the same amount, right?

Not quite. If you are willing to donate $1000 to an anti-smoking ad campaign because you think the ad campaign will save more than 1 life, then yes, it might be equivalent, provided killing that executive would have a comparable life-saving effect to the ad campaign.

Edit: To make things clearer, I mean that by not donating $1000 to a GiveWell charity you are already causing someone to die.

This decision algorithm ("kill anyone whom I think needs killing") leads to general anarchy.

But we are willing to let people die whom we could have saved but don't consider important. This is equivalent to killing them, no? Or do you approach the trolley problem in some way that references the wider society?

Like I said, this line of thought made me want to reject utilitarianism.

"A guy bombed a chip factory, guess we'll never pursue advanced computer technology again until we have the wisdom to use it."

That wasn't the reasoning at all! It was, "Guess the price of computer chips has gone up due to the uncertainty of building chip factories, so we can only afford 6 spiffy new brain simulators this year rather than 10." Each one has an X percent chance of becoming an AGI, fooming, and destroying us all. It is purely a stalling-for-time tactic. Feel free to ignore the AI argument if you want.
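
To make the stalling logic concrete, here is a minimal sketch of the probability argument; the per-simulator foom probability below is only a placeholder for the unspecified "X percent":

    # Chance that at least one of n independent brain simulators fooms, given
    # that each one has probability x of doing so: 1 - (1 - x)**n.
    def p_at_least_one_foom(n: int, x: float) -> float:
        return 1 - (1 - x) ** n

    x = 0.01  # illustrative stand-in for the comment's "X percent"
    for n in (10, 6):
        print(f"{n} simulators: P(at least one foom) = {p_at_least_one_foom(n, x):.3f}")

For a small x the risk grows roughly linearly with the number of simulators, so cutting 10 down to 6 buys time rather than eliminating the danger, which is the point being made here.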

I suppose the difference is whether you're doing the Intel attack now, or in a hypothetical future in which Intel is making brain simulators that seem likely to become AGI. As someone else mentioned, if we're talking about literally THEY ARE BUILDING SKYNET RIGHT NOW, then violence seems like the right idea.

I guess it is kind of suspicious that I know without doing the calculations that we're not at the point where violence is justified yet.

But we are willing to let people die whom we could have saved but don't consider important. This is equivalent to killing them, no? Or do you approach the trolley problem in some way that references the wider society?

Even though in this individual case leaving things alone would be worse than committing an act of violence, in the general case having everyone commit acts of violence is worse than having everyone leave things alone.

This example cherry-picks a case where violence is the correct answer. But when we generalize it, most of the cases it affects won't be cherry-picked, and in them violence will do more harm than good. We have to pretend we're setting a moral system both for ourselves and for the fundamentalist who wants to kill gay people.

So in this case, you're letting die (killing) the people your (smart) unpopular violent action would have saved, in order to save the lives of all the people whom other people's (stupid) unpopular violent actions would have killed.

It could be justified - if you're going to save the world from Skynet, that's worth instituting a moral system that gives religious fundamentalists a little more latitude for violent bigotry - but I imagine most cases wouldn't be.

This just puts me off being a utilitarian, to be honest.

Understandably so, because the outside view says that most such sacrifices for the greater good end up having been the result of bad epistemology and unrealistic assessments of the costs and benefits.

Strong rationality means that you'd be able to get away with such an act. But strong rationality also means that you generally have better methods of achieving your goals than dubious plans involving sacrifice. When you end up thinking you have to do something intuitively morally objectionable 'for the greater good', you should have tons of alarm bells going off in your head screaming out 'have you really paid attention to the outside view here?!'

In philosophical problems, you might still have a dilemma. But in real life, such tradeoffs just don't come up on an individual level where you have to actually do the deed. Some stock traders might be actively profiting by screwing everyone over, but they don't have to do anything that would feel wrong in the EEA (the environment of evolutionary adaptedness). The kinds of objections you hear against consequentialism are always about actions that feel wrong. Why not a more realistic example that doesn't directly feed off likely misplaced intuitions?

Imagine you're a big-time banker whose firm is making tons of money off of questionably legal mortgage loans that you know will blow up in the economy's face, but you're donating all your money to a prestigious cancer research institute. You've done a very thorough analysis of the relevant literature and talked to many high-status doctors, and they say that with a couple billion dollars a cure for cancer is in sight. You know that when the economy blows up it will lead to lots of jobless folk without the ability to remortgage their homes. Which is sad, and you can picture all those homeless middle class people and their kids, depressed and alone, all because of you. But cancer is a huge bad ugly cause of death, and you can also picture all of those people that wouldn't have to go through dialysis and painful treatments only to die painfully anyway. Do you do the typically immoral and questionably illegal thing for the greater good?

Why isn't the above dilemma nearly as forceful an argument against consequentialism? Is it because it doesn't appeal in the same way to your evolutionarily adapted sense of justice? Then that might be evidence that your evolutionarily adapted sense of justice wasn't meant for rational moral judgement.

You would likely have to, for the simple reason that if cancer gets cured, more resources can be dedicated to dealing with other diseases, meaning even more lives will be saved in the process (on top of those lives saved due to the curing of cancer).

The economy can be in shambles for a while, but it can recover in the future, unlike cancer patients... and you could always justify it by saying that if a banker like you could blow up the economy, it was already too weak in the first place: better to blow it up now, when the damage can be limited, rather than later.

Though the reason it doesn't appeal is that you don't quote hard numbers, making the consequentialist rely on "value" judgements when doing his deeds... and different consequentialists have different "values". Your consequentialist would be trying to cure cancer by crashing the economy to raise money for a cancer charity, while a different consequentialist could be embezzling money from that same cancer charity in an attempt to save the economy from crashing.

Which is sad, and you can picture all those homeless middle class people and their kids, depressed and alone, all because of you.

And some would go on to commit suicide after losing status, be unable to afford health insurance, or die from lack of heating... Sure, not all of them, but some of them would. Also, cancer patients who relied on savings to pay for care might be affected in the time lag between the crash and the cure being created.

I'd also have to weigh how quickly the billions could be raised without tanking the economy, and how many people the difference in when the cure was developed would save.

So I am still stuck doing moral calculus, with death on my hands whatever I choose.

Here is an interesting comment related to this idea:

What I find a continuing source of amazement is that there is a subculture of people half of whom believe that AI will lead to the solving of all mankind's problems (which we might call Kurzweilian S^) and the other half of which is more or less certain (75% certain) that it will lead to annihilation. Let's call the latter the SIAI S^.

Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations.

And instead of waging desperate politico-military struggle to stop all this suicidal AI research you cheerlead for it, and focus your efforts on risk mitigation on discussions of how a friendly god-like AI could save us from annihilation.

You are a deeply schizophrenic little culture, which for a sociologist like me is just fascinating.

But as someone deeply concerned about these issues I find the irrationality of the S^ approach to a-life and AI threats deeply troubling. -- James J. Hughes (existential.ieet.org mailing list, 2010-07-11)

Also reminds me of this:

It is impossible for a rational person to both believe in imminent rise of sea levels and purchase ocean-front property.

It is reported that former Vice President Al Gore just purchased a villa in Montecito, California for $8.875 million. The exact address is not revealed, but Montecito is a relatively narrow strip bordering the Pacific Ocean. So its minimum elevation above sea level is 0 feet, while its overall elevation is variously reported at 50ft and 180ft. At the same time, Mr. Gore prominently sponsors a campaign and award-winning movie that warns that, due to Global Warming, we can expect to see nearby ocean-front locations, such as San Francisco, largely under water. The elevation of San Francisco is variously reported at 52ft up to a high of 925ft.

I've highlighted the same idea before by the way:

Ask yourself, wouldn't you fly a plane into a tower if that was the only way to disable Skynet? The difference between religion and the risk of uFAI makes it even more dangerous. This crowd is actually highly intelligent and their incentive is based on more than fairy tales told by goatherders. And if dumb people are already able to commit large-scale atrocities based on such nonsense, what are a bunch of highly intelligent and devoted geeks who see a tangible danger able and willing to do? More so as in this case the very same people who believe it are the ones who think they must act themselves, because their God doesn't even exist yet.

And instead of waging desperate politico-military struggle to stop all this suicidal AI research you cheerlead for it, and focus your efforts on risk mitigation on discussions of how a friendly god-like AI could save us from annihilation.

This is one of those good critiques of SIAI strategy that no one ever seems to make. I don't know why. More good critiques would be awesome. Voted up.

I don't really know the SIAI people, but I have the impression that they're not against AI at all. Sure, an unfriendly AI would be awful - but a friendly one would be awesome. And they probably think AI is inevitable, anyway.

This is true as far as it goes; however, if you actually visit SIAI, you may find significantly more worry about uFAI in the short term than you would have expected just from reading Eliezer Yudkowsky's writings.

I think that you interacted most with a pretty uncharacteristically biased sample of characters: most of the long-term SIAI folk have longer timelines than good ol' me and Justin by about 15-20 years. That said, it's true that everyone is still pretty worried about AI-soon, no matter the probability.

Well, 15-20 years doesn't strike me as that much of a time difference, actually. But in any case I was really talking about my surprise at the amount of emphasis on "preventing uFAI" as opposed to "creating FAI". Do you suppose that's also reflective of a biased sample?

Well, 15-20 years doesn't strike me as that much of a time difference, actually.

Really? I mean, relative to your estimate it might not be big, but absolutely speaking, doom in 15 years versus doom in 35 years seems to make a huge difference in expected utility.
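
As a crude illustration of "absolutely speaking": assume, purely for the sake of the arithmetic, that utility is proportional to person-years lived before doom and that the population stays at roughly 7 billion (both assumptions are mine, not the thread's):

    # Back-of-the-envelope: person-years gained by pushing doom from 15 years
    # out to 35 years out, under the illustrative assumptions above.
    population = 7e9
    extra_years = 35 - 15
    print(f"Extra person-years: {population * extra_years:.1e}")  # ~1.4e+11

The ratio between the two timelines is modest, but the absolute difference is on the order of 10^11 person-years, which is the sense in which the gap is huge.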

Do you suppose that's also reflective of a biased sample?

Probably insofar as Eliezer and Marcello weren't around: FAI and the Visiting Fellows intersect at decision theory only. But the more direct (and potentially dangerous) AGI stuff isn't openly discussed for obvious reasons.

relative to your estimate it might not be big, but absolutely speaking, doom in 15 years versus doom in 35 years seems to make a huge difference in expected utility.

A good point. By the way, I should mention that I updated my estimate after it was pointed out to me that other folks' estimates were taking Outside View considerations into account, and after I learned I had been overestimating the information-theoretic complexity of existing minds. FOOM before 2100 looks significantly more likely to me now than it did before.

Probably insofar as Eliezer and Marcello weren't around: FAI and the Visiting Fellows intersect at decision theory only.

Well I didn't expect that AGI technicalities would be discussed openly, of course. What I'm thinking of is Eliezer's attitude that (for now) AGI is unlikely to be developed by anyone not competent enough to realize Friendliness is a problem, versus the apparent fear among some other people that AGI might be cobbled together more or less haphazardly, even in the near term.

Eliezer's attitude that (for now) AGI is unlikely to be developed by anyone not competent enough to realize Friendliness is a problem

Huh. I didn't get that from the sequences; perhaps I should check again. It always seemed to me as if he saw AGI as really frickin' hard but not excessively so, whereas Friendliness is the Impossible Problem made up of smaller but also impossible problems.

I don't really know the SIAI people, but I have the impression that they're not against AI at all. Sure, an unfriendly AI would be awful - but a friendly one would be awesome.

True. I know the SIAI people pretty well (I'm kind of one of them) and can confirm they agree. But they're pretty heavily against uFAI development, which is what I thought XiXiDu's quote was talking about.

And they probably think AI is inevitable, anyway.

Well... hopefully not, in a sense. SIAI's working to improve widespread knowledge of the need for Friendliness among AGI researchers. It's inevitable (barring a global catastrophe), but they're hoping to make FAI more inevitable than uFAI.

As someone who volunteered for SIAI at the Singularity Summit, I think a fair critique of SIAI would be to ask why we're letting people who aren't concerned about uFAI speak at our conferences and affiliate with our memes. I think there are good answers to that critique, but the critique itself is a pretty reasonable one. Most complaints about SIAI are, by comparison, maddeningly irrational (in my own estimation).

A stronger criticism, I think, is to ask why the only mention of Friendliness at the Summit was some very veiled hints in Eliezer's speech. Again, I think there are good reasons, but not good reasons that a lot of people know, so I don't understand why people bring up other criticisms before this one.

This was meant as a critique too. But people here seem not to believe what they preach, or they would follow their position to its logical extreme.

Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations.

This seems to me a good strategy for SIAI people to persuade K-type people to join them.

Ah yes, the standard argument against consequentialism: X has expected positive consequences, so consequentialism says we should do it. But clearly, if people do X, the world will be a worse place. That's why practicing consequentialism makes the world a worse place, and people shouldn't be consequentialists.

Personally I'll stick with consequentialism and say a few extra words in favor of maintaining the consistency of your abstract arguments and your real expectations.

It might be better for the world if people were consequentialists. I might be better off if I did more structured exercise. That doesn't mean I am going to like either of them.

Uhm, it's seriously egregious and needlessly harmful to suggest that SIAI supporters should maybe be engaging in terrorism. Seriously. I agree with Yvain. The example is poor and meant to be inflammatory, not to facilitate reasonable debate about what you think utilitarianism means.

Would you please rewrite it with a different example so this doesn't just dissolve into a meaningless debate about x-risk and x-rationality, where half of your audience is already offended at what they believe to be a bad example and a flawed understanding of utilitarianism?

A lot of the comments on this post were really confusing until I got to this one.

I should make it explicit that the original post didn't advocate terrorism in any way but was a hypothetical reductio ad absurdum against utilitarianism that was obviously meant for philosophical consideration only.

It was nothing as simple as a philosophical argument against anything.

It is a line of reasoning working from premises that seem to be widely held, which I am unsure how to integrate into my worldview in a way that I (or most people?) would be comfortable with.

I don't believe that you are honest in what you write here. If you would really vote against the bombing of Skynet before it tiles the universe with paperclips, then I don't think you actually believe most of what is written on LW.

Terrorism is just a word used to discredit acts that are deemed bad by those who oppose them.

If I were really sure that Al Qaeda was going to release some superbug bioweapon stored in a school, and there was no other way to stop them from doing so and killing millions, then I would advocate using incendiary bombs on the school to destroy the weapons. I accept the position that even killing one person can't be a means to an end to save the whole world, but I don't see how that fits with what is believed in this community. See Torture vs. Dust Specks (the obvious answer is TORTURE, Robin Hanson).

I'll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity. -- Eliezer Yudkowsky

You missed the point. He said it was bad to talk about, not that he agreed or disagreed with any particular statement.

Hush, hush! Of course I know it is bad to talk about it in this way. Same with what Roko wrote. The number of things we shouldn't talk about, even though they are completely rational, seems to be rising. I just don't have the list of forbidden topics at hand right now.

I don't think this is a solution. You'd better come up with some story about why you people think killing to prevent Skynet is wrong, because the idea of AI going FOOM is quickly going mainstream and people will draw this conclusion and act upon it. Or you stand by what you believe and try to explain why it wouldn't be terrorism but a far-seeing act to slow down AI research, or at least to watch over it and take out any dangerous research until FAI can be guaranteed.

Done. The numbers don't really make sense in this version though....

Thanks. The slightly less sensible numbers might deaden the point of your argument a little bit, but I think the quality of discussion will be higher.

Somehow I doubt there will be much discussion, high quality or low :) It seems like it has gone below the threshold to be seen in the discussion section. It is -3 in case you are wondering.

This confluence of ideas made me realise that we are making life-and-death decisions every time we spend $1000. I'm not sure where I will go from here.

Here's a blog post I found recently that discusses that idea further.

I'm surprised no one has linked to this yet. It's not a perfect match, but I think that "if killing innocent people seems like the right thing to do, you've probably made a mistake" is close enough to be relevant.

Maybe less so before the post was edited, I guess.

It would seem so, but is taking war into enemy territory that reliably a mistake?

I meant to link to that or something similar. In both situations I am killing someone. By not donating to a GiveWell charity, some innocent in Africa dies (while more innocents live elsewhere). So I am already in mistake territory, even before I start thinking about terrorism.

I don't like being in mistake territory, so my brain is liable to want to shut off from thinking about it, or inure my heart to the decision.

The distinction between taking an action that results in someone dying who counterfactually would not have died had you taken some other action, and one that results in someone dying who counterfactually would not have died had you not existed, is not important to pure consequentialist reasoning. But it has bearing on when a human attempting consequentialist reasoning should be wary of the fact that they are running on hostile hardware.

You can slightly change the scenarios so that people counterfactually wouldn't have died if you didn't exist, and the variants don't seem much different morally. For example, X is going to donate to GiveWell and save Z's life. Should you (Y) convince X to donate to an anti-tobacco campaign which will save more lives? Is this morally the same as (risk-free, escalation-less) terrorism, or the same as being X?

Anyway, I have the feeling people are getting bored with me on this subject, myself included. Simply chalk this up to someone not compartmentalizing correctly. Although I think that if I need to keep consequentialist reasoning compartmentalised, I am likely to find all consequentialist reasoning more suspect.

I think that "if killing innocent people seems like the right thing to do, you've probably made a mistake".

I don't think so. And I don't get why you wouldn't bomb Skynet if you could save the human race by doing so. Sure, you can call it a personal choice that has nothing to do with rationality. But in the face of posts like this I don't see why nobody here is suggesting taking active measures against uFAI. I can only conclude you either don't follow your beliefs through or don't discuss it because it could be perceived as terrorism.

(Insert large amount of regret about not writing "Taking Ideas Seriously" better.)

Anyway, it's worth quoting Richard Chappell's comment on my post about virtue ethics-style consequentialism:

It's worth noting that pretty much every consequentialist since J.S. Mill has stressed the importance of inculcating generally-reliable dispositions / character traits, rather than attempting to explicitly make utility calculations in everyday life. It's certainly a good recommendation, but it seems misleading to characterize this as in any way at odds with the consequentialist tradition.

But SIAI has stressed making utility calculations in everyday life... especially about charity.

Hm, I wouldn't consider that 'in everyday life'. It seems like an expected utility calculation you do once every few months or years, when you're deciding where you should be giving charity. You would spend that time doing proto-consequentialist calculations anyway, even if you weren't explicitly calculating expected utility. Wanting to get the most warm fuzzies or status per dollar is typical altruistic behavior.

The difference in Eliezer's exhortations is that he's asking you to introspect more and think about whether or not you really want warm fuzzies or actual utilons, after you find out that significant utilons really are at stake. Whether or not you believe those utilons really are at stake at a certain probability becomes a question of fact, not a strain on your moral intuitions.

I had a broader meaning of 'everyday life' in mind: things anyone might do.

Even taking a literal view of the sentence, burning down fields isn't an everyday kind of thing.

With that comment I was actually thinking of Anna Salamon and her back-of-the-envelope calculations about how worthwhile it is to donate to SIAI. I believe she mentions donating to GiveWell as a baseline to compare it with. Saving a human life is a fairly significant number of utilons in itself. So it was asking me to weigh saving a human life against donating to SIAI. So the symmetric question came to mind. Hence this post.

So it was asking me to weigh saving a human life against donating to SIAI.

You phrase this as a weird dichotomy. It's more like asking you to weigh saving a life versus saving a lot of lives. Whether or not a lot of lives are actually at stake is an epistemic question, not a moral one.

(Insert large amount of regret about not writing "Taking Ideas Seriously" better.)

Insert a reminder pointing to your meditation post and your realisation that post hoc beating yourself up about things doesn't benefit you enough to make it worth doing.

In general, the primary problem with such behavior is that if lots of people do this sort of thing society falls apart. Thus, there's a bit of a prisoner's dilemma here. So any logic favoring "cooperate" more or less applies here.

Note also that many people would probably see this as wrong simply because humans have a tendency to see a major distinction between action and inaction. Action that results in bad things is seen as much worse than inaction that results in bad things. Thus, the death of the guard seems "bad" to most people. This is essentially the same issue that shows up in how people answer the trolley problem.

So, let's change the question: If there's no substantial chance of killing the guard, should one do it?

If there's no substantial chance of killing the guard, should one do it?

Is the guard working, perhaps unknowingly, in the building where Skynet is just awakening? Torture vs. Dust Specks?

A topical real-life example of this is the DDoS attacks that Anonymous are making against various companies that pursue/sue people for alleged illegal file sharing.

I make no comment on the morality of this, but it seems to be effective in practice, at least some of the time; for example, it may lead to the demise of the law firm ACS:law.