How much would you pay to see a typical movie? How much would you pay to see it 100 times?

How much would you pay to save a random stranger’s life? How much would you pay to save 100 strangers?

If you are like a typical human being, your answers to both sets of questions probably exhibit failures to aggregate value linearly. In the first case, we call it boredom. In the second case, we call it scope insensitivity.

Eliezer has argued on separate occasions that one of these should be regarded as an obvious error to be corrected, while the other is a gift bestowed by evolution, to be treasured and safeguarded. Here, I propose to consider them side by side, and see what we can learn by doing that.

(Eliezer sometimes treats scope insensitivity as a simple arithmetical error that the brain commits, as in this quote: “the brain can't successfully multiply by eight and get a larger quantity than it started with”. But considering that the brain has little trouble multiplying by eight in other contexts, and that scope insensitivity starts with numbers as low as 2, it seems more likely that it’s not an error but an adaptation, just like boredom.)

The nonlinearities in boredom and scope insensitivity both occur at two different levels. On the affective or hedonic level, our emotions fail to respond in a linear fashion to the relevant input. Watching a movie twice doesn’t give us twice the pleasure of watching it once, nor does saving two lives feel twice as good as saving one life. And on the level of decision making and revealed preferences, we fail to act as if our utilities scale linearly with the number of times we watch a movie, or the number of lives we save.

Note that these two types of nonlinearities are logically distinct, and it seems quite possible to have one without the other. The refrain “shut up and multiply” is an illustration of this. It exhorts (or reminds) us to value lives directly and linearly in our utility functions and decisions, instead of only valuing the sublinear emotions we get from saving lives.

We sometimes feel bad that we aren’t sufficiently empathetic. Similarly, we feel bad about some of our boredoms. For example, consider a music lover who regrets no longer being as deeply affected by his favorite piece of music as when he first heard it, or a wife who wishes she was still as deeply in love with her husband as she once was. If they had the opportunity, they may very well choose to edit those boredoms away.

Self-modification is dangerous, and the bad feelings we sometimes have about the way we feel were never meant to be used directly as a guide to change the wetware behind those feelings. If we choose to edit some of our boredoms away, while leaving others intact, we may find ourselves doing the one thing that we’re not bored with, over and over again. Similarly, if we choose to edit our scope insensitivity away completely, we may find ourselves sacrificing all of our other values to help random strangers, who in turn care little about ourselves or our values. I bet that in the end, if we reach reflective equilibrium after careful consideration, we’ll decide to reduce some of our boredoms, but not eliminate them completely, and become more empathetic, but not to the extent of full linearity.

But that’s a problem for a later time. What should we do today, when we can’t change the way our emotions work to any large extent? Well, first, nobody argues for “shut up and multiply” in the case of boredom. It’s clearly absurd to watch a movie 100 times, as if you’re not bored with it, when you actually are. We simply don’t value the experience of watching a movie apart from whatever positive emotions it gives us.

Do we value saving lives independently of the good feelings we get from it? Some people seem to (or claim to), while others don’t (or claim not to). For those who do, some value (or claim to value) the lives saved linearly, and others don’t. So the analogy between boredom and scope insensitivity starts to break down here. But perhaps we can still make some final use of it: whatever arguments we have to support the position that lives saved ought to be valued apart from our feelings, and linearly, we had better make sure those arguments do not apply equally well to the case of boredom.

Here’s an example of what I mean. Consider the question of why we should consider the lives of random strangers to be valuable. You may be tempted to answer that we know those lives are valuable because we feel good when we consider the possibility of saving a stranger’s life. But we also feel good when we watch a well-made movie, and we don’t consider the watching of a movie to be valuable apart from that good feeling. This suggests that the answer is not a very good one.

Appendix: Altruism vs. Cooperation

This may be a good time to point out/clarify that I consider cooperation, but not altruism, to be a core element of rationality. By “cooperation” I mean techniques that can be used by groups of individuals with disparate values to better approximate the ideals of group rationality (such as Pareto optimality). According to Eliezer,

"altruist" is someone who chooses between actions according to the criterion of others' welfare

In cooperation, we often take others' welfare into account when choosing between actions, but this "altruism" is conditional on others reciprocating and taking our welfare into account in return. I expect that what Eliezer and others here mean by an "altruist" is someone who considers others' welfare to be a terminal value, not just an instrumental one, and therefore cooperation and true altruism are non-overlapping concepts. (Please correct me if I'm wrong about this.)

Comments

When I contribute to charity, it's usually to avoid feeling guilty rather than to feel good as such... imagining myself as the guy who doesn't rescue a drowning swimmer because he doesn't want to get his suit wet puts me in a state I don't want to be in.

Some charities can save someone's life for about $1,000. If you spend $1,000 on anything else, you've as good as sentenced someone to death. I find this to be really disturbing, and thinking about it makes me think about doing crazy things, such as spending my $20,000 savings on a ten-year term life insurance policy worth $10,000,000 and then killing myself and leaving the money to charity. At $1,000 a life, that's ten thousand lives saved. I suspect that most people who literally give their lives for others don't get that kind of return on investment.

In most books, insurance fraud is morally equivalent to stealing. A deontological moral philosophy might commit you to donating all your disposable income to GiveWell-certified charities while not permitting you to kill yourself for the insurance money. But, yeah, utilitarians will have a hard time explaining why they don't do this.

Do we value saving lives independently of the good feelings we get from it? Some people seem to (or claim to), while others don’t (or claim not to). For those who do, some value (or claim to value) the lives saved linearly, and others don’t.

Perhaps (this is a descriptive and not a normative answer) we value believing we're good people. If we know that we're doing a sub-maximal amount of good in order to feel better, we won't feel like good people anymore. That is, a particular logical sort of person only gets the warm fuzzies when they know they're acting objectively rather than trying to maximize warm fuzzies.

Example: Sally can donate $100/month to charity. She likes dogs, so she donates it to help stray dogs, and feels like a good person. Then someone points out to her that unless she thinks animal lives are more important than human lives, she should donate it to help humans. Now if she donates to animal shelters just because she likes animals, she will feel like a bad person because she knows she's more interested in fuzzy feelings than in actually doing good. Therefore, she donates to human-oriented charities. Now she knows that she's really helping other people instead of just feeling good about herself, and so she feels good about herself. Since she feels good about herself, she keeps doing it.

"Shut up and multiply" doesn't assume specifically total utilitarianism, you can value lives sublinearly and still hold important the principle of not just relying on intuition.

Compare:

  • shut up and think
  • shut up and compute
  • shut up and multiply

It seems to me that besides not just relying on feelings and intuitions, "compute" has the connotation that we already know what the right morality is, and can just apply it mechanically, and "multiply" has the additional connotation that the right morality values lives linearly. Shouldn't we use the phrase that most accurately conveys our intended meanings?

Vladimir makes a good point. "Multiply" has the connotation that expected outcomes are best determined by probability. It doesn't comment on whether one should value lives linearly (or at all). It does imply utilitarianism. At least, it implies that the expected outcome is significantly relevant to your decision. I'm not sure what Vladimir means by 'total utilitarianism'.

It seems to me that besides not just relying on feelings and intuitions, "compute" has the connotation that we already know what the right morality is, and can just apply it mechanically

I agreed up to here.

I expect that one source of the problem is the equating of these two situations. On one hand you have 100 copies of the same movie. On the other hand, you have 100 distinct humans you could pay to save. To draw a direct comparison you would need to treat these as 100 copies of some idealized stranger, in which case the scope insensitivity might (depending on how you aggregate the utility of a copy's life) make more sense as a heuristic.

And this sort of simplification is likely one part of what is happening when we naively consider the questions:

How much would you pay to save a random stranger’s life? How much would you pay to save 100 strangers?

I wonder how this experiment would change if you presented lists of names, or otherwise encouraged a different model of the 100 strangers.

To draw a direct comparison you would need to treat these as 100 copies of some idealized stranger.

Actually, I think a direct comparison would involve saving the same person 100 times (it was the same movie 100 times). I think at some point I'd begin to wonder if the gene pool wouldn't be better off without someone that accident-prone (/suicidal)... or at the very least I'd suspect that his ability to need saving would probably outlast my finances, in which case I might as well accept his inevitable death and quit paying sooner rather than later!

I'd guess that the people arguing for linearity are extrapolating more from their horror at the thought of their own death than from the pleasure of saving other people's lives.

That's probably correct, at least in my own case. I don't try to save people because I like the thought of having saved a lot of people, but because (a) they don't want to die or (b) total annihilation is far too horrible a punishment for their mistake. Seeing the portion of the movie where everyone cheers the hero doesn't move me much; a certain scene in which a sick young girl screams that she's afraid, she doesn't want to die, and then she dies, will stay with me until I end Death or it ends me.

Does that get us any closer to the position that we should assign value to other people's lives, value that is in addition to how those lives affect our feelings?

How do you cross the gap from "I feel bad (or good) about this" to "I should assign a value to this independent of my feelings" and not have that same argument apply to boredom?

When thinking more clearly about your feeling doesn't make it go away. Really, maybe boredom is a bad example. We definitely wouldn't want to potentially feel arbitrarily much boredom if we couldn't escape from it.

Michael, your first sentence seems completely ungrammatical. I can't parse it or guess its meaning.

I hesitate to imagine others' death and suffering as my own (in other words, to really empathize), although doing so in the context of a good film or book is quite pleasurable.

(Eliezer sometimes treats scope insensitivity as a simple arithmetical error that the brain commits, as in this quote: “the brain can't successfully multiply by eight and get a larger quantity than it started with”. But considering that the brain has little trouble multiplying by eight in other contexts, and that scope insensitivity starts with numbers as low as 2, it seems more likely that it’s not an error but an adaptation, just like boredom.)

Arithmetic is a relatively late cognitive technology that doesn't appear on its own (1). We can to a certain degree train ourselves to use exact numbers instead of approximate magnitudes in our reasoning, but that remains an imperfect art - witness the difficulty people have truly grasping numbers that are at all large. An imprecise analog magnitude representation is the brain's native way of representing numbers (2, 3), and while there is evidence that this analog system is indeed capable of multiplication (4), I'd be careful about making claims concerning what low-level systems we have no introspective access to can or cannot multiply.

(Especially since we do know plenty of cases where a particular system in the brain doesn't share the capabilities other systems do - we might intuitively solve differential equations in order to predict a baseball's flight path, but that doesn't mean we can natively solve abstract equations in our head.)

we do know plenty of cases where a particular system in the brain doesn't share the capabilities other systems do - we might intuitively solve differential equations in order to predict a baseball's flight path, but that doesn't mean we can natively solve abstract equations in our head.

Good point, but I'd go even further: we are not even solving differential equations in predicting a baseball's flight path, but rather, pattern-matching it to typical falling objects. Though I frequently criticize RichardKennaway's points about control systems, he is right that you actually need to know very little about the ball's dynamics in order to catch it. You just need to maintain a few constant angles with the ball, which is how humans actually do it.

To the extent that "you" are solving a differential equation, the solution is represented in the motions of your body, not in any inference by your brain.

Consider a related problem - how much dynamics do you have to know in order to make a 3-point shot in basketball?
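A minimal numeric sketch of that constant-angle claim (my own illustration, with made-up launch numbers, not code from the thread): for a drag-free projectile, the tangent of the ball's elevation angle, viewed from the landing spot, rises at a constant rate, so a fielder can reach the catch point just by moving until that tangent climbs steadily, with no trajectory prediction at all.

```python
# Illustrative numbers only: gravity and the launch velocity components.
g, vx, vy = 9.8, 12.0, 18.0
T = 2 * vy / g          # time of flight of the projectile
x_land = vx * T         # horizontal distance to the landing point

for frac in (0.2, 0.4, 0.6, 0.8):
    t = frac * T
    x = vx * t                        # ball's horizontal position
    y = vy * t - 0.5 * g * t ** 2     # ball's height
    tan_elev = y / (x_land - x)       # elevation tangent seen from the catch point
    print(f"t = {t:4.2f} s   tan(elevation) = {tan_elev:.3f}   rate = {tan_elev / t:.3f}")
```

The printed rate stays fixed at g / (2 * vx) for the whole flight; that steady climb is the regularity a "keep the angle rising at a constant rate" rule exploits, which is why so little knowledge of the dynamics is needed.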

Your link syntax (2) is broken; to fix it put backslashes before parentheses inside the URL, like this:

[2](http://www.duke.edu/web/mind/level2/faculty/liz/Publications/Brannon%20\(2006\).pdf)

Thanks! Fixed.


I'd be careful about making claims concerning what low-level systems we have no introspective access to can or cannot multiply.

I think I didn't get my point across successfully here. I'm not making any claims about whether some low-level system can or can't multiply, but instead saying that it's not trying to multiply in the first place.

In other words, it's likely wrong to believe that we wouldn't have scope insensitivity, if only evolution could have come up with a way to make some subsystem do multiplication correctly. If that were the reason for scope insensitivity, then it would make sense to think of it as a simple arithmetical error.

Does that help make my point clearer?

How much would you pay to save a random stranger’s life? How much would you pay to save 100 strangers?

If I have the chance to abstract the decision, $0. I expect decreased population to increase the chance of a desirable long-term future (i.e., one without human extinction).

In most cases, I would suffer extreme akrasia when faced with such a decision. I have remarkably strong emotional impulses towards heroism. Not so much the 'being a hero' part. It's more like I experience wrath for whatever 'bad thing' (such as death) is occurring and would combat it even when it made me appear the villain.

Don't worry, the aforementioned akrasia and the rather low weight I place on the topic will prevent me from declaring a jihad on excess population.

In human social situations, the optimal response to a question is seldom to answer it, at least not directly. With morally relevant questions in particular, the correct action is to respond in such a way as to demonstrate in-group status. To do otherwise is either naive or contrary.

This is a rare case in that I actually appreciate the downvote (metaphoric status slap). This demonstrates (to me at least) that what we do in these threads is a very different thing from multiplication, and from the discussion thereof. Given that this sort of discussion is one where I expect impulses towards bias to be amplified, keeping that fact primed is important. (To me at least. There is of course the 'punish those who don't punish the defectors' strategy to be aware of.)


I expect decreased population to increase the chance of a desirable long-term future

A snarky but half-serious question: Why don't you kill yourself?

Notice the 'rather low weight' bit? The value I place on my own life is rather high. I directly value my own existence. Not only that, the expected value of the utility of the rest of the universe is higher with me alive than not. That is, I believe I have a valuable contribution to make. I expect this is a common position.

I expect it is a very common position, which is actually my point. Most of the people you think the world would be better without probably hold the same position, saying they have a valuable contribution to make.

Underlying your point here, and far more so the bizarre question 'Why don't you kill yourself?', are some assumptions. Without those premises your implied arguments are completely irrelevant. If you write those premises out explicitly, it may be obvious from what I have said so far which of them I do not share.

Edit: 'Completely irrelevant' is not accurate. 'Why don't you kill yourself?' is relevant yet trivial.

Yes, it is good to distinguish altruism and cooperation in just the way you did. Cooperation, when done well, should have a rough linearity in lives saved, while feel-good altruism usually does not.

And the only known way to coordinate cooperative activities broadly is through the market.

What do you mean by coordinating cooperative activities broadly? Surely culture also coordinates cooperative activities in numerous ways without the requirement of the market.

Religions, the use of force, ad campaigns, and volunteer organizations can all coordinate cooperative activities that are not already embedded in the culture as well. Not to mention the contributions of evolution in inclining us to cooperate and providing the tools we need to do so.

The market didn't build Rome or Babylon.

Of course the market is flexible in what sorts of cooperation it coordinates, but there are still some types of generally desirable cooperation that it fails to coordinate. And much real-world cooperation--for instance, on the family level, and everything predating the market--relies on other means of coordination. It would be a shame to neglect these.

I have a question for everyone. What is the evolutionary function of "feelings about feelings"? Is there one, or is it more of a spandrel? It seems that our emotions are largely hardwired, and it wouldn't be very useful to feel bad about how they work.

Maybe it serves some signaling purpose if we discuss our feelings with others? (As in "I feel really bad that I don't love you anymore.") But why should others take these signals seriously?

There is an evolutionary function to having feelings about feelings: if your feelings are not helping you, you tend to feel bad about them, and then you might do something to change them. Changing them is quite possible, and people do it frequently.

That just seems related to the evolution of politics. That is, you have to signal feeling bad for asking others to grant you more power (or otherwise do what you want).

Seems like it's mostly signalling. Most feelings about feelings seem to be triggered by a feeling that goes against social norms*. Once the feeling is detected you have a choice of listening to your feeling about that (and pushing yourself in line with your in-group), or coming up with overriding reasons (in which case you will still avoid showcasing your feelings, thus hiding being out of line with your in-group).

I don't think talking about our feelings with others serves much of a signalling purpose (although it could be a signal of group membership, insofar as you accept group values sufficiently to use them to judge your basic feelings), so much as not talking about them does. By feeling bad about a feeling you know not to mention it and lose group membership points.

Although maybe you've already come to a similar conclusion some time in the last 3.3 years.

*at least I can't think off-hand of any cases in which you would feel bad about a feeling which is fully endorsed by your in-group.

Yes, utilons aren't the same as warm fuzzies.

I'm also more in favor of cooperation than altruism, but I want to encourage altruists to replace their natural boredom-prone, diminishing-returns pleasure-seeking with more rational utility boosting. Surely there's some enlightened satisfaction to be had in achieving a "high score" on actual benefits caused, as the Gates Foundation does.

The popular programming-advice site stackoverflow quantifies your "altruistic" contribution in a number (more meaningful somehow than on a mere discussion site such as this; you're actually helping individuals achieve real goals). Many people are extremely addicted to increasing this number. If utility could be as reliably quantified, then altruists would certainly pursue it in the same game-like fashion.

I quit using stackoverflow after a few weeks of achieving nearly the maximum possible daily score; boredom (in the problems I was solving) was a factor, but equally I was disgusted by the habitual nature (easily an hour a day at times) of my reward-seeking.

Re: Do we value saving lives independently of the good feelings we get from it?

Sure: there are the issues of rewards, reputation, and status to consider. The effect of saving lives on the first may scale somewhat linearly, but the effect on the others certainly does not.

Valuing lives on a sliding scale makes sense to me... Saving 1 person for X dollars is good citizenship, 100 lives at 100*X is being taken advantage of.

Maybe that's the long and short of it though. We aren't 'buying' lives, we're 'buying' communality.


The reason saving lives is ~linear while watching the same movie is not comes down to where you are on your utility curve.

Let's assume for a minute that utility over movies and utility over lives are both a square root or something. Any increasing function with diminishing returns will do. The point is that we are going to get this result even if they are exactly the same utility curve.

Watching the movie once gives 1 utilon. Watching it 100 times gives 10 utilons. Easy peasy.

Saving lives is a bit different. We aren't literally talking about the difference between 0 people and n people; we are talking about the difference between a few billion and a few billion + n. Any increasing function with diminishing returns will be approximately linear around that point, so for small games, shut up and multiply.

By this same argument, the fact that lives are locally linear is not much evidence at all (LR ≈ 1) that they are globally linear, because there aren't any coherent utility functions that aren't locally linear at this scale. (Unless you only care about how many lives you, individually, save, which isn't exactly coherent either, but for other reasons.)

(I think the morally proper thing to talk about is people dying, not people living, because we are talking about saving lives, not birthing babies. But the argument is analogous; you get the idea.)
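A quick numeric sketch of this argument (my own illustration; the square root, the utilon figures, and the seven-billion baseline are just stand-ins for whatever the real curve is):

```python
import math

def u(x):
    # Any increasing utility function with diminishing returns will do;
    # the square root is just a convenient stand-in.
    return math.sqrt(x)

# Movies: you start from zero, so returns diminish quickly.
print(u(1) - u(0))            # first viewing:     1.0 utilons
print(u(100) - u(99))         # hundredth viewing: ~0.05 utilons

# Lives: you start from a baseline of billions, where the curve is locally flat,
# so each additional life is worth almost exactly the same as the last.
N = 7_000_000_000
print(u(N + 1) - u(N))        # ~6.0e-06
print(u(N + 100) - u(N + 99)) # ~6.0e-06  (locally linear)
```

The first and hundredth viewing differ in marginal utility by a factor of about twenty, while the marginal utilities of the first and hundredth extra life agree to several decimal places.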

I hope this helps you.

Uh... what?

Sqrt(a few billion + n) is approximately Sqrt(a few billion). Increasing functions with diminishing returns don't approach Linearity at large values; their growth becomes really Small (way sub-linear, or nearly constant) at high values.

This may be an accurate description of what's going on (if, say, our value for re-watching movies falls off slower than our value for saving multiple lives), but it does not at all strike me as an argument for treating lives as linear. In fact, it strikes me as an argument for treating life-saving as More sub-linear than movie-watching.

It's not the overall growth rate of the function that becomes linear at high values; it's the local behavior. We can approximate: sqrt(1000000), sqrt(1001000), sqrt(1002000), sqrt(1003000) by: 1000, 1000.5, 1001, 1001.5. This is linear behavior.
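Spelling out the approximation behind those numbers (my restatement of the step): for $n \ll N$,

$$\sqrt{N+n} \;=\; \sqrt{N}\,\sqrt{1+\frac{n}{N}} \;\approx\; \sqrt{N} + \frac{n}{2\sqrt{N}},$$

so near a large baseline $N$ the curve is linear in $n$ with slope $1/(2\sqrt{N})$. With $N = 10^6$ that slope is $1/2000$, which is exactly the extra $0.5$ per $1000$ in the figures above, even though the function's overall growth is strongly sub-linear.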

You asked for corrections:

"Cooperation" doesn't necessarily imply behaviour conditional on reciprocation.

See the dictionary: http://dictionary.reference.com/browse/cooperation