2 min read · 20th Mar 2010 · 39 comments

Less Wrong readers are familiar with the idea that you can and should put a price on life. Unfortunately, the Big Lie that you can't and shouldn't has big consequences in the current health care debate. Here are some articles on it:

Yvain's blog post here (HT: Vladimir Nesov).
Peter Singer's article on rationing health care here.
Wikipedia here.
Experts and policy makers who debate this issue here.

For those new to Less Wrong, here's the crux of Peter Singer's reasoning as to why you can put a price on life:

The dollar value that bureaucrats place on a generic human life is intended to reflect social values, as revealed in our behavior. It is the answer to the question: "How much are you willing to pay to save your life?" — except that, of course, if you asked that question of people who were facing death, they would be prepared to pay almost anything to save their lives. So instead, economists note how much people are prepared to pay to reduce the risk that they will die. How much will people pay for air bags in a car, for instance? Once you know how much they will pay for a specified reduction in risk, you multiply the amount that people are willing to pay by how much the risk has been reduced, and then you know, or so the theory goes, what value people place on their lives. Suppose that there is a 1 in 100,000 chance that an air bag in my car will save my life, and that I would pay $50 — but no more than that — for an air bag. Then it looks as if I value my life at $50 x 100,000, or $5 million.

The theory sounds good, but in practice it has problems. We are not good at taking account of differences between very small risks, so if we are asked how much we would pay to reduce a risk of dying from 1 in 1,000,000 to 1 in 10,000,000, we may give the same answer as we would if asked how much we would pay to reduce the risk from 1 in 500,000 to 1 in 10,000,000. Hence multiplying what we would pay to reduce the risk of death by the reduction in risk lends an apparent mathematical precision to the outcome of the calculation — the supposed value of a human life — that our intuitive responses to the questions cannot support. Nevertheless, this approach to setting a value on a human life is at least closer to what we really believe — and to what we should believe — than dramatic pronouncements about the infinite value of every human life, or the suggestion that we cannot distinguish between the value of a single human life and the value of a million human lives, or even of the rest of the world. Though such feel-good claims may have some symbolic value in particular circumstances, to take them seriously and apply them — for instance, by leaving it to chance whether we save one life or a billion — would be deeply unethical.
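
To make the arithmetic in Singer's airbag example explicit, here is a minimal sketch in Python (the figures are his hypothetical ones, not empirical data):

    # Value-of-statistical-life sketch based on Singer's hypothetical airbag example.
    # Both figures below are his illustrative numbers, not measured data.
    willingness_to_pay = 50            # dollars you would pay, at most, for the air bag
    risk_reduction = 1 / 100_000       # chance the air bag saves your life
    implied_value_of_life = willingness_to_pay / risk_reduction
    print(f"Implied value of a life: ${implied_value_of_life:,.0f}")  # $5,000,000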

39 comments

I think we already take this for granted around here.

Out of curiosity, how far do you go in consciously putting a price on things? Do you actually have a numerical figure you put on your own life? Would you feel comfortable putting a price on a friendship or a fetus? How much money is a point on Less Wrong worth to you?

Here is a simple way to assess your value-of-life (from an article by Howard).

Imagine you have a deadly disease, certain to kill you. The doctor tells you that there is one cure; it works perfectly and costs you nothing. However, it is very painful, like having wisdom teeth pulled continuously for 24 hours without anesthetic.

However, the doctor says there is one other possible solution. It is experimental, but also certain to work. However, it isn’t free. “How much is it?” you ask. “I forgot,” says the doctor. “So, you write down the most you would pay, I’ll find out the cost, and if the cost is less than you are willing to pay, I’ll sign you up for the treatment. Otherwise, I’ll sign you up for the painful procedure.” What do you write down? Call that dollar amount X. For example, you might decide that you wouldn’t pay more than $50,000.

Now scratch the above paragraph; actually the treatment is free. However, it isn’t perfectly effective. It always cures the disease, but there is a small chance that it will kill you. “What is the chance?” you ask. “I forgot,” says the doctor. “So, you write down the largest risk of death you are willing to take, I’ll find out the risk, and if the risk is less than you are willing to take, I’ll sign you up for the treatment. Otherwise, I’ll sign you up for the painful procedure.” What do you write down? Call that probability Y. For example, you might decide that you aren’t willing to take more than a half-percent chance of death to avoid the pain.

Now you've established that the pain = a loss of $X, and that the pain = a Y probability of death. Transitivity implies that a loss of $X = a Y probability of death. Divide X by Y and you have your value-of-life. Above, $50K / 0.5% = $10M value-of-life.

If you want, you can divide by one million and get a dollar cost for a one-in-a-million chance of death (called a micromort). For example, my micromort value is $12 for small risks (larger risks are of course different; you can’t kill me for $12M). I use this value to make health and safety decisions.
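
For concreteness, here is a minimal sketch of that arithmetic (the $50,000 and 0.5% figures are the example answers above; your own X and Y will differ):

    # Howard-style value-of-life from the two thought experiments above.
    pain_in_dollars = 50_000        # X: the most you would pay to avoid the painful cure
    pain_as_death_risk = 0.005      # Y: the largest death risk you would accept to avoid it
    value_of_life = pain_in_dollars / pain_as_death_risk
    micromort_value = value_of_life / 1_000_000   # price of a one-in-a-million chance of death
    print(f"Value of life: ${value_of_life:,.0f}")        # $10,000,000
    print(f"Micromort value: ${micromort_value:,.2f}")    # $10.00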

Would you accept a 95% chance of death for $36 million?

How much money is a point on Less Wrong worth to you?

That's perhaps a generalization of the question "how much is an hour of your time worth to you?", which was once brought up on Marginal Revolution. Not an easy question to answer.

I bought the book "Your Money or Your Life" on a friend's recommendation, in good part to figure out how I answer that question. It goes into some of the complications like figuring out the real "total cost of ownership" of your job - this blog gives a nice brief illustration of the kind of thinking the book encourages. I haven't really figured it out yet, but just thinking through the issues turns out to be valuable.

I've been toying with the idea of a post that illustrates, on a smaller-scale example, the notion of applying "shut up and multiply" to some biggish life decisions: reporting on how my wife & I decided to stop owning a car after we figured out that it was costing us around 10€ for each day it sat in the garage, which was the vast majority of days in each year. No-one around us in similar situations seems to question that having a car is the "normal" choice.

"How much is this really worth" or "how much does this really cost" is an interesting question, one you can't always answer with full precision, but the attempt is often in itself instructive and wortwhile. An hour or two spent on back-of-the-envelope calculations, maybe even a little Excel spreadsheet, can be a great way to identify less-than-fully-rational decisions you're making just by not thinking about them.

Morendil, my guess is that you don't question whether having a car is the "normal" choice, either, but you have started to question whether it's the efficient choice. Most people don't evaluate the economic efficiency of owning a car precisely because everyone agrees that it's normal to own a car, and people often just do what they see as normal without stopping to think about it.

Incidentally, zipcar.com, an hourly car rental service, sometimes runs ads that break down the cost of using a Zipcar for your driving needs on an annual basis vs. the cost of owning a car. I checked the math on one of those ads and found it persuasive; I've been using Zipcar for 3 years now and have never felt the need to own a car.

That doesn't follow. From our recognition of the finite value of "very valuable things" like our lives and friendships, it does not follow that we consciously put a specific price on things. Rather, it's a recognition that, for any kind of rational (not-self-defeating) behavior, our actions must be as if they didn't put an infinite value (or price) on anything.

And I hate to say it, but this article is really just telling the LW crowd things it already knows, and, more importantly, already appreciates beyond merely "knowing it in the abstract".

And I hate to say it, but this article is really just telling the LW crowd things it already knows, and, more importantly, already appreciates beyond merely "knowing it in the abstract".

I think you're overestimating the level most LessWrong viewers are on. And anyway, dismissing good posts about elementary rationality stuff "because things discussed are already known" does sound a bit worrisome. We all start at the bottom.

I agree. But note that I was careful to say:

this article is really just telling the LW crowd things it already knows, and, more importantly, already appreciates beyond merely "knowing it in the abstract".

I'm fine with articles that tell us stuff we already know, or that someone wrote an article about before. No one's perfect, we need reminders, the article might present it with a better perspective or explanation, etc. What makes this article different is that LWers don't just know it, they actually appreciate the insight, i.e. put it into practice and successfully avoid errors based on assuming something has infinite value.

And, for that matter, I don't think that any of the newbies, except maybe the really "out-there ones", have made such an error.

LWers don't just know it, they actually appreciate the insight [that life has a price], i.e. put it into practice and successfully avoid errors based on assuming something has infinite value. And, for that matter, I don't think that any of the newbies, except maybe the really "out-there ones", have made such an error.

Would this imply no Less Wrong readers believe in God?

Also, shedding the belief that life has infinite value is not a quick and simple process. It's more like the beginning of an ongoing process. It requires pricing many things we consider priceless, where plenty of cognitive biases will get in the way.

P.S. You're saying Clippy is "out there"? :)

No need to worry about hating to say it - I want your truthful opinion. I've used your feedback to trim the article back appropriately.

Okay, but my other complaint was going to be that your article is mostly a quote from someone else, and now you've made it even more so!

There's a set of useful links for anyone wanting to investigate the issue; it took a while to find the useful ones. In addition, some people (morendil, NancyLebovitz, bill) have made some useful contributions to this topic in the comments. If you and others still don't think this page is useful, I'll delete the article.

Because of the discussion, and the concise summary and collection of highly applicable links, I recommend against deleting.

Thanks, I tweaked the post to better reflect that. It turns out (HT: Vladimir Nesov) Yvain posted about this topic on his blog, so I also included a link to that from this post.

A couple of links about how people sometimes think about the question.

People are less content if deals about sacred values involve money. They'd rather have a concession from the other side's sacred values.

Unfortunately, it's hard to see how this can apply to most health care questions, but here's a case where it can:

The Israeli "no give, no take" organ donation program.

"What is a cynic? A man who knows the price of everything, and the value of nothing." "What is a cynic? A person who believes nothing has an infinite price."

How does the former imply the latter? Valuing something does not imply an infinite price, and the moral of the Oscar Wilde quote seemed to be more along the lines of "a cynic is someone who knows the price of everything, but doesn't really enjoy (or admit enjoying, even to themselves) anything".

Your point is valid. Sometimes the genius and the curse of a pithy quote is that you find it really meaningful, and then later find out someone else has a really different interpretation of the same quote! It's an interesting topic in itself whether the ambiguity present in literature and poetry - which is regarded as a good thing - encourages us to simply hear what we want to hear. To me, the connotation of Wilde's quote was that it's a bad thing to be aware that everything has a price, and that the truly valuable things, e.g. "the smile of a baby" (the cuteness of a bunny?), are priceless. That's the connotation I object to. I'd agree that it's not the one and only connotation.

To me, the connotation of Wilde's quote was that it's a bad thing to be aware that everything has a price, and the truly valuable things, e.g. "the smile of a baby" (the cuteness of a bunny?) cannot be priced.

That could be implied by larger context, but the quote, as it stands, only expresses the idea that prices and values are separate things. It could be that there was some meaningful conversion chart, or it could be that there wasn't. If we take that there isn't any chart for some things, it still doesn't imply that the price was infinite, it just means that talking about price doesn't make any sense. Analogy would be measuring happiness in kilograms. Lack of conversion chart doesn't imply that happiness means infinite kilograms.

The disconnect between values and prices could be described as something like "It has a high price because many people value it", not the other way around. Values are why we do things, and losing sight of those, staring only at price tags without understanding why there are prices in the first place, that's what Wilde seems to describe cynicism to be.

Based on your feedback I've entirely removed the Wilde quote from the article. I see no point perpetuating a flawed concept.

Agreed, the quote in my opinion captures two important truths: that price and value are not the same thing and that value is subjective. It suggests that the cynic errs in conflating price and value and further that the cynic is also (or as a consequence) poor or deficient at making their own value judgements.

In practice, anyone who ever purchases or trades anything implicitly recognizes that value is subjective and not the same thing as price, but the confusion continues to trip up a lot of people.

That's well argued. I have included a footnote to your comment where I state the interpretation of the quote in the original post.

My interpretation was to read "value" as roughly meaning "subjective utility", which indeed does not, in general, have a meaningful exchange rate with money.


I like how Robin Hanson points out that healthcare spending gets messed up by people's need to signal loyalty to each other.

When you attach a price to medicine you are signaling a limit to your loyalty.

Reposting this from the Open Thread, since it's relevant here:

QALYs and how they are arrived at. "Quality Adjusted Life Years" are the measure used by UK drug approval bodies in deciding which treatments to approve. They aim to spend no more than £30,000 per QALY.
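
For illustration, a minimal sketch of how a cost-per-QALY comparison against that threshold might look (the treatment figures are hypothetical):

    # Cost-per-QALY comparison against the quoted 30,000 GBP threshold.
    # The treatment cost and QALY gain are hypothetical.
    threshold_per_qaly = 30_000      # GBP per QALY
    treatment_cost = 12_000          # GBP, hypothetical
    qalys_gained = 0.5               # hypothetical quality-adjusted life years gained
    cost_per_qaly = treatment_cost / qalys_gained
    verdict = "within threshold" if cost_per_qaly <= threshold_per_qaly else "above threshold"
    print(f"Cost per QALY: £{cost_per_qaly:,.0f} ({verdict})")   # £24,000 (within threshold)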

How much you value your own life is of course different from how much society values its citizens' lives. Then again, decision theory suggests the latter should maybe come to dominate the former.

I thought that utility not being linearly scalable was well accepted.

Thanks, I updated the post with a link to Yvain's article.

Extraneous script tag in the middle of the first para from Peter Singer; extraneous space character in "redu ce" in that para.

It's a code bug. Re-editing the document inserts the script.

Ouch, nasty one. It seems to be gone now.

I went to the raw HTML view and manually removed it.

Aggregate pricing only carries a price signal when participation is voluntary. In that case the aggregate price reflects how the participant group values the service.

When participation is not voluntary, the aggregate price only reflects how the controlling interest values the service. The non-voluntary members suffer the injustice of paying more than they would like for a service.

The problem is forced participation.


Colonoscopy is a good example of a procedure which, on the macro level, is low on the list of things that save lives. A planner, who will always be acting with limited resources, would be rational not to offer this service.

Colonoscopies do save lives, though, and individuals have different medical budgets. Given different budgets, it is rational for some people to get colonoscopies while others should not.

I speculate that it is very rare to find a doctor who, if you ask whether you should get a colonoscopy, will ask how much money you make or what your medical budget is before giving you an answer.

A colonoscopy is a good idea when you consistently have lasting pain from defecation.

The More You Know ...


JACK (V.O.) I'm a recall coordinator. My job is to apply the formula.

....

JACK (V.O.) Take the number of vehicles in the field, (A), and multiply it by the probable rate of failure, (B), then multiply the result by the average out-of-court settlement, (C). A times B times C equals X...

JACK If X is less than the cost of a recall, we don't do one.

BUSINESS WOMAN Are there a lot of these kinds of accidents?

JACK Oh, you wouldn't believe.

BUSINESS WOMAN ... Which... car company do you work for?

JACK A major one.
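
For what it's worth, a minimal sketch of the formula Jack describes, with made-up figures:

    # The recall formula from the quote: X = A * B * C, compared with the recall cost.
    # All figures are invented for illustration.
    vehicles_in_field = 200_000        # A
    probable_failure_rate = 0.0005     # B
    average_settlement = 1_000_000     # C, dollars
    x = vehicles_in_field * probable_failure_rate * average_settlement
    recall_cost = 150_000_000          # hypothetical, dollars
    do_recall = x >= recall_cost
    print(f"X = ${x:,.0f}; recall issued: {do_recall}")   # X = $100,000,000; recall issued: False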
