Less Wrong readers are familiar with the idea that you can, and should, put a price on life. Unfortunately, the Big Lie that you can't and shouldn't has big consequences in the current health care debate. Here are some articles on it:
Yvain's blog post here (HT: Vladimir Nesov).
Peter Singer's article on rationing health care here.
Wikipedia here.
Experts and policy makers who debate this issue here.
For those new to Less Wrong, here is the crux of Peter Singer's argument for why you can put a price on life:
The dollar value that bureaucrats place on a generic human life is intended to reflect social values, as revealed in our behavior. It is the answer to the question: "How much are you willing to pay to save your life?" — except that, of course, if you asked that question of people who were facing death, they would be prepared to pay almost anything to save their lives. So instead, economists note how much people are prepared to pay to reduce the risk that they will die. How much will people pay for air bags in a car, for instance? Once you know how much they will pay for a specified reduction in risk, you multiply the amount that people are willing to pay by how much the risk has been reduced, and then you know, or so the theory goes, what value people place on their lives. Suppose that there is a 1 in 100,000 chance that an air bag in my car will save my life, and that I would pay $50 — but no more than that — for an air bag. Then it looks as if I value my life at $50 x 100,000, or $5 million.
The theory sounds good, but in practice it has problems. We are not good at taking account of differences between very small risks, so if we are asked how much we would pay to reduce a risk of dying from 1 in 1,000,000 to 1 in 10,000,000, we may give the same answer as we would if asked how much we would pay to reduce the risk from 1 in 500,000 to 1 in 10,000,000. Hence multiplying what we would pay to reduce the risk of death by the reduction in risk lends an apparent mathematical precision to the outcome of the calculation — the supposed value of a human life — that our intuitive responses to the questions cannot support. Nevertheless, this approach to setting a value on a human life is at least closer to what we really believe — and to what we should believe — than dramatic pronouncements about the infinite value of every human life, or the suggestion that we cannot distinguish between the value of a single human life and the value of a million human lives, or even of the rest of the world. Though such feel-good claims may have some symbolic value in particular circumstances, to take them seriously and apply them — for instance, by leaving it to chance whether we save one life or a billion — would be deeply unethical.
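Singer's revealed-preference calculation above can be sketched in a few lines of Python, using the hypothetical air-bag numbers from the quoted passage (a $50 willingness to pay for a 1-in-100,000 risk reduction):

```python
# Revealed-preference estimate of the value of a statistical life,
# using the air-bag example from Singer's passage above.

willingness_to_pay = 50          # most you'd pay for the air bag, in dollars
risk_reduction_denominator = 100_000  # air bag saves your life with 1-in-100,000 chance

# Multiply what you'd pay by the inverse of the risk reduced,
# as described in the passage: $50 x 100,000.
value_of_life = willingness_to_pay * risk_reduction_denominator

print(f"Implied value of life: ${value_of_life:,}")  # $5,000,000
```

As the passage notes, the precision here is only apparent: the inputs come from intuitions that do not reliably distinguish between very small risks.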
Here is a simple way to assess your value-of-life (from an article by Howard).
Imagine you have a deadly disease, certain to kill you. The doctor tells you that there is one cure; it works perfectly and costs you nothing. However, it is very painful, like having wisdom teeth pulled continuously for 24 hours without anesthetic.
However, the doctor says there is one other possible solution. It is experimental, but also certain to work. However, it isn’t free. “How much is it?” you ask. “I forgot,” says the doctor. “So, you write down the most you would pay, I’ll find out the cost, and if the cost is less than you are willing to pay, I’ll sign you up for the treatment. Otherwise, I’ll sign you up for the painful procedure.” What do you write down? Call that dollar amount X. For example, you might decide that you wouldn’t pay more than $50,000.
Now scratch the above paragraph; actually the treatment is free. However, it isn’t perfectly effective. It always cures the disease, but there is a small chance that it will kill you. “What is the chance?” you ask. “I forgot,” says the doctor. “So, you write down the largest risk of death you are willing to take, I’ll find out the risk, and if the risk is less than you are willing to take, I’ll sign you up for the treatment. Otherwise, I’ll sign you up for the painful procedure.” What do you write down? Call that probability Y. For example, you might decide that you aren’t willing to take more than a half-percent chance of death to avoid the pain.
Now you’ve established that Pain = a loss of $X, and that Pain = a probability Y of death. Transitivity implies that a loss of $X = a probability Y of death. Divide X by Y and you have your value-of-life. In the example above, $50,000 / 0.5% = $50,000 / 0.005 = $10M.
If you want, you can divide by one million and get a dollar cost for a one-in-a-million chance of death (called a micromort). For example, my micromort value is $12 for small risks (larger risks are of course different; you can’t kill me for $12M). I use this value to make health and safety decisions.
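Howard's two-question procedure reduces to one division, and the micromort value to one more. A minimal sketch, using the example numbers from the paragraphs above ($50,000 and a half-percent risk; the $12 figure is the author's personal value, not computed here):

```python
# Howard's value-of-life procedure, using the example numbers above.

x = 50_000   # X: most you'd pay to avoid the painful cure, in dollars
y = 0.005    # Y: largest probability of death you'd accept to avoid it

# Transitivity: a loss of $X and a probability Y of death are equally bad,
# so dollars per unit probability of death is X / Y.
value_of_life = x / y                    # $10,000,000 in this example

# A micromort is a one-in-a-million chance of death, so divide by a million.
micromort_value = value_of_life / 1_000_000   # $10 per micromort here

print(f"Value of life: ${value_of_life:,.0f}")
print(f"Micromort value: ${micromort_value:,.2f}")
```

As the text stresses, this price is only meaningful for small risks; it does not extrapolate linearly to large probabilities of death.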
Would you accept a 95% chance of death for $36 million?