Less Wrong readers are familiar with the idea that you can, and should, put a price on life. Unfortunately, the Big Lie that you can't and shouldn't has big consequences in the current health care debate. Here are some articles on it:
Yvain's blog post here (HT: Vladimir Nesov).
Peter Singer's article on rationing health care here.
Wikipedia here.
Experts and policy makers who debate this issue here.
For those new to Less Wrong, here's the crux of Peter Singer's reasoning as to why you can put a price on life:
The dollar value that bureaucrats place on a generic human life is intended to reflect social values, as revealed in our behavior. It is the answer to the question: "How much are you willing to pay to save your life?" — except that, of course, if you asked that question of people who were facing death, they would be prepared to pay almost anything to save their lives. So instead, economists note how much people are prepared to pay to reduce the risk that they will die. How much will people pay for air bags in a car, for instance? Once you know how much they will pay for a specified reduction in risk, you multiply the amount that people are willing to pay by how much the risk has been reduced, and then you know, or so the theory goes, what value people place on their lives. Suppose that there is a 1 in 100,000 chance that an air bag in my car will save my life, and that I would pay $50 — but no more than that — for an air bag. Then it looks as if I value my life at $50 x 100,000, or $5 million.
The theory sounds good, but in practice it has problems. We are not good at taking account of differences between very small risks, so if we are asked how much we would pay to reduce a risk of dying from 1 in 1,000,000 to 1 in 10,000,000, we may give the same answer as we would if asked how much we would pay to reduce the risk from 1 in 500,000 to 1 in 10,000,000. Hence multiplying what we would pay to reduce the risk of death by the reduction in risk lends an apparent mathematical precision to the outcome of the calculation — the supposed value of a human life — that our intuitive responses to the questions cannot support. Nevertheless, this approach to setting a value on a human life is at least closer to what we really believe — and to what we should believe — than dramatic pronouncements about the infinite value of every human life, or the suggestion that we cannot distinguish between the value of a single human life and the value of a million human lives, or even of the rest of the world. Though such feel-good claims may have some symbolic value in particular circumstances, to take them seriously and apply them — for instance, by leaving it to chance whether we save one life or a billion — would be deeply unethical.
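Singer's arithmetic, and the scope-insensitivity problem he raises, can be sketched in a few lines. This is just an illustration of the standard willingness-to-pay calculation he describes (the function name and the $10 figures in the second example are made up for the sake of the demonstration):

```python
def implied_value_of_life(willingness_to_pay, risk_before, risk_after):
    """Value of a statistical life: the amount you'd pay for a risk
    reduction, divided by the size of that reduction."""
    return willingness_to_pay / (risk_before - risk_after)

# Singer's air-bag example: $50 to remove a 1-in-100,000 chance of dying.
# $50 / (1/100,000) = $50 x 100,000 = $5 million.
print(implied_value_of_life(50, 1 / 100_000, 0))

# The scope-insensitivity problem: suppose someone quotes the same $10
# (a hypothetical figure) for both of the risk reductions Singer mentions.
print(implied_value_of_life(10, 1 / 1_000_000, 1 / 10_000_000))  # ~$11.1 million
print(implied_value_of_life(10, 1 / 500_000, 1 / 10_000_000))    # ~$5.3 million
```

The same stated payment yields implied values that differ by a factor of two, which is Singer's point: the method inherits whatever insensitivity to small probabilities our intuitive answers have.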
That doesn't follow. From our recognition of the finite value of "very valuable things" like our lives and friendships, it does not follow that we consciously put a specific price on them. Rather, it's a recognition that any rational (non-self-defeating) agent must act as if it didn't put an infinite value (or price) on anything.
And I hate to say it, but this article is really just telling the LW crowd things it already knows, and, more importantly, already appreciates beyond merely "knowing it in the abstract".
I think you're overestimating the level most Less Wrong readers are on. And anyway, dismissing good posts about elementary rationality "because the things discussed are already known" does sound a bit worrisome. We all start at the bottom.