Here's a strategy that places infinite value on life.
List all interventions that increase the life available and ask: can you buy them all? If yes, do so. If not, check all possible combinations of purchases and pick the combination that provides the maximum total life. If multiple combinations are tied for maximum total life, pick the cheapest one.
Is this how people spend money when the life they're saving is their own?
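A minimal sketch of that strategy (the intervention names, costs, and life-years are all made up for illustration; it just brute-forces every purchase combination, which is exponential in the number of interventions):

```python
from itertools import combinations

# Hypothetical interventions: (name, cost in dollars, life-years gained).
interventions = [
    ("vaccines", 40_000, 120),
    ("bed nets", 25_000, 90),
    ("screening", 30_000, 70),
    ("surgery", 60_000, 150),
]
budget = 100_000

def best_purchase(options, budget):
    """Exhaustively check every affordable subset of options, keeping the one
    with the most total life-years; ties go to the cheaper subset."""
    best = ((), 0, 0)  # (chosen subset, life-years, cost)
    for r in range(len(options) + 1):
        for subset in combinations(options, r):
            cost = sum(c for _, c, _ in subset)
            if cost > budget:
                continue
            years = sum(y for _, _, y in subset)
            _, best_years, best_cost = best
            if years > best_years or (years == best_years and cost < best_cost):
                best = (subset, years, cost)
    return best

chosen, years, cost = best_purchase(interventions, budget)
print([name for name, _, _ in chosen], years, cost)
```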
Just an occasional reminder that if you value something so much that you don’t want to destroy it for nothing, then you’ve got to put a finite dollar value on it.
The multiple negatives in this were hard for me to interpret. Is "don't want to destroy it for nothing" closer to "want to preserve it at any cost" or to "want to ensure we get something for destroying it"? I presume the first, but that's not the naive reading of the sentence.
In any case, I agree with the premise, and usually hear it framed as "you can't value multiple things infinitely". "human lives" is not a valid infinity target (because obviously that prevents any rational decision-making between two unpleasant options), but one specific life may be.
Edit: on further reflection, I'm surprised that I forgot to mention my preferred way of thinking about value: in humans (and perhaps in general decision mechanisms), it's always relative, and generally marginal. It's about the comparison of two (or more) differences from the status quo. "Value" (in these examples and context) is only defined in terms of decisions and tradeoffs - it just doesn't matter what you value or how much you value it if you can have everything. The valuation only comes into play when you have to give up something you value to get something you value more.
There's a debate between Tyler Cowen and philosopher Agnes Callard around valuing human lives with a number. Tyler Cowen starts by saying that it's actually a complicated issue but that having some bounds that depend on circumstances is useful. Agnes Callard then says that you don't need to put any value on human lives at all to make practical tradeoffs because you can think about obligations.
After hearing that exchange, the position that you have to put monetary values on human lives to be able to make good decisions seems questionable to me, and a bit naive about the alternative ways such decisions can be made.
Thinking about social actors making promises to each other and then having obligations to deliver on those promises is a valid model for thinking about who makes the effort to save people's lives.
It seems like "agent X puts a particular dollar value on human life" might be ambiguous between "agent X acts as though human lives are worth exactly N dollars each" and "agent X's internal thoughts explicitly assign a dollar value of N to a human life". I wonder if that's causing some confusion surrounding this topic. (I didn't watch the linked video.)
If the care chooser is maximising expected life-years, i.e. favours saving the young, then he can be "inconsistent".
Also, if you had enough money you would just buy all the options. The only options that get dropped are those that conflict with options that save more.
If somebody truly considered a life to be worth some dollar amount and their budget were increased, they would still pick the same options but end up with a bigger pile of cash. Given that this "considered worth" floats with the budget, I doubt that treating it as a dollar amount is a good idea.
The opportunity cost is still real, though. If you use a doctor to save someone, you can't simultaneously use them to save another. So in assigning a doctor or equipment you are simultaneously saving and endangering lives. And being too stubborn about your decision-making just means the endangerment side of things grows without bound.
I'm a bit confused by "you" in the claim. If we're talking about individuals, I'm not at all sure one must put a monetary value on something. That seems to suggest nominal values are more accurate than the real, subjective personal values those monetary units represent.
In a more general setting, markets for instance, I think a stronger case can be made, but for any given individual I am not certain it would be required.
Broadening it out further, I think the strongest case is where multiple people are trying to work together toward some end.
Robert must behave like somebody assigning some consistent dollar value to saving a human life.
Note that this number provides only a lower bound on Robert's revealed preference regarding the trade-off and that it will vary with the size of the budget.
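As a rough sketch of how that lower bound falls out of a greedy spend (hypothetical prices; each option is assumed to save exactly one life, sidestepping the knapsack issues mentioned below), the dearest life actually bought rises with the budget:

```python
def revealed_price_per_life(prices, budget):
    """Greedy spend: buy the cheapest lives first until the budget runs out.
    Returns the lives saved and the price of the dearest life actually bought,
    i.e. a lower bound on the implied dollar value of a life."""
    spent, dearest, lives = 0, None, 0
    for price in sorted(prices):
        if spent + price > budget:
            break
        spent += price
        dearest = price
        lives += 1
    return lives, dearest

prices = [2_000, 5_000, 9_000, 40_000, 250_000]  # hypothetical cost to save one life
for budget in (10_000, 60_000, 500_000):
    print(budget, revealed_price_per_life(prices, budget))
```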
One could imagine an alternative scenario where there is a fluctuating bankroll (perhaps with a fixed rate of increase — maybe even a rate proportional to its current size) and possible interventions are drawn sequentially from some unknown distribution. In this scenario Robert can't just use the greedy algorithm until he runs out of budget (modulo possible knapsack considerations), but would have to model the distribution of interventions and consider strategies such as "save no lives now, invest the money, and save many more lives later".
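A toy version of that sequential scenario might look like the sketch below (the growth rate, cost distribution, and lives-per-intervention figures are all made up); it only contrasts "buy whenever affordable" with "let the bankroll compound first", rather than doing any real modelling of the intervention distribution:

```python
import random

def simulate(strategy, horizon=50, growth=1.05, start=10_000, seed=0):
    """Toy sequential model: each period the bankroll grows, one intervention
    is drawn at random, and the strategy decides whether to buy it."""
    rng = random.Random(seed)
    bankroll, lives = start, 0
    for t in range(horizon):
        bankroll *= growth
        cost = rng.lognormvariate(9, 1)   # hypothetical cost distribution
        saved = rng.randint(1, 5)         # hypothetical lives per intervention
        if cost <= bankroll and strategy(t, cost, saved, bankroll):
            bankroll -= cost
            lives += saved
    return lives

spend_now = lambda t, cost, saved, bankroll: True           # greedy: buy whenever affordable
wait_then_spend = lambda t, cost, saved, bankroll: t >= 30  # invest first, spend later

print("greedy:", simulate(spend_now))
print("invest first:", simulate(wait_then_spend))
```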
Just an occasional reminder that if you value something so much that you don’t want to destroy it for nothing, then you’ve got to put a finite dollar value on it. Things just can’t be infinitely more important than other things, in a world where possible trades weave everything together. A nice illustration from Arbital:
In particular, suppose there is no dollar value such that you took all of the opportunities to pay less than that to save a life and none of the opportunities to pay more (setting aside complications with lives only being available at a given price in bulk). Then there is at least one pair of opportunities where you could swap one that you took for one that you didn't take and either save more lives, or save the same number of lives and keep more money, which, at least in a repeated game like this, seems likely to save more lives in expectation.
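A concrete (made-up) instance of that swap argument, assuming each opportunity saves exactly one life at its stated price:

```python
# Hypothetical opportunities: dollar price to save one life each.
taken   = [3_000, 12_000, 50_000]   # opportunities we paid for
skipped = [8_000, 20_000]           # opportunities we declined

# No single threshold separates these: we paid 50,000 while declining 8,000.
# Swapping the dearest taken opportunity for the cheapest skipped one saves
# the same number of lives and frees money for more lives next time.
if max(taken) > min(skipped):
    freed = max(taken) - min(skipped)
    print(f"Swap keeps the same number of lives saved and ${freed:,} extra.")
```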
I used to be more feisty in my discussion of this idea: