Risk-Aversion Sets Life Value
Many pandemic cost-benefit analyses estimate larger containment benefits than did I, mainly due to larger costs for each life lost. Surprised to see this, I’ve been reviewing the value of life literature. The key question: how much money (or resources) should you, or we, be willing to pay to gain more life? Here are five increasingly sophisticated views:
Infinite – Pay any price for any chance to save any human life.
Value Per Life – $ value per human life saved.
Quality Adjusted Life Year (QALY) – $ value per life year saved, adjusted for quality.
Life Year To Income Ratio – Value ratio between a year of life and a year of income.
Risk Aversion – Life to income ratio comes from elasticity of utility w.r.t. income.
The first view, of infinite value, is the simplest. If you imagine someone putting a gun to your head, you might imagine paying any dollar price to not be shot. There are popular sayings to this effect, and many even call this a fundamental moral norm, punishing those who visibly violate it. For example, a hospital administrator who could save a boy’s life, but at great expense, is seen as evil and deserving of punishment, if he doesn’t save the boy. But he is seen as almost as evil if he does save the boy, but thinks about his choice for a while.
Which shows just how hypocritical and selective our norm enforcement can be, as we all make frequent choices that express finite values on life. Every time we don’t pay all possible costs to use the absolutely safest products and processes, because they cost more in terms of time, money, or quality of output, we reveal that we do not put infinite value on life.
The second view, where we put a specific dollar value on each life, has long been shunned by officials, who deny they do any such thing, even though they in effect do. Juries have awarded big claims against firms that explicitly used value of life calculations in deciding not to adopt safety features, even when they used high values of life. Yet it is easy to show that we can have both more money and save more lives if we are more consistent about the price we pay for lives in the many different death-risk-versus-cost choices that we make.
Studies that estimate the monetary price we are willing to pay to save a life have long shown puzzlingly great variation across individuals and contexts. Perhaps in part because the topic is politically charged. Those who seek to justify higher safety spending, stronger regulations, or larger court damages re medicine, food, environmental, or job accidents tend to want higher estimates, while those who seek to justify less and weaker of such things tend to want lower estimates.
The third view says that the main reason to not die is to gain more years of life. We thus care less about deaths of older and sicker folks, who have shorter remaining lives if they are saved now from death. Older people are often upset to be thus less valued, and Congress put terms into the US ACA (Obamacare) medicine bill forbidding agencies from using life years saved to judge medical treatments. Those disabled and in pain can also be upset to have their life years valued less, due to lower quality, though discounting low-quality years is exactly how the calculus says that it is good to prevent disability and pain, as well as death.
It can make sense to discount life years not only for disability, but also for distance in time. That is, saving you from dying now instead of a year from now can be worth more than saving you from dying 59 years from now, instead of 60 years from now. I haven’t seen studies which estimate how much we actually discount life years with time.
You can’t spend more to prevent death or disability than you have. There is thus a hard upper bound on how much you can be willing to pay for anything, even your life. So if you spend a substantial fraction of what you have for your life, your value of life must at least roughly scale with income, at least at the high or low end of the income spectrum. Which leads us to the fourth view listed above, that if you double your income, you double the monetary value you place on a QALY. Of course we aren’t talking about short-term income, which can vary a lot. More like a lifetime income, or the average long-term incomes of the many associates who may care about someone.
The fact that medical spending as a fraction of income tends to rise with income suggests that richer people place proportionally more value on their life. But in fact meta-analyses of the many studies on value of life seem to suggest that higher income people place proportionally less value on life. Often as low as value of life going as the square root of income.
Back in 1992, Lawrence Summers, then Chief Economist of the World Bank, got into trouble for approving a memo which suggested shipping pollution to poor nations, as lives lost there cost less. People were furious at this “moral premise”. So maybe studies done in poor nations are being slanted by the people there to get high values, to prove that their lives are worth just as much.
Empirical estimates of the value ratio of life relative to income still vary a lot. But a simple theoretical argument suggests that variation in this value is mostly due to variation in risk-aversion. Which is the fifth and last view listed above. Here’s a suggestive little formal model. (If you don’t like math, skip to the last two paragraphs.)
Assume life happens at discrete times t. Between each t and t+1, there is a probability p(e_t) of not dying, which is increasing in death prevention effort e_t. (To model time discounting, use δ*p here instead of p.) Thus from time t onward, expected lifespan is L_t = 1 + p(e_t)*L_{t+1}. Total value from time t onward is similarly given by V_t = u(c_t) + p(e_t)*V_{t+1}, where utility u(c_t) is increasing in that time’s consumption c_t.
Consumption c_t and effort e_t are constrained by budget B, so that c_t + e_t ≤ B. If budget B and functions p(e) and u(c) are the same at all times t, then the unique interior optimums of e and c are as well, as are L and V. Thus we have L = 1/(1-p), and V = u/(1-p) = u*L.
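A tiny numerical sketch can confirm these fixed-point formulas. The functional forms below (u(c) = √c, p(e) = e/(1+e)) and the budget B = 10 are illustrative assumptions of mine, not part of the model above; the script just grid-searches the budget split between consumption and effort and checks that V = u*L at the optimum.

```python
# Minimal sketch of the stationary model: pick c, e with c + e = B
# to maximize V = u(c) / (1 - p(e)).  Functional forms are assumed
# for illustration: u(c) = sqrt(c), p(e) = e / (1 + e).
import math

def u(c):
    return math.sqrt(c)

def p(e):
    return e / (1.0 + e)

B = 10.0

# Brute-force grid search over the budget split between c and e.
best = max(
    ((u(B - e) / (1.0 - p(e)), e) for e in [i * B / 10000 for i in range(1, 10000)]),
    key=lambda t: t[0],
)
V, e_star = best
c_star = B - e_star
L = 1.0 / (1.0 - p(e_star))            # expected remaining lifespan
assert abs(V - u(c_star) * L) < 1e-9   # V = u * L at the optimum
print(f"c*={c_star:.3f}, e*={e_star:.3f}, L={L:.2f}, V={V:.2f}")
```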
In this model, the life to income value ratio is the value of increasing L_t from L to L+x, divided by the value of increasing c_t from c to c(1+x), for x small and some particular time t. That is:
(dL * dV/dL) / (dc * dV/dc) = x * u / (x * c * du/dc) = [ c * u’(c) / u(c) ]^(-1).
Which is just the inverse of the elasticity of u with respect to c.
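A quick numerical check of this identity, under an assumed power-law utility u(c) = c^a: a small proportional gain in life years and the same proportional gain in one period’s consumption should trade off at the ratio 1/a, whatever the consumption level.

```python
# Check that the life-to-income value ratio equals the inverse
# elasticity of utility.  u(c) = c**a is an assumed power law.
a = 0.5        # elasticity of utility w.r.t. consumption
c = 50.0       # consumption level (arbitrary)
x = 1e-7       # small proportional change

u = lambda c: c ** a

value_of_extra_life = x * u(c)                  # x more life-years, each worth u(c)
value_of_extra_income = u(c * (1 + x)) - u(c)   # consumption bump in one period

ratio = value_of_extra_life / value_of_extra_income
print(ratio)   # ≈ 1/a = 2
```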
This non-linear (concave) shape of the utility function u(c) is also what produces risk-aversion. Note that (relative) risk aversion is usually defined as -c*u″(c)/u’(c), so as to be invariant under affine transformations of u and c. Here we don’t need such an invariance, as we have a clear zero level of c: the level at which u(c) = 0, so that one is indifferent between death and life with that consumption level.
So in this simple model, the life to income value ratio is just the inverse of the elasticity of the utility function. If elasticity is constant (as with power-law utility), then the life to income ratio is independent of income. A risk-neutral agent puts an equal value on a year of life and a year of income, while an agent with square root utility puts twice as much value on a year of life as a year of income. With no time discounting, the US EPA value of life of $10M corresponds to a life year worth over four times average US income, and thus to a power law utility function where the power is less than one quarter.
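Here is that arithmetic spelled out. The $10M value of life is the EPA figure cited above; the 40 remaining life years and $60K average income are illustrative assumptions of mine, just to show the calculation.

```python
# Back-of-envelope arithmetic for the claim above.  The $10M value of
# life is the cited EPA figure; remaining life-years and average income
# are illustrative assumptions.
value_of_life = 10e6      # EPA value of a statistical life, in dollars
remaining_years = 40      # assumed remaining life-years, no discounting
avg_income = 60e3         # assumed average US annual income, in dollars

life_year_value = value_of_life / remaining_years   # dollars per life-year
ratio = life_year_value / avg_income                # life-to-income value ratio
implied_power = 1.0 / ratio                         # power a with u(c) = c**a
print(f"life-year value ${life_year_value:,.0f}, ratio {ratio:.2f}, power {implied_power:.2f}")
```

With these assumed numbers a life year comes out at over four times income, implying a power below one quarter.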
This reduction of the value of life to risk aversion (really concavity) helps us understand why the value of life varies so much over individuals and contexts, as we also see puzzlingly large variation and context dependence when we measure risk aversion. I’ll write more on that puzzle soon.
Added 23June: The above model applies directly to the case where, by being alive, one can earn budget B in each time period to spend in that period. This model can also apply to the case where one owns assets A, assets which when invested can grow from A to rA in one time period, and be gambled at fair odds on whether one dies. In this case the above model applies for B = A*(1-p/r).
Added 25June: I think the model gives the same result if we generalize it in the following way: B_t and p_t(e_t,c_t) vary with time, but in a way such that the optimal c_t = c is constant in time, and ∂p_t/∂c_t = 0 at the actual values of c_t, e_t.