For Discount Rates
(A long retort to Eliezer’s post Against Discount Rates.)
Imagine you are a multi-billionaire who, to benefit mankind, will construct an asteroid deflector. Since your budget is limited, the deflector cannot be perfect. And trying to deflect an asteroid heading toward one place may mean increasing the risk it will hit somewhere else. So you must decide how much protection to offer different parts of the globe. Let us assume that your protection can be described by a single parameter P at each place X — roughly how much you have reduced the probability that an impact there will cause a physical effect (e.g. temperature) there above a certain threshold.
Initially you decide that it would be biased to prefer some places X on Earth to others, and so you decide to give the same protection level P(X) = P to all places. To make clear to everyone the magnitude of your generosity, you publish a cost schedule C(X) saying how many dollars per square mile it will cost you, per unit of protection, to increase (or decrease) protection at each place. It might, for example, cost more to protect places near the equator, and less to protect places toward the poles.
Soon you find that rich densely-populated places X toward the poles are offering to pay you a price above C(X) to increase their protection P(X). Accepting their offers benefits them, and gives you more money to spend on benefiting everyone, so you accept.
Soon after, you find that poor sparsely populated places X near the equator are suggesting that you reduce their protection P(X), saving yourself money at a rate C(X). They also suggest you pay them a large share of these savings. Thank you very much for your kind offer of protection, they say, but we would really prefer the cash. You agree that this would benefit those places, and give you more cash to help everyone, so why not?
After all these adjustments have been made, you find that the protection level P(X) that you are providing is not at all the same across places X. You are now “biased” toward protecting rich pole cities more than empty equator deserts. Why? Because you wanted to help people, people are biased about places, and some people are richer than others.
Even if you wished that people were not biased about places, or that some were not richer than others, this is still the best you can do. Your gift of an initial protection plan P(X) was in effect a gift of wealth to people and places of your choosing. But then they spent their wealth as they saw fit. And since for most places X your gift was only a small fraction of their total wealth, your gift could only make a small change in their final chosen protection P(X).
What if instead of being an independent multi-billionaire, you represent a consortium of places on Earth who join together to build an asteroid deflector? Well if they have you on a short leash, you should have little effect on the final protection plan P(X); your consortium members may give gifts to people and places if they want to, but not just because you tell them to.
Now let us imagine that instead of building an asteroid deflector to protect places, you build a “disaster deflector” to protect future times from disasters, as well as from costs to prevent disaster. You might, for example, try to prevent damage from global warming or from rampaging robots. As with asteroid deflection, imagine that there are tradeoffs between protecting different times, and that you use a single parameter P to describe how much you reduce the degree of disaster at each time T.
As a multi-billionaire, you could donate your wealth to provide some initial protection plan P(T), and you could calculate a cost schedule C(T) saying how much more it would cost to add another unit of protection at each time T. If you can guess how much people then would value that protection, then you can guess whether they would, if they had the choice, offer to pay you for more protection, or prefer to be paid and get less protection.
Imagine that you can use financial markets to invest now, to benefit people at time T, and that other people today are already in effect doing this, so that you might induce those people to invest less. (A in effect invests for T if A invests for B, B invests for C, and so on up to T.) If so, then you should want to adjust your initial protection plan P(T). If you accept future folks' judgments about what is good for them, then for times T that would want to be paid to be protected less, you should protect them less and instead invest for them, while for times T that would want to pay to be protected more, you should convince others who were planning to invest for them to instead pay you to protect them more.
After your adjustments, your final protection plan P(T) should agree with market rates of return. Of course if your initial plan P(T) in effect greatly increased the wealth of people at time T, then you might have changed the market rates of return, relative to what they would have been without your gift. But if you discount the dollar value people get from protection at each time T at the market rate, that is, by today's low cost of investing to deliver dollars at that time, no other plan should produce a higher total dollar value today. Your plan is now "biased" about times, because you want to help people, people are biased about times, and some times are richer than others.
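The discounting logic above can be sketched numerically. The 2% market rate and the dollar figures below are illustrative assumptions, not values from the text; the point is only that comparing plans by their discounted total is a mechanical calculation:

```python
# Hypothetical present-value comparison of two protection plans, assuming
# a constant 2% annual market rate of return (an illustrative assumption).
market_rate = 0.02

def present_value(dollars_at_t, t_years):
    """Today's cost of investing to deliver `dollars_at_t` at time t_years."""
    return dollars_at_t / (1 + market_rate) ** t_years

# Each plan maps a future time T (years from now) to the dollar value
# people at T would place on the protection delivered then (made-up numbers).
plan_a = {10: 1_000_000, 50: 1_000_000}
plan_b = {10: 500_000, 50: 2_000_000}

def plan_value_today(plan):
    # Total value of the plan in today's dollars, discounted at the market rate.
    return sum(present_value(value, t) for t, value in plan.items())

# The plan that agrees with market rates is the one with the higher
# discounted total; no reshuffling across times can beat it.
print(round(plan_value_today(plan_a)))
print(round(plan_value_today(plan_b)))
```

Here plan A comes out ahead even though plan B delivers more raw dollars of value, because plan B concentrates its value at the more heavily discounted later date.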
But doesn’t discounting at market rates of return suggest we should do almost nothing to help far future folk, and isn’t that crazy? No, it suggests:
Usually the best way to help far future folk is to invest now to give them resources they can spend as they wish.
Almost no one now in fact cares much about far future folk, or they would have bid up the price (and bid down the market return) of investments that pay off then.
Very distant future times are ridiculously easy to help via investment. A 2% annual return compounds to more than a googol (10^100) over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.
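The compounding claim above checks out arithmetically. A quick sketch, working in base-10 logs to keep the numbers manageable:

```python
import math

# The text's assumed parameters: a 2% annual return over 12,000 years.
rate = 0.02
years = 12_000

# log10(1.02 ** 12000) = 12000 * log10(1.02); work in logs to avoid overflow.
log10_multiplier = years * math.log10(1 + rate)
print(f"growth factor ~ 10^{log10_multiplier:.0f}")  # exceeds a googol (10^100)

# Even discounted by a 1/1000 chance the recipients exist, the expected
# growth factor stays at or above a googol: subtract log10(1000) = 3.
log10_expected = log10_multiplier - 3
print(f"expected factor ~ 10^{log10_expected:.0f}")

# Years needed for 2% compounding to reach exactly a googol:
years_to_googol = 100 / math.log10(1 + rate)
print(f"{years_to_googol:.0f} years to reach 10^100")
```

The 12,000-year figure is actually conservative: 2% compounding crosses a googol in roughly 11,600 years, so even after the 1/1000 discount the expected return still clears 10^100.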
So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them? How can you think anyone on Earth so cares? And if no one cares the tiniest bit, how can you say it is “moral” to care about them, not just somewhat, but almost equally to people now? Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.
So why do many people seem to care about policy that affects far future folk? I suspect our paternalistic itch pushes us to control the future, rather than to enrich it. We care that the future celebrates our foresight, not that they are happy.
Large legal barriers now hinder us from making deals with the far future, and from saving today to benefit them. We do not enforce many kinds of terms in wills, and charities are required to spend a certain fraction of their capital each year, to prevent their endowments from growing large. The best way to help the far future might be to break down such barriers. But few try this because, well, we just don’t care.