# For Discount Rates

(A long retort to Eliezer’s post Against Discount Rates.)

Imagine you are a multi-billionaire who, to benefit mankind, will construct an asteroid deflector.  Since your budget is limited, the deflector cannot be perfect. And trying to deflect an asteroid heading toward one place may mean increasing the risk it will hit somewhere else.  So you must decide how much protection to offer different parts of the globe.  Let us assume that your protection can be described by a single parameter P at each place X — roughly how much you have reduced the probability that an impact there will cause a physical effect (e.g. temperature) there above a certain threshold.

Initially you decide that it would be biased to prefer some places X on Earth to others, and so you decide to give the same protection level P(X) = P to all places.  To make clear to everyone the magnitude of your generosity, you publish a cost schedule C(X) saying how many dollars per square mile it will cost you, per unit of protection, to increase (or decrease) protection at each place.  It might, for example, cost more to protect places near the equator, and less to protect places toward the poles.

Soon you find that rich densely-populated places X toward the poles are offering to pay you a price above C(X) to increase their protection P(X).  Accepting their offers benefits them, and gives you more money to spend on benefiting everyone, so you accept.

Soon after, you find that poor sparsely populated places X near the equator are suggesting that you reduce their protection P(X), saving yourself money at a rate C(X).  They also suggest you pay them a large share of these savings.  Thank you very much for your kind offer of protection, they say, but we would really prefer the cash.  You agree that this would benefit those places, and give you more cash to help everyone, so why not?

After all these adjustments have been made, you find that the protection level P(X) that you are providing is not at all the same across places X.  You are now “biased” toward protecting rich pole cities more than empty equator deserts.  Why?  Because you wanted to help people, people are biased about places, and some people are richer than others.

Even if you wished that people were not biased about places, or that some were not richer than others, this is still the best you can do.  Your gift of an initial protection plan P(X) was in effect a gift of wealth to people and places of your choosing.  But then they spent their wealth as they saw fit.  And since for most places X your gift was only a small fraction of their total wealth, your gift could only make a small change in their final chosen protection P(X).

What if instead of being an independent multi-billionaire, you represent a consortium of places on Earth who join together to build an asteroid deflector?  Well if they have you on a short leash, you should have little effect on the final protection plan P(X); your consortium members may give gifts to people and places if they want to, but not just because you tell them to.

Now let us imagine that instead of building an asteroid deflector to protect places, you build a “disaster deflector” to protect future times from disasters, as well as from costs to prevent disaster.  You might, for example, try to prevent damage from global warming or from rampaging robots.  As with asteroid deflection, imagine that there are tradeoffs between protecting different times, and that you use a single parameter P to describe how much you reduce the degree of disaster at each time T.

As a multi-billionaire, you could donate your wealth to provide some initial protection plan P(T), and you could calculate a cost schedule C(T) saying how much more it would cost to add another unit of protection at each time T.  If you can guess how much people then would value that protection, then you can guess whether they would, if they had the choice, offer to pay you for more protection, or prefer to be paid and get less protection.

Imagine that you can use financial markets to invest now to benefit people at time T, and that other people today are already in effect doing this, so that you might induce those people to invest less.  (A in effect invests for T if A invests for B, B invests for C, and so on up to T.)  If so, then you should want to adjust your initial protection plan P(T).  If you accept future folks’ judgments about what is good for them, then for times T that would want to be paid to be protected less, you should protect them less and instead invest for them, while for times T that would want to pay to be protected more, you should convince others who were planning to invest for them to instead pay you to protect them more.

After your adjustments, your final protection plan P(T) should agree with market rates of return.  Of course if your initial plan P(T) in effect greatly increased the wealth of people at time T, then you might have changed the market rates of return, relative to what they would have been without your gift.  But if you discount the dollar value people get from protection at time T by the low cost today of investing to deliver dollars at that time, no other plan should produce a higher total dollar value today.  Your plan is now “biased” about times, because you want to help people, people are biased about times, and some times are richer than others.

But doesn’t discounting at market rates of return suggest we should do almost nothing to help far future folk, and isn’t that crazy?  No, it suggests:

1. Usually the best way to help far future folk is to invest now to give them resources they can spend as they wish.
2. Almost no one now in fact cares much about far future folk, or they would have bid up the price of investments that pay off then (i.e., bid market rates of return down to much lower levels).

Very distant future times are ridiculously easy to help via investment.  A 2% annual return compounds to more than a googol (10^100) over 12,000 years, even after discounting by a mere 1/1000 chance that they will exist and receive it.
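The compounding arithmetic is easy to check; a minimal sketch, using the figures above (2% real return, a 12,000-year horizon, a 1/1000 payoff chance):

```python
import math

rate = 0.02        # 2% annual real return
years = 12_000     # investment horizon
p_payoff = 1e-3    # 1/1000 chance the far future folk exist and are paid

# Orders of magnitude of the compounded multiple: years * log10(1 + rate)
magnitude = years * math.log10(1 + rate)
print(f"(1.02)^12000 is about 10^{magnitude:.0f}")   # about 10^103

# Even after the 1/1000 discount, the expected multiple still tops a googol.
expected_multiple = (1 + rate) ** years * p_payoff
print(expected_multiple > 10.0 ** 100)   # True
```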

So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them?  How can you think anyone on Earth so cares?  And if no one cares the tiniest bit, how can you say it is “moral” to care about them, not just somewhat, but almost equally to people now?  Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.

So why do many people seem to care about policy that affects far future folk?  I suspect our paternalistic itch pushes us to control the future, rather than to enrich it.  We care that the future celebrates our foresight, not that they are happy.

Large legal barriers now hinder us from making deals with the far future, and from saving today to benefit them.  We do not enforce many kinds of terms in wills, and charities are required to spend a certain fraction of their capital each year, to prevent their endowments from growing large.  The best way to help the far future might be to break down such barriers.  But few try this because, well, we just don’t care.

• Telnar

Investment discounting relates to a transfer of resources over time at the discount rate. There is no question that people who invest now (at less than 100% of their assets) in order to use those resources in the future are making decisions which set the discount rate and we should respect their choices.

Similarly, individuals who choose to consume now rather than invest for their own later consumption (or their heirs’ later consumption) are facing the same choice; they are just making it differently. The main limitation here on accepting the discount rate at face value is that they may have a budget constraint which requires significant current consumption in order to survive to the relevant future periods (or information costs which make them behave that way). Still, we could in principle learn what their budget constraints are, and therefore whether any of their consumption dollars are marginal.

In contrast, when we talk about large public projects, we are mostly talking about actions by governments to tax people’s wealth in one period and use it to benefit another. This has several difficulties not encountered when modeling individuals since we have to model interpersonal transfers. That said, it’s still fairly clear that no one cares about the distant future, even if they do care about imposing their short and medium term priorities on others.

• Carl Shulman

“So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them?”
You claim to care about possible people coming into existence. (http://www.overcomingbias.com/2007/12/it-is-good-to-e.html) I *am* incredibly eager to bring vast numbers of far future folk with good lives into existence relative to bringing small numbers of very near-term future folk with good lives into existence.

“How can you think anyone on Earth so cares?”
Because I know a small number of specific people who clearly do guide their (limited) altruistic efforts in accordance with such moral systems. This is a separate question from their ratio of effort between self-centered and altruistic matters, or akrasia.

“And if no one cares the tiniest bit, how can you say it is “moral” to care about them, not just somewhat, but almost equally to people now? Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.”
Most people and governments seem to value the well-being of foreigners or people of other races as worth many times less than those of their compatriots, separately from the mixture of goods that will best promote their well-being given their current wealth levels, etc. Nevertheless, we can (and sometimes do, at increasing rates with increasing education and the like) say that such devaluation of foreigners is immoral.

• Nick Tarleton

I notice you still weight future people’s preferences equally with present people’s, so you don’t seem to be intrinsically discounting.

• Carl, are you saying you have in fact funded investments to be paid to people 12,000 years hence, to use as they see fit?

Nick, how exactly could someone weigh 12Kyr future folk “intrinsically” the same as us and yet not be eager to use a googol return investment to help them?

Telnar, it is obvious most people do have marginal dollars to spend.

• Ben Jones

Man, now my brain asplode. What do you call a pyramid scheme that has its apex in the present and its base in the future? [No, the answer is not ‘a decision market’!]

• bsball

Wouldn’t it have been great if someone had invested \$1 about 2000 years ago at a measly 2% interest rate. We could split it evenly and each person in the whole world would be a multi-millionaire right now! Then one of us could invest \$1 for the people in 2000 years so they can be millionaires, too. This is so cheap! We could even invest \$1 each year to be redeemed by whoever is around in 2000 years. Then everyone will get several million a year to live on in the future. Wow, think what life would be like then! Why didn’t someone think of this earlier? Or maybe they did and decided that we were not worth even a single \$1.

• Nick Tarleton

Because of uncertainty that current markets or investments or anything related to current finance will be around in 12,000 years, the best investments to help future people seem to be existential risk prevention and friendly AI research. Speaking for myself, I have in fact put some money toward these causes – not nearly as much as I should, but as Carl says, that’s because of akrasia, not because I consciously regard future people as less valuable.

• Nick, what probability would you assign that resources invested today would actually be paid off then? If it is more than one in ten to the fifty, it still seems like a good deal, if you cared about them relative to yourself more than one in ten to the fifty.

• A 2% annual return adds up to a googol (10^100) return over 12,000 years

Well, just to point out the obvious, there aren’t nearly that many atoms in a 12,000 lightyear radius.

I am by no means certain that the market cannot violate (our current models of) the laws of physics, because it is a fair historical regularity of economically progressive societies that they violate the previous century’s conception of some of the laws of physics – while obeying others, of course. We violate Newton’s version of gravitation, but not conservation of energy.

But I don’t bank on our being able to violate any particular “law” of physics in 12,000 years, though it’s a good historical bet that future markets will do many things we deem “impossible”.

So if you are not incredibly eager to invest this way

I am, though I tend to invest primarily in assuring these future folks’ existence, rather than trying to create capital infrastructure that will be obsolete 30 seconds after the invention of nanotechnology.

Ben Casnocha recently asked a group I was part of, “What do you believe?” And it seemed to me that I had two unusual beliefs which lie at the core of all the unusual things I do. One of these beliefs is that mind is not magic – you can learn a theory of rationality that describes how to think more efficiently; you can build AI.

And the other belief is that humanity’s future is greater than its past – there will be more people with a higher quality of life, many many more and much much higher, if we survive out this century; and that it is a great rare privilege to be born into an era where I can help effect this.

• A suspiciously large fraction of people who claim to care about the third world poor believe that the best way to help is to pursue their favorite hobby or career, and not to just give the poor money. Medical researchers seek disease cures, computer folk build laptops or subversive software, musicians hold concerts to inspire donations, policy wonks lobby governments to build schools, and so on. This tendency seems even more pronounced for “helpers” of far future folk. Can anyone offer a concrete calculation suggesting that pursuing your favorite hobby or career actually does help them more than simple investing?

Eliezer, I know of no law limiting economic value per atom, but even if there were such a law, surely the amount of value set by that limit would still be a very tempting quantity to offer our descendants for a tiny cost today, if one actually cared about them.

• J Hill

Maybe I’m missing something, or just have a flawed understanding of economics. But it seems to me that there is no way to magic money out of nowhere. I was always under the impression that there is a finite amount of value in the world (which includes all products and services [including government]); while not necessarily measurable, it is still very much real. Money has to be based on something with value, after all. It used to be gold; now it happens to be bombs (or the nation’s good name, whichever you prefer). Also, I would think the inflation due to turning 1 dollar into a googol would cancel out most of the “increase”. 1 dollar 12,000 years from now won’t mean what it does today. Perhaps it will be 1/googol as valuable, which would make the actual return on the investment 0.

• Cynical Masters Student

J Hill: I assume he was referring to 2% as what is known as the “real interest rate” (nominal interest corrected for inflation). Otherwise this would be an exercise in futility, indeed.

But even ignoring inflation, there are a lot of practical issues with this. How are you going to manage these investments? And will productivity really rise by 2% per year, so you can really buy something with it? And if so, what’s to say future generations won’t be by far richer than we are anyway (as was true for the past 300 years)?

• Nick Tarleton

Nick, what probability would you assign that resources invested today would actually be paid off then? If it is more than one in ten to the fifty, it still seems like a good deal, if you cared about them relative to yourself more than one in ten to the fifty.

I would assign a higher probability than 10^-50, but still low enough that my money would be better used to do things in the short run that increase the probability that there will be people at all in 12,000 years. If my \$1 donation to [organization] decreases the probability of existential disaster by one in a trillion, I’ve effectively made the future 0.0000000001% richer (out of a total of about \$10^103, assuming 2% growth for 12,000 years).
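Nick’s back-of-the-envelope figure multiplies out as follows (the one-in-a-trillion risk reduction is his stipulation; the \$10^103 future value is the post’s 2%-growth figure):

```python
import math

future_value = 10.0 ** 103   # dollar value of the far future, per the post's 2% figure
risk_reduction = 1e-12       # stipulated drop in extinction probability per $1 donated

# Expected dollar gain from the donation: still an astronomical sum.
expected_gain = future_value * risk_reduction
print(f"expected gain is about 10^{math.log10(expected_gain):.0f} dollars")
```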

• Douglas Knight

believe that the best way to help is to pursue their favorite hobby or career, and not to just give the poor money…musicians hold concerts to inspire donations

Surely the musicians produce more money by labeling the concert as for charity than if they gave a normal concert and individually donated the money!

• “Very distant future times are ridiculously easy to help via investment. A 2% annual return adds up to a googol (10^100) return over 12,000 years,”

This never actually works. If it did, someone somewhere would have ~\$100 quadrillion from the \$1 their ancestors saved back in Roman times. Lending money at interest is a really, really old practice; the Old Testament has several specific prohibitions against it. So why aren’t there any trillionaires?

• Doug S.

Wouldn’t it have been great if someone had invested \$1 about 2000 years ago at a measly 2% interest rate. We could split it evenly and each person in the whole world would be a multi-millionaire right now! Then one of us could invest \$1 for the people in 2000 years so they can be millionaires, too. This is so cheap! We could even invest \$1 each year to be redeemed by whoever is around in 2000 years. Then everyone will get several million a year to live on in the future. Wow, think what life would be like then! Why didn’t someone think of this earlier? Or maybe they did and decided that we were not worth even a single \$1.

Well, considering what the equivalent of a million dollars would have been able to buy 2000 years ago, maybe we are all multi-millionaires?

• …there is a finite amount of value in the world…

Man, those cavemen must have been rolling in it!

Strange that people are talking about investing a dollar and waiting for it to turn into trillions. Hey, why not work really hard, invest ten thousand, and cut down the wait by a few orders of magnitude? When it’s a serious sum that could really make a difference a few decades down the line, you really show yourself how much you care about the future.

• This is one of the best posts I’ve read defending intrinsic discounting, but I am still completely unconvinced.

I was quite surprised to see this slip:

A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.

I’m surprised to see this justification. It is only true if the value we derive from money is linear and I don’t know anyone who believes this: particularly for such large sums. Logarithmic utility from money is much more popular, and says that the expected benefit could well be less than that of a certain \$1. Things are different if it is spread over more people, but then Eliezer’s points come in (people presumably require more than one atom each!).

We must also consider that these people will already be spectacularly wealthy by our standards (perhaps having wealth of \$10^50 if we are to believe these growth rates), and thus the benefits of getting much more are quite diminished.

Nick, what probability would you assign that resources invested today would actually be paid off then? If it is more than one in ten to the fifty, it still seems like a good deal, if you cared about them relative to yourself more than one in ten to the fifty.

The same goes for this piece of reasoning.

Toby.

• John Maxwell

Benjamin Franklin did this. Here is a quote from wikipedia:

• AS

I am unaware of any reasonably accepted physical theory that has been violated in the last 500 years, if ever. And I mean reasonably accepted physical theory in the sense that there have been experiments that support the theory and there are experiments which would falsify it. If Newton’s theory of gravity was not correct we would be getting barbecued inside the Sun or flung far away into the universe. In the limit of extremely high mass and velocity, it just gets included in the theory of general relativity.

• John, great example, dare I say the exception that proves the rule.

Toby, yes of course do discount the investment for chances that the money will be stolen, the future folk won’t be there, they will be ridiculously rich, or they will betray our hopes so badly we repudiate them. But does the chance that none of these things will happen really fall exponentially in time nearly as fast as the market rate of return; does it for example approach one in a googol chance over 12Kyr?

• Andy

Maybe Baumol’s article “On the Social Rate of Discount” (1968, AER) is relevant here? Basically, he says that governments should undertake projects with the same discount rate that the market does — this best reflects the social welfare function.

• Surely the musicians produce more money by labeling the concert as for charity than if they gave a normal concert and individually donated the money!

Douglas, suppose the market is 100% efficient. It seems likely that one of the two statements would turn out to be true:

1. The concert is a much more cost-effective way of helping the poor than writing the open-source software, or

2. Writing the software is a much more cost-effective way of helping the poor than putting on the charity concert

If the former is true, the software developers should try to make money, and hire the musicians to put on the concert. If the latter is true, the musicians should try to make money, and hire the developers to write the software.

In practice, that does happen to some degree; for example, auto-company workers tend to donate money to famine relief funds, rather than offer to build cars for poor people with their bare hands.

However, people often prefer to donate their services directly, rather than give money. Reasons may include:

1. Moral hazard, and other market inefficiencies

2. Irrational overconfidence; everyone believes their hidden genius is under-appreciated by society, and that the intrinsic value of their services is far above market wages

3. Selfish desire for recognition and fame. (How famous is Leonardo da Vinci, vs. how famous is Ludovico Sforza, the guy who actually paid the bills?)

The criticism always seems strange coming from academic economists, though. Maybe Robin can comment on which of these reasons explains why he writes papers and blog postings himself, rather than using his time to make extra money, and then hiring other people to blog and write papers for him.

• Daniel

Robin,

As far as I can tell, you’re pointing out that people don’t come anywhere near acting consistently with the claim that we should have no temporal discount rates. Similarly, people don’t come anywhere near acting consistently with the claim that we should have no spatial discount rates. People tend to weigh the interests of people who live near them far more than the interests of people who live on the other side of the world. This doesn’t, I think, establish the moral claim that we ought not weigh the interests of future people equally, any more than observing that people tend to care about those near to them establishes that we ought have a spatial discount rate. At best, it shows that people who claim that we shouldn’t have a temporal discount rate, but who don’t set away money for future generations, are hypocrites.

Any utilitarian is a hypocrite in this sense: nobody who has ever lived can plausibly claim to have done their best to maximize total global utility, even at a given time. But pointing this out doesn’t amount to a refutation of utilitarianism. Just like utilitarianism without spatial discount rates but with temporal discount rates, a utilitarian theory according to which we ought not discount the interests of future people would be a very demanding one, and it would require us to act in ways very different from how we currently act. I don’t think that shows that it’s wrong.

• I think this is a good post but misses the point entirely. There is one simple reason for discounting. Question: Which do you prefer?
A: a hot meal today
B: no hot meal today, but a hot meal tomorrow
C: no hot meal today or tomorrow, but one after that
D: etc.

• Unknown

Along the same lines as Daniel, it seems that Eliezer is claiming that people should not discount the value of future people, while Robin is claiming that in fact, everyone does discount their value.

We could equally say that no one really desires to overcome all of his biases; if a person did desire this, he would not disagree with others who did the same, as Robin has shown himself.

If the empirical claim that everyone has a discount rate is proof that we should have discount rates, then the fact that everyone is irrational is a proof that we should be irrational.

In other words, it seems that Eliezer’s position is that the ideal set of preferences would equally value all times. Robin’s position is that no one actually has this set of preferences. These two positions could easily both be true.

• michael vassar

Robin: I’m confused. Are you really saying that the best way to help the third world poor is simply to give them money? With what confidence? Does this opinion agree with that of any economist who has seriously studied the problem?

Seems to me that when we give the first world poor money the results are less than impressive. Is life really all that great in oil and diamond exporting third world countries with generous doles compared to much less wealthy but decently managed third world countries? I’m pretty sure I’d much rather live in Jamaica than in Saudi Arabia, even if in Saudi Arabia I’d get a government stipend much greater than the per capita GDP of Jamaica.

• michael vassar

I suppose that I should follow up that comment by pointing out that I lived in Kazakhstan for a year in 2000 and visited for a few weeks this year. Their GDP per capita has more than doubled in that time period due to oil wealth and changing oil prices, and since I was in the capital both times, the local per-capita GDP had increased by 10 or 20 fold. Despite this, life really isn’t substantially better there than it was before. I’d vastly prefer to live in most parts of the US on 30K/year than to live anywhere in Kazakhstan with the same income, as my wife’s family do, and I’d vastly prefer to live in Costa Rica on 80% of Costa Rica’s official PPP than to live anywhere in Kz at nominally the same standard in the form of Kz’s PPP.

• michael vassar

Seems to me that Franklin’s investment got a 1.034% nominal return. Assuming that nominal rate of return continues and that growth of the fund is relinquished including not reinvesting to compensate for inflation, the interest on it can raise 7 people from poverty to the median income until inflation reduces its value.

Invested more directly in helping the people of his day, Franklin might very plausibly have raised more than 7 people from the poverty of his day to the median income of his day (with much larger attendant life expectancy gains) at a lesser expense, through the simple purchase of real property for them. How many slaves could have been freed in 1785 with \$8800? How much wealth created through public health spending, science prizes, etc.? There were only a thousand or so scientists in the world back then. If their work was indirectly responsible for 10% of the ongoing global economic growth, and if marginal scientists would be 10% as valuable as average scientists, it was possible to increase global economic growth by .001% by endowing a university chair. What did that cost? Less than \$8800 by far at Harvard, I’d guess.

All of the above, of course, suffers from massive selection bias. How many people tried to do what Franklin did and *didn’t* get anything for their money? If the annual default risk for such endowments in general was 1% (absurdly low for any 2 centuries of human history), the expected value of Franklin’s investment today is about \$1M, not \$7M. Also, is the return listed above after management fees, or did Franklin’s symbiotic relationship with his cities lead to the management fees going under some other budget item? Hedge fund fees would leave his fund with \$37,000, which today won’t nearly buy the cash or gold he initially invested.
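The survival-probability discount in that last paragraph works out roughly as follows (the 1% annual default risk and the \$7M terminal value are Michael’s assumptions; the 218-year span is roughly the time from Franklin’s 1790 bequest to this post):

```python
years = 218            # roughly 1790 to the present
default_risk = 0.01    # assumed annual chance the endowment is lost or looted

survival_prob = (1 - default_risk) ** years      # about 0.11
expected_value = survival_prob * 7_000_000       # about $0.8M, i.e. "about $1M, not $7M"
print(round(survival_prob, 2), round(expected_value, -3))
```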

• michael vassar

Also, a general point regarding the health of this site. “Nick, what probability would you assign that resources invested today would actually be paid off then? If it is more than one in ten to the fifty, it still seems like a good deal, if you cared about them relative to yourself more than one in ten to the fifty” seems to me to be an instance of “Pascal’s Mugging”. I hope that we can all agree that discourse isn’t well served by people making arbitrary claims with very high attached utility (or dollar!) values, and then pointing out that, due to the limits of human probability calibration, no one can refute those claims with the confidence necessary to keep the claims from dominating their decisions, if they try to decide using decision theory and calibrated confidences.

• Toby, yes of course do discount the investment for chances that the money will be stolen, the future folk won’t be there, they will be ridiculously rich, or they will betray our hopes so badly we repudiate them. But does the chance that none of these things will happen really fall exponentially in time nearly as fast as the market rate of return; does it for example approach one in a googol chance over 12Kyr?

Robin, I think you’ve misunderstood me or I’ve misunderstood you. I’m saying that the utility of money is not linear, but has diminishing marginal utility. I believe this is uncontroversial. This means that the expected utility of a probability p chance of getting \$X does not equal that of a certainty of \$pX, but is potentially much lower. For example, with logarithmic utility from money, the expected utility of a p chance of \$X would be p·log(X). This would be equivalent to a certainty of getting exp(p·log(X)) dollars. Depending on the exact functions, this can be quite tiny and is not even of the same order as \$pX (e.g., it is out by ~100 orders of magnitude in one of your earlier examples). Logarithmic utility functions cancel out the exponential buildups…
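Toby’s contrast can be made concrete; a minimal sketch using the post’s 10^103 payoff and an illustrative p of 1/1000 (treating current wealth as a \$1 baseline, a simplifying assumption):

```python
import math

X = 10.0 ** 103   # dollar payoff: the post's 2%-over-12,000-years figure
p = 1e-3          # illustrative probability of actually being paid

# Risk-neutral (linear utility) value of the gamble:
linear_value = p * X                               # about 10^100 dollars

# Log-utility certainty equivalent: solve log(c) = p * log(X)
certainty_equivalent = math.exp(p * math.log(X))   # about $1.27

print(linear_value > 10.0 ** 99, certainty_equivalent)
```

The exponential payoff and the logarithm cancel almost exactly, which is why the certainty equivalent collapses from a googol of dollars to about a dollar and a quarter.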

In any event, Daniel and Unknown have summarized the ethical position perfectly: the fact that people don’t do something does not show that the thing wasn’t morally required — especially if it would involve university level economics to see that the thing was even helpful (if indeed it is helpful…).

As a final remark, the intuition you (and other economists) are trying to defend is that we shouldn’t save a large fraction of our money to give to much richer future people, even if it would help them tremendously. However, by your own argument, you would *still* be committed to doing just this if only the return on investment was higher. For example, if it was super-exponential, or even if it was just higher than our discount rate. What is the chance that the real rate of return will be greater than 5%? Is this chance greater than 1 in 10^50? If so, your argument would ‘mug’ you too.

PS I agree with Michael’s points too. His last couple of posts have included valuable insights. I do think that the real rate of return on good charitable projects is likely to be higher than that of the risk free return on investment, taking much of the sting out of the argument even if it did work.

• All, yes we preach some ideals we do not achieve. But there is a huge difference between an ideal most people accept, and many people act on a lot, and an ideal only a few people voice, and almost no one much acts on. For example, compare the ideal that we should care about our children as much as ourselves to the ideal that we should care about ants as much as ourselves. Surely the first kind of ideal has a much stronger case to be called “moral.”

Toby, I understood your reference to declining marginal utility with increasing wealth. With U = log(\$), MU = 1/\$, so to cancel the attraction of a 10^50 return they’d need to be 10^50 times richer than us. Any modest wealth increase only gives a modest correction, leaving the investment very attractive to anyone who cares much about them.

Michael, you badly miscalculated Franklin’s rate of return. Yes, I think giving cash to individuals (in small amounts spread over time) works better than the average non-cash “help” done in the name of the world’s poor. And I see nothing unethical with pointing out how small a payoff probability would justify investing for far future folk one actually cared about. The scenario where future folk exist, have comparable per-capita wealth, and usually pay their debts is far from “arbitrary”; I consider it the default reference scenario.

• Michael Sullivan

“Toby, I understood your reference to declining marginal utility with increasing wealth. With U = log(\$), MU = 1/\$, so to cancel the attraction of a 10^50 return they’d need to be 10^50 times richer than us.”

I don’t think you did understand. They would need to be 10^50 times richer than us only if the return were *certain*. But it is hardly certain. And as Toby said, with a log utility function, the probability works on the *log*, so if there is only a 10% probability of the fund making it through 12,000 years, then they need only be 10^5 times richer than us. What I know of economic history suggests that if our future course is such that the probability of that fund still being around in 14,008 is much higher than 1-2%, then the probability that our 140th century descendants will be at least 10^10 times richer than we are seems very high.

When funds can be counted on to maintain solvency and generate returns for many years, productivity and consumption grow. In general, the expected real return on very long term investment must be bounded above by productivity times population growth; otherwise it would be possible for a passive investment eventually to be worth more than the entire world’s wealth.

So if it’s possible to have an investment sit for 12000 years earning a 2% real return, that will only be because the world has been getting 2% richer over that time.

I see two general classes of possibility. Either the growth rates of the last 200-odd years are a new thing that will indefinitely prevail or even accelerate (in which case your investment will have a reasonable chance of accruing unfathomable wealth that is actually accessible in 12000 years, but our descendants will also on average be unfathomably wealthy anyway), or they are an anomaly which will not be sustained, in which case our descendants may be little wealthier than we are, or even poorer, and our chances of actually getting them any wealth set aside today, let alone a googol of euros or whatever, are {choose your favorite apocalyptic metaphor}.

Your utility calculus only works if we imagine that somehow financial and investment and social systems stay intact enough for 12000 years to return all that wealth to our passive account, without our descendants generating increased wealth of roughly the same order as our investment returns. That seems highly implausible to me at first blush.

• Michael, repeat after me: “Expected utility is linear in probabilities. Expected utility is linear in probabilities. …” For marginal changes to wealth, it is marginal utilities that matter.
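Robin’s point about marginal changes can be made concrete: for transfers that are small relative to wealth, log utility is effectively linear, so only the marginal utility 1/W at the recipient’s status-quo wealth W matters. A quick sketch (numbers illustrative):

```python
import math

def delta_u(wealth, amount):
    """Exact log-utility gain from receiving `amount` at `wealth`."""
    return math.log(wealth + amount) - math.log(wealth)

w, a = 50_000.0, 10.0
exact = delta_u(w, a)        # exact utility gain
marginal = a / w             # amount times marginal utility 1/W

print(exact, marginal)
# The two agree to several decimal places: diminishing marginal
# utility only bites when a transfer is large relative to wealth.
```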

• When will need ever be greater than it is right now? Despite population spiralling upwards, can anyone tell me when was the last time average quality of life went down? I mean, we’ll all be living in computers with nanobots before long anyway, right?

Get some perspective: there will probably never be a time when more people will be in more need than today. If there is, well, something’s gone horribly, horribly wrong anyway. By this rationale, give your charity to someone who’s in trouble today. There is no mechanism with an interest rate high enough that it would be more use to the human race than doing something with it now. Human history (and our future) is not a linear progression from ‘bad’ to ‘ok’ to ‘fine, thanks’ and on forever. Otherwise, what are we working towards? I want everyone to be alive, and happy, as soon as possible.

Even if you disagree with that, how about our moral imperative to improve the world as we see it? Putting money away for future generations is a cop-out as far as I’m concerned, and one that throws up serious concerns about what one is living, and acting, in aid of.

This isn’t irrational indignation, this is the way humanity progresses: We make a better future by working at problems in the present. I imagine a few weeks on the ground in a third world country would probably bring most high-thinkers around. “I would help you out, but I’ve already given – to people like you in the future!”

• Michael Sullivan

Not if your utility function is logarithmic it isn’t.

Are you suggesting that you would not accept the following trade:

You are playing a game where you will receive 1 billion US dollars if a random number generated from 1-100 comes up as the number of your choice, and lose \$100,000 if it does not. It’s a contract that you and a counterparty have signed, and the money is accounted for — you have no reason to believe that you will not be paid the 1 billion if you win. Your position in this game is worth \$9,901,000 by linear expectation.

Are you seriously suggesting that you would not accept a \$6,000,000 payment for your position in the game? (or if you are wealthy enough that \$100K means nothing to you, that some hypothetical you with a net worth under \$1,000,000 would not).
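Under a log utility function the value of this position depends on the holder’s wealth, but for any plausible wealth it falls far short of the \$9,901,000 linear expectation; a rough sketch (wealth levels illustrative):

```python
import math

def position_value(wealth, p=0.01, win=1e9, lose=1e5):
    """Certainty-equivalent value v of the gamble for a log-utility
    holder: log(wealth + v) = expected log of post-game wealth."""
    eu = p * math.log(wealth + win) + (1 - p) * math.log(wealth - lose)
    return math.exp(eu) - wealth

for w in (2e5, 1e6, 1e7):
    print(f"wealth ${w:,.0f}: position worth ${position_value(w):,.0f}")
# A $200k holder values the position *negatively* (the 99% chance of
# losing $100k dominates), and even a $10M holder values it well under
# $1M -- so selling out for $6,000,000 is an easy yes.
```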

But this is ignoring the very strong evidence that stable real returns of *any* size simply do not happen when the overall wealth of the world is not growing roughly apace with those returns. When in history could you have reasonably expected a fund to earn a 2% return with low risk over even 2-300 years while the wealth of the world (or at least your corner of it) was not growing similarly over that period?

• Nick Tarleton

“Not if your utility function is logarithmic it isn’t.”

Yes, it is. With a log utility function, the utility of an expectation of N dollars with probability P is P log N, not log(PN).

I imagine a few weeks on the ground in a third world country would probably bring most high-thinkers around. “I would help you out, but I’ve already given – to people like you in the future!”

Perhaps true for helping already-wealthy people in the future, but not for ensuring that those people can exist at all. If you take “living in computers with nanobots” seriously, presumably you take existential risk seriously. Would you rather spend X dollars to save ten current lives, or decrease the probability of existential disaster by one in a billion, effectively saving only 6 current lives but allowing some large number of additional future people to exist (between 10^21 and 10^38, by Nick Bostrom’s numbers)? Seems like an easy decision to me.

• John Maxwell

Michael,

According to Franklin’s will, the principal was distributed after the first 100 years. The money that was distributed after 200 years was just the interest on the interest of the first 100 years.

• Mason

Two things: One for Eliezer one for Robin

“If you wouldn’t burn alive 1,226,786,652 people today to save Giordano Bruno from the stake in 1600, then clearly, you do not have a 5%-per-year temporal discount rate in your pure preferences.”

Agreed, I wouldn’t have a 5% discount rate, but that isn’t the same as not having any discount rate. The purpose of the discount rate is to leave one indifferent between the two situations. To figure out the discount rate for sparing Bruno, you’d have to estimate how he would have affected our world had he lived longer.

I would sacrifice 5 people (myself, my sister, and my cousins) today to save/ensure my grandfather was able to conceive our parents.

Procreation certainly isn’t the only thing to discount. My grandfather also ran a steel mill, and I’m sure the jobs he created helped keep many people alive/improve the quality of their lives (he lived in South Africa, where the relation of jobs/food to death was real for many). But I can’t reasonably estimate the quality improvements to his employees’ lives to start discounting them.

Another example might be sacrificing ten unmade friends to save their grandparents from WWII. Either way I don’t have those friends. Ideally, I’d be able to make a beneficial trade with the past, and sacrifice less than all 10 friends in order to save all of their grandparents.

“yes of course do discount the investment for chances that the money will be stolen, the future folk won’t be there, they will be ridiculously rich, or they will betray our hopes so badly we repudiate them.”

I don’t see why they have to be ridiculously rich; as long as they are richer, wouldn’t the money do as much good now as later? In fact, the growth you’re able to get on your money will be equal to the amount by which the future is richer.

If that one dollar could be used to give a vaccine to a child today who would live instead of die, and then have kids who would have kids who… etc., wouldn’t you be indifferent between several thousand more people in 12,000 years and a bunch of extra money?

• Ben Jones

Nick,

One in a billion? I’ll save the six lives mate. No, seriously, I will. I’ll save the six lives, and tell those six people they are alive because I trust them to be part of the global push to make sure humanity’s on the right track. They don’t have to do much, just make it a billionth less likely that we’re all going to fry. I’d even help them out to make up for my grave grave sin against mankind. If seven individuals in a world of apathy and ignorance can’t manage that between them, well, we’re in some trouble, aren’t we?

Most of Bostrom’s ideas on existential risk do seem very real to me, particularly the human-engineered scenarios. However, if you’re giving such risks the weight you seem to, then it doesn’t make sense to spend a waking second on anything but guarding against a Doomsday Scenario. I imagine even Bostrom takes a second each day to smell his coffee. Surely the sentence “would you rather do X, or decrease the probability of existential disaster by one in Y?” is one of the most unproductive things ever written. It’s an argument against pretty much anything! What if it’s the same 6 people, and one in a trillion? One in a Googol? If it’s an ‘easy decision’ at one in a billion (a truly, truly small probability shift), where does it end? Why are you wasting time reading this? Go save the world!

Either way, I wouldn’t say it’s an argument for future discounts, or against my point above. There is no ‘Existential Risk Avoidance Fund’ that trades human lives (or anything else) for discrete probability shifts. Definitely don’t put money in a savings account with a view to avoiding the end of the world! How will you know when to withdraw it for maximum effect? All the big problems with the world, existential ones included, need work today.

[It’s hard to take it seriously when it’s described as ‘living in computers with nanobots’, isn’t it? Wasn’t aiming for flippancy above! Ahem: The Technological Singularity.]

• michael vassar

“Michael, you badly miscalculated Franklin’s rate of return.”

May I see your calculations then? It seems to me that my calculations are correct.

“Yes, I think giving cash to individuals (in small amounts spread over time) works better than the average non-cash “help” done in the name of the world’s poor.”

Does anyone at all (including people who are generally skeptical of aid such as William Easterly) think that it’s not easy to provide help better than the average done in the name of the world’s poor? Note that you, when proposing cash, are also claiming that you know how to do better than the average, but the details of your claim disagree with those of essentially all experts on the subject.

“The scenario where future folk exist, have comparable per-capita wealth, and usually pay their debts is far from “arbitrary”; I consider it the default reference scenario.”

I disagree with the above claim in favor of the claim that the default reference is something close to radical skepticism about the far future, usually expressed as “no-one can predict the future”. I’m quite confident however that the “default reference scenario” among economists and physicists would flat-out deny the possibility of 2% compounded growth for 12,000 years. All economic models I am aware of suggest long-run diminishing returns to capital, as does all of the economic history during which economic growth was rapid and property rights secure. Black holes bound the entropy and information processing possible within a given volume in any event.
I’m curious though: if the above is the “default reference scenario” for 12,000 years in the future, is it also the “default reference scenario” for 3^^^3 years in the future? If so, what are we to think of your third post here http://www.overcomingbias.com/2007/10/pascals-mugging.html ?

“And I see nothing unethical with pointing out how small a payoff probability would justify investing for far future folk one actually cared about.”

My objection is to the abuse of decision theory in discourse here. I will have to make a separate post on this topic. The short version is that the invocation of decision theory combined with very small probabilities in the analysis of other people’s decisions in order to argue for some course of action or to make claims about their psychology or beliefs is outside the bounds of polite academic discourse. It isn’t done in lectures, presentations, debates, papers, or textbooks. I think that there are good reasons for this. If you disagree, I would like to know why you disagree with expert consensus. Also, I’d like to know why you don’t play the lottery. Surely your calibrated probability that you have badly underestimated the probability of winning can’t be as low as the ratio of the utility to you of \$1 and \$100M or so, can it?

• michael vassar

OK John, that clarification of the details of the will slightly changes my observations about his rate of return, or maybe more than slightly. Now I need details as to how much money was left in the fund in 1885 in order to do a serious calculation. Have you done one? Are the results seriously different? Different enough to invalidate the point about promoting science, or even about freeing slaves?

• Carl Shulman

“Seems to me that Franklin’s investment got a 1.034% nominal return.”
It looks like Michael wrote the number by which Franklin’s savings were multiplied each year, 1.034, and put a percentage sign beside it, when he meant to say that the annual nominal return was 3.4%.

• michael vassar

Yep, my mistake. If the return really had been about 1%, then management fees, if any were charged, would have made the returns negative. Likewise, a 1% rate of return on \$7M would have raised only 2 people from poverty to an average income.

• “Black holes bound the entropy and information processing possible within a given volume in any event.”

But these still leave, what, 50+ orders of magnitude of room. And it’s possible we’ll be able to get around them (basement universes, that sort of thing). Besides, value depends not just on the number of computations but on their kind, and the economic value of different possible kinds/combinations of computations is probably also bounded, but not obviously by anything close to the same number that bounds the number of computations.

• tcpkac

£2,000 was apparently the total of old Ben’s salary for his 3 years as President of Pennsylvania. The equivalent for a Governor today would be, say, \$465k. Plugging that into the terms of his will (3/4 of the funds withdrawn after 100 years) would imply, to get to the combined funds’ \$7.3M today, a real rate of return of… 2.1%.
Keep going! Only another 11,800 years of stable economic growth to go!
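tcpkac’s 2.1% can be reproduced by solving for the rate at which \$465k, with three quarters withdrawn at year 100 (one reading of the will’s terms), compounds to \$7.3M over 200 years; a sketch using his approximate figures:

```python
def final_value(principal, rate, years=200, withdrawal_year=100,
                withdrawal_fraction=0.75):
    """Grow at `rate`, withdraw a fraction at `withdrawal_year`,
    then keep growing; return the remaining fund's final value."""
    mid = principal * (1 + rate) ** withdrawal_year
    return mid * (1 - withdrawal_fraction) * (1 + rate) ** (years - withdrawal_year)

# Bisect for the rate that turns $465k into $7.3M over 200 years.
low, high = 0.0, 0.10
for _ in range(60):
    r = (low + high) / 2
    if final_value(465_000, r) < 7_300_000:
        low = r
    else:
        high = r

print(f"implied real return: {low:.2%}")   # ~2.09%
```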

• Douglas Knight

michael vassar,
according to Robin Hanson’s link, Boston got 4.6% and Philadelphia 3.6% for the first 100 years. The management fees were, as you guessed, donated. Franklin dictated the investment: loans to apprentices. I presume he expected that would have an immediate benefit.

• Robin,

I really don’t think you have got my point: the factor of 1/1000 does wreck your calculation. I appreciate that you are busy and don’t have the time to try to work out exactly what each commenter is saying, so I’ll spell things out a bit more. Let’s start with a logarithmic utility function (I’ll choose base 2, but it doesn’t matter much).

u(\$X) = log2(X+1)

[for those who don’t know, the +1 is needed to have zero utility at \$0 rather than negative infinity]

therefore

u(\$10^50) = log2((10^50)+1)
u(\$10^50) = 166 utils

The expectation of a 1/1000 chance of 166 utility is 0.166 utils, so the expected benefit of investing is 0.15 utils. (Actually, this is a significant over estimation, as the person would have already had a lot of money and we should only look at the difference, but this is an upper bound on the utility increase).

The benefit of a guaranteed dollar to a poor person with \$7 is

u(\$8) – u(\$7) = log2(9) – log2(8)
u(\$8) – u(\$7) = 3.170 – 3.000 utils
u(\$8) – u(\$7) = 0.170 utils

Thus it is better to simply give it to someone with \$7 (say in sub-saharan Africa) than to invest it in this crazy scheme and give it to a single person.

Even giving the dollar to a millionaire produces at worst one 10^5th of the utility of the long term saving scenario as opposed to one 10^50th. Thus it is better to give it to a millionaire than to invest it if there were only a one 10^8th chance of the investment scenario working.

This seems to be a *significant* error in your calculation (losing about 50 orders of magnitude). Now there are better strategies you could appeal to than that of giving it all to a single person (such as splitting it evenly), but unless I’ve got something quite wrong here (and I’d like to know if that is so) your original estimates are very flawed for the reasons above.
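Toby’s arithmetic checks out under his base-2 log utility; a quick verification:

```python
import math

def u(x):
    """Toby's base-2 log utility, shifted so u($0) = 0."""
    return math.log2(x + 1)

eu_invest = (1 / 1000) * u(1e50)   # 1-in-1000 shot at $10^50
gain_poor = u(8) - u(7)            # a sure extra $1 to someone with $7

print(f"invest: {eu_invest:.3f} utils, give: {gain_poor:.3f} utils")
# The certain dollar to the $7 person beats the gamble, as claimed.
```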

• very minor correction:

“The expectation of a 1/1000 chance of 166 utility is 0.166 utils, so the expected benefit of investing is 0.15 utils.”

Of course that last number should be 0.166 utils as well — copy and paste error.

• Toby, I was proposing that you save a tiny amount today, and then spread the resulting payoff across all the people alive then, in a way that only modestly increases the wealth of each person. For this proposal, what matters is the marginal value of wealth to them at their status quo wealth level.

• Robin,

The spreading over people helps, but not by anything near what you suggest. The number of people certainly is bounded above by the constraints that others have mentioned and I don’t know anyone suggesting that population will grow exponentially in the long term. Thus the dollars given to each person will rise dramatically in the long run and the diminishing marginal utility of money will kick in. Therefore you still cannot get away with things like:

…what probability would you assign that resources invested today would actually be paid off then? If it is more than one in ten to the fifty, it still seems like a good deal…

Even if your argument did eventually work (when we consider all the best evidence on long term exponential growth of money and people — perhaps via some radical upload scenarios) it is so amazingly non-obvious that you can’t expect the average person to have come up with it, and thus can’t claim that their actions to the contrary count as any kind of moral evidence (which was required to reach your ultimate conclusion that discount rates are morally OK).

• Toby, come on, school kids have for generations been taught the wonders of compound interest. It doesn’t take a genius to realize that if you keep compounding you can get to astronomical ratios. For example, Franklin knew and publicized this two hundred years ago.
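The “astronomical ratios” are quick to confirm: compounding at 2% for 12,000 years multiplies a dollar by roughly 10^103 (computed in logs to avoid overflow):

```python
import math

years, rate = 12_000, 0.02
orders = years * math.log10(1 + rate)   # log10 of the growth factor
print(f"$1 at {rate:.0%} for {years:,} years grows ~10^{orders:.0f}-fold")
```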

If you are afraid there won’t be enough people to appreciate your huge gift, as it would give them each too much, well then invest less now, or don’t wait as long before the gift pays out. But this surely isn’t an excuse for doing almost nothing for them, which is what almost everyone does.

• tcpkac

Robin, could you demonstrate that your financial capital accumulating on someone’s balance sheet will increase the real wealth available in the year 14,000? Or will it just create an almighty redistribution problem when it matures?
And don’t forget all the money supply problems, so beloved of the Reaganites and Thatcherites, that that swollen balance sheet is going to create along the way.

• “How will you know when to withdraw it for maximum effect? All the big problems with the world, existential ones included, need work today.”
The fact that they need work today doesn’t tell me how to compare the risk that money I spend on existential problems today will be wasted to the risk that I won’t find better uses of my money tomorrow.
If I think there’s a 20% chance that I’ll gain knowledge in the next decade that will improve 10-fold the expected effectiveness of money I spend to reduce the risk of unfriendly AI, it seems appropriate to save money. I think I’m currently ignorant enough about how to avoid unfriendly AI that a 10-fold improvement in how well I spend it is quite plausible. If my confidence that I’ve found a wise way to spend money guarding against AI risks gets above 5%, then further 10-fold improvements will become less plausible, and saving will become less wise.
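Peter’s reasoning amounts to a simple expected-effectiveness comparison (his 20% and 10-fold numbers plugged in, ignoring any investment return on the saved money):

```python
def expected_effectiveness(p_insight, multiplier):
    """Expected per-dollar effectiveness of money saved for a decade,
    relative to spending now at today's effectiveness of 1."""
    return p_insight * multiplier + (1 - p_insight) * 1.0

spend_now = 1.0
save_for_decade = expected_effectiveness(0.20, 10)
print(spend_now, save_for_decade)   # saving looks ~2.8x better
# As confidence in today's best intervention rises, the plausible
# multiplier shrinks and spending now regains the edge.
```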

• Regarding strictly internal subjective discount rates, Paul Samuelson once argued that a reason why they tend to be positive is the phenomenon of time subjectively appearing to go faster as we age (along with our finite life spans, assuming that we do not have infinite horizon bequest perspectives, which is also probably realistic for most people).

• “If I think there’s a 20% chance that I’ll gain knowledge in the next decade that will improve 10-fold the expected effectiveness of money I spend to reduce the risk of unfriendly AI, it seems appropriate to save money.”

Peter, if there’s additional evidence and insights out there that can improve the expected effectiveness by a factor of 10, then surely spending current resources on uncovering that evidence is, itself, of high utility.

Squirreling funds away in a Vanguard donor-directed fund may or may not be the optimal choice, but I think your comment over-simplifies all the factors to weigh in such a decision.

• Rolf,
Yes, it might be selfish of me to not spend most of my time looking for effective ways to reduce AI risks. Or maybe someone will publish the necessary insight five years from now, and my devoting too much effort now to find that insight will cause me to burn out and not pay enough attention to AI discussions then to notice that insight.
Do you have a description of all the factors to weigh that isn’t over-simplified?

• Hal

I’d like to make a belated comment here, because I had a different interpretation of Robin’s argument and I want to know if it is right.

At first, the billionaire defends everyone on earth equally. He treats everyone the same. But then when their individual preferences are taken into account, and possible side payments allowed, he modifies his expenditures. Some areas and some people get more protection than others. It’s not that he cares less about certain people, but in a way he acts AS IF he does.

Then we turn our attention to the future, and a similar thing happens. At first he treats everyone equally and tries to defend everyone the same. But then he takes their individual preferences (as far as can be guessed) into account, and allows side payments. He modifies his distribution as a result. Robin says that the end result is that the provided protection decreases with each year, at a rate equal to the market rate of return. Robin doesn’t explain or prove this, but I accept that it is an elementary and standard economic result.

This means, as I see it, not that the billionaire cares less about people in the future, but that his protection schedule is set up AS IF he does. He acts AS IF he discounts the welfare of future generations by about 3% a year. And presumably this kind of argument extends to other situations where we try to make preparations for the future. We will act AS IF we are discounting the welfare of future individuals.

I keep capitalizing AS IF because I thought that was the point of these two examples (of first spatial and then temporal inequality). When he protected different regions differently, it was not because he cared less about, or discounted the welfare, of the regions he protected less. Rather, it was basically due to him deciding how he could provide the most protection possible for his funds, in terms of the preferences of the people being protected. So I think this reasoning is supposed to apply as well to the temporal case. We discount the welfare of future people not because we care less about them, but because when we distribute our funds and our efforts and take into consideration the preferences of future generations, the most efficient approach is to act as if such discounting is in place.

If my interpretation is correct, this justification for discounting is different from others that have been offered, such as pure rate of time preference, or uncertainty about the future. In particular, this one has the unique feature of discount rate being equal to market interest rates, which at least has the advantage of being concrete and somewhat predictable over the short term at least. We don’t have to delve into the mysteries of human psychology.

One final point I will make is that this topic is far from hypothetical. It is of great practical importance today in one of the most significant and potentially far-reaching political issues we face, global warming. In particular last year’s Stern Report triggered an extensive controversy by recommending a very low discount rate in evaluating how we should guard future generations against the threat of climate change. The question of whether it is better to reduce GDP today in order to lower carbon emissions, vs investing in research to improve technology, vs saving so as to make future generations richer and better able to solve problems on their own, is a very pressing one that different groups are fighting over. It would be nice to see the economic community speaking with one voice, or at least communicating their consensus, on this issue.

• Welcome back Hal, however temporarily; we’ve missed you! 🙂 Yes, one of the points I was trying to make clear is that after adjusting any plan to achieve gains from trade, it will look as if it had been chosen by discounting according to market rates of return, even though the plan itself might have changed such rates of return. I am somewhat mystified that not all economists agree with me on this.

• “Do you have a description of all the factors to weigh that isn’t over-simplified?”

Peter, since my response is off-topic, I’ll respond on the SL4 list.

• ShardPhoenix

“The scenario where future folk exist, have comparable per-capita wealth, and usually pay their debts is far from “arbitrary”; I consider it the default reference scenario.”

If by “comparable” you mean “comparable to the present”, then I’d say that the scenario where future people have wealth comparable to the present, and yet an investment has been able to grow to \$10^50, is not only not “default”, it’s obviously impossible.

In fact, I’d say it’s pretty clear that maintaining a 2% real growth rate (by modern measures of wealth) over 12000 years is also impossible, which undermines the basis of your argument.

I don’t actually disagree with the idea that we don’t care as much about future people as ourselves, however – that seems pretty clear to me too.

• ShardPhoenix

Oops, didn’t notice that this post was so old.

• Shard, I think it is fine to comment on old posts. Interest rates need not equal growth rates, and some people then could have comparable wealth to today without the average wealth being at that level.

• ShardPhoenix

But regardless, it’s not possible for anyone to meaningfully have \$10^100 (or even \$10^50) as measured in today’s money and by today’s standards of what having wealth actually means. There aren’t enough resources in the accessible universe. For example, \$10^100 would be enough, at today’s prices, to buy a sphere of gold about 10^14 light years wide, if I calculate correctly. That amount of resources doesn’t exist, so any attempt to actually spend this kind of money on material goods is just going to cause massive inflation and effectively destroy the supposed gains.

You can define this inflation out of existence by picking the right “basket” of goods to measure it with (eg only considering intellectual property, which is presumably much less limited), but you’d have to completely ignore the cost/availability of physical materials, which seems like a case of moving the goalposts.

Either way, you won’t have access to anything like 10^100 times what a dollar will buy you today, which suggests that 2% real gains over 12000 years are impossible.
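ShardPhoenix’s sphere-of-gold arithmetic can be sanity-checked; a back-of-envelope sketch (assuming a 2008-era gold price of roughly \$30 per gram and gold’s density of 19.3 g/cm³) gives a diameter near 10^14 light years for \$10^100:

```python
import math

GOLD_PRICE_PER_GRAM = 30.0       # assumed ~2008 price, $/g
GOLD_DENSITY_G_CM3 = 19.3        # density of gold, g/cm^3
METERS_PER_LIGHT_YEAR = 9.46e15

def gold_sphere_diameter_ly(dollars):
    grams = dollars / GOLD_PRICE_PER_GRAM
    volume_m3 = grams / GOLD_DENSITY_G_CM3 / 1e6   # cm^3 -> m^3
    radius_m = (3 * volume_m3 / (4 * math.pi)) ** (1 / 3)
    return 2 * radius_m / METERS_PER_LIGHT_YEAR

print(f"{gold_sphere_diameter_ly(1e100):.2g} light years across")
```

Far larger than the observable universe, so the point about physical resources stands.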

• NE1

Any tycoon has need for at most 100 yachts. But every beggar would treasure just one.

• ShardPhoenix, I wonder if your statement “Either way, you won’t have access to anything like 10^100 times what a dollar will buy you today, which suggests that 2% real gains over 12000 years are impossible” will end up next to a list of Lord Kelvin quotes in some future history of misplaced skepticism…

Boy, there’s a universe I’d like to live in.

• Just a note to let ShardPhoenix know that this reader read his comment even though the post is old.

• I was wondering about this a year ago – would it be possible to invest money, while giving half the interest to taxpayers, the eventual goal being the elimination of taxes?

This type of stuff – investments and whatnot – has always made my mind boggle though.

• David Simmons

Robin, do you still find it plausible that investing to give in the far future is the best way to help future generations? Should EAs be looking into this more seriously?

• Robin Hanson

Sure it is still plausible.

• David Simmons

Thanks! Do you mind if I ask some possibly stupid questions?

Your argument makes it sound as though an injection of cash into an economy automatically benefits the people there proportionally to how much cash was injected. But if that were true, wouldn’t we want governments to just print off loads of money all the time? Yes, we can target our funds so that it is not just a random injection (e.g. by giving money to far future charities), but a government could do this as well.

I am having a hard time seeing what the difference is between a trust fund maturing and its money being distributed according to the terms of its contract, versus a far future government printing and spending the same amount of money. I know the standard answer is that the latter would cause inflation, but why would it do so any more than the former would? If you have the majority of the world’s money (which would be implied by the fact that you were gaining money at a rate higher than that of the economy as a whole) and then suddenly release it, wouldn’t that cause huge inflation?

Or do you think that we probably know better than a far future government what the needs of its world are?

• Robin Hanson

We economists are almost always talking about real resources, instead of just money.

• David Simmons

I’m not sure if I follow you — are you saying that if you invest in financial markets and make \$1000 (adjusted for inflation), then it means the world has created \$1000 worth of resources (on average) that it wouldn’t have created if you hadn’t invested? Maybe this is really basic but it’s not obvious to me why this should be true — intuitively it seems like it could depend a lot on what kind of investment you are making.

Edit: Would another way to put it be: as an oversimplification, things people do can be divided into investment and consumption. If everyone was a perfect utilitarian that valued everyone equally with no discounting, they would only invest and not consume, since consumption benefits them now but investment benefits future generations more. But since they aren’t, they consume as well as invest. By investing in financial markets, a trust fund would be increasing the fraction of the world that is investing rather than consuming, and as a side-effect the fraction of the world’s resources that are controlled by the trust fund would increase. When the trust fund matures and releases its resources to the world, this cancels the side-effect but the resources created by investment still exist.

As an intuition pump, we could imagine the trust fund eventually grows to become OmniCorp which basically controls all economic activity, and does so with the goal of maximizing productivity without care for the happiness of the workers, who might consequently perceive its existence as dystopian. However, this is supposed to be good because it means that it’s that much sooner that humanity can effectively use all of the resources in the galaxy, and once that happens OmniCorp will stop demanding productivity and will allow everyone to consume the resources they’ve produced.

• Why would one be preoccupied with helping future generations when by most calculations they will be far better off than are we?

• David Simmons

The argument for caring about far future generations is that we can have a much greater effect on them than on current generations. Usually this point is made in terms of existential risk, i.e. that averting existential risk even by a small amount has a huge expected value since there is a chance Earth-originating life will colonize space and then survive for millions of years at a galactic scale: this is Bostrom’s “astronomical waste” argument. I think the most in-depth analysis is Beckstead’s thesis https://rucore.libraries.rutgers.edu/rutgers-lib/40469/.

However, Robin’s argument in this post is that we can also have a large effect on the future simply by investing to give in the far future. It’s true that this argument has less emotional salience than averting X-risk because the people we are helping are richer (at least counterfactually). However, from a utilitarian point of view the correct way to respond to this is simply to discount the marginal utility of money to someone in proportion to how rich they are. Robin discusses this issue with Toby Ord in the comments above and it seems that it is not fatal to Robin’s argument, as far as I can tell.

Of course, if you’re not a utilitarian, you might not find this line of argument very convincing, on the grounds that altruism should be about helping people who are worse off than you or something like that. (Personally I am not a utilitarian but this is one of many issues where I agree with the utilitarian position.)