67 Comments

While I do not doubt that one should donate everything to the single truly best cause until it's saturated, then to the next best, and so on, the problem is that people's evaluation of "best" is prone to accidental and deliberate subversion by superstimuli.

For a not-so-politically-correct example: if your reproductive instincts were free to choose the "sexiest" woman, you would end up choosing some porn star with breast implants and very little interest in reproduction.

Likewise, if you choose charities based on the attractiveness and perceived honesty of their spokespeople, then at small scales this may weakly correlate with actual honesty. At large scales, however, it will correlate with having a makeup team and with a willingness to spend non-trivial time watching videos of oneself and practising an honest face - having a test group evaluate the perceived honesty in the videos, perhaps even getting plastic surgery - something few honest people would do.

Likewise, if you choose charities based on the perceived 'caring' level as set by gut instinct, you are likely to end up donating to very expensive medical treatments for small children with very poor prognoses, while a huge number of children elsewhere lack the most basic medical care.

Spreading donations between several top causes reduces the tendency to put everything into superstimulus charities. Without this, people would be like a bird that puts all the food into the cuckoo chick's superstimulus mouth, which is brighter and larger than the mouths of the bird's own chicks.

Furthermore, diversification decreases the expected pay-off from deliberate creation of superstimuli, hopefully keeping down the number of superstimuli.

I do not believe that the argument against diversification of charity spending needs to be made. The people who cannot come up with it themselves would certainly be unable to determine the best charity without falling for some kind of superstimulus.

While society certainly does instil the bias that everyone should do things in balance, it feels more rewarding to donate only to the cause that tugs hardest at your heartstrings - most likely, a rather bad cause.


It's not often I come across a real economic analysis of altruistic efforts like charitable foundations. I think part of the reason is that publicly traded corporations have the simple purpose of making money for shareholders, while the goal of charities is less clear, and the source of the money is so removed from the ends to which it is put. An exception is this post from the Becker-Posner blog.


"(I'm not sure of what I was meant to learn from the Bayesian inference link, or rather, how it applies to this question. I have a rough understanding of the basic principles involved in Bayesian inference, though I'm far from being an expert.)"As opposed to:http://en.wikipedia.org/wik...


At the time you decide to make the bet or not, the status of the flip has already been determined. Is this any different from betting on the outcome of a coin flip before it is conducted?

In that particular case, no.

(I'm not sure of what I was meant to learn from the Bayesian inference link, or rather, how it applies to this question. I have a rough understanding of the basic principles involved in Bayesian inference, though I'm far from being an expert.)

[As an aside, are you familiar with this problem? http://en.wikipedia.org/wik...]

I am, yes. It managed to keep me quite puzzled for a while, before a friend explained to me the right way to think about it. (The "what if there were a million doors and the host eliminated all but two" variant helps considerably.)

But what if you aggregate across alternate Everett-Wheeler branches, distant planets in a big universe, and especially logically possible worlds? You can judge that if agents in your epistemic position acted in a certain way, on the whole the results would be good, in the same way as you would with the coin example and a large population on this world.

This is a side of the issue that I have so far considered wisest to ignore, as it treads rather speculative ground that I'm not qualified to evaluate. (I barely know the basic concepts of ordinary physics, let alone enough to base ethical theories on interpretations of quantum mechanics.) The basic idea of "there may be other beings in other worlds very much like you, so you should act in the way that would cause the most good on average if all your equivalents acted likewise" does sound valid in principle, but that's about as much expertise as I have on it, so I'm more than a little unsure...

I.e. that one should diversify only if one is motivated by additional concerns beyond consequentialist altruism.

I suppose you could say that, yes.


"Hmm. I had not considered it from this point of view. In that scenario, it'd be good if most people would choose the risky option, since many people making a choice once is in effect the same as one person making it many times..."http://en.wikipedia.org/wik...

Suppose I have a 3rd party flip a coin and write down the result in a sealed envelope. Then, without knowledge of its contents, I ask you whether you would like to bet that the result of the coin flip was 'tails.' At the time you decide to make the bet or not, the status of the flip has already been determined. Is this any different from betting on the outcome of a coin flip before it is conducted?

[As an aside, are you familiar with this problem? http://en.wikipedia.org/wik...]

"Though I'm not sure if it directly generalizes to charitable giving in general, since you don't get a new coin toss each time that somebody new donates to a charity. If people judge that there's a 10% chance that a charity produces a lot of good for each dollar given, when in reality it won't, having lots of people donating to it won't mean that 10% of the money donated will end up doing lots of good. (Of course, in reality things aren't so simple, since a charity's net-benefit-for-dollar-invested isn't a linear function, and a charity's probability of doing good probably goes up as more people donate to it... at least up to a certain stage.)"But what if you aggregate across alternate Everett-Wheeler branches, distant planets in a big universe, and especially logically possible worlds? You can judge that if agents in your epistemic position acted in a certain way, on the whole the results would be good, in the same way as you would with the coin example and a large population on this world.

"Of course not - my argument was not that diversification follows logically, but rather that nondiversification would not logically follow, either."I.e. that one should diversify only if one is motivated by additional concerns beyond consequentialist altruism.


Carl,

So you would say that saving 2000 lives is (from an altruistic perspective) better than saving 1000 lives to the same degree that saving 1000 lives is better than saving none from an altruistic perspective?

Yes.

What if a million people had the option of each giving $1 or flipping a coin (with $2.01 being delivered to do good on heads, and $0 for tails)? If you were one of the million, would you take the risky option? You still only donate once.

Hmm. I had not considered it from this point of view. In that scenario, it'd be good if most people would choose the risky option, since many people making a choice once is in effect the same as one person making it many times...

Though I'm not sure if it directly generalizes to charitable giving, since you don't get a new coin toss each time somebody new donates to a charity. If people judge that there's a 10% chance that a charity produces a lot of good for each dollar given, when in reality it won't, having lots of people donate to it won't mean that 10% of the money donated ends up doing lots of good. (Of course, in reality things aren't so simple, since a charity's net benefit per dollar invested isn't a linear function, and a charity's probability of doing good probably goes up as more people donate to it... at least up to a certain stage.)

If you have $10,000 to donate, the ex ante probability that your donations will not be wasted is maximized by giving everything to B.

Mm. True.

Of course people can have direct preferences for anything, even sadism or murder, but the argument is that charitable diversification does not follow logically and instrumentally from the desire to do good in the way that investment diversification follows from a desire for the benefits of wealth.

Of course not - my argument was not that diversification follows logically, but rather that nondiversification would not logically follow, either.


If a charitable model has legs, then early funds and successes should enable it to raise more funds from third parties in the future. A contribution of a given size is more likely to make the difference in allowing an organization to survive and initiate such a virtuous circle early in its history.


"The size of the organisation has nothing to do with the marginal value of a dollar donation. In fact, a small organisation is an indication of high (relative) fixed costs, suggesting a slightly lower marginal value."

I didn't think of that; you may be right. I meant it only in the sense that a smaller organization can grow faster relative to its own size than a larger one. Does that change anything, or am I still confused? ;)

Additionally: does this mean that it's generally better (all else equal) to support larger organizations rather than smaller ones? That seems absurd to me - how would any new organization ever get started if everyone followed that rule?


Kaj,

"(I do understand the concept of diminishing marginal utility when it comes to investment, though I'm somewhat sceptical of the suggestion that it's by itself enough to explain diversification.)"

Diminishing marginal utility of income (because of fixed costs of food and shelter, among other reasons) can provide a strong justification for diversification, but you can also have a brute distaste for uncertainty.

http://en.wikipedia.org/wik...
http://en.wikipedia.org/wik...

"But anyway, that's digressing. I don't think diminishing returns of any kind really have anything to do with the core argument itself."

So you would say that saving 2000 lives is (from an altruistic perspective) better than saving 1000 lives to the same degree that saving 1000 lives is better than saving none from an altruistic perspective?

"Premise 2: The formula of expected returns isn't applicable in cases where we are only making the decision of where to donate once."

What if a million people had the option of each giving $1 or flipping a coin (with $2.01 being delivered to do good on heads, and $0 for tails)? If you were one of the million, would you take the risky option? You still only donate once.
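The aggregate effect of this question can be checked with a quick simulation (a minimal Python sketch; the dollar figures are the ones from the example, the rest is illustrative):

```python
import random

random.seed(0)

N = 1_000_000   # one million donors
PAYOFF = 2.01   # dollars delivered on heads; tails delivers nothing
SAFE = 1.00     # the guaranteed donation per person

# Each donor independently flips a fair coin: heads delivers $2.01, tails $0.
risky_total = sum(PAYOFF if random.random() < 0.5 else 0.0 for _ in range(N))
safe_total = SAFE * N

# Per flip, the expected value is 0.5 * 2.01 = 1.005 > 1.00, so in aggregate
# the risky option beats the safe one by roughly $5,000 in expectation.
print(f"safe:  ${safe_total:,.2f}")
print(f"risky: ${risky_total:,.2f}")
```

With a million independent flips the aggregate outcome clusters very tightly around its expectation, which is why risk-neutrality looks compelling once you consider everyone's choices together.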

Also, you should distinguish between the probability that you are giving to a charity that will do good, and the probability that your donation will do good. Suppose there are two charities, A and B, such that either each $1000 donated to A has a 1% chance of saving a million lives while donations to B are wasted, or each $1000 donated to B has a 1.01% chance of saving a million lives while donations to A are wasted, and the two scenarios each have 50% subjective probability. If you have $10,000 to donate, the ex ante probability that your donations will not be wasted is maximized by giving everything to B.

"Conclusion 1: There isn't a definite way of establishing the amount of risk avoidance that is the best, or the most rational.""Conclusion 2: Therefore it is up to each individual's own preferences how risk averse they should be."Of course people can have direct preferences for anything, even sadism or murder, but the argument is that charitable diversification does not follow logically and instrumentally from the desire to do good in the way that investment diversification follows from a desire for the benefits of wealth.


Kaj, it's NOT that the formula of expected returns isn't applicable when making a decision only once... it is still your expected return for one roll. And if your utility function is linear, the formula of expected returns will agree with the decision made using expected utility. The point is that in many cases (including when you are risk averse), solely using an expected return without taking your utility function into account will not be adequate, as it does not take enough factors into account.

Perhaps this is devolving into semantics: "not applicable" versus "inadequate given certain assumptions and personal preferences, such as risk aversion".


Carl, I'm not really sure if I'd word my argument quite like that. The bit about charity diversification being exactly equal to business diversification was just a sidenote thrown in at the end, not an integral part of the argument as such.

(I do understand the concept of diminishing marginal utility when it comes to investment, though I'm somewhat sceptical of the suggestion that it's by itself enough to explain diversification. But anyway, that's digressing. I don't think diminishing returns of any kind really have anything to do with the core argument itself.)

I'd rather say my argument goes like this:

Premise 1: There is risk with respect to the efficacy of charitable donations.
Premise 2: The formula of expected returns isn't applicable in cases where we are only making the decision of where to donate once.
Conclusion 1: There isn't a definite way of establishing the amount of risk avoidance that is the best, or the most rational.
Conclusion 2: Therefore it is up to each individual's own preferences how risk averse they should be.


The constant or diminishing marginal value of saving lives has been discussed in an earlier exchange: http://www.overcomingbias.c...


"Since you don't have an infinite amount of rerolls, then, you should not think in terms of expected return. What you should do would depend on how risk-avoidant you were - personally I might prefer not to put all of my money in one basket, and diversify."

Kaj, a situation where you had an infinite amount of rerolls would be one in which no risk existed. Your argument is essentially this:

Premise 1: there is risk with respect to the efficacy of charitable donations.
Premise 2: we should be risk-avoidant with respect to charitable donations as we would be with personal investments.
Premise 3: diversification reduces risk.
Conclusion: we should diversify our charitable donations in accordance with our general risk-aversion.

Premise 2 assumes away the issue we have been discussing.

"This is exactly the same reason why I'd diversify when investing - it's better to recieve some money than none at all." Having a personal income of $200,000 per year is personally better than having $100,000, but generally not by as much as having a $100,000 income is better than having no income at all. In other words, personal income has diminishing marginal utility for most people (according to their own utility functions).

If an income of $100,000 gives a utility of 4, while an income of $200,000 gives a utility of only 6, then you would be better off in terms of expected utility (EU) taking a guaranteed $100,000 rather than a coin flip that gives you either $0 or $200,000, since the first option has an EU of 4, while the second has an EU of 3.
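The arithmetic can be made explicit (a minimal Python sketch; the utility values 4 and 6 are the ones assumed in the example above, and u($0) = 0 is taken as the example implies):

```python
# Hypothetical utility function from the example above:
# u($0) = 0, u($100,000) = 4, u($200,000) = 6.
utility = {0: 0, 100_000: 4, 200_000: 6}

# Option 1: a guaranteed $100,000.
eu_sure = 1.0 * utility[100_000]

# Option 2: a fair coin flip between $0 and $200,000.
eu_flip = 0.5 * utility[0] + 0.5 * utility[200_000]

# Both options have the same expected *income* ($100,000), but diminishing
# marginal utility of money makes the sure thing preferable: EU 4.0 vs 3.0.
print(eu_sure, eu_flip)
```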

"Likewise it's better to save some lives than none at all."Of course, but the question is whether saving 2000 lives is better than saving 1000 lives to the same extent that saving 1000 lives is better than saving none at all. If you were a consequentialist altruist, the fact that you had already saved 1000 lives would not make saving another 1000 any less desirable from your perspective, i.e. the EU of saving 2000 lives would be twice that of saving 1000 lives. So you would not be risk-averse.


Peter, I was using units of utility as a generic measure of net benefit to humanity - or maybe to a subsection of humanity, or nature, or the universe itself, or whatever the individual donor considers to be the most valuable. Lives saved, species saved from extinction, units of energy saved from entropy, whatever.

Nick, it's been my understanding that the amount of matter in the universe is finite, even if the universe itself is infinite? (Wikipedia seems to agree, though very few references are given so that's not necessarily saying much.)


Kaj, my justification would be that the universe is infinite, so there actually are an infinite number of duplications, and in 1% of worlds where I give to that charity it does end up creating 10,000 utility units, so that over any sufficiently large finite subset of the universe the first charity does more actual good than the second.

There's probably a similarly good argument for non-modal realists, but I have a hard time thinking of one.


"you are only making it once, when you decide to concentrate all of your giving to one charity."

You have a good point there. If something may be a scam, or if it acts to prevent a problem that may never happen (such as the SI or anti-smallpox measures), there is a case for treating it differently from other charities (as rational people have different interpretations of probability theory).

a) The Lifeboat foundation is a *very* small organization, the marginal value of a dollar is enormous in the beginning.

The size of the organisation has nothing to do with the marginal value of a dollar donation. In fact, a small organisation is an indication of high (relative) fixed costs, suggesting a slightly lower marginal value.
