Bad Balance Bias

Balance in life is good. You need to work, relax, have fun, try new things, continue old things, have sex, do sport, play games, sleep. The balanced lifestyle is the ideal, and we all know this.

Problems start when we extend this idea of balance beyond our personal lives. We deal with political and charitable choices as if balance were a virtue. It’s bad enough for governments – they are expected to fund highways, trains, buses and subways, to subsidise clean energy, oil exploration and energy efficiency, and to pay for the opera, for theatre, for sport, for museums and for films. At least in the government’s case the sums involved are so huge that they change the marginal value of these various activities, making this balance obsession possibly acceptable.

But personal charity is the worst. People will give money to combat hunger in Africa, to help the victims of the tsunami, to educate the under-privileged, to combat global warming and malaria. Since most donations are small, there must be one charity whose marginal value is the highest; rationality implies we should give all our cash to that one. Not only do people fail to do this, they seem to positively prefer to spread their donations around. “You can’t just do one thing” is the reaction I get when questioning this. Yes you can, for charitable giving, and you should.

What are the implications, if my analysis is correct and people demonstrate an irrational love of balance? First, that people will react better to a statement like “we are transferring part of X’s funding to Y” than to “We are cutting X’s funding. We are also increasing Y’s funding.” Secondly, that it will be easier to reduce the funding for some X, but much harder to get rid of X entirely. Lastly, that charities boasting a range of different types of projects will fare much better than they should.

  • Random Visitor

    How about diversification? People may not know ex ante which charity will yield the highest return on the donation? Or for that matter, which ones are honestly run…

  • Anonymous Coward

    Part of the reason for diversification isn’t just some inherent “goodness” in balance. It’s spreading out risk. Investing all of your money in one asset is a bad idea because that one asset might turn out to be bad, and so you spread out the risk by investing a little bit in a lot of places; something like that would also make sense for charities, so that if the one charity you “invest” in fails then you’ve also contributed to some others.

    As to your example of comparing the statements “we are transferring part of X’s funding to Y” and “We are cutting X’s funding. We are also increasing Y’s funding”, I think there’s a much simpler reason for people to react better to the first one than the second. My reaction to the second would be “good, they’ve increased funding for Y, but why’d they have to decrease funding for X??” whereas in the first one, the connection between the two statements is much more explicit – you know that the money is being redirected from X to Y. I don’t think that’s evidence for bias about balance.

  • http://www.spaceandgames.com Peter de Blanc

    What’s the point of spreading out risk?

    Say you’re investing in various companies in order to increase your personal wealth. If your investment is small compared to the size of the company, then your expected monetary payoff is proportional to your investment. So to maximize your expected return on investment, you would invest all of your money in the company with the highest expected payout.

    But maximizing your expected wealth is not your goal. If your (curried) utility function on wealth is concave, then you can maximize your expected utility by diversifying. This is why people diversify their investments.

    I don’t think this applies to charities; at least not if you’re trying to maximize the expected benefit to humanity and your contribution is relatively small. In this case, the expected benefit of any contribution to a charity should be proportional to your contribution, so you should donate only to a single charity.
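
    [A minimal numerical sketch of Peter’s point, with made-up asset parameters: a maximiser of expected wealth puts essentially everything into the asset with the higher expected payout, while a maximiser of a concave (here logarithmic) utility function prefers to split between the two.]

    ```python
    import math

    # Illustrative (made-up) assets: a risky one that triples the stake with
    # probability 0.5 and is wiped out otherwise, and a safe one returning 10%.
    P_WIN = 0.5
    WIN_MULT, LOSE_MULT = 3.0, 0.0
    SAFE_MULT = 1.1

    def outcomes(f):
        """Wealth multipliers when a fraction f of wealth goes into the risky asset."""
        win = f * WIN_MULT + (1 - f) * SAFE_MULT
        lose = f * LOSE_MULT + (1 - f) * SAFE_MULT
        return win, lose

    def expected_wealth(f):
        win, lose = outcomes(f)
        return P_WIN * win + (1 - P_WIN) * lose

    def expected_log_utility(f):
        win, lose = outcomes(f)
        return P_WIN * math.log(win) + (1 - P_WIN) * math.log(lose)

    # Grid search over allocations; f = 1 is excluded because log(0) is undefined.
    fractions = [i / 1000 for i in range(1000)]
    best_linear = max(fractions, key=expected_wealth)
    best_concave = max(fractions, key=expected_log_utility)
    print(f"expected-wealth maximiser: {best_linear:.1%} in the risky asset")
    print(f"log-utility maximiser:     {best_concave:.1%} in the risky asset")
    ```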

  • Biomed Tim

    I think you guys will appreciate this:

    Tim Harford: Charity Is Selfish–The Economic Case Against Philanthropy

  • Hopefully Anonymous

    Peter, that analysis doesn’t make intuitive sense to me. All things equal, why wouldn’t one be better off dividing a small investment into equal halves, and then investing each half into a different company, both tied for the highest expected payout? By doing so, it seems one would diversify away risk without losing any expected payout. It seems that’s the starting point of the inherent advantage of diversification. Particularly if the error bars around the exact expected payouts of the companies are in the same range as the efficiency cost of dividing up one’s small sum of money to invest in both of them.

    I think the same thing applies to charities too. The idea that “there must be one charity whose marginal value is the highest” seems to me to be unproven and contestable, and maybe unmeasurable with current abilities. If charities seem to have equal marginal value, within our ability to measure, then again it seems to me that we’re diversifying away risk by spreading out donations amongst them.

    I think the tone of certainty with which your post is written is unjustified given the circumstance of the apparent world we live in.

  • http://www.spaceandgames.com Peter de Blanc

    I think the tone of certainty with which your post is written is unjustified given the circumstance of the apparent world we live in.

    Okay, I got a little carried away.

    I think the same thing applies to charities too. The idea that “there must be one charity whose marginal value is the highest” seems to me to be unproven and contestable

    It occurs to me that if all philanthropists were rational and were trying to maximize the same thing, then you would be right and there would not be a single charity whose marginal value is the highest (but I think that scenario is ridiculous).

    In general, it is very unlikely for any two measurements taken from the real world to “line up” exactly.

    Peter, that analysis doesn’t make intuitive sense to me. All things equal, why wouldn’t one be better off dividing a small investment into equal halves, and then investing each half into a different company, both tied for the highest expected payout?

    Of course one would be better off investing in two companies than one. Investing in more than two would be even better. I only said that, if you are trying to maximize your expected wealth, then investing in only the best company is the optimal strategy. But in real life, I care less about gaining $1 million vs. $1.01 million than I do about gaining $10,000 vs. $20,000. More generally, my curried utility function over wealth is mostly concave. As I understand it, economists think this is true of most people.

    I don’t think the same thing holds for philanthropy. In a world with 6 billion people living, and 155,000 dying every day, saving 1 million people is about a million times as good as saving 1 person.
    (I do think, however, that saving 6 billion people is more than twice as good as saving 3 billion.)

  • Carl Shulman

    Rational risk-aversion in investment stems from diminishing marginal utility of wealth: going from a $100k annual income to a $200k income is much less valuable than going from nothing to a $100k income. For a small donor (relative to the best charity, or charities in the event of a literal tie), the marginal impact of donations will not decline.

    The objection about charities exactly tied for effectiveness is a red herring. From gifts to panhandlers to funding treatment for one HIV patient to funding HIV vaccine research to attempting to reduce existential risk, expected impact varies by many orders of magnitude. If several charities seem equally good (adjusting for suspected biases towards particular charities), all things considered you should be indifferent between them or roll a die to allocate your funds.

    This has been discussed on Marginal Revolution at length.

    Of course, one implication of the extreme variance in charitable impact is that a rational altruist would invest much more heavily in information than actual charitable donors do, e.g. someone considering a $1,000 donation might reasonably spend $900 (including opportunity costs of time, adjusted for tax rates, etc.) researching charities and then give $100. There’s a relevant thread on Felicifia.

  • Hopefully Anonymous

    There seems something very non-real-world about this concept of a “put all your money into the single best one” approach. First, I query whether it’s possible (or whether the research costs are prohibitive) to determine either a single best company or a single best charity. It seems likely that there may be at least several within the same error/uncertainty bars. And then, it doesn’t make sense to randomly choose one of the several and put all of one’s money into that one. The offices of that charity could blow up due to a gas leak tomorrow. It’s that sort of real-world risk that I think is mitigated (relatively cost-free) when one divides one’s donation equally among several equally best companies or charities (as far as one can measure) rather than randomly picking one and putting all of one’s donation or investment into it.

    But explanations from economists or other smart people are welcome as to why I’m wrong in this intuition/reasoning!

  • Doug S.

    Another possibility: due to “irrational” psychological factors, one may end up giving more money in total when one decides to support several causes instead of donating to one charity.

  • Random Visitor

    Investing all your money in the highest expected return company seems like a rather bad idea, for reasons we have known since Markowitz…

    If that one company fails, you will have nothing at all left, much the same for the charity.

    Looking at expected value alone is generally not a very good criterion for decisions (and, to be fair, combining expected value and variance is not optimal either).

  • Hopefully Anonymous

    Random Visitor,
    Actually, I understand why putting all one’s money in the highest expected return company is a better idea than diversification into lesser expected return companies: because risk of failure is already factored into the expected return. However, if within uncertainty bars several companies are equal, I see no downside to diversification, except the possible efficiency costs of spending the time to divvy up your money and do the paperwork for each investment.

  • Stuart Armstrong

    To those pointing out uncertainty, risk and the advantages of diversification – do you feel that charity givers think that way? And do they assess these risks correctly? If they don’t, is “balance bias” behind their misjudgements, or are other factors more important?

  • Stuart Armstrong

    Another possibility: due to “irrational” psychological factors, one may end up giving more money in total when one decides to support several causes instead of donating to one charity.

    Availability bias may be an explanation here – you see a good cause, give what you have available. See another good cause later, give what you have available then. Repeat.

  • Hopefully Anonymous

    Stuart, I’m not sure how most charity givers operate, but given what we know about the pervasiveness of systematic bias and how it affects decision-making in the world, I think it’s likely that it has a large adverse impact on charity giving. A “balance bias” sounds like it could plausibly be a significant part of such bias. I think beyond that it’s an empirical question which is probably worth exploring further.

  • Nick Tarleton

    HA, I think you must have missed where it was pointed out that diversification in investment is good because of the diminishing marginal utility of money.

    Of course, if there are several charities that seem to have the same potential for good to within epsilon, diversifying among them and targeting just one are equally rational. However, I think there is one organization that really stands out.

  • http://amnap.blogspot.com Matthew C

    I think Nick Tarleton’s comment inadvertently makes Anonymous Coward’s point about spreading risks.

    Nick recommends the Singularity Institute, whose entire raison d’être appears to be dealing with the impacts of strong artificial intelligence. However, strong AI is a project whose prospects are in considerable doubt. If, as I suspect, strong AI is a pipe dream, then people who put all of their eggs in the Singularity Institute basket have entirely wasted their charitable contributions.

  • mobile

    Maybe charities aren’t really all that effective at fixing society’s problems, and maybe they provide the most value to society just by giving donors an opportunity to feel good about themselves. In that case, donating to several causes may be the most welfare enhancing choice for people.

  • Hopefully Anonymous

    Nick,
    Isn’t that the organization run by people who are intentionally trying to create a self-improving intelligence smarter than humanity?

  • Carl Shulman

    “To those pointing out uncertainty, risk and the advantages of diversification – do you feel that charity givers think that way? And do they assess these risks correctly? If they don’t, is “balance bias” behind their misjudgements, or are other factors more important?”

    1. People often give when they are caught up in strong emotion. Subjects primed for logical thinking with math puzzles are less likely to give to famine relief after seeing images of suffering. When in a reflective state of mind, they may have no interest in giving at all, let alone efficiently.

    2. Large amounts of giving are tied to external social pressures and rewards. If one gives under social pressure when the plate is passed at church, to one’s workplace charity, to the young fundraisers on the street, etc., then one cannot reallocate funds freely without the unpleasant effort of ignoring those pressures.

    3. Insofar as charitable giving is a tool to signal personal virtues, allocating one’s donations well is ineffective. Willingness to sacrifice for others is demonstrated by the amount one gives, not how the gift is used. Frequent giving to diverse causes provides more opportunities to demonstrate generosity. Lastly, because most individuals are not well informed and have varied preferences across charities, diversifying across faddish charities best increases the likelihood that one’s giving will impress strangers.

    4. Much giving is motivated by non-consequentialist feelings of reciprocity and tribalism, e.g. donations to religious organizations, local community groups, one’s alma mater.

    5. People have absorbed the importance of diversification in investments, but don’t know or recall the basic economics of risk-aversion, so they apply it as a general rule-of-thumb without understanding.

    6. Optimizing charitable giving is frightening, since Singer’s “Famine, Affluence, and Morality” looms in the background: analysis of our charitable habits could tell us that by our own professed values we should be making large sacrifices. I know individuals who have devoted themselves to careers in non-profit charity, paying enormous opportunity costs, and explicitly ask not to be informed about the cost-effectiveness of interventions like child vaccination to avoid feelings of guilt and responsibility.

    7. Some fear that if they put all their donations in one charity, which ends up not achieving much, they will seem foolish to themselves or others in hindsight.

    8. Some people may take a satisficing ‘salvation by works’ perspective, wanting to earn a certain quantity of moral ‘points.’ For instance, buying carbon credits to offset one’s carbon emissions rather than directly funding basic research (with greater expected CO2 reductions but less certainty) guarantees that the ‘taint’ of global warming (see the work of Jon Haidt) will be expunged. The moral psychology of disgust and contamination is not part of the consequentialist calculation.

    9. Given all of the above reasons, consolidating all of one’s charitable giving is socially peculiar, and social norms set by those acting for reasons other than rationally planned consequentialist altruism help to lock in the effect.

  • Hopefully Anonymous

    Carl, you have some hypotheses there for empirical inquiry, to see the degree to which they may or may not be accurate explanations for various charitable giving biases. For example, I doubt #5 is a significant factor in individual diversification of charitable giving, but it’s a question for empirical inquiry.

  • http://www.spaceandgames.com Peter de Blanc

    Let’s try some concrete examples.

    Which do you prefer:
    1. a 50% chance of saving 10 lives.
    2. saving 4 lives for sure.

    Which do you prefer:
    A. a 50% chance of receiving $10 million
    B. receiving $4 million for sure.
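
    [A short sketch of the arithmetic behind these hypotheticals: lives saved valued linearly, and money valued both linearly and with an illustrative concave (square-root) utility function. The square-root function is an assumption standing in for “concave”, not anything from the thread.]

    ```python
    import math

    # Lives: option 1 vs option 2, valued linearly.
    ev_lives_1 = 0.5 * 10        # 5.0 expected lives saved
    ev_lives_2 = 1.0 * 4         # 4.0 expected lives saved

    # Money: option A vs option B, valued linearly and then with a concave
    # (square-root) utility; the concave valuation flips the ranking.
    ev_money_a = 0.5 * 10_000_000            # $5,000,000 expected
    ev_money_b = 1.0 * 4_000_000             # $4,000,000 expected
    eu_sqrt_a = 0.5 * math.sqrt(10_000_000)  # ~1581 "utils"
    eu_sqrt_b = 1.0 * math.sqrt(4_000_000)   # 2000 "utils"

    print(f"lives, linear value: option 1 = {ev_lives_1}, option 2 = {ev_lives_2}")
    print(f"money, linear value: A = ${ev_money_a:,.0f}, B = ${ev_money_b:,.0f}")
    print(f"money, sqrt utility: A = {eu_sqrt_a:.0f}, B = {eu_sqrt_b:.0f}")
    ```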

  • http://profile.typekey.com/tobyord/ Toby Ord

    I largely agree with Carl’s set of explanations. I think that, while it is not so important among the general public, #5 explains a lot of the resistance to non-diversification that was found in the MR discussion. I was quite surprised to see so much confusion about what seems to be a pretty obvious (though rarely acknowledged) point, and #5 explains this quite well.

    In my experience of explaining this point to others, #7 is quite common and probably is a larger part of the total explanation than a particular ‘bad balancing bias’ (although it could be seen as constitutive of such a bias).

    I would also like to back up Carl’s point that in reality, charity effectiveness is spread over many orders of magnitude and there should not be many subjective ties (unless they are being judged very intuitively or coarsely). See http://www.ncbi.nlm.nih.gov/books/bv.fcgi?rid=dcp2.table.365 for some examples of vastly differing efficiencies of sensible seeming interventions. As some of you know, I am starting an organisation ( http://givingwhatwecan.org ) one of whose major aims is to help assess the efficiency of aid organisations, and even the information we have gathered so far should allow most sincere givers to increase their effectiveness by a factor of 10 to 1,000.

  • Barbar

    The silly thing about this analysis is that it criticizes diversified charity by arguing that you should donate to one charity — and yet it gives you no way of identifying this charity (although one commenter helpfully suggests the Singularity Institute).

    So what difference does this argument make to someone who donates to charity? If I give money to Oxfam and Doctors Without Borders, I am faced with a rigorous argument that I am not maximizing global utility. I can circumvent this argument by not donating to either charity, and yet I can assure you that this switch has decreased global utility, despite the gains to firm believers in the logical foundations of economic theory.

  • Barbar

    My comment obviously does not apply to Toby Ord.

  • Carl Shulman

    Peter,

    Framing effects and loss aversion will affect the answers to your hypotheticals. Also the choice between A and B will depend on all sorts of extraneous financial and other variables.

    I’d select 1 over 2, and B over A, unless it was possible to contract with third parties to hedge my risk and get a guaranteed net return between $4MM and $5MM.

  • http://profile.typekey.com/tobyord/ Toby Ord

    Barbar,

    The argument only applies if you have suspicions that a charity is better than another: i.e. if the expected utility of a donation to one is at all greater than for the other (error bars are irrelevant here). If you have no idea whether Oxfam is more efficient at the margin than MSF, then give to either (or to both, unless it is too much of a waste in terms of bookkeeping). If you have the barest hint that one is better, then give it all to that one. Of course, everyone would also advise you to seek more information on the efficiencies (it is not easy to find, but I hope to change that by later this year).

  • Carl Shulman

    Barbar,

    I gave a link above to a discussion of the implications (hyperlinked from the word ‘Felicifia’), and mentioned that one should reallocate effort towards researching charities before giving. Estimating the quality adjusted years of life saved by each charity would be one possible criterion.

    Because the variance in impact is so tremendous, even a dramatic fall in total giving can be more than offset by reallocation to the best available cause. From a consequentialist perspective, charities that reduce existential risk can be billions of billions of billions… of times as beneficial (www.nickbostrom.com/astronomical/waste.html), so a 99.99+% drop in total giving could be offset by improved allocation.

  • Stuart Armstrong

    So what difference does this argument make to someone who donates to charity? If I give money to Oxfam and Doctors Without Borders, I am faced with a rigorous argument that I am not maximizing global utility. I can circumvent this argument by not donating to either charity

    This hints at a major point – if we are criticising those who give inefficiently to charity, then we should also criticise, and far more, those who do not give to charity at all.

  • Carl Shulman

    “This hints at a major point – if we are criticising those who give inefficiently to charity, then we should also criticise, and far more, those who do not give to charity at all.”
    We have limited resources (time, reputation, etc) to persuade others. If we criticize those who do not give at all, we then face the additional burden of persuading them to give efficiently (because of the variation in impact among charities, it’s unlikely to be worth our time to encourage the less efficient activities). We may expect the marginal efficacy of persuasion to be greatest with those who already show a fair amount of rationality, universalism, and altruism.

  • Hopefully Anonymous

    Is it even charity? Or is it taking on a portion of the burden (perhaps a proportionate, perhaps a disproportionate portion) of solving a collective action problem? The Singularity Institute, for example, purports to be solving a collective action problem. With something like Oxfam, it may be more disputable whether it’s charity or solving a collective action problem (such as reducing an element of existential risk).

  • Hopefully Anonymous

    Peter de Blanc, I take it 1 and A are the more rational choices (without inserting tricky caveats and nuances), but common (maybe even normative) biases lead people to choose 2 and B? These biases seem concretely wasteful, and are probably rooted in the functional but not optimal decision making mechanisms pervasive in our cultures and brains.

  • Hopefully Anonymous

    Toby Ord, you may like Carl’s examples, but shouldn’t we start with empirical foundations? Without them, these seem to me to be conjectures.
    By the way, I think your work, “starting an organisation ( http://givingwhatwecan.org ) one of whose major aims is to help assess the efficiency of aid organisation” sounds great, best of success with it.

  • Hopefully Anonymous

    Does Columbia University economist Ray Fisman suffer from bad balance bias? Or is his recommendation rational?

    http://www.freakonomics.com/blog/?s=fisman

    “Or alternatively, why put all of your rhino aid dollars in one basket? If it is feasible, you may consider running pilot aid projects in both countries and scaling up the efforts that are generating better rhino conservation returns.”

  • Carl Shulman

    Hopefully,

    As discussed above, if you’re spending enough for decreasing marginal returns to come into play, diversification can be rational. Research into the effectiveness of different expenditures will have very high initial returns.

  • Hopefully Anonymous

    “why put all of your rhino aid dollars in one basket?” as a rhetorical question seems to me to be at least an appeal to the principle (or bias?) of aid diversification as a good in and of itself. It’s rather different from the sentence “Don’t put all your rhino aid dollars in one basket if you’re spending enough for decreasing marginal returns to come into play, or if you’ll get higher returns from researching the effectiveness of different expenditures by initially putting some rhino aid dollars into multiple baskets.”

  • http://www.spaceandgames.com Peter de Blanc

    Peter de Blanc, I take it 1 and A are the more rational choices (without inserting tricky caveats and nuances), but common (maybe even normative) biases lead people to choose 2 and B?

    HA: I agree that 1 is better than 2, but I prefer B over A because I would get more than half as much benefit from $4 million as I would from $10 million. Although as Carl pointed out, in the real world you could probably hedge your risk with a contract.

    What do you mean by normative bias?

  • Hopefully Anonymous

    Peter, not to be cute, but based on a google search I just did, the normative definition of normative bias. 😉

  • Random Visitor

    @Hopefully Anonymous
    (Classical) financial economics (Markowitz/Tobin/Sharpe/Lintner CAPM) teaches that yes, the risk of default is priced into a security. However, that only applies to what is generally called systematic risk, i.e. risk that CAN’T be diversified away (otherwise there ARE arbitrage opportunities that will be exploited until we get back to this state: if there were a security whose returns were too good for its risk, everybody would buy it and the price would rise).

    Furthermore, once you (partially) remove short sales constraints, which arguably the introduction of derivative instruments has done for many investors, you can get arbitrarily high expected returns if you invest enough borrowed money into a portfolio.

    As for the “let’s just divide the money into n equal shares and invest it into n assets” approach, where the assets may well be charities, there’s empirical work showing that it does quite well in many circumstances: http://faculty.london.edu/avmiguel/DeMiguel-Garlappi-Uppal-2006-11-22.pdf

    At that point, it is a pretty straightforward result that you should buy however many units of the portfolio with the best expected return/variance combination (in this model usually called the market portfolio) as your risk attitude suggests.

    Now there is a large body of work on why investing in the real world may not be as easy, but the general point that diversification is good remains in all but the most extreme settings.
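
    [A minimal simulation of the variance point, under the simplifying assumption of N assets with identical expected returns and independent, identically distributed risk: the 1/N portfolio keeps the same expected return but shrinks the standard deviation by roughly a factor of √N, which is what a risk-averse investor is buying. All parameters are illustrative.]

    ```python
    import random
    import statistics

    random.seed(0)
    MU, SIGMA = 0.05, 0.20            # per-asset expected return and risk (illustrative)
    N_ASSETS, N_TRIALS = 10, 100_000

    def single_asset():
        return random.gauss(MU, SIGMA)

    def one_over_n_portfolio():
        # Equal weights over N_ASSETS independent assets with identical statistics.
        return sum(random.gauss(MU, SIGMA) for _ in range(N_ASSETS)) / N_ASSETS

    single = [single_asset() for _ in range(N_TRIALS)]
    spread = [one_over_n_portfolio() for _ in range(N_TRIALS)]

    print(f"single asset : mean {statistics.mean(single):.3f}, stdev {statistics.stdev(single):.3f}")
    print(f"1/N portfolio: mean {statistics.mean(spread):.3f}, stdev {statistics.stdev(spread):.3f}")
    # Both means come out near 0.05; the 1/N stdev is close to 0.20 / sqrt(10) ≈ 0.063.
    ```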

  • Nick Tarleton

    “If, as I suspect, strong AI is a pipe dream, then people who put all of their eggs in the Singularity Institute basket have entirely wasted their charitable contributions.”

    True, but this only suggests diversification if you count a wasted contribution as having (strongly) negative utility, as opposed to the zero utility it would have for a rational altruist. Even if my contribution has a 90% chance of being wasted, the good it would do in the other 10% of possibilities is great enough that the expected utility of an SIAI donation (or whatever your favorite long-shot high-impact charity) is greater than that of a donation anywhere else.

    If your goal is feeling good about yourself for donating, and you would feel like an idiot on finding out you wasted your money, then this argument would be rational.

  • Nick Tarleton

    One more point I have seen made is that the signaling function of donation may have some actual value in drawing attention to an issue that desperately needs more. Thus, an existential-risk-focused altruist might give the bulk of their money to SIAI but occasionally give $10 to the Lifeboat Foundation or Center for Responsible Nanotechnology, because the slightly increased profile given to existential risk/nano issues by one more donor is worth it.

  • Stuart Armstrong

    We may expect the marginal efficacy of persuasion to be greatest with those who already show a fair amount of rationality, universalism, and altruism.

    I’ll forget for a moment the idea of debiasing, and just look at persuading. For that, we’d want the right criticisms to reach the right ears – those who don’t give shouldn’t hear much about giving inefficiencies, those who already give should hear a narrative with different emphasis depending on their rationality.

    Businesses already manage to charge different consumers different amounts (see The Undercover Economist). But blogs are not doing as good a job. For example, those blogs that give prominence to this issue tend to be read by highly rational altruists (a plus) or by those who don’t give at all and want a justification for their behaviour (a minus). There may be a way to present the issue so that the plus remains but the minus is removed. Implicitly ranking givers, from “not giving” through “inefficient giver” all the way up to “efficient giver”, might be a way to do it.

    But… overcoming confirmation bias is hideously hard. I’ll be sitting by the river of hell, waiting for a snowflake to float by.

  • Stuart Armstrong

    might give the bulk of their money to SIAI but occasionally give $10 to the Lifeboat Foundation or Center for Responsible Nanotechnology, because the slightly increased profile given to existential risk/nano issues by one more donor is worth it.

    Interesting. Anyone have any conception of what the marginal value of a donation is (independently of the amount donated)?

  • Carl Shulman

    “Interesting. Anyone have any conception of what the marginal value of a donation is (independently of the amount donated)?”
    This would vary enormously based on personal status, wealth and reputation, and the existing credibility and activities of the charity. A $1,000 public donation from Stephen Hawking or Terence Tao to a charity like the X-prize would be far more valuable than a like donation to a homeless shelter, by attracting additional funds. For an institute like FHI or SIAI, those endorsements would be still more valuable in assisting in the recruitment of additional talented research staff.

  • michael vassar

    Stuart: One of the major points that we have been making is that in most cases the difference between “inefficient giver” and efficient giver is VASTLY greater than that between “not giving” and “inefficient giver”.

  • Johan Edström

    “One more point I have seen made is that the signaling function of donation may have some actual value in drawing attention to an issue that desperately needs more. Thus, an existential-risk-focused altruist might give the bulk of their money to SIAI but occasionally give $10 to the Lifeboat Foundation or Center for Responsible Nanotechnology, because the slightly increased profile given to existential risk/nano issues by one more donor is worth it.”

    I did exactly this, with the stated justification. However, I suspect there may be biases hidden underneath: avoiding regret if SIAI fails, and positive feelings from helping/being connected to more organizations.

    Aside from the bias issue, I think it is definitely worth it, since:

    a) The Lifeboat Foundation is a *very* small organization, so the marginal value of a dollar is enormous in the beginning.
    b) They have a complementary profile to SIAI in their efforts, rather than an exclusionary one.
    c) Speculative: creating connections between the two organizations PR-wise.

  • Stuart Armstrong

    you are only making it once, when you decide to concentrate all of your giving to one charity.

    You have a good point there. If something may be a scam, or if they act to prevent a problem that may never happen (such as the SI or anti-smallpox measures), there is a case to treat them differently from other charities (as rational people have different interpretations of probability theory).

    a) The Lifeboat Foundation is a *very* small organization, so the marginal value of a dollar is enormous in the beginning.

    The size of the organisation has nothing to do with the marginal value of a dollar donation. In fact, a small organisation is an indication of high (relative) fixed costs, suggesting a slightly lower marginal value.

  • Nick Tarleton

    Kaj, my justification would be that the universe is infinite, so there actually are an infinite number of duplications, and in 1% of worlds where I give to that charity it does end up creating 10,000 utility units, so that over any sufficiently large finite subset of the universe the first charity does more actual good than the second.

    There’s probably a similarly good argument for non-modal realists, but I have a hard time thinking of one.

  • http://www.saunalahti.fi/~tspro1/ Kaj Sotala

    Peter, I was using units of utility as a generic measure of net benefit to humanity – or maybe to a subsection of humanity, or nature, or the universe itself, or whatever the individual donor considers to be the most valuable. Lives saved, species saved from extinction, units of energy saved from entropy, whatever.

    Nick, it’s been my understanding that the amount of matter in the universe is finite, even if the universe itself is infinite? (Wikipedia seems to agree, though very few references are given so that’s not necessarily saying much.)

  • Carl Shulman

    “Since you don’t have an infinite amount of rerolls, then, you should not think in terms of expected return. What you should do would depend on how risk-avoidant you were – personally I might prefer not to put all of my money in one basket, and diversify.”

    Kaj, a situation where you had an infinite amount of rerolls would be one in which no risk existed. Your argument is essentially this:

    Premise 1: there is risk with respect to the efficacy of charitable donations.
    Premise 2: we should be risk-avoidant with respect to charitable donations as we would be with personal investments.
    Premise 3: diversification reduces risk.
    Conclusion: we should diversify our charitable donations in accordance with our general risk-aversion.

    Premise 2 assumes away the issue we have been discussing.

    “This is exactly the same reason why I’d diversify when investing – it’s better to receive some money than none at all.”
    Having a personal income of $200,000 per year is personally better than having $100,000, but generally not by as much as having a $100,000 income is better than having no income at all. In other words, personal income has diminishing marginal utility for most people (according to their own utility functions).

    If an income of $100,000 gives a utility of 4, while an income of $200,000 gives a utility of only 6, then you would be better off in terms of expected utility (EU) to take a guaranteed $100,000 rather than a coin flip that would give you either $0 or $200,000, since the first option would have an EU of 4, while the 2nd would have an EU of 3.
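
    [The same numbers, spelled out; the only added assumption is that u($0) = 0.]

    ```python
    # Utility levels taken from the comment above; u($0) = 0 is an added assumption.
    utility = {0: 0.0, 100_000: 4.0, 200_000: 6.0}

    eu_guaranteed = utility[100_000]                          # 4.0
    eu_coin_flip = 0.5 * utility[0] + 0.5 * utility[200_000]  # 3.0

    print(f"EU of a guaranteed $100,000:       {eu_guaranteed}")
    print(f"EU of a coin flip for $0/$200,000: {eu_coin_flip}")
    ```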

    “Likewise it’s better to save some lives than none at all.”
    Of course, but the question is whether saving 2000 lives is better than saving 1000 lives to the same extent that saving 1000 lives is better than saving none at all. If you were a consequentialist altruist, the fact that you had already saved 1000 lives would not make saving another 1000 any less desirable from your perspective, i.e. the EU of saving 2000 lives would be twice that of saving 1000 lives. So you would not be risk-averse.

  • Carl Shulman

    The constant or diminishing marginal value of saving lives has been discussed in an earlier exchange:
    http://www.overcomingbias.com/2007/05/one_life_agains.html

  • http://www.saunalahti.fi/~tspro1/ Kaj Sotala

    Carl, I’m not really sure if I’d word my argument quite like that. The bit about charity diversification being exactly equal to business diversification was just a sidenote thrown in at the end, not an integral part of the argument as such.

    (I do understand the concept of diminishing marginal utility when it comes to investment, though I’m somewhat sceptical of the suggestion that it’s by itself enough to explain diversification. But anyway, that’s digressing. I don’t think diminishing returns of any kind really have anything to do with the core argument itself.)

    I’d rather say my argument goes like this:

    Premise 1: There is risk with respect to the efficacy of charitable donations.
    Premise 2: The formula of expected returns isn’t applicable in cases where we are only making the decision of where to donate once.
    Conclusion 1: There isn’t a definite way of establishing the amount of risk avoidance that is the best, or the most rational.
    Conclusion 2: Therefore it is up to each individual’s own preferences how risk averse they should be.

  • anon

    Kaj,
    It’s NOT that the formula of expected returns isn’t applicable when making a decision only once… it is still your expected return for one roll… and if your utility function is linear, the formula of expected returns will agree with the decision made using expected utility. The point is that in many cases (including when you are risk averse), solely using an expected return without taking your utility function into account will not be adequate, as it does not take enough factors into account.

    Perhaps this is degenerating into semantics: not applicable versus inadequate given certain assumptions and personal preferences such as risk aversion.

  • Carl Shulman

    Kaj,

    “(I do understand the concept of diminishing marginal utility when it comes to investment, though I’m somewhat sceptical of the suggestion that it’s by itself enough to explain diversification.)”

    Diminishing marginal utility of income (because of fixed costs of food and shelter, among other reasons) can provide a strong justification for diversification, but you can also have a brute distaste for uncertainty.

    http://en.wikipedia.org/wiki/Risk_aversion
    http://en.wikipedia.org/wiki/Modern_portfolio_theory

    “But anyway, that’s digressing. I don’t think diminishing returns of any kind really have anything to do with the core argument itself.”

    So you would say that saving 2000 lives is (from an altruistic perspective) better than saving 1000 lives to the same degree that saving 1000 lives is better than saving none from an altruistic perspective?

    “Premise 2: The formula of expected returns isn’t applicable in cases where we are only making the decision of where to donate once.”

    What if a million people had the option of each giving $1 or flipping a coin (with $2.01 being delivered to do good on heads, and $0 for tails)? If you were one of the million, would you take the risky option? You still only donate once.

    Also, you should distinguish between the probability that you are giving to a charity that will do good, and the probability that your donation will do good. Suppose there are two charities, A and B, such that either each $1000 donated to A has a 1% chance of saving a million lives while donations to B are wasted, or each $1000 donated to B has a 1.01% chance of saving a million lives while donations to A are wasted, and the two scenarios each have 50% subjective probability. If you have $10,000 to donate, the ex ante probability that your donations will not be wasted is maximized by giving everything to B.
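
    [A rough sketch of the expected-lives arithmetic in this two-charity example, assuming each $1,000 contributes linearly to the expectation as stated; the allocations compared are illustrative.]

    ```python
    # Expected lives saved under a few allocations of the $10,000, assuming each
    # $1,000 contributes linearly to the expectation (as stated in the example).
    P_SCENARIO = 0.5              # each scenario has 50% subjective probability
    LIVES = 1_000_000
    P_A, P_B = 0.01, 0.0101       # per-$1,000 chance of saving LIVES when that
                                  # charity is the one that works

    def expected_lives(thousands_to_a, thousands_to_b):
        scenario_1 = thousands_to_a * P_A * LIVES   # only A works
        scenario_2 = thousands_to_b * P_B * LIVES   # only B works
        return P_SCENARIO * scenario_1 + P_SCENARIO * scenario_2

    for a, b in [(10, 0), (5, 5), (0, 10)]:
        print(f"${a * 1000:,} to A, ${b * 1000:,} to B -> "
              f"{expected_lives(a, b):,.0f} expected lives saved")
    ```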

    “Conclusion 1: There isn’t a definite way of establishing the amount of risk avoidance that is the best, or the most rational.”
    “Conclusion 2: Therefore it is up to each individual’s own preferences how risk averse they should be.”
    Of course people can have direct preferences for anything, even sadism or murder, but the argument is that charitable diversification does not follow logically and instrumentally from the desire to do good in the way that investment diversification follows from a desire for the benefits of wealth.

  • Johan Edström

    “The size of the organisation has nothing to do with the marginal value of a dollar donation. In fact, a small organisation is an indication of high (relative) fixed costs, suggesting a slightly lower marginal value.”

    I didn’t think of that; you may be right. I meant it only in the sense that a smaller organization can grow faster in size relative to itself than a larger one. Does that change anything or am I still confused? 😉

    Additionally: does this mean that it’s generally better (all else equal) to support larger organizations than smaller ones? This seems absurd to me; how would any new organization be started if everyone followed that rule?

  • Carl Shulman

    If a charitable model has legs, then early funds and successes should enable it to raise more funds from third parties in the future. A contribution of a given size is more likely to make the difference in allowing an organization to survive and initiate such a virtuous circle early in its history.

  • http://www.saunalahti.fi/~tspro1/ Kaj Sotala

    Carl,

    So you would say that saving 2000 lives is (from an altruistic perspective) better than saving 1000 lives to the same degree that saving 1000 lives is better than saving none from an altruistic perspective?

    Yes.

    What if a million people had the option of each giving $1 or flipping a coin (with $2.01 being delivered to do good on heads, and $0 for tails)? If you were one of the million, would you take the risky option? You still only donate once.

    Hmm. I had not considered it from this point of view. In that scenario, it’d be good if most people would choose the risky option, since many people making a choice once is in effect the same as one person making it many times…

    Though I’m not sure if it directly generalizes to charitable giving in general, since you don’t get a new coin toss each time that somebody new donates to a charity. If people judge that there’s a 10% chance that a charity produces a lot of good for each dollar given, when in reality it won’t, having lots of people donating to it won’t mean that 10% of the money donated will end up doing lots of good. (Of course, in reality things aren’t so simple, since a charity’s net-benefit-for-dollar-invested isn’t a linear function, and a charity’s probability of doing good probably goes up as more people donate to it… at least up to a certain stage.)

    If you have $10,000 to donate, the ex ante probability that your donations will not be wasted is maximized by giving everything to B.

    Mm. True.

    Of course people can have direct preferences for anything, even sadism or murder, but the argument is that charitable diversification does not follow logically and instrumentally from the desire to do good in the way that investment diversification follows from a desire for the benefits of wealth.

    Of course not – my argument was not that diversification follows logically, but rather that nondiversification would not logically follow, either.

  • Carl Shulman

    “Hmm. I had not considered it from this point of view. In that scenario, it’d be good if most people would choose the risky option, since many people making a choice once is in effect the same as one person making it many times…”
    http://en.wikipedia.org/wiki/Bayesian_inference

    Suppose I have a 3rd party flip a coin and write down the result in a sealed envelope. Then, without knowledge of its contents, I ask you whether you would like to bet that the result of the coin flip was ‘tails.’ At the time you decide to make the bet or not, the status of the flip has already been determined. Is this any different from betting on the outcome of a coin flip before it is conducted?

    [As an aside, are you familiar with this problem? http://en.wikipedia.org/wiki/Monty_Hall_problem]

    “Though I’m not sure if it directly generalizes to charitable giving in general, since you don’t get a new coin toss each time that somebody new donates to a charity. If people judge that there’s a 10% chance that a charity produces a lot of good for each dollar given, when in reality it won’t, having lots of people donating to it won’t mean that 10% of the money donated will end up doing lots of good. (Of course, in reality things aren’t so simple, since a charity’s net-benefit-for-dollar-invested isn’t a linear function, and a charity’s probability of doing good probably goes up as more people donate to it… at least up to a certain stage.)”
    But what if you aggregate across alternate Everett-Wheeler branches, distant planets in a big universe, and especially logically possible worlds? You can judge that if agents in your epistemic position acted in a certain way, on the whole the results would be good, in the same way as you would with the coin example and a large population on this world.

    “Of course not – my argument was not that diversification follows logically, but rather that nondiversification would not logically follow, either.”
    I.e. that one should diversify only if one is motivated by additional concerns beyond consequentialist altruism.

  • http://www.saunalahti.fi/~tspro1/ Kaj Sotala

    At the time you decide to make the bet or not, the status of the flip has already been determined. Is this any different from betting on the outcome of a coin flip before it is conducted?

    In that particular case, no.

    (I’m not sure of what I was meant to learn from the Bayesian inference link, or rather, how it applies to this question. I have a rough understanding of the basic principles involved in Bayesian inference, though I’m far from being an expert.)

    [As an aside, are you familiar with this problem? http://en.wikipedia.org/wiki/Monty_Hall_problem]

    I am, yes. It managed to keep me quite puzzled for a while, before a friend explained to me the right way to think about it. (The “what if there were a million doors and the host eliminated all but two” variant helps considerably.)

    But what if you aggregate across alternate Everett-Wheeler branches, distant planets in a big universe, and especially logically possible worlds? You can judge that if agents in your epistemic position acted in a certain way, on the whole the results would be good, in the same way as you would with the coin example and a large population on this world.

    This is a side of the issue that I have so far seen the wisest to ignore, as it’s treading rather speculative ground that I’m not qualified to evaluate. (I barely know the basic concepts of usual physics, let alone basing ethical theories on interpretations of quantum mechanics.) The basic idea of “there may be other beings in other worlds very much like you, so you should act in a way that caused the most good on average if all your equivalents acted like it” does sound valid in principle, but that’s about as much expertise as I have about it, so I’m more than a little unsure…

    I.e. that one should diversify only if one is motivated by additional concerns beyond consequentialist altruism.

    I suppose you could say that, yes.

  • Carl Shulman

    “(I’m not sure of what I was meant to learn from the Bayesian inference link, or rather, how it applies to this question. I have a rough understanding of the basic principles involved in Bayesian inference, though I’m far from being an expert.)”
    As opposed to:
    http://en.wikipedia.org/wiki/Frequency_probability

  • TGGP

    It’s not often I come across a real economic analysis of altruistic efforts, like charitable foundations. I think part of that is that publicly traded corporations have the simple purpose of making money for shareholders, while the goal of charities is less clear, and the source of the money is so removed from the ends to which it is put. An exception is this post from the Becker-Posner blog.

  • dmytryl

    While I do not doubt that one should direct all donations to a single truly best cause until it’s saturated, then to the new best, and so on, the problem is that people’s evaluation of “best” is prone to accidental and deliberate subversion by superstimuli.

    For a not-so-politically-correct example: if your reproductive instincts are to freely choose the “sexiest” woman, you will end up choosing some pornstar with breast implants and very low interest in reproduction.

    Likewise, if you are to choose charities based on the attractiveness and perceived honesty of the spokesperson, at small scales this may well correlate weakly with honesty, but at large scales it will correlate with having a makeup team, with a willingness to spend non-trivial time watching videos of oneself and practising an honest face (having a test group evaluate the perceived honesty in the videos), perhaps even with plastic surgery – something that few honest people would do.

    Likewise, if you are to choose charities based on the perceived ‘caring’ level as set by gut instinct, you are likely to end up donating to very expensive medical treatments for small children with very poor prognoses, while a huge number of children elsewhere lack the most basic medical care.

    Spreading between several best causes decreases the tendency to put all donations into superstimulus charities. Without this, people would be like a bird that puts all the food into a cuckoo’s superstimulus mouth, which is brighter and larger than the bird’s own chicks’ mouths.

    Furthermore, diversification decreases the expected pay-off from deliberate creation of superstimuli, hopefully keeping down the number of superstimuli.

    I do not believe that the argument against diversification of charity spending needs to be made. The people who cannot come up with it themselves would certainly be unable to determine the best charity without falling for some kind of superstimulus.

    While society certainly does instil the bias that everyone should be doing things in balance, it feels more rewarding to donate only to the cause that tugs the hardest at your heartstrings – most likely, a rather bad cause.