On Fudge Factors

Most people base most of their judgements on intuition, rather than explicit calculations. Some people do base judgements on explicit calculations, and take such calculations at face value. But many others, especially on social questions, use calculations that include case-specific fudge factors which can be adjusted to ensure that the calculations agree with case-specific intuitions. While this can produce good estimates when intuitions are far more informative than explicit calculations, it often seems to be done to achieve a hypocritical appearance of calculation-based decisions, while actually letting intuitions dominate.

As I shall explain below, Holden Karnofsky illustrates this preference for fudge factors:

While some people feel that GiveWell puts too much emphasis on the measurable and quantifiable, there are others who go further than we do in quantification, and justify their giving (or other) decisions based on fully explicit expected-value formulas. The latter group tends to critique us … based on our preference for strong evidence over high apparent “expected value,” and based on the heavy role of non-formalized intuition in our decision-making. …

People in this [latter] group are often making a fundamental mistake, … estimating the “expected value” of a donation (or other action) based solely on a fully explicit, quantified formula, many of whose inputs are guesses or very rough estimates. We believe that any estimate along these lines needs to be adjusted using a “Bayesian prior”; that this adjustment can rarely be made (reasonably) using an explicit, formal calculation; and that most attempts to do the latter … are not making nearly large enough downward adjustments.

Karnofsky makes the valid statistical point that if you produce an error-prone estimate of the utilitarian effectiveness of some policy, you should not take that estimate at face value but instead adjust it based on your estimate of how noisy that estimation process was, and your prior expectation of how effective policies could plausibly be. Not doing so, he says, leads to mistakes like:

  • The Back of the Envelope Guide to Philanthropy lists rough calculations … [that] imply that donating for political advocacy for higher foreign aid is between 8x and 22x as good an investment as donating to VillageReach. …
  • Numerous people … argue that charities working on reducing the risk of sudden human extinction must be the best ones to support, since the value of saving the human race is so high that “any imaginable probability of success” would lead to a higher expected value for these charities than for others. …

[If people naively accepted explicit calculations,] it seems that nearly all altruists would put nearly all of their resources toward helping people they knew little about. … There would (too often) be no justification for costly skeptical inquiry of [a chosen] endeavor/action. …

Karnofsky’s preferred approach:

We generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good …

  • The more action is asked of me, the more evidence I require. Anytime I’m asked to take a significant action (giving a significant amount of money, time, effort, etc.), this action has to have higher expected value than the action I would otherwise take. …
  • I pay attention to how much of the variation I see between estimates is likely to be driven by true variation vs. estimate error. …
  • I put much more weight on conclusions that seem to be supported by multiple different lines of analysis. …
  • I am hesitant to embrace arguments that seem to have anti-common-sense implications … A too-weak prior can lead to many seemingly absurd beliefs and consequences. … When a particular kind of reasoning seems to me to have anti-common-sense implications, this may indicate that its implications are well outside my prior.
  • My prior for charity is generally skeptical.

Now I fully agree that one should discount utilitarian policy effectiveness estimates based on estimates of the noisiness of the estimation process, and of how effective policies could plausibly be. These considerations can justify Karnofsky’s use of a generally-skeptical prior, of his attending to variation between estimates, and of his preferring stronger evidence and multiple lines of reasoning.
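The kind of discounting endorsed here can be illustrated with a small numerical sketch. Assuming, purely for illustration, that both the estimate’s noise and the prior are normal (an assumption of mine, not anything in the post), the standard conjugate update shrinks a noisy estimate toward the prior mean in proportion to its noise:

```python
def shrink_estimate(estimate, est_sd, prior_mean, prior_sd):
    """Bayesian-adjust a noisy effectiveness estimate.

    Normal-normal conjugate update: the posterior mean is a
    precision-weighted average of the prior mean and the raw estimate,
    so noisier estimates (larger est_sd) are discounted more heavily.
    """
    w = prior_sd**2 / (prior_sd**2 + est_sd**2)  # weight on the estimate
    post_mean = w * estimate + (1 - w) * prior_mean
    post_sd = (1.0 / (1.0 / prior_sd**2 + 1.0 / est_sd**2)) ** 0.5
    return post_mean, post_sd

# A charity claims 1000x effectiveness, but the estimate is very noisy,
# while the prior says effectiveness near 1x, with modest spread:
adjusted, _ = shrink_estimate(estimate=1000, est_sd=2000,
                              prior_mean=1, prior_sd=10)
```

On these numbers the claimed 1000x shrinks to just over 1x: nearly the whole headline figure is discounted away, which is the “large downward adjustment” Karnofsky says naive explicit calculations fail to make.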

These considerations do not, however, obviously suggest that people are insufficiently skeptical of explicitly calculated estimates. Nor do they obviously support avoiding existential-risk charities, Back of the Envelope Guide to Philanthropy calculations, calculations that recommend large or anti-common-sense actions, or actions that help strangers.

First, to reject another’s calculation on the grounds that it insufficiently discounts due to errors and priors, one needs some evidence of such actual neglect. Unless we know that this consideration is only rarely included, or that if included it would typically be remarked upon, the mere fact that one does not see people explicitly discuss this consideration seems insufficient evidence for its being neglected.

More important, to reject a calculation of utilitarian charity effectiveness merely because it implies “anti-common-sense” actions, including large actions or those that help strangers, seems to give far too much weight to intuition, including intuitions that we shouldn’t do much or help strangers. Since few humans actually try to maximize utilitarian effectiveness in their charity choices, common human intuitions about good charity choices seem unlikely to be very informative about utilitarian charity effectiveness. So once one has estimated the likely distribution of policy effectiveness, and the degree of error in some analysis process, the additional fact that a calculation recommends weird-seeming actions should say little more about its utilitarian policy effectiveness.

It seems quite plausible that actual utilitarian-maximizing policies would be weird, i.e., differ in many distinctive ways from common-sense charitable actions. And it seems quite plausible that two such differences would be that maximizing policies would involve large actions, while common sense prefers small actions, and that maximizing policies might help strangers, while common sense prefers to help neighbors. In this context, your urge to put a lot of weight on common sense probably mainly reveals that you don’t actually want to maximize utilitarian policy effectiveness. That is, you are human, which shouldn’t be much of a surprise.

Holden Karnofsky prefers to rely on his intuitions about which are effective utilitarian charities, and has identified some adjustable fudge factors, i.e., estimates of analysis error and possible effectiveness, that he uses to justify his not endorsing counter-intuitive charities. There is a mismatch, however, between the ways he wants his recommendations to vary with context, and the kinds of variations that these fudge factors can reasonably justify. These fudge factors are not up to this task.

Even if Karnofsky accepts my critique, however, he’ll probably quickly identify some other fudge factors to let him continue to avoid endorsing counter-intuitive charities. After all, he says:

I present what I believe is the right formal framework for my objections to EEV [= explicit expected-value]. However, I have more confidence in my intuitions … than in the framework itself. … If the remainder of this post turned out to be flawed, I would likely remain in objection to EEV.

With new fudge factors, he’d continue to claim that he wants to maximize the utilitarian effectiveness of charities. But really, what are the chances of that?

  • Jonah S


    You suggest that Holden’s appeal to his intuition serves as rationalization for choosing conventional over utilitarian-optimal charities, without substantiating your claim. Can you point to a single example of a charity that you believe to have higher utilitarian expected value than GiveWell’s top-ranked charities, and give a solid argument in favor of your position? If so, I’d be interested in knowing more. If not, your suggestion is specious.

  • Aron

    “Most people base most of their judgements on intuition, rather than explicit calculations”

    Translation: Those commenters that don’t admire me.

    “Some people do base judgements on explicit calculations”

    Translation: Robin Hanson.

    “But many others, especially on social questions, use calculations that include case-specific fudge factors which can be adjusted to ensure that calculations agree with case-specific intuitions. ”

    Translation: Whoever I intend to status assassinate for the purpose of getting their attention to witness my awesomeness. In this case Holden Karnofsky.

    “As I shall explain below, Holden Karnofsky…”

    O rly?

    • Aron, your last eight comments have been simply rude, without compensating insight. I will delete further such comments by you.

  • OSB

    Sigh. I am at work, working on the business case for a multi-million dollar investment project. My task this morning is – precisely – to build into the spreadsheet some fudge factors which can be adjusted so that the business case will support the decision, made twelve months ago, to go ahead with the project.

    This isn’t exactly a new or uncommon phenomenon.

  • Alexander Kruel

    @Robin Hanson

    What do you suggest for people who do not yet possess the math background to formally analyze charities and use explicit calculations? Should such people avoid charitable giving until they have learned the math, concentrating all their resources on acquiring the necessary education, or rely solely on their intuitions?

    Personally I am not able to follow most of the math in Karnofsky’s article at the moment. What I got out of reading it, including the comments, is mainly that it seems to be incredibly hard to evaluate charities and that there are many open problems.

  • Sewing-Machine

    Terry Tao on a kind of fudge factor:

    I think an epsilon of paranoia is useful to regularise these sorts of analyses. Namely, one supposes that there is an adversary out there who is actively trying to lower your expected utility through disinformation (in order to goad you into making poor decisions), but is only able to affect all your available information by an epsilon. One should then adjust one’s computations of expected utility accordingly. In particular, the contribution of any event that you expect to occur with probability less than epsilon should probably be discarded completely.
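    Tao’s heuristic is easy to state as code. The sketch below (function name and interface are mine, not Tao’s) simply drops any outcome whose probability falls below epsilon before computing expected utility:

```python
def robust_expected_utility(outcomes, epsilon=0.01):
    """Expected utility under an 'epsilon of paranoia'.

    Per the heuristic quoted above: an adversary may have corrupted
    your information by up to epsilon, so the contribution of any event
    you assign probability below epsilon is discarded entirely.

    outcomes: list of (probability, utility) pairs.
    """
    return sum(p * u for p, u in outcomes if p >= epsilon)

# A Pascal's-mugging-style gamble: a tiny chance of an astronomical payoff.
outcomes = [(0.9, 1.0), (0.099, 2.0), (1e-6, 1e9)]
naive = sum(p * u for p, u in outcomes)          # dominated by the 1e-6 term
robust = robust_expected_utility(outcomes)       # ignores the 1e-6 term
```

    The naive expected value is dominated by the improbable term, while the epsilon-regularised version ignores it, which is exactly the behavior Tao recommends for decisions vulnerable to disinformation.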

  • JB

    This brings to mind the famous quote from the great John Von Neumann — “There is no sense in being precise when you have no idea what you are talking about”

  • Preferring concrete evidence does indeed seem to count against existential risk charities – much as Karnofsky claims. If it is hard for others to measure whether a cause is working, that increases the chances that it is not working, since it is likely challenging for the charity to measure its own progress.

  • Dave

    I doubt that people will really be rational about these things. Most people with lots of money to give are not mathematically worshipful. Giving comes from subjective calculations (fudge factors), or from which charity got to you first.

    Say I was considering giving $100,000 to a charity that fed stray cats. Then I read a study put out by the Audubon Society showing that stray cats killed 10 thousand birds. Are you saying the “Bayesian prior” would help me make a rational decision? How do you balance a dead cat vs. a dead bird? It just isn’t a mathematical question.

    • Alexander Kruel

      How do you balance a dead cat vs. a dead bird? It just isn’t a mathematical question.

      Identify the common factor you most strongly care about, e.g. suffering or general intelligence, study cats and birds, see how they compare, and then calculate which outcome minimizes suffering or increases intelligence. Or else decide how much more you care about cats than birds, or vice versa, and see whether the number of cats or birds saved outweighs the value you assign per cat or bird.

      Everything is a math question, most are just too complicated to solve.

      • Dave

        OK. I see you do like a lot of people do. You tacitly decide whether cats or birds are more important, then you attempt to quantify the question. This is because normative or simply whimsical solutions are not supposed to be adequate.

        Here is how I solved the problem. I fed the cat so that she didn’t have to hunt birds, but now there are more rats. My back yard is like the welfare state.

  • Hi Robin,

    I do believe that Bayesian adjustments are not included in most expected-value estimates of the kind I discuss. More at my comment on Less Wrong.

    My understanding from our Google+ exchange is that we agree that the Bayesian adjustment described would have the property of requiring stronger evidence for more counterintuitive claims (all else equal), and that no other “anti-weird-claims” adjustment is needed or warranted.

    I sympathize with your uneasiness regarding fudge factors. In my post, I state:

    Of course there is a problem here: going with one’s gut can be an excuse for going with what one wants to believe, and a lot of what enters into my gut belief could be irrelevant to proper Bayesian analysis. There is an appeal to formulas, which is that they seem to be susceptible to outsiders’ checking them for fairness and consistency.

    But when the formulas are too rough, I think the loss of accuracy outweighs the gains to transparency. Rather than using a formula that is checkable but omits a huge amount of information, I’d prefer to state my intuition – without pretense that it is anything but an intuition – and hope that the ensuing discussion provides the needed check on my intuitions.