48 Comments

I think this idea has become more of a reality since this post was made: www.brightfunds.org


The implication of hypothesis 2 is not even bullshit; it's *obvious* bullshit.

You're confusing absolute and relative senses of welfare losses. Just because the donor's welfare loss is nontrivial does not mean it isn't tiny compared to the total gains provided to recipients. Compare a trivial loss of 1 unit that ends up giving +1000 units with a "nontrivial" loss of 10 units that ends up giving +10,000 units.


But what on earth would be the benefit of a 'level' above level 2?


Wiblin proposed a business model that doesn't appear to be viable for theoretical reasons.

Furthermore, if this business model were viable, why isn't anybody implementing it already? It doesn't require any especially deep insight, and it's not based on any new technology.

I'd say the burden of providing empirical evidence for the viability of this business model rests on Wiblin.


That is a reasonable critique, though almost entirely theoretical, like my reasons for skepticism. Would be nice to also see a more empirical critique.


A refutation of this post, with several of the points other commenters made:

http://rationalconspiracy.c...


Suppose there is a charity that spends a lot on fundraising, and does so successfully.  If you donate the first dollar, you can tell a story about how your dollar got amplified, and you did more than a dollar's worth of good.

However, not all donors can have above-average effectiveness.  There are only so many dollars collected, and so many dollars paid out.  Your extra effectiveness is someone else's decreased effectiveness.

In the general case, you're not donating the first dollar.  You're donating an incremental dollar to an ongoing operation, so you have to look at the average effectiveness, and the fact is that much of your donation is spent on fundraising, not distributed to the beneficiaries.
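
To make the accounting concrete, here is a minimal sketch with purely hypothetical numbers (a charity that spends a quarter of what it raises on fundraising; none of these figures come from the post):

```python
# Hypothetical charity accounting: the marginal "first dollar" story vs. the average donor's reality.
total_raised = 1_000_000      # dollars collected in a year (assumed)
fundraising_cost = 250_000    # dollars spent raising them (assumed)
delivered = total_raised - fundraising_cost

# The "first dollar" story: each fundraising dollar pulled in several more.
marginal_story = total_raised / fundraising_cost

# The average reality: only part of each donated dollar reaches beneficiaries.
average_effectiveness = delivered / total_raised

print(f"marginal story:        ${marginal_story:.2f} raised per fundraising dollar")
print(f"average effectiveness: ${average_effectiveness:.2f} delivered per donated dollar")
```

Both numbers describe the same operation; the amplification story and the 75-cents-on-the-dollar average can't both be claimed by every donor at once.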


In general, you can't assume the effects are statistically independent. You may be testing an entirely wrong idea, or working on a disease that is particularly hard to beat (you don't know the variance or the mean; those too would have to be inferred from the trials). I agree it can in principle be done right, by assigning reasonable priors to those as well, but only in principle. In practice, what is done is to assume the worst case (all drugs fail) and put a bound on the probability of putting a faulty drug on the market (which can be done in a multitude of ways).
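
For what it's worth, here is a minimal sketch of one such bound, assuming a Bonferroni-style correction (both the numbers and the choice of Bonferroni are illustrative, not from the comment):

```python
# Bonferroni-style family-wise bound: test each of m drugs at level alpha/m and the
# probability of approving at least one truly ineffective drug stays below alpha,
# regardless of how the trial outcomes are correlated.
m = 1000          # number of candidate drugs (assumed)
alpha = 0.05      # acceptable overall probability of a false approval (assumed)
per_test_alpha = alpha / m

print(f"each drug must clear p < {per_test_alpha:.5f} "
      f"to keep the family-wise false-approval rate under {alpha}")
```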


If you have a prior over effect size for drugs, and you regress to the mean appropriately after observing your noisy estimate, then the best of the 1000 drugs is as good as it seems. You don't need to do further correction for the fact that you tested 1000 things in this setting. The best drug will look particularly good because the best drug *is* particularly good, if you tested 1000 different things from a distribution with significant variance (and if the distribution of real effectivenesses doesn't have significant variance, you end up regressing all the way to the mean). Am I missing some finer point?
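
A quick simulation of this claim, assuming a normal prior over true effects and normal trial noise (all parameter values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_drugs, n_sims = 1000, 200
prior_mean, prior_sd = 1.0, 0.05   # true effects cluster near placebo-normalized 1.0 (assumed)
noise_sd = 0.10                    # per-trial measurement noise (assumed)

# Normal-normal posterior mean: shrink the noisy estimate toward the prior mean.
shrink = prior_sd**2 / (prior_sd**2 + noise_sd**2)

raw_bias, shrunk_bias = [], []
for _ in range(n_sims):
    true = rng.normal(prior_mean, prior_sd, n_drugs)
    obs = true + rng.normal(0, noise_sd, n_drugs)
    post = prior_mean + shrink * (obs - prior_mean)
    best = np.argmax(post)                       # pick the winner by posterior mean
    raw_bias.append(obs[best] - true[best])      # raw estimate of the winner
    shrunk_bias.append(post[best] - true[best])  # shrunken estimate of the winner

print(f"mean bias of raw estimate for the winner:   {np.mean(raw_bias):+.3f}")
print(f"mean bias of posterior mean for the winner: {np.mean(shrunk_bias):+.3f}")
```

In runs like this the winner's raw estimate overstates its true effect while its posterior mean is roughly unbiased, which is the comment's point: with a correct prior, the multiple-testing correction is already baked into the shrinkage.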


No way, it's far.


Again from a disinterested perspective: the saturation/cannibalization issue seems most deadly. Wiblin suggests that cannibalization of the inefficient by the efficient would be desirable, which is no doubt true within his framework; but the suggestion that increased fundraising would accrue preferentially to the efficient seems unwarranted.

In fact, the underuse of fundraising calls for a better explanation than the (hypothetical) irrational sensibilities of donors. It seems almost obvious that the charities have a tacit agreement in restraint of trade against cutthroat fundraising, precisely because it is costly when they're just counter-cannibalizing each other. The fact that charities aren't supposed to be cutthroat--they're altruists, after all--would seem likely to lay the basis for anticompetitive practices. (I don't know whether such practices by charities are even deemed restraints of trade under U.S. antitrust law.)


Implication: Total donations are easily exhausted; fundraising cannibalizes other charity work.

Doesn't Wiblin respond to this argument adequately: "...may ‘crowd out’ gifts to other charities. However, the logic of giving to GiveWell’s top rated charities is that they make (much) better use of money than most other individuals or organisations. So if you have a fundraising ratio significantly above 1:1, these downsides shouldn’t much matter." Wiblin seems to be arguing it's a good thing if GiveWell-approved fundraising cannibalizes the lesser charities.
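
A back-of-the-envelope version of that argument, with entirely hypothetical numbers (the ratios and the effectiveness multiple below are my assumptions, not figures from Wiblin or GiveWell):

```python
# Toy arithmetic for the 'fundraising ratio well above 1:1' argument.
top_good_per_dollar = 1.0     # good done per dollar by the top-rated charity (unit of account)
other_good_per_dollar = 0.1   # good per dollar at the charities being crowded out (assumed 10x worse)

fundraising_spend = 1.0       # dollar diverted from the top charity's programs into fundraising
raised = 3.0                  # assumed fundraising ratio of 3:1
crowded_out = 1.5             # share of the raised money merely shifted from other charities (assumed)

gain = raised * top_good_per_dollar
cost = fundraising_spend * top_good_per_dollar + crowded_out * other_good_per_dollar
print(f"net change in good done: {gain - cost:+.2f}")
```

Under these assumptions the top charity comes out well ahead even after accounting for the crowded-out donations and the dollar spent on fundraising, which is how I read Wiblin's reply.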

(My interest here being purely dialectical.)


On the other hand, it seems to me that you should try to concentrate donations on the smallest possible number of charities. http://www.slate.com/articl...


Questionable.

Hypothesis 1: People are selfish and accept only small welfare losses for altruism.

Implication: Total donations are easily exhausted; fundraising cannibalizes other charity work.

Hypothesis 2: People can be convinced by clever fundraising to accept higher welfare losses.

Implication: The donor's welfare losses are no longer trivial and not necessarily small relative to the gains.


Fundraising may be close enough to saturation that raising funds for charity A takes from charity B. Meanwhile, money gets diverted into the pockets of overpaid rich white men at the advertising agency.

It's difficult to calculate expected utilities correctly.

Raising education levels in Africa seems definitely good, but its value is difficult to calculate; fighting malaria is easy to estimate in proximate-cause saved lives, but on its own it could eventually lead to a larger number of deaths due to starvation. There is always a multitude of beneficial and adverse scenarios that may or may not happen as a consequence of the help in question.

One scenario, evaluated *perfectly*, is like a drug efficacy estimate based on 1 patient. One scenario evaluated imperfectly is not even that. Worst of all, the distribution of utilities and disutilities of the unevaluated consequences is not known, so you can't calculate how much you should scale down due to the small sample size.

Edit: by the way, with regard to charity evaluation in terms of expected utilities, expected utilities are thoroughly counterintuitive and work nothing like, say, the price of a life, and reporting them as such is extremely misleading.

For example, suppose you are testing a thousand drugs, and for each individual drug you calculate expected efficacy somehow, from some limited clinical data. The highest expected efficacy is, say, 1.2 and the lowest is 0.7 (normalized against placebo). What is the expected efficacy of the top-ranked drug, the one you proclaimed the best? Is it still 1.2? In general, no, for much the same reason that in frequentist statistics you have to raise the required confidence level when dealing with multiple comparisons. In fact, if the best drug in the series of 1000 didn't outperform placebo by as much as the best of 1000 placebos would have (via sampling error), then under most reasonable priors this "best" drug is probably worse than placebo.

If you picked the cheapest drug out of 1000, it still has the same list price; the list price doesn't magically change. But if you calculated expected utilities for 1000 drugs and then picked the best, you need to re-calculate.
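
A small simulation of that re-calculation point, under assumed numbers (true effects tightly clustered around placebo, noisy per-drug estimates; none of the parameters come from the comment):

```python
import numpy as np

rng = np.random.default_rng(1)
n_drugs = 1000
true_effect = rng.normal(1.0, 0.02, n_drugs)           # nearly every drug is close to placebo (= 1.0)
estimate = true_effect + rng.normal(0, 0.10, n_drugs)  # noisy per-drug expected-efficacy estimates

best = np.argmax(estimate)
print(f"estimated efficacy of the apparent winner: {estimate[best]:.2f}")
print(f"true efficacy of the apparent winner:      {true_effect[best]:.2f}")
```

When the estimation noise dwarfs the real spread in effectiveness, the apparent winner's estimate lands well above 1 while its true efficacy can sit near (and sometimes below) placebo, which is exactly the selection effect described above.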


Charities spending money on fundraising with justifications like "raises more than a dollar with each dollar it receives" is part of why we have inefficient charities in the first place. Charities often do contract out their fundraising to other organisations, frequently with pretty unimpressive results.
