Why Do Firms Buy Ads?

Firms almost never have enough data to justify their belief that ads work:

Classical theories assume the firm has access to reliable signals to measure the causal impact of choice variables on profit. For advertising expenditure we show, using twenty-five online field experiments with major U.S. retailers and brokerages ($2.8 million expenditure), that this assumption typically does not hold. Evidence from the randomized trials is very weak because individual-level sales are incredibly volatile relative to the per capita cost of a campaign — a “small” impact on a noisy dependent variable can generate positive returns. A calibrated statistical argument shows that the required sample size for an experiment to generate informative confidence intervals is typically in excess of ten million person-weeks. This also implies that selection bias unaccounted for by observational methods only needs to explain a tiny fraction of sales variation to severely bias observational estimates. We discuss how weak informational feedback has shaped the current marketplace and the impact of technological advances moving forward. (more; HT Bo Cowgill)

More striking quotes below. The paper offers management consulting and nutrition supplements as examples of other products whose purchase people rarely have sufficient evidence to justify. In fact, I wouldn’t be surprised if this applied to a large fraction of what we and firms buy: we buy because others say it works, and we don’t have data to disprove them.

More striking quotes: 

The standard deviation of sales, on the individual level, is typically ten times the mean over the duration of a typical [ad] campaign and evaluation window. … Answering questions such as “was the ROI 15% or -5%,” a large difference for your average investment decision, or “was the annualized ROI at least 5%,” a reasonable question to calibrate against the cost of capital, typically requires at least hundreds of millions of independent person-weeks—nearly impossible for a campaign of any realistic size. …
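The power arithmetic behind that claim can be sketched with a standard two-sample sample-size formula. The inputs below are illustrative stand-ins (the $0.35 per-person campaign cost is my assumption; the $75 sales standard deviation matches the paper’s example), not the paper’s exact calibration:

```python
from statistics import NormalDist

def persons_per_arm(sigma, delta, alpha=0.05, power=0.8):
    """Sample size per arm to detect a mean difference `delta` in an
    outcome with standard deviation `sigma` (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return 2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2

# Telling ROI +15% from ROI -5% means resolving a sales difference of
# 20% of the campaign's per-person cost.  With a hypothetical cost of
# $0.35/person and per-person sales with sd $75:
n = persons_per_arm(sigma=75.0, delta=0.35 * 0.20)
print(f"{n:,.0f} people per arm")  # roughly 18 million per arm
```

And that is only an 80%-powered yes/no test at the 5% level; the narrow confidence intervals the quote asks about push the requirement far higher.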

If an ad costs 0.5 cents per delivery (typical of “premium” online display ads), each viewer sees one ad, and the marginal profit per “conversion” is $30, then only 1 in 6,000 people need to be “converted” for the ad to break even. Suppose a targeted individual has a 10% higher baseline purchase probability (a very modest degree of targeting), then the selection effect is expected to be 600 times larger than the causal effect of the ad. …
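That arithmetic is easy to check directly. The only number added below is a baseline purchase probability near 1 over the campaign window, which is my assumption to reproduce the quoted 600x ratio (plausible for a large retailer where almost everyone buys something):

```python
cost_per_view = 0.005          # $0.005 per delivery ("premium" display)
profit_per_conversion = 30.0   # marginal profit per conversion

# Break-even conversion rate: one conversion must pay for 6,000 views.
breakeven = cost_per_view / profit_per_conversion
print(f"1 in {1 / breakeven:,.0f}")  # 1 in 6,000

# Selection effect: targeted people purchase at a 10% higher baseline rate.
baseline_prob = 1.0            # assumed: nearly everyone buys in the window
selection_effect = 0.10 * baseline_prob
print(f"{selection_effect / breakeven:.0f}x the causal effect")  # 600x
```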

This regression amounts to detecting a 35 cent impact on a variable with a mean of $7 and a standard deviation of $75. This implies that the R2 for a highly profitable campaign is on the order of 0.0000054. To successfully employ an observational method, we must be sure we have not omitted any control variables or misspecified the functional form to a degree that would generate an R2 on the order of 0.000002 or more; otherwise estimates will be severely biased. …
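The R2 figure follows from the variance a treatment dummy can explain; a quick check (assuming equal-sized treatment and control groups, which is my simplification):

```python
effect = 0.35      # treatment effect in dollars
sd_sales = 75.0    # standard deviation of per-person sales
var_dummy = 0.25   # variance of a 50/50 treatment indicator

# R^2 = variance explained by the treatment regressor / total variance
r2 = effect ** 2 * var_dummy / sd_sales ** 2
print(f"{r2:.7f}")  # 0.0000054
```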

Our results do not necessarily apply to small firms, brand new products or direct-response TV advertising. However, according to estimates from Kantar AdSpender (and other industry sources), large advertisers using standard ad formats, such as the ones we study, account for the vast majority of advertising expenditure. …

The existence of vastly different advertising strategies by seemingly similar firms operating in the same market with similar margins is consistent with our prediction that very different beliefs on the efficacy of advertising are allowed to persist in the market.

  • Paul

    Is it not signaling?

    When I see an ad, I usually zone out, mute the ad, or otherwise attempt to ridicule or ignore it. But I remember “that business is doing well enough to waste money on ads, maybe it’s in it for the long haul”.

    • Paul

      I realize that this doesn’t answer your real question (how do we know that it works?), but do you accept that efficient markets would just displace wasteful firms?

      Ads must work, correct? The world is complex, and we don’t have a high enough n to measure many things. Don’t the business owners care the most (and aren’t they the most likely to know best)? We have no counter-factual.

      Are you arguing that this is a society-wide error made by almost everyone faced with the decision? That’s a lot to swallow.

      • Norman Maynard

        Perfectly competitive markets would just displace wasteful firms. Monopolistically competitive markets (the types with firms we see constantly advertising) may not.

        Almost all of medical history is filled with society-wide errors made by almost everyone faced with the decision. Corrections can take many, many generations. It’s just not that uncommon.

      • LemmusLemmus

        Many firms don’t advertise.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    What about your idea that folks construct their identities through the products they buy, which would seem to require advertising to establish a brand?

  • hellotoRH

    Although this has probably been mentioned before: you reference supplements. Any in particular, or a short link to a list?

  • IMASBA

    I agree with Stephen Diamond: ads can be useful to establish brand identities. They can also make people aware of sales events. But yeah, I’ve wondered whether ads really work since I was a child, since they tend to annoy me and most people I ask about it, and I doubted that some subliminal influence would be strong enough to overcome that, or that something as vague as a subliminal influence could be reliably demonstrated by a field as messy and filled with hot-air BS as marketing (yes, I really thought about those things as a child).

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      Yes, it does seem unusual that you thought such things as a child. I had no question that ads worked because I’d see things advertised and want them.

      • IMASBA

        I saw ads for hundreds of different toys; I could only buy a few of them, and it didn’t seem to me like I bought the ones with the most ads (oftentimes even ones that had no ads at all: I just liked legos, and there were very few ads for legos where I lived).

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        This may be, it occurs to me, a generational difference: ads were fewer when I was a kid. (I recall, at age 4, a pair of futuristic binoculars obtained by sending in cereal boxtops and a little cash.)

        This suggests that ads may seem ineffective today because the market has been so saturated. It may be hard to measure their impact at the margin. But that doesn’t mean they aren’t effective inframarginally (to use a word recently learned from another commenter).

      • IMASBA

        Saturation is certainly part of it. Saturation leads to too many choices, too many competing claims of being the best, and a general loss of trust (because of those competing claims, but also because scandals keep piling up and personal experience with rubbish products advertised as great accumulates).

  • Quixote

    Buying ads ensures or at least promotes favorable coverage in the institution you are writing a check to. Ads are not about customers. Ads are about media.

  • http://www.selfishmeme.com/ The Watchmaker

    Of course, when we have insufficient data, we default to our priors, not “no action.”

  • BJ Terry

    This article had an interesting tidbit of information on this topic. I wish there were more public data about the findings: https://vdare.com/posts/the-obama-campaign-and-big-data

    Except I was doing exactly that for the BehaviorScan market research service on the Procter & Gamble account in 1983-85. For about 30,000 volunteer consumers, we knew every single thing they bought at every supermarket and drug store in town, plus every TV show and commercial they watched in their homes, and we could send different commercials to individual houses via their cable set-top-box.

    The set-top box recorded the exact channel each panelist was tuned to by the second. We then employed workers to watch videotapes of all TV shows shown in those markets and write down which commercials started when.

    [...]

    The unsettling finding of our 1980s Big Data business was that it wasn’t readily apparent from this vast amount of information we collected on consumers that changes in TV advertising had much impact on viewers’ purchasing behavior. When we started testing and tracking, most brand managers were convinced that they could boost sales for existing consumer packaged goods by doubling their ad budgets, but our unbelievably sophisticated real world laboratory tests seldom showed that was true. About the best we could come up with is that if you have some actual news to tell consumers — e.g., you’ve added a breakthrough ingredient to your toothpaste that is endorsed by the American Dental Association because it’s so effective — yeah, then heavy advertising can move the needle. But for most famous old products, where all you have to tell consumers is the same old same old, more ads don’t necessarily sell more product …

    Your linked paper regarding online ads would seem to suggest that their tests were underpowered to detect anything, but the structure is so different that it would be hard to say without knowing more.

    • Randall Lewis

      In several other papers (which can be downloaded at my SSRN profile: http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=1502148), we discuss the old BehaviorScan experiments and results. They’re a great example of good experimentation that has been surprisingly rare in advertising practice (and academic publications). However, our read of their publications is in line with this linked author’s comments–that most of their results were statistically insignificant, even with low thresholds like 20%-one-sided (~t>1). Their meta-analyses of hundreds of studies suggest that ads are doing something, but knowing that ads work on average is a far cry from useful optimizing information if you suspect that there is a significant amount of heterogeneity in effectiveness among advertisers, creatives, and audiences.

      Those tests with 3,000 or 30,000 households were generally underpowered by the same calculations we’ve run. They had some advantages–it’s possible that consumer packaged goods (largely the subject of study in many of those tests) have slightly better power properties than general retail, but the main takeaway is that in spite of the statistical revolution of advertising over the past couple decades, measuring causal effects of ads is still quite hard.

  • robertwib

    Prior probabilities?

    • http://overcomingbias.com RobinHanson

      “Priors” are usually post something else further back in time. Unless you want to argue that we are born believing in ad effectiveness.

      • mgoodfel

        We probably are born believing that anything the group believes is true. So it would be natural to assume that advertising, which makes it look like an item is already popular, would be an effective strategy.

        And there’s the obvious flip side — if I don’t tell anyone about my product, how can I expect them to know to buy it?

      • Rob

        I was thinking more of their impression that ads work on average, even though they can’t know whether their own ad in particular has worked.

  • Sean II

    Somewhere in corporate America, someone who didn’t make it to page 18 is saying: “Hmmm…according to this here, we might have no way of knowing if our ad money is well spent or just wasted. Better hire a bunch of d-bags from McKinsey and have them look into that.”

  • Robs1001

    You don’t need data to be confident that ads work.

    You need only two bits of knowledge:

    1) That ads are effective at influencing your own decisions.
    2) That you’re not that unusual.

    You can deduce both of these by attending to your own thoughts, experiences and decisions. No data required.

    • IMASBA

      The fact that 1) is a bit complicated in practice is alluded to in the name of this blog.

      • Robs1001

        I don’t think it’s at all hard to know 1), but certainly figuring out *which* ads will be effective in which ways and why is a complicated business.

      • Randall Lewis

        Right. And it’s not whether ads do “anything,” but rather whether they’re cost-effective–which is the decision-relevant information. We argue in the paper that not only is it hard to measure whether the ads are cost effective, but even demonstrating that they’re doing much at all is also difficult in many cases.

  • Curt Adams

    I can think of two problems with measuring the benefit of ads. First is that the effect is very long-term. When I’m at the supermarket and in a hurry I’m certainly more likely to get something I’m familiar with, and in some cases I can remember ads from 40 years ago. Second, there’s an externality in that exposure to more ads increases the desire to buy in general. I’d think that an experiment based on a single campaign is very unlikely to show a benefit, even if there is one. Better would be a long-term trial.

    In general, it’s pretty rare to have a really successful brand without advertising.

    • DH

      “In general, it’s pretty rare to have a really successful brand without advertising.”

      Couldn’t be more wrong. Some of the best brands in history grew to be leaders in their industry solely by word of mouth and quality of product. Crossfit & Ferrari are two examples that come to mind. Neither advertises; both rely almost exclusively on reputation and users evangelizing the product. When you have something great, you don’t have to spend millions of dollars on advertising to bombard the world’s visual and auditory landscape with your value proposition; the quality and value of the product speak for themselves. Think about how Facebook grew to reach billions of users in under a decade, yet during that time never purchased a single advert: their users became their advertisers.

      Advertising is a farce. Keeps marketers in a job at the expense of societal well being.

      • Eric Mineart

        Ish. All brands are built on word of mouth and PR. But most brands are sustained with (brand consistent) advertising. Sure, there are exceptions, Trader Joes is another, but the rule is almost always ads as the defense budget for a brand.

        A brand is also the sum of all touchpoints, so no amount of great advertising is going to sustain a brand if the other touchpoints are crap.

        Finally, they’re marketers, not advertisers. Common conflation, but any marketer who is an advertiser first is probably not that good of a marketer.

      • Meaux

        Ferrari doesn’t advertise? What do you call their spending on F1?

        I would also note that when, like Ferrari, your annual volume is a bad day for Toyota, you can get away with living off of press releases and car shows. Once you hit the mass market, you need to advertise.

        I will also note, if there’s one company that would know the value of advertising, it would be Google, and they started to advertise as they went to the mass consumer market.

    • Randall Lewis

      We point out in the paper that the long-run effects of advertising are even more impossible to measure because you accumulate more baseline noise over longer time horizons. You’d have to spend even more money over a long period of time to overcome the background noise that accumulates over time. Further, you have to deal with word-of-mouth spillovers that would get progressively larger over longer periods of time as ad effects diffuse through social networks (not just the online variety), broadcast media channels (not just advertising), and other communication media such as blogs.

  • JW Ogden

    I am amazed at how confident people are in their beliefs about nutrition. I see very little strong evidence to support the kinds of pronouncements people make about nutrition.
    I feel that even today, in areas like nutrition, economics, psychology, and many others, we are still groping in the dark.
    Here is what that does to me: I accept the Intergovernmental Panel on Climate Change as the best guess on AGW, but because of the way people are in these other areas, I think that the best guess on AGW may well be completely wrong.

  • justin

    We know advertising works because individual firms spend billions on them. If they didn’t work, we’d see firms that don’t advertise become massively more profitable than, say, Apple.

  • SisyphusRolls

    This paper seems to underestimate the amount of money spent on more measurable ad outcomes, e.g. through cost-per-click search or cost-per-action sales via email, etc. Online, display ads such as those discussed are a minority of total spend if you include email, search, social, etc., which of course you should. Social does not lend itself well to metrics, but search and email allow quite accurate ROI calculations, among other metrics.

    • Randall Lewis

      It is not an uncommon practice for consumers to avoid clicking on search or display ads while still acting upon the information. It’s also not uncommon for advertisers to target ads to users who are likely to convert anyway. These two effects go in opposite directions and can be orders of magnitude larger than the actual (potentially profitable) ad effects. That’s another major thrust of the paper.

      • SisyphusRolls

        Those are both known, although they are lesser issues in the CPA space, e.g. for email advertising. There, you face the related issue that advertising effects can be confounded, because consumers often see multiple advertisements in different media for the same product, but only the last click is counted as a conversion. That is a real problem in assessing ROI across a campaign, although as tracking gets more sophisticated, knowing how many times an individual saw a display ad before converting via search or email becomes possible, etc.

    • IMASBA

      Cost per click? You are aware that websites will use clickbait to boost their numbers. People come for the sensationalist headline or the promise of boobs; they probably won’t even notice the ad.

      • SisyphusRolls

        If you are paying by the click, your optimal strategy is not to get the most clicks, typically. It is to balance clicks against conversions.

        Clickbait strategies are a small minority of the cost per click advertising space. They are better suited to CPM display advertising, where they are more prevalent.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    “By creating more informed consumers, ads induce producers to offer better prices and quality, which benefits other consumers.” ( http://www.overcomingbias.com/2013/04/in-praise-of-ads.html )

  • Silent Cal

    It’s a bit misleading to say that firms don’t know if their ads ‘work’. That sounds like not knowing if the RoI is above -100%, when the actual situation is not knowing if it’s above 0%. Don’t firms have to make lots of investment decisions without conclusive evidence of positive RoI?

  • Pingback: What’s worth reading | Shreyasp's Weblog

  • MPS17

    I work for an internet publisher that has the means to collaborate with advertisers on end-to-end measurement (with intent-to-treat holdouts) to verify that ads on our site provide, on average (for those advertisers who’ve done this), many times the return on ad spend.

    • IMASBA

      Are you saying those businesses delay any other changes to their business during an extended period of time, half of which they operate without ads and half of which they operate with ads?

      • MPS17

        No, I’m saying that my business works with these businesses to divide the people we would show ads to into two groups, a treatment group who we actually show the ads to, and a control group who we don’t, and then we compare the sales revenue between these two groups.

      • Robin Hanson

        The paper this post is about says you usually won’t have enough data to have much confidence in your belief that the group you find with higher sales revenue in your test would also have more revenue if you reran the test again.

      • MPS17

        I don’t really want to take the time to read the paper, at least not now. The experiments I refer to are expensive so we only do them for big spend advertisers, who likewise advertise to a large audience. Statistical uncertainty can be measured by bootstrap. There is always a probability of false positive, but the point is that the probability is measured to be very small.
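A percentile bootstrap for the treatment/control difference in means can be sketched as follows (a generic illustration, not this publisher’s actual pipeline):

```python
import random

def bootstrap_ci(treated, control, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the difference in
    mean outcomes between treatment and control groups."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        # resample each group with replacement, at its original size
        t = [rng.choice(treated) for _ in treated]
        c = [rng.choice(control) for _ in control]
        diffs.append(sum(t) / len(t) - sum(c) / len(c))
    diffs.sort()
    return (diffs[int(n_boot * alpha / 2)],
            diffs[int(n_boot * (1 - alpha / 2)) - 1])
```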

      • Robin Hanson

        So let me get this straight – you want us to believe you that the paper is wrong, even though you can’t be bothered to read it, and even though your salary depends on folks believing it is wrong?

      • MPS17

        I didn’t assert the paper was wrong, I asserted that my company had demonstrated (very) positive return on ad spend.

        Granted, my company is not a typical one. I don’t know what standards advertisers apply to decisions on ad spend with other companies. All I am claiming is that we have worked with many advertisers to demonstrate positive return on ad spend. I assumed that advertisers show similar interest in demonstrating value with other advertising platforms, but perhaps not.

        As I start to glance over the paper, the first thing I notice is a stated requirement of “ten million person-weeks,” which the company I work for can certainly deliver (and probably has for many of these experiments that I refer to), though I’m not yet convinced it’s necessary…

        …and the second thing I notice is that a major part of the claim is the need to overcome selection effects, which are potentially much larger than the signal. However, these can be overcome within an “intent to treat” framework. In particular, your control group isn’t simply a random subset of people you didn’t show the ad to. Your control group is a sample of people that you would have shown the ad to, if there were no experiment, but that you withhold showing the ad to, because of the experiment. So, for example, you can predefine an audience of people that you intend to show the ad to, then randomly select a fraction of them for the holdout, and then compare outcomes for the ones you actually showed the ad to vs. the ones you were going to show the ad to but then withheld for the holdout.
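The holdout construction described here can be sketched in a few lines (an illustration of the scheme as described; all names are hypothetical):

```python
import random

def split_intended_audience(audience, holdout_frac=0.10, seed=0):
    """Randomize a pre-defined target audience into a treatment group
    (actually shown the ad) and a holdout (withheld from the ad)."""
    rng = random.Random(seed)
    treatment, holdout = [], []
    for user in audience:
        (holdout if rng.random() < holdout_frac else treatment).append(user)
    return treatment, holdout

# Because both groups were selected by the same targeting rule *before*
# randomization, comparing their mean outcomes is free of selection bias:
def itt_estimate(outcomes_treatment, outcomes_holdout):
    return (sum(outcomes_treatment) / len(outcomes_treatment)
            - sum(outcomes_holdout) / len(outcomes_holdout))
```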

      • Randall Lewis

        We used intent-to-treat + counterfactual exposures for all of the experiments. I’m glad to hear that you’re doing the same. Such careful experimentation has not been as common in the industry as I wish it were, though things have changed a lot over the past 6 years.

      • MPS17

        Well, one anecdotal observation that supports the theme of your work is that I have heard people in my company express frustration over how we have been able to work with advertisers to provide an unprecedented level of measurement and yet the money is very slow to flow from “entrenched” industries like TV and direct mail.

        And, good measurement is very hard… I’m not on the measurement team but I’m aware of what they do and the challenges they face. It could be that audience size is a major constraint, and that it has never come up in our discussions because we do this only for big spenders with big campaigns. But the main constraint I am aware of in terms of doing this at scale is constructing a good “intent-to-treat” holdout. Our advertising platform is a real-time auction with feedback optimization, and while we have tools to work within this context for the purpose of big, “one off” studies (basically by creating pre-defined audiences as I explained), we are still working on a scalable solution with the dynamic audiences that our ad delivery is designed to serve.

      • Randall Lewis

        This is a nice confirmation of our findings in the paper–you’re able to run useful experiments for “big spend advertisers” and the website you work with is unusually large and can run experiments in the 10s of millions of users.

        Thanks for sharing your thoughts.

  • Daublin

    There are other dependent variables one can measure, and I would expect them to be less noisy than sales. Three such variables that come to mind are: brand recognition, attitude, and information about a product.

  • Dan Lavatan

    I don’t see why firms don’t just stop displaying ads for six months and see what happens. I would expect the conversion ratio to be less than one in ten million.

    • Rory

      We do. In fact, we always do – we have to. You can’t buy every ad spot and every poster and every banner in every town and city across the entire country. So you run a campaign in select cities or even select neighbourhoods, if you can afford that granularity of analysis. You then compare it to reasonably similar cities/towns/neighbourhoods, and analyse the sales in those areas. This can be done with both offline and online advertising – doing this kind of analysis is kind of a central part of our job.

  • DVM

    You might find this experimental paper on paid search interesting: http://faculty.haas.berkeley.edu/stadelis/Tadelis.pdf