We do. In fact, we always do - we have to. You can't buy every ad spot, poster, and banner in every town and city across the entire country. So you run a campaign in select cities or even select neighbourhoods, if you can afford that granularity of analysis. You then compare sales in those areas against reasonably similar cities, towns, or neighbourhoods. This can be done with both offline and online advertising - this kind of analysis is a central part of our job.
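
For concreteness, here is a minimal sketch of that matched-market comparison, with made-up weekly sales figures; the difference-in-differences between test and control markets is the estimated campaign lift.

```python
# Hypothetical weekly sales averages per market: (pre-campaign, during-campaign).
test_markets = {"City A": (104.0, 121.0), "City B": (98.0, 109.0)}
control_markets = {"City C": (101.0, 103.0), "City D": (97.0, 100.0)}

def mean_change(markets):
    """Average during-minus-pre change in sales across markets."""
    changes = [during - pre for pre, during in markets.values()]
    return sum(changes) / len(changes)

# Difference-in-differences: the change in the test markets minus the
# change in comparable control markets is attributed to the campaign.
lift = mean_change(test_markets) - mean_change(control_markets)
print(f"Estimated incremental weekly sales per market: {lift:.1f}")
```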

I was thinking more of their impression that ads work on average, even though they can't know whether their own ad in particular has worked.

You might find this experimental paper on paid search interesting: http://faculty.haas.berkele...

I don't see why firms don't just stop displaying ads for six months and see what happens. I would expect the conversion ratio to be less than one in ten million.

There are other dependent variables one can measure, and I would expect them to be less noisy than sales. Three that come to mind are brand recognition, attitude, and information about the product.

If you are paying by the click, your optimal strategy is typically not to get the most clicks; it is to balance clicks against conversions.
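
A toy illustration, with hypothetical numbers, of why maximizing clicks is the wrong objective under cost-per-click pricing:

```python
# (clicks, conversion rate, cost per click) for three hypothetical keywords.
keywords = {
    "broad": (10_000, 0.005, 0.50),
    "niche": (1_200, 0.040, 0.80),
    "brand": (3_000, 0.020, 0.30),
}
MARGIN = 25.0  # assumed profit per conversion

for name, (clicks, conv_rate, cpc) in keywords.items():
    profit = clicks * (conv_rate * MARGIN - cpc)
    print(f"{name}: {clicks:>6} clicks, profit ${profit:>8,.0f}")
# "broad" wins on clicks but loses money: the objective is to maximize
# clicks * (conv_rate * margin - cpc), not clicks alone.
```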

Clickbait strategies are a small minority of the cost per click advertising space. They are better suited to CPM display advertising, where they are more prevalent.

Those are both known, although they are lesser issues in the CPA space, e.g. for email advertising. There, you face the related issue that advertising effects can be confounded, because consumers often see multiple advertisements in different media for the same product, but only the last click is counted as a conversion. That is a real problem in assessing ROI across a campaign, although as tracking gets more sophisticated, it becomes possible to know how many times an individual saw a display ad before converting via search or email.
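
For readers unfamiliar with the jargon, a minimal sketch (hypothetical path, generic channel names) of how last-click attribution discards the earlier touchpoints:

```python
from collections import Counter

path = ["display", "email", "search"]  # touchpoints before one conversion

last_click = Counter({path[-1]: 1.0})                 # all credit to last touch
linear = Counter({ch: 1 / len(path) for ch in path})  # credit split evenly

print("last-click:", dict(last_click))  # {'search': 1.0}
print("linear:    ", dict(linear))      # each channel gets 1/3
# Under last-click, the display and email ads that preceded the search
# conversion get zero credit, understating their contribution.
```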

Well, one anecdotal observation that supports the theme of your work: I have heard people in my company express frustration over how we have been able to work with advertisers to provide an unprecedented level of measurement, and yet the money is very slow to flow from "entrenched" industries like TV and direct mail.

And good measurement is very hard... I'm not on the measurement team, but I'm aware of what they do and the challenges they face. It could be that audience size is a major constraint, and that it has never come up in our discussions because we do this only for big spenders with big campaigns. But the main constraint I am aware of in doing this at scale is constructing a good "intent-to-treat" holdout. Our advertising platform is a real-time auction with feedback optimization, and while we have tools to work within this context for big, "one-off" studies (basically by creating pre-defined audiences, as I explained), we are still working on a scalable solution for the dynamic audiences that our ad delivery is designed to serve.
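
As a generic illustration (not a description of this particular platform's internals), an intent-to-treat holdout can be built by hashing user IDs into arms before any delivery decision, so assignment stays independent of the auction's feedback loop:

```python
import hashlib

HOLDOUT_PCT = 10  # hypothetical: hold out 10% of users per campaign

def in_holdout(user_id: str, campaign_id: str) -> bool:
    """Deterministic assignment: the same user always lands in the same arm."""
    digest = hashlib.sha256(f"{campaign_id}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < HOLDOUT_PCT

def eligible_to_serve(user_id: str, campaign_id: str) -> bool:
    # The auction bids only for users outside the holdout; the analysis then
    # compares all holdout users with all treatment users (intent-to-treat),
    # whether or not an ad was actually shown.
    return not in_holdout(user_id, campaign_id)
```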

Cost per click? You are aware that websites will use clickbait to boost their numbers. People come for the sensationalist headline or the promise of boobs; they probably won't even notice the ad.

In several other papers (which can be downloaded at my SSRN profile: http://papers.ssrn.com/sol3...), we discuss the old BehaviorScan experiments and results. They're a great example of good experimentation that has been surprisingly rare in advertising practice (and academic publications). However, our read of their publications is in line with the linked author's comments--that most of their results were statistically insignificant, even with low thresholds like 20% one-sided (~t>1). Their meta-analyses of hundreds of studies suggest that ads are doing something, but knowing that ads work on average is a far cry from useful optimizing information if you suspect that there is a significant amount of heterogeneity in effectiveness among advertisers, creatives, and audiences.

Those tests with 3,000 or 30,000 households were generally underpowered, by the same calculations we've run. They had some advantages--it's possible that consumer packaged goods (largely the subject of study in many of those tests) have slightly better power properties than general retail--but the main takeaway is that in spite of the statistical revolution in advertising over the past couple of decades, measuring the causal effects of ads is still quite hard.
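
In that spirit, a back-of-the-envelope power calculation with hypothetical retail-like numbers (sales variance large relative to the mean) shows why household counts in the thousands fall short:

```python
# Two-sample power calculation, normal approximation (numbers hypothetical):
#   n per arm = 2 * (z_alpha + z_beta)^2 * sigma^2 / delta^2
Z_ALPHA = 1.96   # two-sided 5% significance
Z_BETA = 0.84    # 80% power

sigma = 75.0             # std. dev. of per-household sales over the test
baseline = 7.0           # mean per-household sales over the test
delta = 0.05 * baseline  # trying to detect a 5% lift ($0.35)

n_per_arm = 2 * (Z_ALPHA + Z_BETA) ** 2 * sigma ** 2 / delta ** 2
print(f"Households needed per arm: {n_per_arm:,.0f}")
# Roughly 720,000 per arm -- which is why 3,000- or 30,000-household
# tests are underpowered for effects of economically plausible size.
```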

Right. And it's not whether ads do "anything," but rather whether they're cost-effective--which is the decision-relevant information. We argue in the paper that not only is it hard to measure whether ads are cost-effective, but even demonstrating that they're doing much at all is difficult in many cases.

We point out in the paper that the long-run effects of advertising are even harder to measure, because baseline noise accumulates over longer time horizons. You'd have to spend even more money, over a longer period, to overcome that accumulating background noise. Further, you have to deal with word-of-mouth spillovers that grow progressively larger over longer horizons as ad effects diffuse through social networks (not just the online variety), broadcast media channels (not just advertising), and other communication media such as blogs.
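
A small numerical illustration of the noise-accumulation point, assuming i.i.d. weekly sales noise with a hypothetical standard deviation: the cumulative noise that a one-time campaign's fixed effect must be detected against grows like the square root of the horizon.

```python
import math

sigma_weekly = 75.0  # hypothetical std. dev. of weekly sales noise

for weeks in (1, 4, 26, 52):
    sigma_cum = sigma_weekly * math.sqrt(weeks)
    print(f"{weeks:>2} weeks: cumulative-noise std. dev. = {sigma_cum:6.1f}")
# 75.0 -> 150.0 -> 382.4 -> 540.8: the campaign's cumulative effect is
# fixed, but the noise it must be detected against keeps growing, so
# long-run measurement needs far bigger experiments than short-run.
```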

It is not uncommon for consumers to avoid clicking on search or display ads while still acting on the information. It's also not uncommon for advertisers to target ads to users who are likely to convert anyway. These two effects push in opposite directions, and each can be many times larger than the actual (potentially profitable) ad effect. That's another major thrust of the paper.
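
A toy simulation of the second (targeting) effect, with made-up parameters: when ads are shown preferentially to users who would convert anyway, the naive exposed-vs-unexposed comparison wildly overstates the true effect.

```python
import random

random.seed(0)
TRUE_LIFT = 0.002  # assumed true causal effect of seeing the ad

exposed, unexposed = [], []
for _ in range(200_000):
    intent = random.random() < 0.05                        # would buy anyway
    targeted = random.random() < (0.8 if intent else 0.1)  # targeting model
    converted = intent or (targeted and random.random() < TRUE_LIFT)
    (exposed if targeted else unexposed).append(converted)

naive = sum(exposed) / len(exposed) - sum(unexposed) / len(unexposed)
print(f"naive exposed-vs-unexposed 'lift': {naive:.3f} vs true {TRUE_LIFT}")
# The naive comparison credits the targeting (selection) to the ad,
# overstating its effect by about two orders of magnitude here.
```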

We used intent-to-treat + counterfactual exposures for all of the experiments. I'm glad to hear that you're doing the same. Such careful experimentation has not been as common in the industry as I wish it were, though things have changed a lot over the past 6 years.
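
For readers who haven't seen the technique, a minimal sketch of combining intent-to-treat with counterfactual ("ghost") exposures, under an assumed record format of (arm, exposed-or-would-have-been, converted):

```python
def itt_exposure_lift(records):
    """records: iterable of (arm, exposed, converted) tuples, where
    'exposed' means the user saw the ad (treatment) or would have seen
    it (holdout, via a logged counterfactual/ghost impression)."""
    arms = {"treatment": [0, 0], "holdout": [0, 0]}  # [n exposed, n converted]
    for arm, exposed, converted in records:
        if exposed:  # restrict to the (counterfactually) exposed users
            arms[arm][0] += 1
            arms[arm][1] += int(converted)
    (nt, ct), (nh, ch) = arms["treatment"], arms["holdout"]
    return ct / nt - ch / nh  # conversion-rate lift among the exposed

# Toy usage:
recs = [("treatment", True, True), ("treatment", True, False),
        ("holdout", True, False), ("holdout", True, False)]
print(itt_exposure_lift(recs))  # 0.5 with these made-up records
```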

This is a nice confirmation of our findings in the paper--you're able to run useful experiments for "big spend" advertisers, and the website you work with is unusually large and can run experiments on tens of millions of users.

Thanks for sharing your thoughts.

Ferrari doesn't advertise? What do you call their spending on F1?

I would also note that when, like Ferrari, your annual volume is a bad day for Toyota, you can get away with living off of press releases and car shows. Once you hit the mass market, you need to advertise.

I will also note that if there's one company that would know the value of advertising, it would be Google, and they started advertising as they moved into the mass consumer market.
