I want to pivot off of some of Karl’s comments on experiments in economics. The value of experimental methods versus other econometric approaches is a pretty hot topic in economics, and some economists are pushing back in the same way that Karl does. I would argue that the value of randomization depends on the context of the problem. Sometimes randomized experiments are extremely useful, sometimes they are well suited only to answering a narrow question, and other times non-experimental methods are equally or more valid.

One recent example of pushback comes from economist Michael Keane. His paper is interesting because of how it contrasts with Jim Manzi’s article in NRO today, which Karl was responding to below; both are writing about a similar problem in economics and marketing: estimating elasticities. Let me quote Jim at length, since he explains it better than I could summarize:

Suppose… you wanted to predict the effect of “disrupting the supply” of Snickers bars on the sale of other candy in your chain of convenience stores. The “elasticities” here are how much more candy of what other types you would sell if you stopped selling Snickers….

The best, and most obvious, way to establish the elasticities, is to take a random sample of your stores, stop selling Snickers in them and measure what happens to other candy sales. This is the so-called scientific gold standard method for measuring them. Even this, of course, does not produce absolute philosophical certainty, but a series of replications of such experiments establishes what we mean by scientific validation.
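To make the mechanics of that concrete, here is a minimal sketch of the kind of randomized store experiment Jim describes. Everything in it is invented for illustration: the number of stores, the baseline sales, and the assumed substitution toward other candy are all made-up values, not anything from Jim’s or Keane’s work.

```python
# Hypothetical sketch of a randomized store experiment (all numbers invented).
# Half the stores are randomly assigned to stop selling Snickers; we then
# compare average "other candy" sales in treated vs. control stores.
import random
import statistics

random.seed(0)

stores = list(range(200))
random.shuffle(stores)
treated, control = set(stores[:100]), set(stores[100:])

def weekly_other_candy_sales(store_id):
    base = random.gauss(500, 50)              # baseline other-candy sales ($/week), simulated
    lift = 40 if store_id in treated else 0   # assumed substitution toward other candy
    return base + lift

sales = {s: weekly_other_candy_sales(s) for s in stores}

treated_mean = statistics.mean(sales[s] for s in treated)
control_mean = statistics.mean(sales[s] for s in control)

# Because assignment is randomized, the difference in means estimates the
# average effect of dropping Snickers on other candy sales.
print(f"Estimated effect of dropping Snickers: {treated_mean - control_mean:.1f} $/week per store")
```

That difference in means is exactly the limited question the experiment is perfect for; the point of the Keane quotes below is that the more interesting questions require more than this.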

Michael Keane discusses these and similar issues, making the case for structural models and pointing out the limitations of the experimentalist approach:

Interestingly, it is easy to do natural experiments in marketing. Historically, firms were quite willing to manipulate prices experimentally to facilitate study of demand elasticities. But it is now widely accepted by firms and academics that such exercises are of limited use. Just knowing how much demand goes up when you cut prices is not very interesting. The interesting questions are things like: Of the increase in sales achieved by a temporary price cut, what fraction is due to stealing from competitors vs. category expansion vs. cannibalization of your own future sales? How much do price cuts reduce your brand equity? How would profits under an every-day-low-price policy compare to a policy of frequent promotion? It is widely accepted that these kinds of questions can only be addressed using structural models—meaning researchers actually need to estimate the structural parameters of consumers’ utility functions. As a result, the “experimentalist” approach has never caught on.

He goes on to describe how non-experimental data was used in an influential paper of his in marketing:

However, the fact that we should pay close attention to sources of identifying variation in the data is not an argument for abandoning structural econometrics. Plausibly exogenous variation in variables of interest is a desideratum in all empirical work—not an argument for one approach over another. Consider Erdem and Keane (1996). That paper introduced the structural approach into marketing, where it rapidly became quite pervasive. But why was the paper so influential? One factor is that many found the structural model appealing…

….But at least as important is that the paper produced a big result: it provided a reliable estimate of the long run effect of advertising on brand equity and consumer demand. This had been a “holy grail” of marketing research, but prior work had failed to uncover reliable evidence that advertising affected demand at all—an embarrassing state of affairs for marketers!…

Why did we find evidence of long-run advertising effects when others had not? Was it the use of a structural model? I think that helped, but the key reason is that we had great data. Specifically, we had scanner data where households were followed for years, and their televisions were monitored so we could see which commercials each household saw. If you are willing to believe that tastes for brands of detergent are uncorrelated with tastes for television shows (which seems fairly plausible), this is a great source of exogenous variation in ad exposures. I agree that all econometric work, whether structural or not, should ideally be based on such plausibly exogenous variation in the data.
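For intuition about that last point, here is a toy sketch of how plausibly exogenous variation in ad exposures can identify an advertising effect in household panel data. This is emphatically not the Erdem and Keane structural model; it is a simulated, reduced-form illustration, and every variable name and parameter value below is an assumption made up for this sketch.

```python
# Toy illustration (not Erdem and Keane's structural model): if households'
# ad exposures are driven by TV viewing habits that are unrelated to brand
# tastes, a simple regression of purchases on cumulative exposures will pick
# up a positive advertising effect. All data below are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_households, n_weeks = 1000, 52

# Exposures depend on viewing habits, assumed independent of brand tastes.
exposures = rng.poisson(lam=2.0, size=(n_households, n_weeks))
cum_exposures = exposures.cumsum(axis=1)

# Purchase propensity: household-specific taste plus a small long-run ad effect.
taste = rng.normal(0.0, 1.0, size=(n_households, 1))
true_ad_effect = 0.02
utility = taste + true_ad_effect * cum_exposures + rng.normal(0, 1, (n_households, n_weeks))
purchase = (utility > 0.5).astype(float)

# Pooled OLS (linear probability style) of purchase on cumulative exposures.
x = cum_exposures.ravel().astype(float)
y = purchase.ravel()
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Because exposures are uncorrelated with tastes here, the slope is positive,
# reflecting the advertising effect rather than selection on tastes.
print(f"Estimated marginal effect of one more ad exposure: {beta[1]:.4f}")
```

The simulation just dramatizes Keane’s identification argument: the heavy lifting is done by the assumption that tastes for detergent brands are uncorrelated with tastes for television shows, not by whether the estimator is experimental or structural.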

So there is one influential example in which non-experimental data succeeded where previous researchers had long failed, in a field where experiments are popular. There are times when experimental data is much better suited to answering a question than non-experimental data, and times when the converse is true.
