Thursday, October 12, 2006

Some Thoughts on Field Experiments as a Research Methodology

All great institutions face the challenge of re-inventing themselves. Otherwise, they age and must rest on their past accomplishments. Yesterday, I had the chance to learn more about the research agenda of one of the University of Chicago's Economics Department's new leaders. John List gave a Departmental Lecture at MIT. The room was packed with MIT faculty and Ph.D. students.

John did a great job explaining how his work builds on past lab experimental research, but he also clearly outlined how his research agenda combines the best features of lab research with non-experimental "naturally occurring" data. His energy and enthusiasm for his research agenda were on clear display.

When I was a graduate student at Chicago, there were no courses on experimental economics. As I remember, Nat Wilcox was the only graduate student with research interests in this field. The faculty were showing some interest in ongoing work on the contingent valuation debate but revealed preference empirics remained the focus.

From my own perspective, I always thought that lab experiments based on college students were kind of silly. I viewed these dudes' responses in the lab as suffering from Hawthorne effects and fear of social stigma. Inference from this non-representative population couldn't generalize to the greater population. Unfortunately for me, I couldn't offer more constructive comments than these.

To his credit, John List is offering more constructive comments. His work attempts to bridge the gap between experimentalists and empiricists such as me who rely on day to day "naturally occurring" data. The problem with guys like me is that to make the jump from correlation to causation in our studies we need to make the untestable assumption that E(U|X,Z) = 0. While we can declare that the error term is uncorrelated with key explanatory variables, how do we know that this is the case? The natural experiment instrumental variables literature has certainly uncovered some settings where this is plausible. I'm thinking about Josh Angrist's work and Chay and Greenstone's work. But in other cases, this is less clear.
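To make the E(U|X,Z) = 0 point concrete, here is a minimal simulation sketch (my own hypothetical example, not from List's talk): an unobserved trait sits in the error term and is correlated with the regressor, so OLS overstates the true causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ability = rng.normal(size=n)                      # unobserved confounder, lives in the error term
x = 0.8 * ability + rng.normal(size=n)            # regressor correlated with the error: E(U|X) != 0
y = 2.0 * x + 1.5 * ability + rng.normal(size=n)  # true causal effect of x is 2.0

# OLS slope of y on x is biased upward because ability loads on both x and y
beta_ols = np.polyfit(x, y, 1)[0]
print(beta_ols)  # well above the true effect of 2.0
```

With these made-up parameters the probability limit of the OLS slope is 2.0 + 1.5(0.8)/1.64, roughly 2.73, so the bias is large and we have no way to detect it from the data alone.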

Enter the field experimenters! What John List can do is randomize the incentive scheme that different economic actors face. These economic actors DO NOT know that they are participating in an experiment. It is true that John can only "experiment" on the self-selected set of people who choose to participate in the market. A criticism I have had of this field experiment literature is that it cannot make more progress on the participation equation. Under different incentive schemes, would the extensive margin of "who participates" change? The field experiments literature must focus on partial equilibrium effects, but in this setting John List has demonstrated the power of his ability to control the treatments that different real world economic actors experience and then trace out how they respond. So, my point is that John List is doing some of the best causality research of any social scientist not named Jim Heckman.
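The payoff from randomization can be sketched in the same simulated spirit (again a hypothetical illustration, not List's data): when the experimenter assigns the treatment, it is independent of the unobserved trait by construction, so a simple difference in means recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ability = rng.normal(size=n)            # unobserved trait, still sitting in the error term
treat = rng.integers(0, 2, size=n)      # randomized assignment: independent of ability by design
y = 2.0 * treat + 1.5 * ability + rng.normal(size=n)  # true treatment effect is 2.0

# difference in means between treated and control recovers the effect
effect = y[treat == 1].mean() - y[treat == 0].mean()
print(effect)  # close to 2.0
```

Note that this only identifies the effect for those who show up in the market in the first place, which is exactly the participation-margin worry above.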

While many of John's early field experiments focused on distinctive traders, such as those in baseball card markets, his new experiments appear to be more "mainstream". He told the MIT audience that in his new experiments he is working with for-profit companies to study how consumer behavior is affected by different treatments concerning product attributes.

I hope that these companies allow John to publish such results because such field experiments that reveal consumer demand will offer a nice horse race against non-experimental structural IO studies of consumer demand.

Listening to John's talk, the only challenge I foresee for field experiments is that they cannot be used to establish time trends or interesting dynamics. They are mainly useful for tracing out short run substitution effects.