Modelling without theory

In Economics, we have adopted the scientific method much like other sciences. As we teach our students, it consists of the following steps:
  1. Observe regularities in the data.
  2. Formulate a theory.
  3. Generate predictions from the theory (hypotheses).
  4. Test your theory (is it consistent with the data?).
In the context of Economics, the goal of this procedure is not only to explain the regularities we observe in the data, but also to build a theory that is useful for predicting the consequences of particular policies or institutional designs.

David Hendry just published a paper about the scientific method in Economics that appears to fly in the face of what I just described. Here is an attempt to summarize his stance, and I apologize for quoting quite liberally:
  1. Specify the object for modeling, usually based on a prior theoretical analysis in Economics. An example of such an object is y=f(z).
  2. Define the target for modeling by choosing the variables to analyze, y and z, again usually based on prior theory. This is about deriving the data-generating process of the variables of interest, or fitting an equation with some statistical procedure.
  3. Embed that target in a general unrestricted model (GUM), to attenuate the unrealistic assumptions that the initial theory is correct and complete. The idea is to add other variables, lags, dummies, shift variables and functional forms to improve the empirical accuracy of the initial model.
  4. Search for the simplest acceptable representation of the information in that GUM. Or, now that the model has become huge (and may contain more variables than data points), let us get rid of some of them without losing too much in accuracy.
  5. Rigorously evaluate the final selection: (a) by going outside the initial GUM in step 3, using standard mis-specification tests for the ‘goodness’ of its specification; (b) by applying tests not used during the selection process; and (c) by testing the underlying theory in terms of which of its features remained significant after selection.
In other words, this amounts to taking a linearized version of some theory, throwing data at it, whether theoretically relevant or not, and then massaging it until it fits the data.
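
To see concretely what steps 3 and 4 amount to, here is a minimal sketch of the general-to-specific idea in Python. It builds a deliberately overparameterized GUM on simulated data and repeatedly deletes the least significant regressor. The data, the single-path backward elimination, and the fixed t-threshold are my own simplifications for illustration; Hendry's actual algorithm searches multiple deletion paths and runs batteries of misspecification tests at each step.

```python
import numpy as np

def gets_selection(y, X, names, t_crit=1.96):
    """Toy general-to-specific selection: repeatedly drop the regressor
    with the smallest |t|-statistic until every survivor clears t_crit.
    (Single deletion path only; no diagnostic testing.)"""
    keep = list(range(X.shape[1]))
    while True:
        Xk = X[:, keep]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        sigma2 = resid @ resid / (len(y) - len(keep))
        cov = sigma2 * np.linalg.inv(Xk.T @ Xk)
        tstats = beta / np.sqrt(np.diag(cov))
        weakest = int(np.argmin(np.abs(tstats)))
        # Stop once every surviving regressor is "significant".
        if abs(tstats[weakest]) >= t_crit or len(keep) == 1:
            return [(names[k], round(float(b), 3)) for k, b in zip(keep, beta)]
        del keep[weakest]

# A GUM with one relevant regressor and 29 irrelevant ones.
rng = np.random.default_rng(0)
n, p = 200, 30
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(size=n)
print(gets_selection(y, X, [f"x{i}" for i in range(p)]))
```

Note that with 29 irrelevant regressors and a 5% threshold, a couple of spurious variables will typically survive the selection, which is rather the point.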

Apart from the fact that this is really a blueprint for an automated data-mining exercise, one not directed in any way at answering a particular policy question, this procedure disregards not only the scientific method but also Occam's Razor and the Lucas Critique. What use is it to learn that the CPI follows a polynomial of degree five with three lags on exports of cabbage, the number of sunny days, 25 other variables and three structural breaks (not an actual example used by Hendry, but it could be)? For very short-term forecasts that may be accurate, and this method is abundantly used in the City and on Wall Street by neural-network "experts." But when it comes to advising policymakers, you need some Economics, and by that I mean economic theory, to explain why economic agents behave the way they do and what an intervention would lead to.
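
To make the overfitting worry concrete, here is a small simulation (the data and the 25-variable count are invented, loosely echoing the caricature above): the kitchen-sink regression fits the estimation sample better than the one-variable model, yet typically forecasts worse on data it has not seen.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, p = 60, 40, 25

# True process: y depends on a single variable; the other 24
# regressors (the "exports of cabbage") are pure noise.
X = rng.normal(size=(n_train + n_test, p))
y = 1.5 * X[:, 0] + rng.normal(size=n_train + n_test)

def r2(y_true, y_hat):
    return 1 - np.sum((y_true - y_hat) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

for cols in ([0], list(range(p))):  # parsimonious model vs. kitchen sink
    Xtr, Xte = X[:n_train, cols], X[n_train:, cols]
    beta, *_ = np.linalg.lstsq(Xtr, y[:n_train], rcond=None)
    print(f"{len(cols):2d} regressor(s): in-sample R2 {r2(y[:n_train], Xtr @ beta):.3f}, "
          f"out-of-sample R2 {r2(y[n_train:], Xte @ beta):.3f}")
```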

The scientific method starts with the observation of the data. Hendry dismisses this with a sleight of hand, stating that stylized facts are "an oxymoron in the non-constant world of economic data." What if there are constants in economic data? In fact there are plenty, and they are what theories are trying to explain. Has Hendry never observed something in his surroundings that he then tried to explain? Or does he really spend his days feeding linear equations into his computer to see what it comes up with from his database?

Such papers, especially from people who command the respect Hendry enjoys in the UK, deeply upset me. To top it off, the paper contains 33 self-citations.
